This edition of the Parabola Community Kernel patch includes:

 * AUFS
 * BFQ I/O scheduler
 * BFS
 * TuxOnIce
 * UKSM
 * graysky's kernel GCC patch

Version: 4.14.7-gnu
URL: https://wiki.parabola.nu/PCK

diff --git b/Documentation/ABI/testing/debugfs-aufs b/Documentation/ABI/testing/debugfs-aufs
new file mode 100644
index 0000000..99642d1
--- /dev/null
+++ b/Documentation/ABI/testing/debugfs-aufs
@@ -0,0 +1,50 @@
+What: /debug/aufs/si_<id>/
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ Under /debug/aufs, a directory named si_<id> is created
+ per aufs mount, where <id> is a unique id generated
+ internally.
+
+What: /debug/aufs/si_<id>/plink
+Date: Apr 2013
+Contact: J. R. Okajima
+Description:
+ It has three lines and shows the information about the
+ pseudo-links. The first line is a single number
+ representing the number of buckets. The second line is the
+ number of pseudo-links per bucket (separated by a
+ blank). The last line is a single number representing the
+ total number of pseudo-links.
+ When the aufs mount option 'noplink' is specified, it
+ will show "1\n0\n0\n".
+
+What: /debug/aufs/si_<id>/xib
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ It shows the blocks consumed by xib (External Inode Number
+ Bitmap), its block size and file size.
+ When the aufs mount option 'noxino' is specified, it
+ will be empty. About XINO files, see the aufs manual.
+
+What: /debug/aufs/si_<id>/xino0, xino1 ... xinoN
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ It shows the blocks consumed by xino (External Inode Number
+ Translation Table), its link count, block size and file
+ size.
+ When the aufs mount option 'noxino' is specified, it
+ will be empty. About XINO files, see the aufs manual.
+
+What: /debug/aufs/si_<id>/xigen
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ It shows the blocks consumed by xigen (External Inode
+ Generation Table), its block size and file size.
+ If CONFIG_AUFS_EXPORT is disabled, this entry will not
+ be created.
+ When the aufs mount option 'noxino' is specified, it
+ will be empty. About XINO files, see the aufs manual.
diff --git b/Documentation/ABI/testing/sysfs-aufs b/Documentation/ABI/testing/sysfs-aufs
new file mode 100644
index 0000000..82f9518
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-aufs
@@ -0,0 +1,31 @@
+What: /sys/fs/aufs/si_<id>/
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ Under /sys/fs/aufs, a directory named si_<id> is created
+ per aufs mount, where <id> is a unique id generated
+ internally.
+
+What: /sys/fs/aufs/si_<id>/br0, br1 ... brN
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ It shows the absolute path of a member directory (which
+ is called a branch) in aufs, and its permission.
+
+What: /sys/fs/aufs/si_<id>/brid0, brid1 ... bridN
+Date: July 2013
+Contact: J. R. Okajima
+Description:
+ It shows the id of a member directory (which is called
+ a branch) in aufs.
+
+What: /sys/fs/aufs/si_<id>/xi_path
+Date: March 2009
+Contact: J. R. Okajima
+Description:
+ It shows the absolute path of the XINO (External Inode Number
+ Bitmap, Translation Table and Generation Table) file
+ even if it is the default path.
+ When the aufs mount option 'noxino' is specified, it
+ will be empty. About XINO files, see the aufs manual.
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 0549662..923957b 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4340,6 +4340,9 @@ HIGHMEM regardless of setting of CONFIG_HIGHPTE.
+ uuid_debug= (Boolean) whether to enable debugging of TuxOnIce's + uuid support. + vdso= [X86,SH] On X86_32, this is an alias for vdso32=. Otherwise: diff --git a/Documentation/block/bfq-iosched.txt b/Documentation/block/bfq-iosched.txt index 3d6951d..30ef2db 100644 --- a/Documentation/block/bfq-iosched.txt +++ b/Documentation/block/bfq-iosched.txt @@ -11,6 +11,15 @@ controllers), BFQ's main features are: groups (switching back to time distribution when needed to keep throughput high). +If bfq-mq patches have been applied, then the following three +instances of BFQ are available (otherwise only the first instance): +- bfq: mainline version of BFQ, for blk-mq +- bfq-mq: development version of BFQ for blk-mq; this version contains + also all latest features and fixes not yet landed in mainline, plus many + safety checks +- bfq-sq: BFQ for legacy blk; also this version contains latest features + and fixes, as well as safety checks + In its default configuration, BFQ privileges latency over throughput. So, when needed for achieving a lower latency, BFQ builds schedules that may lead to a lower throughput. If your main or only @@ -20,14 +29,44 @@ for that device, by setting low_latency to 0. See Section 3 for details on how to configure BFQ for the desired tradeoff between latency and throughput, or on how to maximize throughput. -On average CPUs, the current version of BFQ can handle devices -performing at most ~30K IOPS; at most ~50 KIOPS on faster CPUs. As a -reference, 30-50 KIOPS correspond to very high bandwidths with -sequential I/O (e.g., 8-12 GB/s if I/O requests are 256 KB large), and -to 120-200 MB/s with 4KB random I/O. BFQ is currently being tested on -multi-queue devices too. - -The table of contents follow. Impatients can just jump to Section 3. +BFQ has a non-null overhead, which limits the maximum IOPS that a CPU +can process for a device scheduled with BFQ. To give an idea of the +limits on slow or average CPUs, here are, first, the limits of bfq-mq +and bfq for three different CPUs, on, respectively, an average laptop, +an old desktop, and a cheap embedded system, in case full hierarchical +support is enabled (i.e., CONFIG_MQ_BFQ_GROUP_IOSCHED is set for +bfq-mq, or CONFIG_BFQ_GROUP_IOSCHED is set for bfq), but +CONFIG_DEBUG_BLK_CGROUP is not set (Section 4-2): +- Intel i7-4850HQ: 400 KIOPS +- AMD A8-3850: 250 KIOPS +- ARM CortexTM-A53 Octa-core: 80 KIOPS + +As for bfq-sq, it cannot reach the above IOPS, because of the +inherent, lower parallelism of legacy blk and of the components within +it (including bfq-sq itself). In particular, results with +CONFIG_DEBUG_BLK_CGROUP unset are rather fluctuating. The limits +reported below for the case CONFIG_DEBUG_BLK_CGROUP set will however +provide a lower bound to the limits of bfq-sq. + +Turning back to bfq-mq and bfq, If CONFIG_DEBUG_BLK_CGROUP is set (and +of course full hierarchical support is enabled), then the sustainable +throughput with bfq-mq and bfq decreases, because all blkio.bfq* +statistics are created and updated (Section 4-2). 
For bfq-mq and bfq, +this leads to the following maximum sustainable throughputs, on the +same systems as above: +- Intel i7-4850HQ: 310 KIOPS +- AMD A8-3850: 200 KIOPS +- ARM CortexTM-A53 Octa-core: 56 KIOPS + +Finally, if CONFIG_DEBUG_BLK_CGROUP is set (and full hierarchical +support is enabled), then bfq-sq exhibits the following limits: +- Intel i7-4850HQ: 250 KIOPS +- AMD A8-3850: 170 KIOPS +- ARM CortexTM-A53 Octa-core: 45 KIOPS + +BFQ works for multi-queue devices too (bfq and bfq-mq instances). + +The table of contents follows. Impatients can just jump to Section 3. CONTENTS @@ -494,18 +533,36 @@ To get proportional sharing of bandwidth with BFQ for a given device, BFQ must of course be the active scheduler for that device. Within each group directory, the names of the files associated with -BFQ-specific cgroup parameters and stats begin with the "bfq." -prefix. So, with cgroups-v1 or cgroups-v2, the full prefix for -BFQ-specific files is "blkio.bfq." or "io.bfq." For example, the group -parameter to set the weight of a group with BFQ is blkio.bfq.weight +BFQ-specific cgroup parameters and stats begin with the "bfq.", +"bfq-sq." or "bfq-mq." prefix, depending on which instance of bfq you +want to use. So, with cgroups-v1 or cgroups-v2, the full prefix for +BFQ-specific files is "blkio.bfqX." or "io.bfqX.", where X can be "" +(i.e., null string), "-sq" or "-mq". For example, the group parameter +to set the weight of a group with the mainline BFQ is blkio.bfq.weight or io.bfq.weight. +As for cgroups-v1 (blkio controller), the exact set of stat files +created, and kept up-to-date by bfq*, depends on whether +CONFIG_DEBUG_BLK_CGROUP is set. If it is set, then bfq* creates all +the stat files documented in +Documentation/cgroup-v1/blkio-controller.txt. If, instead, +CONFIG_DEBUG_BLK_CGROUP is not set, then bfq* creates only the files +blkio.bfq*.io_service_bytes +blkio.bfq*.io_service_bytes_recursive +blkio.bfq*.io_serviced +blkio.bfq*.io_serviced_recursive + +The value of CONFIG_DEBUG_BLK_CGROUP greatly influences the maximum +throughput sustainable with bfq*, because updating the blkio.bfq* +stats is rather costly, especially for some of the stats enabled by +CONFIG_DEBUG_BLK_CGROUP. + Parameters to set ----------------- For each group, there is only the following parameter to set. -weight (namely blkio.bfq.weight or io.bfq-weight): the weight of the +weight (namely blkio.bfqX.weight or io.bfqX.weight): the weight of the group inside its parent. Available values: 1..10000 (default 100). The linear mapping between ioprio and weights, described at the beginning of the tunable section, is still valid, but all weights higher than diff --git b/Documentation/filesystems/aufs/README b/Documentation/filesystems/aufs/README new file mode 100644 index 0000000..fa82b63 --- /dev/null +++ b/Documentation/filesystems/aufs/README @@ -0,0 +1,393 @@ + +Aufs4 -- advanced multi layered unification filesystem version 4.x +http://aufs.sf.net +Junjiro R. Okajima + + +0. Introduction +---------------------------------------- +In the early days, aufs was entirely re-designed and re-implemented +Unionfs Version 1.x series. Adding many original ideas, approaches, +improvements and implementations, it becomes totally different from +Unionfs while keeping the basic features. +Recently, Unionfs Version 2.x series begin taking some of the same +approaches to aufs1's. +Unionfs is being developed by Professor Erez Zadok at Stony Brook +University and his team. 
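
Stepping back briefly to the BFQ section above: a minimal sketch of how one of the listed instances is activated for a device, how the low_latency tunable is cleared when raw throughput matters more than latency, and how a cgroups-v1 group weight is assigned. The device name "sda", the group name "grp1" and the weight value 300 are placeholders, and the exact file prefix (blkio.bfq., blkio.bfq-sq. or blkio.bfq-mq.) depends on which instance is driving the device.

# cat /sys/block/sda/queue/scheduler                  # list the schedulers built into this kernel
# echo bfq > /sys/block/sda/queue/scheduler           # or bfq-mq / bfq-sq, if present
# echo 0 > /sys/block/sda/queue/iosched/low_latency   # trade latency for throughput (default is 1)
# mkdir /sys/fs/cgroup/blkio/grp1
# echo 300 > /sys/fs/cgroup/blkio/grp1/blkio.bfq.weight
# echo $$ > /sys/fs/cgroup/blkio/grp1/cgroup.procs    # move the current shell into the group
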
+ +Aufs4 supports linux-4.0 and later, and for linux-3.x series try aufs3. +If you want older kernel version support, try aufs2-2.6.git or +aufs2-standalone.git repository, aufs1 from CVS on SourceForge. + +Note: it becomes clear that "Aufs was rejected. Let's give it up." + According to Christoph Hellwig, linux rejects all union-type + filesystems but UnionMount. + + +PS. Al Viro seems have a plan to merge aufs as well as overlayfs and + UnionMount, and he pointed out an issue around a directory mutex + lock and aufs addressed it. But it is still unsure whether aufs will + be merged (or any other union solution). + + + +1. Features +---------------------------------------- +- unite several directories into a single virtual filesystem. The member + directory is called as a branch. +- you can specify the permission flags to the branch, which are 'readonly', + 'readwrite' and 'whiteout-able.' +- by upper writable branch, internal copyup and whiteout, files/dirs on + readonly branch are modifiable logically. +- dynamic branch manipulation, add, del. +- etc... + +Also there are many enhancements in aufs, such as: +- test only the highest one for the directory permission (dirperm1) +- copyup on open (coo=) +- 'move' policy for copy-up between two writable branches, after + checking free space. +- xattr, acl +- readdir(3) in userspace. +- keep inode number by external inode number table +- keep the timestamps of file/dir in internal copyup operation +- seekable directory, supporting NFS readdir. +- whiteout is hardlinked in order to reduce the consumption of inodes + on branch +- do not copyup, nor create a whiteout when it is unnecessary +- revert a single systemcall when an error occurs in aufs +- remount interface instead of ioctl +- maintain /etc/mtab by an external command, /sbin/mount.aufs. +- loopback mounted filesystem as a branch +- kernel thread for removing the dir who has a plenty of whiteouts +- support copyup sparse file (a file which has a 'hole' in it) +- default permission flags for branches +- selectable permission flags for ro branch, whether whiteout can + exist or not +- export via NFS. +- support /fs/aufs and /aufs. +- support multiple writable branches, some policies to select one + among multiple writable branches. +- a new semantics for link(2) and rename(2) to support multiple + writable branches. +- no glibc changes are required. +- pseudo hardlink (hardlink over branches) +- allow a direct access manually to a file on branch, e.g. bypassing aufs. + including NFS or remote filesystem branch. +- userspace wrapper for pathconf(3)/fpathconf(3) with _PC_LINK_MAX. +- and more... + +Currently these features are dropped temporary from aufs4. +See design/08plan.txt in detail. +- nested mount, i.e. aufs as readonly no-whiteout branch of another aufs + (robr) +- statistics of aufs thread (/sys/fs/aufs/stat) + +Features or just an idea in the future (see also design/*.txt), +- reorder the branch index without del/re-add. +- permanent xino files for NFSD +- an option for refreshing the opened files after add/del branches +- light version, without branch manipulation. (unnecessary?) +- copyup in userspace +- inotify in userspace +- readv/writev + + +2. Download +---------------------------------------- +There are three GIT trees for aufs4, aufs4-linux.git, +aufs4-standalone.git, and aufs-util.git. Note that there is no "4" in +"aufs-util.git." +While the aufs-util is always necessary, you need either of aufs4-linux +or aufs4-standalone. 
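
As a rough illustration of the branch permission flags named in the feature list above (the paths are placeholders, the "+wh" attribute marks a readonly branch as whiteout-able, and aufs.5 in aufs-util remains the authoritative reference for the syntax):

# mount -t aufs -o br=/data/rw=rw:/data/ro1=ro:/data/ro2=ro+wh none /mnt/union
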
+ +The aufs4-linux tree includes the whole linux mainline GIT tree, +git://git.kernel.org/.../torvalds/linux.git. +And you cannot select CONFIG_AUFS_FS=m for this version, eg. you cannot +build aufs4 as an external kernel module. +Several extra patches are not included in this tree. Only +aufs4-standalone tree contains them. They are described in the later +section "Configuration and Compilation." + +On the other hand, the aufs4-standalone tree has only aufs source files +and necessary patches, and you can select CONFIG_AUFS_FS=m. +But you need to apply all aufs patches manually. + +You will find GIT branches whose name is in form of "aufs4.x" where "x" +represents the linux kernel version, "linux-4.x". For instance, +"aufs4.0" is for linux-4.0. For latest "linux-4.x-rcN", use +"aufs4.x-rcN" branch. + +o aufs4-linux tree +$ git clone --reference /your/linux/git/tree \ + git://github.com/sfjro/aufs4-linux.git aufs4-linux.git +- if you don't have linux GIT tree, then remove "--reference ..." +$ cd aufs4-linux.git +$ git checkout origin/aufs4.0 + +Or You may want to directly git-pull aufs into your linux GIT tree, and +leave the patch-work to GIT. +$ cd /your/linux/git/tree +$ git remote add aufs4 git://github.com/sfjro/aufs4-linux.git +$ git fetch aufs4 +$ git checkout -b my4.0 v4.0 +$ (add your local change...) +$ git pull aufs4 aufs4.0 +- now you have v4.0 + your_changes + aufs4.0 in you my4.0 branch. +- you may need to solve some conflicts between your_changes and + aufs4.0. in this case, git-rerere is recommended so that you can + solve the similar conflicts automatically when you upgrade to 4.1 or + later in the future. + +o aufs4-standalone tree +$ git clone git://github.com/sfjro/aufs4-standalone.git aufs4-standalone.git +$ cd aufs4-standalone.git +$ git checkout origin/aufs4.0 + +o aufs-util tree +$ git clone git://git.code.sf.net/p/aufs/aufs-util aufs-util.git +- note that the public aufs-util.git is on SourceForge instead of + GitHUB. +$ cd aufs-util.git +$ git checkout origin/aufs4.0 + +Note: The 4.x-rcN branch is to be used with `rc' kernel versions ONLY. +The minor version number, 'x' in '4.x', of aufs may not always +follow the minor version number of the kernel. +Because changes in the kernel that cause the use of a new +minor version number do not always require changes to aufs-util. + +Since aufs-util has its own minor version number, you may not be +able to find a GIT branch in aufs-util for your kernel's +exact minor version number. +In this case, you should git-checkout the branch for the +nearest lower number. + +For (an unreleased) example: +If you are using "linux-4.10" and the "aufs4.10" branch +does not exist in aufs-util repository, then "aufs4.9", "aufs4.8" +or something numerically smaller is the branch for your kernel. + +Also you can view all branches by + $ git branch -a + + +3. Configuration and Compilation +---------------------------------------- +Make sure you have git-checkout'ed the correct branch. + +For aufs4-linux tree, +- enable CONFIG_AUFS_FS. +- set other aufs configurations if necessary. + +For aufs4-standalone tree, +There are several ways to build. + +1. +- apply ./aufs4-kbuild.patch to your kernel source files. +- apply ./aufs4-base.patch too. +- apply ./aufs4-mmap.patch too. +- apply ./aufs4-standalone.patch too, if you have a plan to set + CONFIG_AUFS_FS=m. otherwise you don't need ./aufs4-standalone.patch. +- copy ./{Documentation,fs,include/uapi/linux/aufs_type.h} files to your + kernel source tree. Never copy $PWD/include/uapi/linux/Kbuild. 
+- enable CONFIG_AUFS_FS, you can select either + =m or =y. +- and build your kernel as usual. +- install the built kernel. + Note: Since linux-3.9, every filesystem module requires an alias + "fs-". You should make sure that "fs-aufs" is listed in your + modules.aliases file if you set CONFIG_AUFS_FS=m. +- install the header files too by "make headers_install" to the + directory where you specify. By default, it is $PWD/usr. + "make help" shows a brief note for headers_install. +- and reboot your system. + +2. +- module only (CONFIG_AUFS_FS=m). +- apply ./aufs4-base.patch to your kernel source files. +- apply ./aufs4-mmap.patch too. +- apply ./aufs4-standalone.patch too. +- build your kernel, don't forget "make headers_install", and reboot. +- edit ./config.mk and set other aufs configurations if necessary. + Note: You should read $PWD/fs/aufs/Kconfig carefully which describes + every aufs configurations. +- build the module by simple "make". + Note: Since linux-3.9, every filesystem module requires an alias + "fs-". You should make sure that "fs-aufs" is listed in your + modules.aliases file. +- you can specify ${KDIR} make variable which points to your kernel + source tree. +- install the files + + run "make install" to install the aufs module, or copy the built + $PWD/aufs.ko to /lib/modules/... and run depmod -a (or reboot simply). + + run "make install_headers" (instead of headers_install) to install + the modified aufs header file (you can specify DESTDIR which is + available in aufs standalone version's Makefile only), or copy + $PWD/usr/include/linux/aufs_type.h to /usr/include/linux or wherever + you like manually. By default, the target directory is $PWD/usr. +- no need to apply aufs4-kbuild.patch, nor copying source files to your + kernel source tree. + +Note: The header file aufs_type.h is necessary to build aufs-util + as well as "make headers_install" in the kernel source tree. + headers_install is subject to be forgotten, but it is essentially + necessary, not only for building aufs-util. + You may not meet problems without headers_install in some older + version though. + +And then, +- read README in aufs-util, build and install it +- note that your distribution may contain an obsoleted version of + aufs_type.h in /usr/include/linux or something. When you build aufs + utilities, make sure that your compiler refers the correct aufs header + file which is built by "make headers_install." +- if you want to use readdir(3) in userspace or pathconf(3) wrapper, + then run "make install_ulib" too. And refer to the aufs manual in + detail. + +There several other patches in aufs4-standalone.git. They are all +optional. When you meet some problems, they will help you. +- aufs4-loopback.patch + Supports a nested loopback mount in a branch-fs. This patch is + unnecessary until aufs produces a message like "you may want to try + another patch for loopback file". +- vfs-ino.patch + Modifies a system global kernel internal function get_next_ino() in + order to stop assigning 0 for an inode-number. Not directly related to + aufs, but recommended generally. +- tmpfs-idr.patch + Keeps the tmpfs inode number as the lowest value. Effective to reduce + the size of aufs XINO files for tmpfs branch. Also it prevents the + duplication of inode number, which is important for backup tools and + other utilities. When you find aufs XINO files for tmpfs branch + growing too much, try this patch. 
+- lockdep-debug.patch + Because aufs is not only an ordinary filesystem (callee of VFS), but + also a caller of VFS functions for branch filesystems, subclassing of + the internal locks for LOCKDEP is necessary. LOCKDEP is a debugging + feature of linux kernel. If you enable CONFIG_LOCKDEP, then you will + need to apply this debug patch to expand several constant values. + If don't know what LOCKDEP, then you don't have apply this patch. + + +4. Usage +---------------------------------------- +At first, make sure aufs-util are installed, and please read the aufs +manual, aufs.5 in aufs-util.git tree. +$ man -l aufs.5 + +And then, +$ mkdir /tmp/rw /tmp/aufs +# mount -t aufs -o br=/tmp/rw:${HOME} none /tmp/aufs + +Here is another example. The result is equivalent. +# mount -t aufs -o br=/tmp/rw=rw:${HOME}=ro none /tmp/aufs + Or +# mount -t aufs -o br:/tmp/rw none /tmp/aufs +# mount -o remount,append:${HOME} /tmp/aufs + +Then, you can see whole tree of your home dir through /tmp/aufs. If +you modify a file under /tmp/aufs, the one on your home directory is +not affected, instead the same named file will be newly created under +/tmp/rw. And all of your modification to a file will be applied to +the one under /tmp/rw. This is called the file based Copy on Write +(COW) method. +Aufs mount options are described in aufs.5. +If you run chroot or something and make your aufs as a root directory, +then you need to customize the shutdown script. See the aufs manual in +detail. + +Additionally, there are some sample usages of aufs which are a +diskless system with network booting, and LiveCD over NFS. +See sample dir in CVS tree on SourceForge. + + +5. Contact +---------------------------------------- +When you have any problems or strange behaviour in aufs, please let me +know with: +- /proc/mounts (instead of the output of mount(8)) +- /sys/module/aufs/* +- /sys/fs/aufs/* (if you have them) +- /debug/aufs/* (if you have them) +- linux kernel version + if your kernel is not plain, for example modified by distributor, + the url where i can download its source is necessary too. +- aufs version which was printed at loading the module or booting the + system, instead of the date you downloaded. +- configuration (define/undefine CONFIG_AUFS_xxx) +- kernel configuration or /proc/config.gz (if you have it) +- behaviour which you think to be incorrect +- actual operation, reproducible one is better +- mailto: aufs-users at lists.sourceforge.net + +Usually, I don't watch the Public Areas(Bugs, Support Requests, Patches, +and Feature Requests) on SourceForge. Please join and write to +aufs-users ML. + + +6. Acknowledgements +---------------------------------------- +Thanks to everyone who have tried and are using aufs, whoever +have reported a bug or any feedback. + +Especially donators: +Tomas Matejicek(slax.org) made a donation (much more than once). + Since Apr 2010, Tomas M (the author of Slax and Linux Live + scripts) is making "doubling" donations. + Unfortunately I cannot list all of the donators, but I really + appreciate. + It ends Aug 2010, but the ordinary donation URL is still available. + +Dai Itasaka made a donation (2007/8). +Chuck Smith made a donation (2008/4, 10 and 12). +Henk Schoneveld made a donation (2008/9). +Chih-Wei Huang, ASUS, CTC donated Eee PC 4G (2008/10). +Francois Dupoux made a donation (2008/11). +Bruno Cesar Ribas and Luis Carlos Erpen de Bona, C3SL serves public + aufs2 GIT tree (2009/2). +William Grant made a donation (2009/3). +Patrick Lane made a donation (2009/4). 
+The Mail Archive (mail-archive.com) made donations (2009/5). +Nippy Networks (Ed Wildgoose) made a donation (2009/7). +New Dream Network, LLC (www.dreamhost.com) made a donation (2009/11). +Pavel Pronskiy made a donation (2011/2). +Iridium and Inmarsat satellite phone retailer (www.mailasail.com), Nippy + Networks (Ed Wildgoose) made a donation for hardware (2011/3). +Max Lekomcev (DOM-TV project) made a donation (2011/7, 12, 2012/3, 6 and +11). +Sam Liddicott made a donation (2011/9). +Era Scarecrow made a donation (2013/4). +Bor Ratajc made a donation (2013/4). +Alessandro Gorreta made a donation (2013/4). +POIRETTE Marc made a donation (2013/4). +Alessandro Gorreta made a donation (2013/4). +lauri kasvandik made a donation (2013/5). +"pemasu from Finland" made a donation (2013/7). +The Parted Magic Project made a donation (2013/9 and 11). +Pavel Barta made a donation (2013/10). +Nikolay Pertsev made a donation (2014/5). +James B made a donation (2014/7 and 2015/7). +Stefano Di Biase made a donation (2014/8). +Daniel Epellei made a donation (2015/1). +OmegaPhil made a donation (2016/1). +Tomasz Szewczyk made a donation (2016/4). +James Burry made a donation (2016/12). + +Thank you very much. +Donations are always, including future donations, very important and +helpful for me to keep on developing aufs. + + +7. +---------------------------------------- +If you are an experienced user, no explanation is needed. Aufs is +just a linux filesystem. + + +Enjoy! + +# Local variables: ; +# mode: text; +# End: ; diff --git b/Documentation/filesystems/aufs/design/01intro.txt b/Documentation/filesystems/aufs/design/01intro.txt new file mode 100644 index 0000000..ae16191 --- /dev/null +++ b/Documentation/filesystems/aufs/design/01intro.txt @@ -0,0 +1,171 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Introduction +---------------------------------------- + +aufs [ei ju: ef es] | /ey-yoo-ef-es/ | [a u f s] +1. abbrev. for "advanced multi-layered unification filesystem". +2. abbrev. for "another unionfs". +3. abbrev. for "auf das" in German which means "on the" in English. + Ex. "Butter aufs Brot"(G) means "butter onto bread"(E). + But "Filesystem aufs Filesystem" is hard to understand. +4. abbrev. for "African Urban Fashion Show". + +AUFS is a filesystem with features: +- multi layered stackable unification filesystem, the member directory + is called as a branch. +- branch permission and attribute, 'readonly', 'real-readonly', + 'readwrite', 'whiteout-able', 'link-able whiteout', etc. and their + combination. +- internal "file copy-on-write". +- logical deletion, whiteout. +- dynamic branch manipulation, adding, deleting and changing permission. +- allow bypassing aufs, user's direct branch access. +- external inode number translation table and bitmap which maintains the + persistent aufs inode number. +- seekable directory, including NFS readdir. 
+- file mapping, mmap and sharing pages. +- pseudo-link, hardlink over branches. +- loopback mounted filesystem as a branch. +- several policies to select one among multiple writable branches. +- revert a single systemcall when an error occurs in aufs. +- and more... + + +Multi Layered Stackable Unification Filesystem +---------------------------------------------------------------------- +Most people already knows what it is. +It is a filesystem which unifies several directories and provides a +merged single directory. When users access a file, the access will be +passed/re-directed/converted (sorry, I am not sure which English word is +correct) to the real file on the member filesystem. The member +filesystem is called 'lower filesystem' or 'branch' and has a mode +'readonly' and 'readwrite.' And the deletion for a file on the lower +readonly branch is handled by creating 'whiteout' on the upper writable +branch. + +On LKML, there have been discussions about UnionMount (Jan Blunck, +Bharata B Rao and Valerie Aurora) and Unionfs (Erez Zadok). They took +different approaches to implement the merged-view. +The former tries putting it into VFS, and the latter implements as a +separate filesystem. +(If I misunderstand about these implementations, please let me know and +I shall correct it. Because it is a long time ago when I read their +source files last time). + +UnionMount's approach will be able to small, but may be hard to share +branches between several UnionMount since the whiteout in it is +implemented in the inode on branch filesystem and always +shared. According to Bharata's post, readdir does not seems to be +finished yet. +There are several missing features known in this implementations such as +- for users, the inode number may change silently. eg. copy-up. +- link(2) may break by copy-up. +- read(2) may get an obsoleted filedata (fstat(2) too). +- fcntl(F_SETLK) may be broken by copy-up. +- unnecessary copy-up may happen, for example mmap(MAP_PRIVATE) after + open(O_RDWR). + +In linux-3.18, "overlay" filesystem (formerly known as "overlayfs") was +merged into mainline. This is another implementation of UnionMount as a +separated filesystem. All the limitations and known problems which +UnionMount are equally inherited to "overlay" filesystem. + +Unionfs has a longer history. When I started implementing a stackable +filesystem (Aug 2005), it already existed. It has virtual super_block, +inode, dentry and file objects and they have an array pointing lower +same kind objects. After contributing many patches for Unionfs, I +re-started my project AUFS (Jun 2006). + +In AUFS, the structure of filesystem resembles to Unionfs, but I +implemented my own ideas, approaches and enhancements and it became +totally different one. + +Comparing DM snapshot and fs based implementation +- the number of bytes to be copied between devices is much smaller. +- the type of filesystem must be one and only. +- the fs must be writable, no readonly fs, even for the lower original + device. so the compression fs will not be usable. but if we use + loopback mount, we may address this issue. + for instance, + mount /cdrom/squashfs.img /sq + losetup /sq/ext2.img + losetup /somewhere/cow + dmsetup "snapshot /dev/loop0 /dev/loop1 ..." +- it will be difficult (or needs more operations) to extract the + difference between the original device and COW. +- DM snapshot-merge may help a lot when users try merging. in the + fs-layer union, users will use rsync(1). 
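
To make the fs-layer alternative sketched in the list above concrete, a LiveCD-style union of a compressed readonly image and a tmpfs writable layer might look roughly like this (the paths and the tmpfs size are placeholders; this is an illustration rather than a tested recipe):

# mount -o loop,ro /cdrom/squashfs.img /sq
# mount -t tmpfs -o size=256m tmpfs /cow
# mount -t aufs -o br=/cow=rw:/sq=ro none /union
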
+ +You may want to read my old paper "Filesystems in LiveCD" +(http://aufs.sourceforge.net/aufs2/report/sq/sq.pdf). + + +Several characters/aspects/persona of aufs +---------------------------------------------------------------------- + +Aufs has several characters, aspects or persona. +1. a filesystem, callee of VFS helper +2. sub-VFS, caller of VFS helper for branches +3. a virtual filesystem which maintains persistent inode number +4. reader/writer of files on branches such like an application + +1. Callee of VFS Helper +As an ordinary linux filesystem, aufs is a callee of VFS. For instance, +unlink(2) from an application reaches sys_unlink() kernel function and +then vfs_unlink() is called. vfs_unlink() is one of VFS helper and it +calls filesystem specific unlink operation. Actually aufs implements the +unlink operation but it behaves like a redirector. + +2. Caller of VFS Helper for Branches +aufs_unlink() passes the unlink request to the branch filesystem as if +it were called from VFS. So the called unlink operation of the branch +filesystem acts as usual. As a caller of VFS helper, aufs should handle +every necessary pre/post operation for the branch filesystem. +- acquire the lock for the parent dir on a branch +- lookup in a branch +- revalidate dentry on a branch +- mnt_want_write() for a branch +- vfs_unlink() for a branch +- mnt_drop_write() for a branch +- release the lock on a branch + +3. Persistent Inode Number +One of the most important issue for a filesystem is to maintain inode +numbers. This is particularly important to support exporting a +filesystem via NFS. Aufs is a virtual filesystem which doesn't have a +backend block device for its own. But some storage is necessary to +keep and maintain the inode numbers. It may be a large space and may not +suit to keep in memory. Aufs rents some space from its first writable +branch filesystem (by default) and creates file(s) on it. These files +are created by aufs internally and removed soon (currently) keeping +opened. +Note: Because these files are removed, they are totally gone after + unmounting aufs. It means the inode numbers are not persistent + across unmount or reboot. I have a plan to make them really + persistent which will be important for aufs on NFS server. + +4. Read/Write Files Internally (copy-on-write) +Because a branch can be readonly, when you write a file on it, aufs will +"copy-up" it to the upper writable branch internally. And then write the +originally requested thing to the file. Generally kernel doesn't +open/read/write file actively. In aufs, even a single write may cause a +internal "file copy". This behaviour is very similar to cp(1) command. + +Some people may think it is better to pass such work to user space +helper, instead of doing in kernel space. Actually I am still thinking +about it. But currently I have implemented it in kernel space. diff --git b/Documentation/filesystems/aufs/design/02struct.txt b/Documentation/filesystems/aufs/design/02struct.txt new file mode 100644 index 0000000..1d1ccde --- /dev/null +++ b/Documentation/filesystems/aufs/design/02struct.txt @@ -0,0 +1,258 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. 
+# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Basic Aufs Internal Structure + +Superblock/Inode/Dentry/File Objects +---------------------------------------------------------------------- +As like an ordinary filesystem, aufs has its own +superblock/inode/dentry/file objects. All these objects have a +dynamically allocated array and store the same kind of pointers to the +lower filesystem, branch. +For example, when you build a union with one readwrite branch and one +readonly, mounted /au, /rw and /ro respectively. +- /au = /rw + /ro +- /ro/fileA exists but /rw/fileA + +Aufs lookup operation finds /ro/fileA and gets dentry for that. These +pointers are stored in a aufs dentry. The array in aufs dentry will be, +- [0] = NULL (because /rw/fileA doesn't exist) +- [1] = /ro/fileA + +This style of an array is essentially same to the aufs +superblock/inode/dentry/file objects. + +Because aufs supports manipulating branches, ie. add/delete/change +branches dynamically, these objects has its own generation. When +branches are changed, the generation in aufs superblock is +incremented. And a generation in other object are compared when it is +accessed. When a generation in other objects are obsoleted, aufs +refreshes the internal array. + + +Superblock +---------------------------------------------------------------------- +Additionally aufs superblock has some data for policies to select one +among multiple writable branches, XIB files, pseudo-links and kobject. +See below in detail. +About the policies which supports copy-down a directory, see +wbr_policy.txt too. + + +Branch and XINO(External Inode Number Translation Table) +---------------------------------------------------------------------- +Every branch has its own xino (external inode number translation table) +file. The xino file is created and unlinked by aufs internally. When two +members of a union exist on the same filesystem, they share the single +xino file. +The struct of a xino file is simple, just a sequence of aufs inode +numbers which is indexed by the lower inode number. +In the above sample, assume the inode number of /ro/fileA is i111 and +aufs assigns the inode number i999 for fileA. Then aufs writes 999 as +4(8) bytes at 111 * 4(8) bytes offset in the xino file. + +When the inode numbers are not contiguous, the xino file will be sparse +which has a hole in it and doesn't consume as much disk space as it +might appear. If your branch filesystem consumes disk space for such +holes, then you should specify 'xino=' option at mounting aufs. + +Aufs has a mount option to free the disk blocks for such holes in XINO +files on tmpfs or ramdisk. But it is not so effective actually. If you +meet a problem of disk shortage due to XINO files, then you should try +"tmpfs-ino.patch" (and "vfs-ino.patch" too) in aufs4-standalone.git. +The patch localizes the assignment inumbers per tmpfs-mount and avoid +the holes in XINO files. + +Also a writable branch has three kinds of "whiteout bases". All these +are existed when the branch is joined to aufs, and their names are +whiteout-ed doubly, so that users will never see their names in aufs +hierarchy. +1. a regular file which will be hardlinked to all whiteouts. +2. 
a directory to store a pseudo-link. +3. a directory to store an "orphan"-ed file temporary. + +1. Whiteout Base + When you remove a file on a readonly branch, aufs handles it as a + logical deletion and creates a whiteout on the upper writable branch + as a hardlink of this file in order not to consume inode on the + writable branch. +2. Pseudo-link Dir + See below, Pseudo-link. +3. Step-Parent Dir + When "fileC" exists on the lower readonly branch only and it is + opened and removed with its parent dir, and then user writes + something into it, then aufs copies-up fileC to this + directory. Because there is no other dir to store fileC. After + creating a file under this dir, the file is unlinked. + +Because aufs supports manipulating branches, ie. add/delete/change +dynamically, a branch has its own id. When the branch order changes, +aufs finds the new index by searching the branch id. + + +Pseudo-link +---------------------------------------------------------------------- +Assume "fileA" exists on the lower readonly branch only and it is +hardlinked to "fileB" on the branch. When you write something to fileA, +aufs copies-up it to the upper writable branch. Additionally aufs +creates a hardlink under the Pseudo-link Directory of the writable +branch. The inode of a pseudo-link is kept in aufs super_block as a +simple list. If fileB is read after unlinking fileA, aufs returns +filedata from the pseudo-link instead of the lower readonly +branch. Because the pseudo-link is based upon the inode, to keep the +inode number by xino (see above) is essentially necessary. + +All the hardlinks under the Pseudo-link Directory of the writable branch +should be restored in a proper location later. Aufs provides a utility +to do this. The userspace helpers executed at remounting and unmounting +aufs by default. +During this utility is running, it puts aufs into the pseudo-link +maintenance mode. In this mode, only the process which began the +maintenance mode (and its child processes) is allowed to operate in +aufs. Some other processes which are not related to the pseudo-link will +be allowed to run too, but the rest have to return an error or wait +until the maintenance mode ends. If a process already acquires an inode +mutex (in VFS), it has to return an error. + + +XIB(external inode number bitmap) +---------------------------------------------------------------------- +Addition to the xino file per a branch, aufs has an external inode number +bitmap in a superblock object. It is also an internal file such like a +xino file. +It is a simple bitmap to mark whether the aufs inode number is in-use or +not. +To reduce the file I/O, aufs prepares a single memory page to cache xib. + +As well as XINO files, aufs has a feature to truncate/refresh XIB to +reduce the number of consumed disk blocks for these files. + + +Virtual or Vertical Dir, and Readdir in Userspace +---------------------------------------------------------------------- +In order to support multiple layers (branches), aufs readdir operation +constructs a virtual dir block on memory. For readdir, aufs calls +vfs_readdir() internally for each dir on branches, merges their entries +with eliminating the whiteout-ed ones, and sets it to file (dir) +object. So the file object has its entry list until it is closed. The +entry list will be updated when the file position is zero and becomes +obsoleted. This decision is made in aufs automatically. 
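
For the pseudo-link handling described above, the userspace side is normally driven by the auplink helper shipped with aufs-util; a sketch, assuming a mount point of /mnt/union (see the aufs-util documentation for the exact sub-commands and their behaviour):

# auplink /mnt/union list    # show the currently pseudo-linked inodes
# auplink /mnt/union flush   # copy-up and re-create the real hardlinks, then empty the plink dir
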
+ +The dynamically allocated memory block for the name of entries has a +unit of 512 bytes (by default) and stores the names contiguously (no +padding). Another block for each entry is handled by kmem_cache too. +During building dir blocks, aufs creates hash list and judging whether +the entry is whiteouted by its upper branch or already listed. +The merged result is cached in the corresponding inode object and +maintained by a customizable life-time option. + +Some people may call it can be a security hole or invite DoS attack +since the opened and once readdir-ed dir (file object) holds its entry +list and becomes a pressure for system memory. But I'd say it is similar +to files under /proc or /sys. The virtual files in them also holds a +memory page (generally) while they are opened. When an idea to reduce +memory for them is introduced, it will be applied to aufs too. +For those who really hate this situation, I've developed readdir(3) +library which operates this merging in userspace. You just need to set +LD_PRELOAD environment variable, and aufs will not consume no memory in +kernel space for readdir(3). + + +Workqueue +---------------------------------------------------------------------- +Aufs sometimes requires privilege access to a branch. For instance, +in copy-up/down operation. When a user process is going to make changes +to a file which exists in the lower readonly branch only, and the mode +of one of ancestor directories may not be writable by a user +process. Here aufs copy-up the file with its ancestors and they may +require privilege to set its owner/group/mode/etc. +This is a typical case of a application character of aufs (see +Introduction). + +Aufs uses workqueue synchronously for this case. It creates its own +workqueue. The workqueue is a kernel thread and has privilege. Aufs +passes the request to call mkdir or write (for example), and wait for +its completion. This approach solves a problem of a signal handler +simply. +If aufs didn't adopt the workqueue and changed the privilege of the +process, then the process may receive the unexpected SIGXFSZ or other +signals. + +Also aufs uses the system global workqueue ("events" kernel thread) too +for asynchronous tasks, such like handling inotify/fsnotify, re-creating a +whiteout base and etc. This is unrelated to a privilege. +Most of aufs operation tries acquiring a rw_semaphore for aufs +superblock at the beginning, at the same time waits for the completion +of all queued asynchronous tasks. + + +Whiteout +---------------------------------------------------------------------- +The whiteout in aufs is very similar to Unionfs's. That is represented +by its filename. UnionMount takes an approach of a file mode, but I am +afraid several utilities (find(1) or something) will have to support it. + +Basically the whiteout represents "logical deletion" which stops aufs to +lookup further, but also it represents "dir is opaque" which also stop +further lookup. + +In aufs, rmdir(2) and rename(2) for dir uses whiteout alternatively. +In order to make several functions in a single systemcall to be +revertible, aufs adopts an approach to rename a directory to a temporary +unique whiteouted name. +For example, in rename(2) dir where the target dir already existed, aufs +renames the target dir to a temporary unique whiteouted name before the +actual rename on a branch, and then handles other actions (make it opaque, +update the attributes, etc). 
If an error happens in these actions, aufs +simply renames the whiteouted name back and returns an error. If all are +succeeded, aufs registers a function to remove the whiteouted unique +temporary name completely and asynchronously to the system global +workqueue. + + +Copy-up +---------------------------------------------------------------------- +It is a well-known feature or concept. +When user modifies a file on a readonly branch, aufs operate "copy-up" +internally and makes change to the new file on the upper writable branch. +When the trigger systemcall does not update the timestamps of the parent +dir, aufs reverts it after copy-up. + + +Move-down (aufs3.9 and later) +---------------------------------------------------------------------- +"Copy-up" is one of the essential feature in aufs. It copies a file from +the lower readonly branch to the upper writable branch when a user +changes something about the file. +"Move-down" is an opposite action of copy-up. Basically this action is +ran manually instead of automatically and internally. +For desgin and implementation, aufs has to consider these issues. +- whiteout for the file may exist on the lower branch. +- ancestor directories may not exist on the lower branch. +- diropq for the ancestor directories may exist on the upper branch. +- free space on the lower branch will reduce. +- another access to the file may happen during moving-down, including + UDBA (see "Revalidate Dentry and UDBA"). +- the file should not be hard-linked nor pseudo-linked. they should be + handled by auplink utility later. + +Sometimes users want to move-down a file from the upper writable branch +to the lower readonly or writable branch. For instance, +- the free space of the upper writable branch is going to run out. +- create a new intermediate branch between the upper and lower branch. +- etc. + +For this purpose, use "aumvdown" command in aufs-util.git. diff --git b/Documentation/filesystems/aufs/design/03atomic_open.txt b/Documentation/filesystems/aufs/design/03atomic_open.txt new file mode 100644 index 0000000..5f0aca4 --- /dev/null +++ b/Documentation/filesystems/aufs/design/03atomic_open.txt @@ -0,0 +1,85 @@ + +# Copyright (C) 2015-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Support for a branch who has its ->atomic_open() +---------------------------------------------------------------------- +The filesystems who implement its ->atomic_open() are not majority. For +example NFSv4 does, and aufs should call NFSv4 ->atomic_open, +particularly for open(O_CREAT|O_EXCL, 0400) case. Other than +->atomic_open(), NFSv4 returns an error for this open(2). While I am not +sure whether all filesystems who have ->atomic_open() behave like this, +but NFSv4 surely returns the error. + +In order to support ->atomic_open() for aufs, there are a few +approaches. + +A. Introduce aufs_atomic_open() + - calls one of VFS:do_last(), lookup_open() or atomic_open() for + branch fs. +B. 
Introduce aufs_atomic_open() calling create, open and chmod. this is + an aufs user Pip Cet's approach + - calls aufs_create(), VFS finish_open() and notify_change(). + - pass fake-mode to finish_open(), and then correct the mode by + notify_change(). +C. Extend aufs_open() to call branch fs's ->atomic_open() + - no aufs_atomic_open(). + - aufs_lookup() registers the TID to an aufs internal object. + - aufs_create() does nothing when the matching TID is registered, but + registers the mode. + - aufs_open() calls branch fs's ->atomic_open() when the matching + TID is registered. +D. Extend aufs_open() to re-try branch fs's ->open() with superuser's + credential + - no aufs_atomic_open(). + - aufs_create() registers the TID to an internal object. this info + represents "this process created this file just now." + - when aufs gets EACCES from branch fs's ->open(), then confirm the + registered TID and re-try open() with superuser's credential. + +Pros and cons for each approach. + +A. + - straightforward but highly depends upon VFS internal. + - the atomic behavaiour is kept. + - some of parameters such as nameidata are hard to reproduce for + branch fs. + - large overhead. +B. + - easy to implement. + - the atomic behavaiour is lost. +C. + - the atomic behavaiour is kept. + - dirty and tricky. + - VFS checks whether the file is created correctly after calling + ->create(), which means this approach doesn't work. +D. + - easy to implement. + - the atomic behavaiour is lost. + - to open a file with superuser's credential and give it to a user + process is a bad idea, since the file object keeps the credential + in it. It may affect LSM or something. This approach doesn't work + either. + +The approach A is ideal, but it hard to implement. So here is a +variation of A, which is to be implemented. + +A-1. Introduce aufs_atomic_open() + - calls branch fs ->atomic_open() if exists. otherwise calls + vfs_create() and finish_open(). + - the demerit is that the several checks after branch fs + ->atomic_open() are lost. in the ordinary case, the checks are + done by VFS:do_last(), lookup_open() and atomic_open(). some can + be implemented in aufs, but not all I am afraid. diff --git b/Documentation/filesystems/aufs/design/03lookup.txt b/Documentation/filesystems/aufs/design/03lookup.txt new file mode 100644 index 0000000..8b8ac6e --- /dev/null +++ b/Documentation/filesystems/aufs/design/03lookup.txt @@ -0,0 +1,113 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Lookup in a Branch +---------------------------------------------------------------------- +Since aufs has a character of sub-VFS (see Introduction), it operates +lookup for branches as VFS does. It may be a heavy work. But almost all +lookup operation in aufs is the simplest case, ie. lookup only an entry +directly connected to its parent. Digging down the directory hierarchy +is unnecessary. 
VFS has a function lookup_one_len() for that use, and +aufs calls it. + +When a branch is a remote filesystem, aufs basically relies upon its +->d_revalidate(), also aufs forces the hardest revalidate tests for +them. +For d_revalidate, aufs implements three levels of revalidate tests. See +"Revalidate Dentry and UDBA" in detail. + + +Test Only the Highest One for the Directory Permission (dirperm1 option) +---------------------------------------------------------------------- +Let's try case study. +- aufs has two branches, upper readwrite and lower readonly. + /au = /rw + /ro +- "dirA" exists under /ro, but /rw. and its mode is 0700. +- user invoked "chmod a+rx /au/dirA" +- the internal copy-up is activated and "/rw/dirA" is created and its + permission bits are set to world readable. +- then "/au/dirA" becomes world readable? + +In this case, /ro/dirA is still 0700 since it exists in readonly branch, +or it may be a natively readonly filesystem. If aufs respects the lower +branch, it should not respond readdir request from other users. But user +allowed it by chmod. Should really aufs rejects showing the entries +under /ro/dirA? + +To be honest, I don't have a good solution for this case. So aufs +implements 'dirperm1' and 'nodirperm1' mount options, and leave it to +users. +When dirperm1 is specified, aufs checks only the highest one for the +directory permission, and shows the entries. Otherwise, as usual, checks +every dir existing on all branches and rejects the request. + +As a side effect, dirperm1 option improves the performance of aufs +because the number of permission check is reduced when the number of +branch is many. + + +Revalidate Dentry and UDBA (User's Direct Branch Access) +---------------------------------------------------------------------- +Generally VFS helpers re-validate a dentry as a part of lookup. +0. digging down the directory hierarchy. +1. lock the parent dir by its i_mutex. +2. lookup the final (child) entry. +3. revalidate it. +4. call the actual operation (create, unlink, etc.) +5. unlock the parent dir + +If the filesystem implements its ->d_revalidate() (step 3), then it is +called. Actually aufs implements it and checks the dentry on a branch is +still valid. +But it is not enough. Because aufs has to release the lock for the +parent dir on a branch at the end of ->lookup() (step 2) and +->d_revalidate() (step 3) while the i_mutex of the aufs dir is still +held by VFS. +If the file on a branch is changed directly, eg. bypassing aufs, after +aufs released the lock, then the subsequent operation may cause +something unpleasant result. + +This situation is a result of VFS architecture, ->lookup() and +->d_revalidate() is separated. But I never say it is wrong. It is a good +design from VFS's point of view. It is just not suitable for sub-VFS +character in aufs. + +Aufs supports such case by three level of revalidation which is +selectable by user. +1. Simple Revalidate + Addition to the native flow in VFS's, confirm the child-parent + relationship on the branch just after locking the parent dir on the + branch in the "actual operation" (step 4). When this validation + fails, aufs returns EBUSY. ->d_revalidate() (step 3) in aufs still + checks the validation of the dentry on branches. +2. Monitor Changes Internally by Inotify/Fsnotify + Addition to above, in the "actual operation" (step 4) aufs re-lookup + the dentry on the branch, and returns EBUSY if it finds different + dentry. 
+ Additionally, aufs sets the inotify/fsnotify watch for every dir on branches + during it is in cache. When the event is notified, aufs registers a + function to kernel 'events' thread by schedule_work(). And the + function sets some special status to the cached aufs dentry and inode + private data. If they are not cached, then aufs has nothing to + do. When the same file is accessed through aufs (step 0-3) later, + aufs will detect the status and refresh all necessary data. + In this mode, aufs has to ignore the event which is fired by aufs + itself. +3. No Extra Validation + This is the simplest test and doesn't add any additional revalidation + test, and skip the revalidation in step 4. It is useful and improves + aufs performance when system surely hide the aufs branches from user, + by over-mounting something (or another method). diff --git b/Documentation/filesystems/aufs/design/04branch.txt b/Documentation/filesystems/aufs/design/04branch.txt new file mode 100644 index 0000000..5604ff8 --- /dev/null +++ b/Documentation/filesystems/aufs/design/04branch.txt @@ -0,0 +1,74 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Branch Manipulation + +Since aufs supports dynamic branch manipulation, ie. add/remove a branch +and changing its permission/attribute, there are a lot of works to do. + + +Add a Branch +---------------------------------------------------------------------- +o Confirm the adding dir exists outside of aufs, including loopback + mount, and its various attributes. +o Initialize the xino file and whiteout bases if necessary. + See struct.txt. + +o Check the owner/group/mode of the directory + When the owner/group/mode of the adding directory differs from the + existing branch, aufs issues a warning because it may impose a + security risk. + For example, when a upper writable branch has a world writable empty + top directory, a malicious user can create any files on the writable + branch directly, like copy-up and modify manually. If something like + /etc/{passwd,shadow} exists on the lower readonly branch but the upper + writable branch, and the writable branch is world-writable, then a + malicious guy may create /etc/passwd on the writable branch directly + and the infected file will be valid in aufs. + I am afraid it can be a security issue, but aufs can do nothing except + producing a warning. + + +Delete a Branch +---------------------------------------------------------------------- +o Confirm the deleting branch is not busy + To be general, there is one merit to adopt "remount" interface to + manipulate branches. It is to discard caches. At deleting a branch, + aufs checks the still cached (and connected) dentries and inodes. If + there are any, then they are all in-use. An inode without its + corresponding dentry can be alive alone (for example, inotify/fsnotify case). 
+ + For the cached one, aufs checks whether the same named entry exists on + other branches. + If the cached one is a directory, because aufs provides a merged view + to users, as long as one dir is left on any branch aufs can show the + dir to users. In this case, the branch can be removed from aufs. + Otherwise aufs rejects deleting the branch. + + If any file on the deleting branch is opened by aufs, then aufs + rejects deleting. + + +Modify the Permission of a Branch +---------------------------------------------------------------------- +o Re-initialize or remove the xino file and whiteout bases if necessary. + See struct.txt. + +o rw --> ro: Confirm the modifying branch is not busy + Aufs rejects the request if any of these conditions are true. + - a file on the branch is mmap-ed. + - a regular file on the branch is opened for write and there is no + same named entry on the upper branch. diff --git b/Documentation/filesystems/aufs/design/05wbr_policy.txt b/Documentation/filesystems/aufs/design/05wbr_policy.txt new file mode 100644 index 0000000..1578469 --- /dev/null +++ b/Documentation/filesystems/aufs/design/05wbr_policy.txt @@ -0,0 +1,64 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Policies to Select One among Multiple Writable Branches +---------------------------------------------------------------------- +When the number of writable branch is more than one, aufs has to decide +the target branch for file creation or copy-up. By default, the highest +writable branch which has the parent (or ancestor) dir of the target +file is chosen (top-down-parent policy). +By user's request, aufs implements some other policies to select the +writable branch, for file creation several policies, round-robin, +most-free-space, and other policies. For copy-up, top-down-parent, +bottom-up-parent, bottom-up and others. + +As expected, the round-robin policy selects the branch in circular. When +you have two writable branches and creates 10 new files, 5 files will be +created for each branch. mkdir(2) systemcall is an exception. When you +create 10 new directories, all will be created on the same branch. +And the most-free-space policy selects the one which has most free +space among the writable branches. The amount of free space will be +checked by aufs internally, and users can specify its time interval. + +The policies for copy-up is more simple, +top-down-parent is equivalent to the same named on in create policy, +bottom-up-parent selects the writable branch where the parent dir +exists and the nearest upper one from the copyup-source, +bottom-up selects the nearest upper writable branch from the +copyup-source, regardless the existence of the parent dir. + +There are some rules or exceptions to apply these policies. 
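As a rough illustration of how the create policies differ, here is a small standalone C model of round-robin and most-free-space selection. It is not aufs code; the branch array, the free-space figures and the helper names are invented for the example, and real aufs samples the free space internally at the user-specified interval described above.

  /*
   * Standalone model of two create policies.  The branch array, the
   * free-space figures and the helper names are invented for this
   * example; they are not aufs interfaces.
   */
  #include <stdio.h>

  struct branch {
      const char *path;
      int writable;
      unsigned long long free_bytes;  /* aufs samples this internally */
  };

  /* round-robin: use the next writable branch after the last one used */
  static int pick_round_robin(const struct branch *br, int nbr, int *last)
  {
      int i, idx;

      for (i = 1; i <= nbr; i++) {
          idx = (*last + i) % nbr;
          if (br[idx].writable) {
              *last = idx;
              return idx;
          }
      }
      return -1;  /* no writable branch at all (not handled in main below) */
  }

  /* most-free-space: use the writable branch with the most free space */
  static int pick_most_free_space(const struct branch *br, int nbr)
  {
      int i, best = -1;

      for (i = 0; i < nbr; i++)
          if (br[i].writable &&
              (best < 0 || br[i].free_bytes > br[best].free_bytes))
              best = i;
      return best;
  }

  int main(void)
  {
      const struct branch br[] = {
          { "/rw0", 1, 4ULL << 30 },
          { "/rw1", 1, 9ULL << 30 },
          { "/ro",  0, 1ULL << 30 },
      };
      int last = -1, i;

      for (i = 0; i < 4; i++)  /* the two writable branches are used in turn */
          printf("rr  -> %s\n", br[pick_round_robin(br, 3, &last)].path);
      printf("mfs -> %s\n", br[pick_most_free_space(br, 3)].path);
      return 0;
  }

Whatever a policy selects, the rules and exceptions listed next can still override the choice.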
+- If there is a readonly branch above the policy-selected branch and + the parent dir is marked as opaque (a variation of whiteout), or the + target (creating) file is whiteout-ed on the upper readonly branch, + then the result of the policy is ignored and the target file will be + created on the nearest upper writable branch than the readonly branch. +- If there is a writable branch above the policy-selected branch and + the parent dir is marked as opaque or the target file is whiteouted + on the branch, then the result of the policy is ignored and the target + file will be created on the highest one among the upper writable + branches who has diropq or whiteout. In case of whiteout, aufs removes + it as usual. +- link(2) and rename(2) systemcalls are exceptions in every policy. + They try selecting the branch where the source exists as possible + since copyup a large file will take long time. If it can't be, + ie. the branch where the source exists is readonly, then they will + follow the copyup policy. +- There is an exception for rename(2) when the target exists. + If the rename target exists, aufs compares the index of the branches + where the source and the target exists and selects the higher + one. If the selected branch is readonly, then aufs follows the + copyup policy. diff --git b/Documentation/filesystems/aufs/design/06dirren.dot b/Documentation/filesystems/aufs/design/06dirren.dot new file mode 100644 index 0000000..2d62bb6 --- /dev/null +++ b/Documentation/filesystems/aufs/design/06dirren.dot @@ -0,0 +1,31 @@ + +// to view this graph, run dot(1) command in GRAPHVIZ. + +digraph G { +node [shape=box]; +whinfo [label="detailed info file\n(lower_brid_root-hinum, h_inum, namelen, old name)"]; + +node [shape=oval]; + +aufs_rename -> whinfo [label="store/remove"]; + +node [shape=oval]; +inode_list [label="h_inum list in branch\ncache"]; + +node [shape=box]; +whinode [label="h_inum list file"]; + +node [shape=oval]; +brmgmt [label="br_add/del/mod/umount"]; + +brmgmt -> inode_list [label="create/remove"]; +brmgmt -> whinode [label="load/store"]; + +inode_list -> whinode [style=dashed,dir=both]; + +aufs_rename -> inode_list [label="add/del"]; + +aufs_lookup -> inode_list [label="search"]; + +aufs_lookup -> whinfo [label="load/remove"]; +} diff --git b/Documentation/filesystems/aufs/design/06dirren.txt b/Documentation/filesystems/aufs/design/06dirren.txt new file mode 100644 index 0000000..3037d77 --- /dev/null +++ b/Documentation/filesystems/aufs/design/06dirren.txt @@ -0,0 +1,102 @@ + +# Copyright (C) 2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Special handling for renaming a directory (DIRREN) +---------------------------------------------------------------------- +First, let's assume we have a simple usecase. 
+ +- /u = /rw + /ro +- /rw/dirA exists +- /ro/dirA and /ro/dirA/file exist too +- there is no dirB on both branches +- a user issues rename("dirA", "dirB") + +Now, what should aufs behave against this rename(2)? +There are a few possible cases. + +A. returns EROFS. + since dirA exists on a readonly branch which cannot be renamed. +B. returns EXDEV. + it is possible to copy-up dirA (only the dir itself), but the child + entries ("file" in this case) should not be. it must be a bad + approach to copy-up recursively. +C. returns a success. + even the branch /ro is readonly, aufs tries renaming it. Obviously it + is a violation of aufs' policy. +D. construct an extra information which indicates that /ro/dirA should + be handled as the name of dirB. + overlayfs has a similar feature called REDIRECT. + +Until now, aufs implements the case B only which returns EXDEV, and +expects the userspace application behaves like mv(1) which tries +issueing rename(2) recursively. + +A new aufs feature called DIRREN is introduced which implements the case +D. There are several "extra information" added. + +1. detailed info per renamed directory + path: /rw/dirB/$AUFS_WH_DR_INFO_PFX. +2. the inode-number list of directories on a branch + path: /rw/dirB/$AUFS_WH_DR_BRHINO + +The filename of "detailed info per directory" represents the lower +branch, and its format is +- a type of the branch id + one of these. + + uuid (not implemented yet) + + fsid + + dev +- the inode-number of the branch root dir + +And it contains these info in a single regular file. +- magic number +- branch's inode-number of the logically renamed dir +- the name of the before-renamed dir + +The "detailed info per directory" file is created in aufs rename(2), and +loaded in any lookup. +The info is considered in lookup for the matching case only. Here +"matching" means that the root of branch (in the info filename) is same +to the current looking-up branch. After looking-up the before-renamed +name, the inode-number is compared. And the matched dentry is used. + +The "inode-number list of directories" is a regular file which contains +simply the inode-numbers on the branch. The file is created or updated +in removing the branch, and loaded in adding the branch. Its lifetime is +equal to the branch. +The list is refered in lookup, and when the current target inode is +found in the list, the aufs tries loading the "detailed info per +directory" and get the changed and valid name of the dir. + +Theoretically these "extra informaiton" may be able to be put into XATTR +in the dir inode. But aufs doesn't choose this way because +1. XATTR may not be supported by the branch (or its configuration) +2. XATTR may have its size limit. +3. XATTR may be less easy to convert than a regular file, when the + format of the info is changed in the future. +At the same time, I agree that the regular file approach is much slower +than XATTR approach. So, in the future, aufs may take the XATTR or other +better approach. + +This DIRREN feature is enabled by aufs configuration, and is activated +by a new mount option. + +For the more complicated case, there is a work with UDBA option, which +is to dected the direct access to the branches (by-passing aufs) and to +maintain the cashes in aufs. Since a single cached aufs dentry may +contains two names, before- and after-rename, the name comparision in +UDBA handler may not work correctly. In this case, the behaviour will be +equivalen to udba=reval case. 
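As an illustration only, the "detailed info per renamed directory" record described above could be pictured as the following struct. The exact on-disk layout is not spelled out in this document, so the magic value, field widths and ordering below are assumptions for the example, not the real $AUFS_WH_DR_INFO_PFX format.

  /*
   * Hypothetical layout for the per-directory "detailed info" record:
   * it only mirrors the fields listed above (magic, branch inode number
   * of the renamed dir, length and name of the before-rename dir).
   * Do not use it to interpret real aufs files.
   */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define DR_INFO_MAGIC 0x44524902u   /* invented value */

  struct dirren_info {
      uint32_t magic;        /* identifies the record */
      uint64_t h_ino;        /* branch inode number of the renamed dir */
      uint16_t namelen;      /* length of the old (before-rename) name */
      char     oldname[256]; /* the name the dir had before rename(2) */
  };

  int main(void)
  {
      struct dirren_info info = { .magic = DR_INFO_MAGIC, .h_ino = 12345 };

      strcpy(info.oldname, "dirA");
      info.namelen = (uint16_t)strlen(info.oldname);
      printf("magic=%#x h_ino=%llu old=%s\n", (unsigned)info.magic,
             (unsigned long long)info.h_ino, info.oldname);
      return 0;
  }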
diff --git b/Documentation/filesystems/aufs/design/06fhsm.txt b/Documentation/filesystems/aufs/design/06fhsm.txt new file mode 100644 index 0000000..9216478 --- /dev/null +++ b/Documentation/filesystems/aufs/design/06fhsm.txt @@ -0,0 +1,120 @@ + +# Copyright (C) 2011-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + + +File-based Hierarchical Storage Management (FHSM) +---------------------------------------------------------------------- +Hierarchical Storage Management (or HSM) is a well-known feature in the +storage world. Aufs provides this feature as file-based with multiple +writable branches, based upon the principle of "Colder, the Lower". +Here the word "colder" means that the less used files, and "lower" means +that the position in the order of the stacked branches vertically. +These multiple writable branches are prioritized, ie. the topmost one +should be the fastest drive and be used heavily. + +o Characters in aufs FHSM story +- aufs itself and a new branch attribute. +- a new ioctl interface to move-down and to establish a connection with + the daemon ("move-down" is a converse of "copy-up"). +- userspace tool and daemon. + +The userspace daemon establishes a connection with aufs and waits for +the notification. The notified information is very similar to struct +statfs containing the number of consumed blocks and inodes. +When the consumed blocks/inodes of a branch exceeds the user-specified +upper watermark, the daemon activates its move-down process until the +consumed blocks/inodes reaches the user-specified lower watermark. + +The actual move-down is done by aufs based upon the request from +user-space since we need to maintain the inode number and the internal +pointer arrays in aufs. + +Currently aufs FHSM handles the regular files only. Additionally they +must not be hard-linked nor pseudo-linked. + + +o Cowork of aufs and the user-space daemon + During the userspace daemon established the connection, aufs sends a + small notification to it whenever aufs writes something into the + writable branch. But it may cost high since aufs issues statfs(2) + internally. So user can specify a new option to cache the + info. Actually the notification is controlled by these factors. + + the specified cache time. + + classified as "force" by aufs internally. + Until the specified time expires, aufs doesn't send the info + except the forced cases. When aufs decide forcing, the info is always + notified to userspace. + For example, the number of free inodes is generally large enough and + the shortage of it happens rarely. So aufs doesn't force the + notification when creating a new file, directory and others. This is + the typical case which aufs doesn't force. + When aufs writes the actual filedata and the files consumes any of new + blocks, the aufs forces notifying. 
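The watermark rule applied by the daemon can be modelled in a few lines of plain C. This is only a sketch of the logic described above: the numbers, the function name and the blocks-only accounting are invented (the real daemon also watches inodes, and reads a statfs-like notification from the file descriptor it obtains from aufs).

  /*
   * Minimal model of the FHSM watermark rule: start moving files down
   * when consumption exceeds the upper watermark, and keep moving until
   * it drops below the lower one.
   */
  #include <stdbool.h>
  #include <stdio.h>

  struct br_usage {
      unsigned long long blocks, bfree;   /* like struct statfs */
  };

  /* returns the new "moving" state, given the previous one */
  static bool fhsm_should_move(const struct br_usage *u, bool moving,
                               double upper, double lower)
  {
      double used = 1.0 - (double)u->bfree / (double)u->blocks;

      if (!moving)
          return used > upper;   /* start when above the upper watermark */
      return used > lower;       /* keep going until below the lower one */
  }

  int main(void)
  {
      struct br_usage u = { .blocks = 1000, .bfree = 50 };  /* 95% used */
      bool moving = false;

      moving = fhsm_should_move(&u, moving, 0.90, 0.70);
      printf("moving=%d\n", moving);   /* 1: above the upper watermark */

      u.bfree = 350;                                        /* 65% used */
      moving = fhsm_should_move(&u, moving, 0.90, 0.70);
      printf("moving=%d\n", moving);   /* 0: below the lower watermark */
      return 0;
  }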
+ + +o Interfaces in aufs +- New branch attribute. + + fhsm + Specifies that the branch is managed by FHSM feature. In other word, + participant in the FHSM. + When nofhsm is set to the branch, it will not be the source/target + branch of the move-down operation. This attribute is set + independently from coo and moo attributes, and if you want full + FHSM, you should specify them as well. +- New mount option. + + fhsm_sec + Specifies a second to suppress many less important info to be + notified. +- New ioctl. + + AUFS_CTL_FHSM_FD + create a new file descriptor which userspace can read the notification + (a subset of struct statfs) from aufs. +- Module parameter 'brs' + It has to be set to 1. Otherwise the new mount option 'fhsm' will not + be set. +- mount helpers /sbin/mount.aufs and /sbin/umount.aufs + When there are two or more branches with fhsm attributes, + /sbin/mount.aufs invokes the user-space daemon and /sbin/umount.aufs + terminates it. As a result of remounting and branch-manipulation, the + number of branches with fhsm attribute can be one. In this case, + /sbin/mount.aufs will terminate the user-space daemon. + + +Finally the operation is done as these steps in kernel-space. +- make sure that, + + no one else is using the file. + + the file is not hard-linked. + + the file is not pseudo-linked. + + the file is a regular file. + + the parent dir is not opaqued. +- find the target writable branch. +- make sure the file is not whiteout-ed by the upper (than the target) + branch. +- make the parent dir on the target branch. +- mutex lock the inode on the branch. +- unlink the whiteout on the target branch (if exists). +- lookup and create the whiteout-ed temporary name on the target branch. +- copy the file as the whiteout-ed temporary name on the target branch. +- rename the whiteout-ed temporary name to the original name. +- unlink the file on the source branch. +- maintain the internal pointer array and the external inode number + table (XINO). +- maintain the timestamps and other attributes of the parent dir and the + file. + +And of course, in every step, an error may happen. So the operation +should restore the original file state after an error happens. diff --git b/Documentation/filesystems/aufs/design/06mmap.txt b/Documentation/filesystems/aufs/design/06mmap.txt new file mode 100644 index 0000000..8fe4b6c --- /dev/null +++ b/Documentation/filesystems/aufs/design/06mmap.txt @@ -0,0 +1,72 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +mmap(2) -- File Memory Mapping +---------------------------------------------------------------------- +In aufs, the file-mapped pages are handled by a branch fs directly, no +interaction with aufs. It means aufs_mmap() calls the branch fs's +->mmap(). +This approach is simple and good, but there is one problem. 
+Under /proc, several entries show the mmapped files by its path (with +device and inode number), and the printed path will be the path on the +branch fs's instead of virtual aufs's. +This is not a problem in most cases, but some utilities lsof(1) (and its +user) may expect the path on aufs. + +To address this issue, aufs adds a new member called vm_prfile in struct +vm_area_struct (and struct vm_region). The original vm_file points to +the file on the branch fs in order to handle everything correctly as +usual. The new vm_prfile points to a virtual file in aufs, and the +show-functions in procfs refers to vm_prfile if it is set. +Also we need to maintain several other places where touching vm_file +such like +- fork()/clone() copies vma and the reference count of vm_file is + incremented. +- merging vma maintains the ref count too. + +This is not a good approach. It just fakes the printed path. But it +leaves all behaviour around f_mapping unchanged. This is surely an +advantage. +Actually aufs had adopted another complicated approach which calls +generic_file_mmap() and handles struct vm_operations_struct. In this +approach, aufs met a hard problem and I could not solve it without +switching the approach. + +There may be one more another approach which is +- bind-mount the branch-root onto the aufs-root internally +- grab the new vfsmount (ie. struct mount) +- lazy-umount the branch-root internally +- in open(2) the aufs-file, open the branch-file with the hidden + vfsmount (instead of the original branch's vfsmount) +- ideally this "bind-mount and lazy-umount" should be done atomically, + but it may be possible from userspace by the mount helper. + +Adding the internal hidden vfsmount and using it in opening a file, the +file path under /proc will be printed correctly. This approach looks +smarter, but is not possible I am afraid. +- aufs-root may be bind-mount later. when it happens, another hidden + vfsmount will be required. +- it is hard to get the chance to bind-mount and lazy-umount + + in kernel-space, FS can have vfsmount in open(2) via + file->f_path, and aufs can know its vfsmount. But several locks are + already acquired, and if aufs tries to bind-mount and lazy-umount + here, then it may cause a deadlock. + + in user-space, bind-mount doesn't invoke the mount helper. +- since /proc shows dev and ino, aufs has to give vma these info. it + means a new member vm_prinode will be necessary. this is essentially + equivalent to vm_prfile described above. + +I have to give up this "looks-smater" approach. diff --git b/Documentation/filesystems/aufs/design/06xattr.txt b/Documentation/filesystems/aufs/design/06xattr.txt new file mode 100644 index 0000000..37cdb4e --- /dev/null +++ b/Documentation/filesystems/aufs/design/06xattr.txt @@ -0,0 +1,96 @@ + +# Copyright (C) 2014-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. 
+# +# You should have received a copy of the GNU General Public License +# along with this program; if not, write to the Free Software +# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + + +Listing XATTR/EA and getting the value +---------------------------------------------------------------------- +For the inode standard attributes (owner, group, timestamps, etc.), aufs +shows the values from the topmost existing file. This behaviour is good +for the non-dir entries since the bahaviour exactly matches the shown +information. But for the directories, aufs considers all the same named +entries on the lower branches. Which means, if one of the lower entry +rejects readdir call, then aufs returns an error even if the topmost +entry allows it. This behaviour is necessary to respect the branch fs's +security, but can make users confused since the user-visible standard +attributes don't match the behaviour. +To address this issue, aufs has a mount option called dirperm1 which +checks the permission for the topmost entry only, and ignores the lower +entry's permission. + +A similar issue can happen around XATTR. +getxattr(2) and listxattr(2) families behave as if dirperm1 option is +always set. Otherwise these very unpleasant situation would happen. +- listxattr(2) may return the duplicated entries. +- users may not be able to remove or reset the XATTR forever, + + +XATTR/EA support in the internal (copy,move)-(up,down) +---------------------------------------------------------------------- +Generally the extended attributes of inode are categorized as these. +- "security" for LSM and capability. +- "system" for posix ACL, 'acl' mount option is required for the branch + fs generally. +- "trusted" for userspace, CAP_SYS_ADMIN is required. +- "user" for userspace, 'user_xattr' mount option is required for the + branch fs generally. + +Moreover there are some other categories. Aufs handles these rather +unpopular categories as the ordinary ones, ie. there is no special +condition nor exception. + +In copy-up, the support for XATTR on the dst branch may differ from the +src branch. In this case, the copy-up operation will get an error and +the original user operation which triggered the copy-up will fail. It +can happen that even all copy-up will fail. +When both of src and dst branches support XATTR and if an error occurs +during copying XATTR, then the copy-up should fail obviously. That is a +good reason and aufs should return an error to userspace. But when only +the src branch support that XATTR, aufs should not return an error. +For example, the src branch supports ACL but the dst branch doesn't +because the dst branch may natively un-support it or temporary +un-support it due to "noacl" mount option. Of course, the dst branch fs +may NOT return an error even if the XATTR is not supported. It is +totally up to the branch fs. + +Anyway when the aufs internal copy-up gets an error from the dst branch +fs, then aufs tries removing the just copied entry and returns the error +to the userspace. The worst case of this situation will be all copy-up +will fail. + +For the copy-up operation, there two basic approaches. +- copy the specified XATTR only (by category above), and return the + error unconditionally if it happens. +- copy all XATTR, and ignore the error on the specified category only. 
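The difference between the two approaches can be sketched with ordinary libc xattr calls. This is userspace illustration only, not aufs code; the prefix list and the buffer sizes are arbitrary, and the kernel-side implementation works on inodes rather than paths.

  /*
   * Copy every XATTR from src to dst; fail on the first error unless
   * the attribute name matches one of the "ignored" prefixes, which is
   * roughly what the per-category error-ignoring described next does.
   */
  #include <stdio.h>
  #include <string.h>
  #include <sys/xattr.h>

  static int ignored(const char *name, const char *const *prefixes)
  {
      for (; *prefixes; prefixes++)
          if (!strncmp(name, *prefixes, strlen(*prefixes)))
              return 1;
      return 0;
  }

  static int copy_xattrs(const char *src, const char *dst,
                         const char *const *ignore_prefixes)
  {
      char names[4096], value[4096];
      ssize_t len, vlen, off;

      len = listxattr(src, names, sizeof(names));
      if (len < 0)
          return -1;

      for (off = 0; off < len; off += strlen(names + off) + 1) {
          const char *name = names + off;

          vlen = getxattr(src, name, value, sizeof(value));
          if (vlen < 0 || setxattr(dst, name, value, vlen, 0) < 0) {
              if (ignored(name, ignore_prefixes))
                  continue;   /* e.g. the dst branch has no ACL support */
              return -1;      /* otherwise the whole copy-up fails */
          }
      }
      return 0;
  }

  int main(int argc, char **argv)
  {
      static const char *const ignore[] = { "system.", NULL };

      if (argc != 3)
          return 2;
      return copy_xattrs(argv[1], argv[2], ignore) ? 1 : 0;
  }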
+ +In order to support XATTR and to implement the correct behaviour, aufs +chooses the latter approach and introduces some new branch attributes, +"icexsec", "icexsys", "icextr", "icexusr", and "icexoth". +They correspond to the XATTR namespaces (see above). Additionally, to be +convenient, "icex" is also provided which means all "icex*" attributes +are set (here the word "icex" stands for "ignore copy-error on XATTR"). + +The meaning of these attributes is to ignore the error from setting +XATTR on that branch. +Note that aufs tries copying all XATTR unconditionally, and ignores the +error from the dst branch according to the specified attributes. + +Some XATTR may have its default value. The default value may come from +the parent dir or the environment. If the default value is set at the +file creating-time, it will be overwritten by copy-up. +Some contradiction may happen I am afraid. +Do we need another attribute to stop copying XATTR? I am unsure. For +now, aufs implements the branch attributes to ignore the error. diff --git b/Documentation/filesystems/aufs/design/07export.txt b/Documentation/filesystems/aufs/design/07export.txt new file mode 100644 index 0000000..cd4ee6b --- /dev/null +++ b/Documentation/filesystems/aufs/design/07export.txt @@ -0,0 +1,58 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Export Aufs via NFS +---------------------------------------------------------------------- +Here is an approach. +- like xino/xib, add a new file 'xigen' which stores aufs inode + generation. +- iget_locked(): initialize aufs inode generation for a new inode, and + store it in xigen file. +- destroy_inode(): increment aufs inode generation and store it in xigen + file. it is necessary even if it is not unlinked, because any data of + inode may be changed by UDBA. +- encode_fh(): for a root dir, simply return FILEID_ROOT. otherwise + build file handle by + + branch id (4 bytes) + + superblock generation (4 bytes) + + inode number (4 or 8 bytes) + + parent dir inode number (4 or 8 bytes) + + inode generation (4 bytes)) + + return value of exportfs_encode_fh() for the parent on a branch (4 + bytes) + + file handle for a branch (by exportfs_encode_fh()) +- fh_to_dentry(): + + find the index of a branch from its id in handle, and check it is + still exist in aufs. + + 1st level: get the inode number from handle and search it in cache. + + 2nd level: if not found in cache, get the parent inode number from + the handle and search it in cache. and then open the found parent + dir, find the matching inode number by vfs_readdir() and get its + name, and call lookup_one_len() for the target dentry. + + 3rd level: if the parent dir is not cached, call + exportfs_decode_fh() for a branch and get the parent on a branch, + build a pathname of it, convert it a pathname in aufs, call + path_lookup(). now aufs gets a parent dir dentry, then handle it as + the 2nd level. 
+ + to open the dir, aufs needs struct vfsmount. aufs keeps vfsmount + for every branch, but not itself. to get this, (currently) aufs + searches in current->nsproxy->mnt_ns list. it may not be a good + idea, but I didn't get other approach. + + test the generation of the gotten inode. +- every inode operation: they may get EBUSY due to UDBA. in this case, + convert it into ESTALE for NFSD. +- readdir(): call lockdep_on/off() because filldir in NFSD calls + lookup_one_len(), vfs_getattr(), encode_fh() and others. diff --git b/Documentation/filesystems/aufs/design/08shwh.txt b/Documentation/filesystems/aufs/design/08shwh.txt new file mode 100644 index 0000000..7e07e26 --- /dev/null +++ b/Documentation/filesystems/aufs/design/08shwh.txt @@ -0,0 +1,52 @@ + +# Copyright (C) 2005-2017 Junjiro R. Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Show Whiteout Mode (shwh) +---------------------------------------------------------------------- +Generally aufs hides the name of whiteouts. But in some cases, to show +them is very useful for users. For instance, creating a new middle layer +(branch) by merging existing layers. + +(borrowing aufs1 HOW-TO from a user, Michael Towers) +When you have three branches, +- Bottom: 'system', squashfs (underlying base system), read-only +- Middle: 'mods', squashfs, read-only +- Top: 'overlay', ram (tmpfs), read-write + +The top layer is loaded at boot time and saved at shutdown, to preserve +the changes made to the system during the session. +When larger changes have been made, or smaller changes have accumulated, +the size of the saved top layer data grows. At this point, it would be +nice to be able to merge the two overlay branches ('mods' and 'overlay') +and rewrite the 'mods' squashfs, clearing the top layer and thus +restoring save and load speed. + +This merging is simplified by the use of another aufs mount, of just the +two overlay branches using the 'shwh' option. +# mount -t aufs -o ro,shwh,br:/livesys/overlay=ro+wh:/livesys/mods=rr+wh \ + aufs /livesys/merge_union + +A merged view of these two branches is then available at +/livesys/merge_union, and the new feature is that the whiteouts are +visible! +Note that in 'shwh' mode the aufs mount must be 'ro', which will disable +writing to all branches. Also the default mode for all branches is 'ro'. +It is now possible to save the combined contents of the two overlay +branches to a new squashfs, e.g.: +# mksquashfs /livesys/merge_union /path/to/newmods.squash + +This new squashfs archive can be stored on the boot device and the +initramfs will use it to replace the old one at the next boot. diff --git b/Documentation/filesystems/aufs/design/10dynop.txt b/Documentation/filesystems/aufs/design/10dynop.txt new file mode 100644 index 0000000..b7ba75d --- /dev/null +++ b/Documentation/filesystems/aufs/design/10dynop.txt @@ -0,0 +1,47 @@ + +# Copyright (C) 2010-2017 Junjiro R. 
Okajima +# +# This program is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of the License, or +# (at your option) any later version. +# +# This program is distributed in the hope that it will be useful, +# but WITHOUT ANY WARRANTY; without even the implied warranty of +# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +# GNU General Public License for more details. +# +# You should have received a copy of the GNU General Public License +# along with this program. If not, see . + +Dynamically customizable FS operations +---------------------------------------------------------------------- +Generally FS operations (struct inode_operations, struct +address_space_operations, struct file_operations, etc.) are defined as +"static const", but it never means that FS have only one set of +operation. Some FS have multiple sets of them. For instance, ext2 has +three sets, one for XIP, for NOBH, and for normal. +Since aufs overrides and redirects these operations, sometimes aufs has +to change its behaviour according to the branch FS type. More importantly +VFS acts differently if a function (member in the struct) is set or +not. It means aufs should have several sets of operations and select one +among them according to the branch FS definition. + +In order to solve this problem and not to affect the behaviour of VFS, +aufs defines these operations dynamically. For instance, aufs defines +dummy direct_IO function for struct address_space_operations, but it may +not be set to the address_space_operations actually. When the branch FS +doesn't have it, aufs doesn't set it to its address_space_operations +while the function definition itself is still alive. So the behaviour +itself will not change, and it will return an error when direct_IO is +not set. + +The lifetime of these dynamically generated operation object is +maintained by aufs branch object. When the branch is removed from aufs, +the reference counter of the object is decremented. When it reaches +zero, the dynamically generated operation object will be freed. + +This approach is designed to support AIO (io_submit), Direct I/O and +XIP (DAX) mainly. +Currently this approach is applied to address_space_operations for +regular files only. diff --git b/Documentation/power/tuxonice-internals.txt b/Documentation/power/tuxonice-internals.txt new file mode 100644 index 0000000..0c6a216 --- /dev/null +++ b/Documentation/power/tuxonice-internals.txt @@ -0,0 +1,532 @@ + TuxOnIce 4.0 Internal Documentation. + Updated to 23 March 2015 + +(Please note that incremental image support mentioned in this document is work +in progress. This document may need updating prior to the actual release of +4.0!) + +1. Introduction. + + TuxOnIce 4.0 is an addition to the Linux Kernel, designed to + allow the user to quickly shutdown and quickly boot a computer, without + needing to close documents or programs. It is equivalent to the + hibernate facility in some laptops. This implementation, however, + requires no special BIOS or hardware support. + + The code in these files is based upon the original implementation + prepared by Gabor Kuti and additional work by Pavel Machek and a + host of others. This code has been substantially reworked by Nigel + Cunningham, again with the help and testing of many others, not the + least of whom are Bernard Blackham and Michael Frank. 
At its heart, + however, the operation is essentially the same as Gabor's version. + +2. Overview of operation. + + The basic sequence of operations is as follows: + + a. Quiesce all other activity. + b. Ensure enough memory and storage space are available, and attempt + to free memory/storage if necessary. + c. Allocate the required memory and storage space. + d. Write the image. + e. Power down. + + There are a number of complicating factors which mean that things are + not as simple as the above would imply, however... + + o The activity of each process must be stopped at a point where it will + not be holding locks necessary for saving the image, or unexpectedly + restart operations due to something like a timeout and thereby make + our image inconsistent. + + o It is desirous that we sync outstanding I/O to disk before calculating + image statistics. This reduces corruption if one should suspend but + then not resume, and also makes later parts of the operation safer (see + below). + + o We need to get as close as we can to an atomic copy of the data. + Inconsistencies in the image will result in inconsistent memory contents at + resume time, and thus in instability of the system and/or file system + corruption. This would appear to imply a maximum image size of one half of + the amount of RAM, but we have a solution... (again, below). + + o In 2.6 and later, we choose to play nicely with the other suspend-to-disk + implementations. + +3. Detailed description of internals. + + a. Quiescing activity. + + Safely quiescing the system is achieved using three separate but related + aspects. + + First, we use the vanilla kerne's support for freezing processes. This code + is based on the observation that the vast majority of processes don't need + to run during suspend. They can be 'frozen'. The kernel therefore + implements a refrigerator routine, which processes enter and in which they + remain until the cycle is complete. Processes enter the refrigerator via + try_to_freeze() invocations at appropriate places. A process cannot be + frozen in any old place. It must not be holding locks that will be needed + for writing the image or freezing other processes. For this reason, + userspace processes generally enter the refrigerator via the signal + handling code, and kernel threads at the place in their event loops where + they drop locks and yield to other processes or sleep. The task of freezing + processes is complicated by the fact that there can be interdependencies + between processes. Freezing process A before process B may mean that + process B cannot be frozen, because it stops at waiting for process A + rather than in the refrigerator. This issue is seen where userspace waits + on freezeable kernel threads or fuse filesystem threads. To address this + issue, we implement the following algorithm for quiescing activity: + + - Freeze filesystems (including fuse - userspace programs starting + new requests are immediately frozen; programs already running + requests complete their work before being frozen in the next + step) + - Freeze userspace + - Thaw filesystems (this is safe now that userspace is frozen and no + fuse requests are outstanding). + - Invoke sys_sync (noop on fuse). + - Freeze filesystems + - Freeze kernel threads + + If we need to free memory, we thaw kernel threads and filesystems, but not + userspace. We can then free caches without worrying about deadlocks due to + swap files being on frozen filesystems or such like. + + b. 
Ensure enough memory & storage are available. + + We have a number of constraints to meet in order to be able to successfully + suspend and resume. + + First, the image will be written in two parts, described below. One of + these parts needs to have an atomic copy made, which of course implies a + maximum size of one half of the amount of system memory. The other part + ('pageset') is not atomically copied, and can therefore be as large or + small as desired. + + Second, we have constraints on the amount of storage available. In these + calculations, we may also consider any compression that will be done. The + cryptoapi module allows the user to configure an expected compression ratio. + + Third, the user can specify an arbitrary limit on the image size, in + megabytes. This limit is treated as a soft limit, so that we don't fail the + attempt to suspend if we cannot meet this constraint. + + c. Allocate the required memory and storage space. + + Having done the initial freeze, we determine whether the above constraints + are met, and seek to allocate the metadata for the image. If the constraints + are not met, or we fail to allocate the required space for the metadata, we + seek to free the amount of memory that we calculate is needed and try again. + We allow up to four iterations of this loop before aborting the cycle. If + we do fail, it should only be because of a bug in TuxOnIce's calculations + or the vanilla kernel code for freeing memory. + + These steps are merged together in the prepare_image function, found in + prepare_image.c. The functions are merged because of the cyclical nature + of the problem of calculating how much memory and storage is needed. Since + the data structures containing the information about the image must + themselves take memory and use storage, the amount of memory and storage + required changes as we prepare the image. Since the changes are not large, + only one or two iterations will be required to achieve a solution. + + The recursive nature of the algorithm is miminised by keeping user space + frozen while preparing the image, and by the fact that our records of which + pages are to be saved and which pageset they are saved in use bitmaps (so + that changes in number or fragmentation of the pages to be saved don't + feedback via changes in the amount of memory needed for metadata). The + recursiveness is thus limited to any extra slab pages allocated to store the + extents that record storage used, and the effects of seeking to free memory. + + d. Write the image. + + We previously mentioned the need to create an atomic copy of the data, and + the half-of-memory limitation that is implied in this. This limitation is + circumvented by dividing the memory to be saved into two parts, called + pagesets. + + Pageset2 contains most of the page cache - the pages on the active and + inactive LRU lists that aren't needed or modified while TuxOnIce is + running, so they can be safely written without an atomic copy. They are + therefore saved first and reloaded last. While saving these pages, + TuxOnIce carefully ensures that the work of writing the pages doesn't make + the image inconsistent. With the support for Kernel (Video) Mode Setting + going into the kernel at the time of writing, we need to check for pages + on the LRU that are used by KMS, and exclude them from pageset2. They are + atomically copied as part of pageset 1. + + Once pageset2 has been saved, we prepare to do the atomic copy of remaining + memory. 
As part of the preparation, we power down drivers, thereby providing + them with the opportunity to have their state recorded in the image. The + amount of memory allocated by drivers for this is usually negligible, but if + DRI is in use, video drivers may require significants amounts. Ideally we + would be able to query drivers while preparing the image as to the amount of + memory they will need. Unfortunately no such mechanism exists at the time of + writing. For this reason, TuxOnIce allows the user to set an + 'extra_pages_allowance', which is used to seek to ensure sufficient memory + is available for drivers at this point. TuxOnIce also lets the user set this + value to 0. In this case, a test driver suspend is done while preparing the + image, and the difference (plus a margin) used instead. TuxOnIce will also + automatically restart the hibernation process (twice at most) if it finds + that the extra pages allowance is not sufficient. It will then use what was + actually needed (plus a margin, again). Failure to hibernate should thus + be an extremely rare occurence. + + Having suspended the drivers, we save the CPU context before making an + atomic copy of pageset1, resuming the drivers and saving the atomic copy. + After saving the two pagesets, we just need to save our metadata before + powering down. + + As we mentioned earlier, the contents of pageset2 pages aren't needed once + they've been saved. We therefore use them as the destination of our atomic + copy. In the unlikely event that pageset1 is larger, extra pages are + allocated while the image is being prepared. This is normally only a real + possibility when the system has just been booted and the page cache is + small. + + This is where we need to be careful about syncing, however. Pageset2 will + probably contain filesystem meta data. If this is overwritten with pageset1 + and then a sync occurs, the filesystem will be corrupted - at least until + resume time and another sync of the restored data. Since there is a + possibility that the user might not resume or (may it never be!) that + TuxOnIce might oops, we do our utmost to avoid syncing filesystems after + copying pageset1. + + e. Incremental images + + TuxOnIce 4.0 introduces a new incremental image mode which changes things a + little. When incremental images are enabled, we save a 'normal' image the + first time we hibernate. One resume however, we do not free the image or + the associated storage. Instead, it is retained until the next attempt at + hibernating and a mechanism is enabled which is used to track which pages + of memory are modified between the two cycles. The modified pages can then + be added to the existing image, rather than unmodified pages being saved + again unnecessarily. + + Incremental image support is available in 64 bit Linux only, due to the + requirement for extra page flags. + + This support is accomplished in the following way: + + 1) Tracking of pages. + + The tracking of changed pages is accomplished using the page fault + mechanism. When we reach a point at which we want to start tracking + changes, most pages are marked read-only and also flagged as being + read-only because of this support. Since this cannot happen for every page + of RAM, some are marked as untracked and always treated as modified whn + preparing an incremental iamge. 
When a process attempts to modify a page + that is marked read-only in this way, a page fault occurs, with TuxOnIce + code marking the page writable and dirty before allowing the write to + continue. In this way, the effect of incremental images on performance is + minimised - a page only causes a fault once. Small modifications to the + page allocator further reduce the number of faults that occur - free pages + are not tracked; they are made writable and marked as dirty as part of + being allocated. + + 2) Saving the incremental image / atomicity. + + The page fault mechanism is also used to improve the means by which + atomicity of the image is acheived. When it is time to do an atomic copy, + the flags for pages are reset, with the result being that it is no longer + necessary for us to do an atomic of pageset1. Instead, we normally write + the uncopied pages to disk. When an attempt is made to modify a page that + has not yet been saved, the page-fault mechanism makes a copy of the page + prior to allowing the write. This copy is then written to disk. Likewise, + on resume, if a process attempts to write to a page that has been read + while the rest of the image is still being loaded, a copy of that page is + made prior to the write being allowed. At the end of loading the image, + modified pages can thus be restored to their 'atomic copy' contents prior + to restarting normal operation. We also mark pages that are yet to be read + as invalid PFNs, so that we can capture as a bug any attempt by a + half-restored kernel to access a page that hasn't yet been reloaded. + + f. Power down. + + Powering down uses standard kernel routines. TuxOnIce supports powering down + using the ACPI S3, S4 and S5 methods or the kernel's non-ACPI power-off. + Supporting suspend to ram (S3) as a power off option might sound strange, + but it allows the user to quickly get their system up and running again if + the battery doesn't run out (we just need to re-read the overwritten pages) + and if the battery does run out (or the user removes power), they can still + resume. + +4. Data Structures. + + TuxOnIce uses three main structures to store its metadata and configuration + information: + + a) Pageflags bitmaps. + + TuxOnIce records which pages will be in pageset1, pageset2, the destination + of the atomic copy and the source of the atomically restored image using + bitmaps. The code used is that written for swsusp, with small improvements + to match TuxOnIce's requirements. + + The pageset1 bitmap is thus easily stored in the image header for use at + resume time. + + As mentioned above, using bitmaps also means that the amount of memory and + storage required for recording the above information is constant. This + greatly simplifies the work of preparing the image. In earlier versions of + TuxOnIce, extents were used to record which pages would be stored. In that + case, however, eating memory could result in greater fragmentation of the + lists of pages, which in turn required more memory to store the extents and + more storage in the image header. These could in turn require further + freeing of memory, and another iteration. All of this complexity is removed + by having bitmaps. + + Bitmaps also make a lot of sense because TuxOnIce only ever iterates + through the lists. There is therefore no cost to not being able to find the + nth page in order 0 time. We only need to worry about the cost of finding + the n+1th page, given the location of the nth page. Bitwise optimisations + help here. 
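A toy model shows why the bitmaps keep the metadata size constant and the iteration cheap. This is not the swsusp/TuxOnIce bitmap code, just plain C with one bit per page frame of an assumed 4 GiB machine.

  /*
   * One bit per page frame: the bitmap size is fixed by the amount of
   * RAM, not by how many pages are saved or how fragmented they are,
   * and the writer only ever walks it from pfn 0 upwards.
   */
  #include <stdio.h>

  #define NR_PAGES      (1UL << 20)   /* pretend 4 GiB of 4 KiB pages */
  #define BITS_PER_LONG (8 * sizeof(unsigned long))

  static unsigned long pageset1[NR_PAGES / BITS_PER_LONG];

  static void set_page(unsigned long *map, unsigned long pfn)
  {
      map[pfn / BITS_PER_LONG] |= 1UL << (pfn % BITS_PER_LONG);
  }

  static int test_page(const unsigned long *map, unsigned long pfn)
  {
      return !!(map[pfn / BITS_PER_LONG] & (1UL << (pfn % BITS_PER_LONG)));
  }

  int main(void)
  {
      unsigned long pfn, count = 0;

      set_page(pageset1, 42);
      set_page(pageset1, 123456);

      for (pfn = 0; pfn < NR_PAGES; pfn++)   /* sequential walk only */
          if (test_page(pageset1, pfn))
              count++;

      printf("pages in pageset1: %lu, bitmap size: %zu bytes\n",
             count, sizeof(pageset1));
      return 0;
  }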
+ + b) Extents for block data. + + TuxOnIce supports writing the image to multiple block devices. In the case + of swap, multiple partitions and/or files may be in use, and we happily use + them all (with the exception of compcache pages, which we allocate but do + not use). This use of multiple block devices is accomplished as follows: + + Whatever the actual source of the allocated storage, the destination of the + image can be viewed in terms of one or more block devices, and on each + device, a list of sectors. To simplify matters, we only use contiguous, + PAGE_SIZE aligned sectors, like the swap code does. + + Since sector numbers on each bdev may well not start at 0, it makes much + more sense to use extents here. Contiguous ranges of pages can thus be + represented in the extents by contiguous values. + + Variations in block size are taken account of in transforming this data + into the parameters for bio submission. + + We can thus implement a layer of abstraction wherein the core of TuxOnIce + doesn't have to worry about which device we're currently writing to or + where in the device we are. It simply requests that the next page in the + pageset or header be written, leaving the details to this lower layer. + The lower layer remembers where in the sequence of devices and blocks each + pageset starts. The header always starts at the beginning of the allocated + storage. + + So extents are: + + struct extent { + unsigned long minimum, maximum; + struct extent *next; + } + + These are combined into chains of extents for a device: + + struct extent_chain { + int size; /* size of the extent ie sum (max-min+1) */ + int allocs, frees; + char *name; + struct extent *first, *last_touched; + }; + + For each bdev, we need to store a little more info (simplified definition): + + struct toi_bdev_info { + struct block_device *bdev; + + char uuid[17]; + dev_t dev_t; + int bmap_shift; + int blocks_per_page; + }; + + The uuid is the main means used to identify the device in the storage + image. This means we can cope with the dev_t representation of a device + changing between saving the image and restoring it, as may happen on some + bioses or in the LVM case. + + bmap_shift and blocks_per_page apply the effects of variations in blocks + per page settings for the filesystem and underlying bdev. For most + filesystems, these are the same, but for xfs, they can have independant + values. + + Combining these two structures together, we have everything we need to + record what devices and what blocks on each device are being used to + store the image, and to submit i/o using bio_submit. + + The last elements in the picture are a means of recording how the storage + is being used. + + We do this first and foremost by implementing a layer of abstraction on + top of the devices and extent chains which allows us to view however many + devices there might be as one long storage tape, with a single 'head' that + tracks a 'current position' on the tape: + + struct extent_iterate_state { + struct extent_chain *chains; + int num_chains; + int current_chain; + struct extent *current_extent; + unsigned long current_offset; + }; + + That is, *chains points to an array of size num_chains of extent chains. + For the filewriter, this is always a single chain. For the swapwriter, the + array is of size MAX_SWAPFILES. 
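To make the "one long tape with a single head" idea concrete, the sketch below steps the iterate state forward by one page at a time. The structures are further-simplified copies of the ones quoted above, and the helper name and wrap-around logic are invented for illustration rather than taken from the TuxOnIce sources.

  /* Advance the single "head" across the chains of extents. */
  #include <stdio.h>

  struct extent {
      unsigned long minimum, maximum;
      struct extent *next;
  };

  struct extent_chain {
      int size;                    /* sum of (max - min + 1) */
      struct extent *first;
  };

  struct extent_iterate_state {
      struct extent_chain *chains;
      int num_chains;
      int current_chain;
      struct extent *current_extent;
      unsigned long current_offset;
  };

  /* returns 0 and updates the head, or -1 when all storage is consumed */
  static int state_next_page(struct extent_iterate_state *st)
  {
      if (st->current_offset < st->current_extent->maximum) {
          st->current_offset++;              /* next block, same extent */
          return 0;
      }
      if (st->current_extent->next) {        /* next extent, same device */
          st->current_extent = st->current_extent->next;
          st->current_offset = st->current_extent->minimum;
          return 0;
      }
      if (++st->current_chain < st->num_chains) {    /* next device */
          st->current_extent = st->chains[st->current_chain].first;
          st->current_offset = st->current_extent->minimum;
          return 0;
      }
      return -1;
  }

  int main(void)
  {
      struct extent e2 = { 100, 101, NULL };
      struct extent e1 = { 10, 11, &e2 };
      struct extent_chain chain = { 4, &e1 };
      struct extent_iterate_state st = { &chain, 1, 0, &e1, e1.minimum };

      do
          printf("chain %d block %lu\n", st.current_chain,
                 st.current_offset);
      while (!state_next_page(&st));
      return 0;
  }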
+ + current_chain, current_extent and current_offset thus point to the current + index in the chains array (and into a matching array of struct + suspend_bdev_info), the current extent in that chain (to optimise access), + and the current value in the offset. + + The image is divided into three parts: + - The header + - Pageset 1 + - Pageset 2 + + The header always starts at the first device and first block. We know its + size before we begin to save the image because we carefully account for + everything that will be stored in it. + + The second pageset (LRU) is stored first. It begins on the next page after + the end of the header. + + The first pageset is stored second. It's start location is only known once + pageset2 has been saved, since pageset2 may be compressed as it is written. + This location is thus recorded at the end of saving pageset2. It is page + aligned also. + + Since this information is needed at resume time, and the location of extents + in memory will differ at resume time, this needs to be stored in a portable + way: + + struct extent_iterate_saved_state { + int chain_num; + int extent_num; + unsigned long offset; + }; + + We can thus implement a layer of abstraction wherein the core of TuxOnIce + doesn't have to worry about which device we're currently writing to or + where in the device we are. It simply requests that the next page in the + pageset or header be written, leaving the details to this layer, and + invokes the routines to remember and restore the position, without having + to worry about the details of how the data is arranged on disk or such like. + + c) Modules + + One aim in designing TuxOnIce was to make it flexible. We wanted to allow + for the implementation of different methods of transforming a page to be + written to disk and different methods of getting the pages stored. + + In early versions (the betas and perhaps Suspend1), compression support was + inlined in the image writing code, and the data structures and code for + managing swap were intertwined with the rest of the code. A number of people + had expressed interest in implementing image encryption, and alternative + methods of storing the image. + + In order to achieve this, TuxOnIce was given a modular design. + + A module is a single file which encapsulates the functionality needed + to transform a pageset of data (encryption or compression, for example), + or to write the pageset to a device. The former type of module is called + a 'page-transformer', the later a 'writer'. + + Modules are linked together in pipeline fashion. There may be zero or more + page transformers in a pipeline, and there is always exactly one writer. + The pipeline follows this pattern: + + --------------------------------- + | TuxOnIce Core | + --------------------------------- + | + | + --------------------------------- + | Page transformer 1 | + --------------------------------- + | + | + --------------------------------- + | Page transformer 2 | + --------------------------------- + | + | + --------------------------------- + | Writer | + --------------------------------- + + During the writing of an image, the core code feeds pages one at a time + to the first module. This module performs whatever transformations it + implements on the incoming data, completely consuming the incoming data and + feeding output in a similar manner to the next module. 
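A userspace model of that hand-off might look like the following, where a trivial XOR stands in for a compressor or encryptor and a printf stands in for the writer. The stage structure and the wiring are invented for the example; the real module interface is quoted a little further below.

  /*
   * The core hands a page to the first module; each page-transformer
   * consumes it and feeds its output to the next module; the writer
   * ends the chain.
   */
  #include <stdio.h>
  #include <string.h>

  #define PAGE_SIZE 4096

  struct stage {
      int (*write_chunk)(struct stage *self, unsigned char *page);
      struct stage *next;         /* NULL for the writer */
  };

  /* stand-in for a compressor/encryptor: transform, then pass downstream */
  static int xform_write(struct stage *self, unsigned char *page)
  {
      for (size_t i = 0; i < PAGE_SIZE; i++)
          page[i] ^= 0xAA;
      return self->next->write_chunk(self->next, page);
  }

  /* stand-in for the writer: real code would submit I/O here */
  static int writer_write(struct stage *self, unsigned char *page)
  {
      (void)self;
      printf("writer got a page, first byte %#x\n", (unsigned)page[0]);
      return 0;
  }

  int main(void)
  {
      static unsigned char page[PAGE_SIZE];
      struct stage writer = { writer_write, NULL };
      struct stage xform  = { xform_write, &writer };

      memset(page, 0x55, sizeof(page));
      return xform.write_chunk(&xform, page);   /* core feeds the pipeline */
  }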
+ + All routines are SMP safe, and the final result of the transformations is + written with an index (provided by the core) and size of the output by the + writer. As a result, we can have multithreaded I/O without needing to + worry about the sequence in which pages are written (or read). + + During reading, the pipeline works in the reverse direction. The core code + calls the first module with the address of a buffer which should be filled. + (Note that the buffer size is always PAGE_SIZE at this time). This module + will in turn request data from the next module and so on down until the + writer is made to read from the stored image. + + Part of definition of the structure of a module thus looks like this: + + int (*rw_init) (int rw, int stream_number); + int (*rw_cleanup) (int rw); + int (*write_chunk) (struct page *buffer_page); + int (*read_chunk) (struct page *buffer_page, int sync); + + It should be noted that the _cleanup routine may be called before the + full stream of data has been read or written. While writing the image, + the user may (depending upon settings) choose to abort suspending, and + if we are in the midst of writing the last portion of the image, a portion + of the second pageset may be reread. This may also happen if an error + occurs and we seek to abort the process of writing the image. + + The modular design is also useful in a number of other ways. It provides + a means where by we can add support for: + + - providing overall initialisation and cleanup routines; + - serialising configuration information in the image header; + - providing debugging information to the user; + - determining memory and image storage requirements; + - dis/enabling components at run-time; + - configuring the module (see below); + + ...and routines for writers specific to their work: + - Parsing a resume= location; + - Determining whether an image exists; + - Marking a resume as having been attempted; + - Invalidating an image; + + Since some parts of the core - the user interface and storage manager + support - have use for some of these functions, they are registered as + 'miscellaneous' modules as well. + + d) Sysfs data structures. + + This brings us naturally to support for configuring TuxOnIce. We desired to + provide a way to make TuxOnIce as flexible and configurable as possible. + The user shouldn't have to reboot just because they want to now hibernate to + a file instead of a partition, for example. + + To accomplish this, TuxOnIce implements a very generic means whereby the + core and modules can register new sysfs entries. All TuxOnIce entries use + a single _store and _show routine, both of which are found in + tuxonice_sysfs.c in the kernel/power directory. These routines handle the + most common operations - getting and setting the values of bits, integers, + longs, unsigned longs and strings in one place, and allow overrides for + customised get and set options as well as side-effect routines for all + reads and writes. + + When combined with some simple macros, a new sysfs entry can then be defined + in just a couple of lines: + + SYSFS_INT("progress_granularity", SYSFS_RW, &progress_granularity, 1, + 2048, 0, NULL), + + This defines a sysfs entry named "progress_granularity" which is rw and + allows the user to access an integer stored at &progress_granularity, giving + it a value between 1 and 2048 inclusive. + + Sysfs entries are registered under /sys/power/tuxonice, and entries for + modules are located in a subdirectory named after the module. 
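From userspace, these entries are driven with nothing more than ordinary file I/O. The snippet below uses the progress_granularity entry mentioned above as its example; whether that entry exists depends on the kernel configuration, so treat it purely as an illustration.

  /* Read and set one TuxOnIce sysfs entry with plain open/read/write. */
  #include <fcntl.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      const char *path = "/sys/power/tuxonice/progress_granularity";
      char buf[64];
      ssize_t n;
      int fd;

      fd = open(path, O_RDWR);
      if (fd < 0) {
          perror(path);
          return 1;
      }

      n = read(fd, buf, sizeof(buf) - 1);   /* current value */
      if (n > 0) {
          buf[n] = '\0';
          printf("current: %s", buf);
      }

      lseek(fd, 0, SEEK_SET);
      if (write(fd, "128", 3) != 3)         /* set a new value */
          perror("write");

      close(fd);
      return 0;
  }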
+ diff --git b/Documentation/power/tuxonice.txt b/Documentation/power/tuxonice.txt new file mode 100644 index 0000000..3bf0575 --- /dev/null +++ b/Documentation/power/tuxonice.txt @@ -0,0 +1,948 @@ + --- TuxOnIce, version 3.0 --- + +1. What is it? +2. Why would you want it? +3. What do you need to use it? +4. Why not just use the version already in the kernel? +5. How do you use it? +6. What do all those entries in /sys/power/tuxonice do? +7. How do you get support? +8. I think I've found a bug. What should I do? +9. When will XXX be supported? +10 How does it work? +11. Who wrote TuxOnIce? + +1. What is it? + + Imagine you're sitting at your computer, working away. For some reason, you + need to turn off your computer for a while - perhaps it's time to go home + for the day. When you come back to your computer next, you're going to want + to carry on where you left off. Now imagine that you could push a button and + have your computer store the contents of its memory to disk and power down. + Then, when you next start up your computer, it loads that image back into + memory and you can carry on from where you were, just as if you'd never + turned the computer off. You have far less time to start up, no reopening of + applications or finding what directory you put that file in yesterday. + That's what TuxOnIce does. + + TuxOnIce has a long heritage. It began life as work by Gabor Kuti, who, + with some help from Pavel Machek, got an early version going in 1999. The + project was then taken over by Florent Chabaud while still in alpha version + numbers. Nigel Cunningham came on the scene when Florent was unable to + continue, moving the project into betas, then 1.0, 2.0 and so on up to + the present series. During the 2.0 series, the name was contracted to + Suspend2 and the website suspend2.net created. Beginning around July 2007, + a transition to calling the software TuxOnIce was made, to seek to help + make it clear that TuxOnIce is more concerned with hibernation than suspend + to ram. + + Pavel Machek's swsusp code, which was merged around 2.5.17 retains the + original name, and was essentially a fork of the beta code until Rafael + Wysocki came on the scene in 2005 and began to improve it further. + +2. Why would you want it? + + Why wouldn't you want it? + + Being able to save the state of your system and quickly restore it improves + your productivity - you get a useful system in far less time than through + the normal boot process. You also get to be completely 'green', using zero + power, or as close to that as possible (the computer may still provide + minimal power to some devices, so they can initiate a power on, but that + will be the same amount of power as would be used if you told the computer + to shutdown. + +3. What do you need to use it? + + a. Kernel Support. + + i) The TuxOnIce patch. + + TuxOnIce is part of the Linux Kernel. This version is not part of Linus's + 2.6 tree at the moment, so you will need to download the kernel source and + apply the latest patch. Having done that, enable the appropriate options in + make [menu|x]config (under Power Management Options - look for "Enhanced + Hibernation"), compile and install your kernel. TuxOnIce works with SMP, + Highmem, preemption, fuse filesystems, x86-32, PPC and x86_64. + + TuxOnIce patches are available from http://tuxonice.net. + + ii) Compression support. + + Compression support is implemented via the cryptoapi. 
You will therefore want + to select any Cryptoapi transforms that you want to use on your image from + the Cryptoapi menu while configuring your kernel. We recommend the use of the + LZO compression method - it is very fast and still achieves good compression. + + You can also tell TuxOnIce to write its image to an encrypted and/or + compressed filesystem/swap partition. In that case, you don't need to do + anything special for TuxOnIce when it comes to kernel configuration. + + iii) Configuring other options. + + While you're configuring your kernel, try to configure as much as possible + to build as modules. We recommend this because there are a number of drivers + that are still in the process of implementing proper power management + support. In those cases, the best way to work around their current lack is + to build them as modules and remove the modules while hibernating. You might + also bug the driver authors to get their support up to speed, or even help! + + b. Storage. + + i) Swap. + + TuxOnIce can store the hibernation image in your swap partition, a swap file or + a combination thereof. Whichever combination you choose, you will probably + want to create enough swap space to store the largest image you could have, + plus the space you'd normally use for swap. A good rule of thumb would be + to calculate the amount of swap you'd want without using TuxOnIce, and then + add the amount of memory you have. This swapspace can be arranged in any way + you'd like. It can be in one partition or file, or spread over a number. The + only requirement is that they be active when you start a hibernation cycle. + + There is one exception to this requirement. TuxOnIce has the ability to turn + on one swap file or partition at the start of hibernating and turn it back off + at the end. If you want to ensure you have enough memory to store a image + when your memory is fully used, you might want to make one swap partition or + file for 'normal' use, and another for TuxOnIce to activate & deactivate + automatically. (Further details below). + + ii) Normal files. + + TuxOnIce includes a 'file allocator'. The file allocator can store your + image in a simple file. Since Linux has the concept of everything being a + file, this is more powerful than it initially sounds. If, for example, you + were to set up a network block device file, you could hibernate to a network + server. This has been tested and works to a point, but nbd itself isn't + stateless enough for our purposes. + + Take extra care when setting up the file allocator. If you just type + commands without thinking and then try to hibernate, you could cause + irreversible corruption on your filesystems! Make sure you have backups. + + Most people will only want to hibernate to a local file. To achieve that, do + something along the lines of: + + echo "TuxOnIce" > /hibernation-file + dd if=/dev/zero bs=1M count=512 >> /hibernation-file + + This will create a 512MB file called /hibernation-file. 
To get TuxOnIce to use + it: + + echo /hibernation-file > /sys/power/tuxonice/file/target + + Then + + cat /sys/power/tuxonice/resume + + Put the results of this into your bootloader's configuration (see also step + C, below): + + ---EXAMPLE-ONLY-DON'T-COPY-AND-PASTE--- + # cat /sys/power/tuxonice/resume + file:/dev/hda2:0x1e001 + + In this example, we would edit the append= line of our lilo.conf|menu.lst + so that it included: + + resume=file:/dev/hda2:0x1e001 + ---EXAMPLE-ONLY-DON'T-COPY-AND-PASTE--- + + For those who are thinking 'Could I make the file sparse?', the answer is + 'No!'. At the moment, there is no way for TuxOnIce to fill in the holes in + a sparse file while hibernating. In the longer term (post merge!), I'd like + to change things so that the file could be dynamically resized and have + holes filled as needed. Right now, however, that's not possible and not a + priority. + + c. Bootloader configuration. + + Using TuxOnIce also requires that you add an extra parameter to + your lilo.conf or equivalent. Here's an example for a swap partition: + + append="resume=swap:/dev/hda1" + + This would tell TuxOnIce that /dev/hda1 is a swap partition you + have. TuxOnIce will use the swap signature of this partition as a + pointer to your data when you hibernate. This means that (in this example) + /dev/hda1 doesn't need to be _the_ swap partition where all of your data + is actually stored. It just needs to be a swap partition that has a + valid signature. + + You don't need to have a swap partition for this purpose. TuxOnIce + can also use a swap file, but usage is a little more complex. Having made + your swap file, turn it on and do + + cat /sys/power/tuxonice/swap/headerlocations + + (this assumes you've already compiled your kernel with TuxOnIce + support and booted it). The results of the cat command will tell you + what you need to put in lilo.conf: + + For swap partitions like /dev/hda1, simply use resume=/dev/hda1. + For swapfile `swapfile`, use resume=swap:/dev/hda2:0x242d. + + If the swapfile changes for any reason (it is moved to a different + location, it is deleted and recreated, or the filesystem is + defragmented) then you will have to check + /sys/power/tuxonice/swap/headerlocations for a new resume_block value. + + Once you've compiled and installed the kernel and adjusted your bootloader + configuration, you should only need to reboot for the most basic part + of TuxOnIce to be ready. + + If you only compile in the swap allocator, or only compile in the file + allocator, you don't need to add the "swap:" part of the resume= + parameters above. resume=/dev/hda2:0x242d will work just as well. If you + have compiled both and your storage is on swap, you can also use this + format (the swap allocator is the default allocator). + + When compiling your kernel, one of the options in the 'Power Management + Support' menu, just above the 'Enhanced Hibernation (TuxOnIce)' entry is + called 'Default resume partition'. This can be used to set a default value + for the resume= parameter. + + d. The hibernate script. + + Since the driver model in 2.6 kernels is still being developed, you may need + to do more than just configure TuxOnIce. Users of TuxOnIce usually start the + process via a script which prepares for the hibernation cycle, tells the + kernel to do its stuff and then restore things afterwards. This script might + involve: + + - Switching to a text console and back if X doesn't like the video card + status on resume. 
+ - Un/reloading drivers that don't play well with hibernation. + + Note that you might not be able to unload some drivers if there are + processes using them. You might have to kill off processes that hold + devices open. Hint: if your X server accesses an USB mouse, doing a + 'chvt' to a text console releases the device and you can unload the + module. + + Check out the latest script (available on tuxonice.net). + + e. The userspace user interface. + + TuxOnIce has very limited support for displaying status if you only apply + the kernel patch - it can printk messages, but that is all. In addition, + some of the functions mentioned in this document (such as cancelling a cycle + or performing interactive debugging) are unavailable. To utilise these + functions, or simply get a nice display, you need the 'userui' component. + Userui comes in three flavours, usplash, fbsplash and text. Text should + work on any console. Usplash and fbsplash require the appropriate + (distro specific?) support. + + To utilise a userui, TuxOnIce just needs to be told where to find the + userspace binary: + + echo "/usr/local/sbin/tuxoniceui_fbsplash" > /sys/power/tuxonice/user_interface/program + + The hibernate script can do this for you, and a default value for this + setting can be configured when compiling the kernel. This path is also + stored in the image header, so if you have an initrd or initramfs, you can + use the userui during the first part of resuming (prior to the atomic + restore) by putting the binary in the same path in your initrd/ramfs. + Alternatively, you can put it in a different location and do an echo + similar to the above prior to the echo > do_resume. The value saved in the + image header will then be ignored. + +4. Why not just use the version already in the kernel? + + The version in the vanilla kernel has a number of drawbacks. The most + serious of these are: + - it has a maximum image size of 1/2 total memory; + - it doesn't allocate storage until after it has snapshotted memory. + This means that you can't be sure hibernating will work until you + see it start to write the image; + - it does not allow you to press escape to cancel a cycle; + - it does not allow you to press escape to cancel resuming; + - it does not allow you to automatically swapon a file when + starting a cycle; + - it does not allow you to use multiple swap partitions or files; + - it does not allow you to use ordinary files; + - it just invalidates an image and continues to boot if you + accidentally boot the wrong kernel after hibernating; + - it doesn't support any sort of nice display while hibernating; + - it is moving toward requiring that you have an initrd/initramfs + to ever have a hope of resuming (uswsusp). While uswsusp will + address some of the concerns above, it won't address all of them, + and will be more complicated to get set up; + - it doesn't have support for suspend-to-both (write a hibernation + image, then suspend to ram; I think this is known as ReadySafe + under M$). + +5. How do you use it? + + A hibernation cycle can be started directly by doing: + + echo > /sys/power/tuxonice/do_hibernate + + In practice, though, you'll probably want to use the hibernate script + to unload modules, configure the kernel the way you like it and so on. + In that case, you'd do (as root): + + hibernate + + See the hibernate script's man page for more details on the options it + takes. 
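+
+ If you would rather not use the full hibernate script, a minimal wrapper
+ along the following lines illustrates the same idea. It is only a sketch:
+ the module name is a placeholder for whichever driver misbehaves on your
+ hardware, and the real hibernate script does considerably more.
+
+	#!/bin/sh
+	# Unload a troublesome driver, hibernate, then reload it on resume.
+	modprobe -r problem_driver	# placeholder name, not a real module
+	echo > /sys/power/tuxonice/do_hibernate
+	# Execution continues here after resume.
+	modprobe problem_driver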
+ + If you're using the text or splash user interface modules, one feature of + TuxOnIce that you might find useful is that you can press Escape at any time + during hibernating, and the process will be aborted. + + Due to the way hibernation works, this means you'll have your system back and + perfectly usable almost instantly. The only exception is when it's at the + very end of writing the image. Then it will need to reload a small (usually + 4-50MBs, depending upon the image characteristics) portion first. + + Likewise, when resuming, you can press escape and resuming will be aborted. + The computer will then powerdown again according to settings at that time for + the powerdown method or rebooting. + + You can change the settings for powering down while the image is being + written by pressing 'R' to toggle rebooting and 'O' to toggle between + suspending to ram and powering down completely). + + If you run into problems with resuming, adding the "noresume" option to + the kernel command line will let you skip the resume step and recover your + system. This option shouldn't normally be needed, because TuxOnIce modifies + the image header prior to the atomic restore, and will thus prompt you + if it detects that you've tried to resume an image before (this flag is + removed if you press Escape to cancel a resume, so you won't be prompted + then). + + Recent kernels (2.6.24 onwards) add support for resuming from a different + kernel to the one that was hibernated (thanks to Rafael for his work on + this - I've just embraced and enhanced the support for TuxOnIce). This + should further reduce the need for you to use the noresume option. + +6. What do all those entries in /sys/power/tuxonice do? + + /sys/power/tuxonice is the directory which contains files you can use to + tune and configure TuxOnIce to your liking. The exact contents of + the directory will depend upon the version of TuxOnIce you're + running and the options you selected at compile time. In the following + descriptions, names in brackets refer to compile time options. + (Note that they're all dependant upon you having selected CONFIG_TUXONICE + in the first place!). + + Since the values of these settings can open potential security risks, the + writeable ones are accessible only to the root user. You may want to + configure sudo to allow you to invoke your hibernate script as an ordinary + user. + + - alloc/failure_test + + This debugging option provides a way of testing TuxOnIce's handling of + memory allocation failures. Each allocation type that TuxOnIce makes has + been given a unique number (see the source code). Echo the appropriate + number into this entry, and when TuxOnIce attempts to do that allocation, + it will pretend there was a failure and act accordingly. + + - alloc/find_max_mem_allocated + + This debugging option will cause TuxOnIce to find the maximum amount of + memory it used during a cycle, and report that information in debugging + information at the end of the cycle. + + - alt_resume_param + + Instead of powering down after writing a hibernation image, TuxOnIce + supports resuming from a different image. This entry lets you set the + location of the signature for that image (the resume= value you'd use + for it). Using an alternate image and keep_image mode, you can do things + like using an alternate image to power down an uninterruptible power + supply. 
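+
+ For example, reusing the illustrative signature from section 3.b.ii (do not
+ copy it verbatim; use a value that is valid for your own setup), you might
+ do:
+
+	echo "file:/dev/hda2:0x1e001" > /sys/power/tuxonice/alt_resume_param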
+
+ - block_io/target_outstanding_io
+
+ This value controls the amount of memory that the block I/O code says it
+ needs when the core code is calculating how much memory is needed for
+ hibernating and for resuming. It doesn't directly control the amount of
+ I/O that is submitted at any one time - that depends on the amount of
+ available memory (we may have more available than we asked for), the
+ throughput that is being achieved and the ability of the CPU to keep up
+ with disk throughput (particularly where we're compressing pages).
+
+ - checksum/enabled
+
+ Use cryptoapi hashing routines to verify that Pageset2 pages don't change
+ while we're saving the first part of the image, and to get any pages that
+ do change resaved in the atomic copy. This should normally not be needed,
+ but if you're seeing issues, please enable this. If your issues stop you
+ being able to resume, enable this option, hibernate and cancel the cycle
+ after the atomic copy is done. If the debugging info shows a non-zero
+ number of pages resaved, please report this to Nigel.
+
+ - compression/algorithm
+
+ Set the cryptoapi algorithm used for compressing the image.
+
+ - compression/expected_compression
+
+ This value allows you to set an expected compression ratio, which TuxOnIce
+ will use in calculating whether it meets constraints on the image size. If
+ this expected compression ratio is not attained, the hibernation cycle will
+ abort, so it is wise to allow some spare. You can see what compression
+ ratio is achieved in the logs after hibernating.
+
+ - debug_info:
+
+ This file returns information about your configuration that may be helpful
+ in diagnosing problems with hibernating.
+
+ - did_suspend_to_both:
+
+ This file can be used when you hibernate with powerdown method 3 (ie suspend
+ to ram after writing the image). There can be two outcomes in this case. We
+ can resume from the suspend-to-ram before the battery runs out, or we can run
+ out of juice and end up resuming like normal. This entry lets you find out,
+ post resume, which way we went. If the value is 1, we resumed from suspend
+ to ram. This can be useful when actions need to be run post suspend-to-ram
+ that don't need to be run if we did the normal resume from power off.
+
+ - do_hibernate:
+
+ When anything is written to this file, the kernel side of TuxOnIce will
+ begin to attempt to write an image to disk and power down. You'll normally
+ want to run the hibernate script instead, to get modules unloaded first.
+
+ - do_resume:
+
+ When anything is written to this file TuxOnIce will attempt to read and
+ restore an image. If there is no image, it will return almost immediately.
+ If an image exists, the echo > will never return. Instead, the original
+ kernel context will be restored and the original echo > do_hibernate will
+ return.
+
+ - */enabled
+
+ These options can be used to temporarily disable various parts of TuxOnIce.
+
+ - extra_pages_allowance
+
+ When TuxOnIce does its atomic copy, it calls the driver model suspend
+ and resume methods. If you have DRI enabled with a driver such as fglrx,
+ this can result in the driver allocating a substantial amount of memory
+ for storing its state. Extra_pages_allowance tells TuxOnIce how much
+ extra memory it should ensure is available for those allocations. If
+ your attempts at hibernating end with a message in dmesg indicating that
+ insufficient extra pages were allowed, you need to increase this value.
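+
+ For example, if dmesg reports that the allowance was too small, you might
+ try increasing it along these lines (the number is purely illustrative):
+
+	echo 3000 > /sys/power/tuxonice/extra_pages_allowance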
+
+ - file/target:
+
+ Read this value to get the current setting. Write to it to point TuxOnIce
+ at a new storage location for the file allocator. See section 3.b.ii above
+ for details of how to set up the file allocator.
+
+ - freezer_test
+
+ This entry can be used to get TuxOnIce to just test the freezer and prepare
+ an image without actually doing a hibernation cycle. It is useful for
+ diagnosing freezing and image preparation issues.
+
+ - full_pageset2
+
+ TuxOnIce divides the pages that are stored in an image into two sets. The
+ difference between the two sets is that pages in pageset 1 are atomically
+ copied, and pages in pageset 2 are written to disk without being copied
+ first. A page CAN be written to disk without being copied first if and only
+ if its contents will not be modified or used at any time after userspace
+ processes are frozen. A page MUST be in pageset 1 if its contents are
+ modified or used at any time after userspace processes have been frozen.
+
+ Normally (ie if this option is enabled), TuxOnIce will put all pages on the
+ per-zone LRUs in pageset2, then remove those pages used by any userspace
+ user interface helper and TuxOnIce storage manager that are running,
+ together with pages used by the GEM memory manager introduced around 2.6.28
+ kernels.
+
+ If this option is disabled, a much more conservative approach will be taken.
+ The only pages in pageset2 will be those belonging to userspace processes,
+ with the exclusion of those belonging to the TuxOnIce userspace helpers
+ mentioned above. This will result in a much smaller pageset2, and will
+ therefore result in smaller images than are possible with this option
+ enabled.
+
+ - ignore_rootfs
+
+ TuxOnIce records which device is mounted as the root filesystem when
+ writing the hibernation image. It will normally check at resume time that
+ this device isn't already mounted - that would be a cause of filesystem
+ corruption. In some particular cases (RAM based root filesystems), you
+ might want to disable this check. This option allows you to do that.
+
+ - image_exists:
+
+ Can be used in a script to determine whether a valid image exists at the
+ location currently pointed to by resume=. Returns up to three lines.
+ The first is whether an image exists (-1 for unsure, otherwise 0 or 1).
+ If an image exists, additional lines will return the machine and version.
+ Echoing anything to this entry removes any current image.
+
+ - image_size_limit:
+
+ The maximum size of the hibernation image written to disk, measured in
+ megabytes (1024*1024).
+
+ - last_result:
+
+ The result of the last hibernation cycle, as defined in
+ include/linux/suspend-debug.h with the values SUSPEND_ABORTED to
+ SUSPEND_KEPT_IMAGE. This is a bitmask.
+
+ - late_cpu_hotplug:
+
+ This sysfs entry controls whether cpu hotplugging is done - as normal - just
+ before (unplug) and after (replug) the atomic copy/restore (so that all
+ CPUs/cores are available for multithreaded I/O). The alternative is to
+ unplug all secondary CPUs/cores at the start of hibernating/resuming, and
+ replug them at the end of resuming. No multithreaded I/O will be possible in
+ this configuration, but the odd machine has been reported to require it.
+
+ - lid_file:
+
+ This determines which ACPI button file we look in to determine whether the
+ lid is open or closed after resuming from suspend to disk or power off.
+ If the entry is set to "lid/LID", we'll open /proc/acpi/button/lid/LID/state
+ and check its contents at the appropriate moment.
+ See post_wake_state below for more details on how this entry is used.
+
+ - log_everything (CONFIG_PM_DEBUG):
+
+ Setting this option results in all messages printed being logged. Normally,
+ only a subset are logged, so as to not slow the process and not clutter the
+ logs. Useful for debugging. It can be toggled during a cycle by pressing
+ 'L'.
+
+ - no_load_direct:
+
+ This is a debugging option. If, when loading the atomically copied pages of
+ an image, TuxOnIce finds that the destination address for a page is free,
+ it will normally allocate the image, load the data directly into that
+ address and skip it in the atomic restore. If this option is disabled, the
+ page will be loaded somewhere else and atomically restored like other pages.
+
+ - no_flusher_thread:
+
+ When doing multithreaded I/O (see below), the first online CPU can be used
+ to _just_ submit compressed pages when writing the image, rather than
+ compressing and submitting data. This option is normally disabled, but has
+ been included because Nigel would like to see whether it will be more useful
+ as the number of cores/cpus in computers increases.
+
+ - no_multithreaded_io:
+
+ TuxOnIce will normally create one thread per cpu/core on your computer,
+ each of which will then perform I/O. This will generally result in
+ throughput that's the maximum the storage medium can handle. There
+ shouldn't be any reason to disable multithreaded I/O now, but this option
+ has been retained for debugging purposes.
+
+ - no_pageset2
+
+ See the entry for full_pageset2 above for an explanation of pagesets.
+ Enabling this option causes TuxOnIce to do an atomic copy of all pages,
+ thereby limiting the maximum image size to 1/2 of memory, as swsusp does.
+
+ - no_pageset2_if_unneeded
+
+ See the entry for full_pageset2 above for an explanation of pagesets.
+ Enabling this option causes TuxOnIce to act like no_pageset2 was enabled
+ if and only if it isn't needed anyway. This option may still make TuxOnIce
+ less reliable because pageset2 pages are normally used to store the
+ atomic copy - drivers that want to do allocations of larger amounts of
+ memory in one shot will be more likely to find that those amounts aren't
+ available if this option is enabled.
+
+ - pause_between_steps (CONFIG_PM_DEBUG):
+
+ This option is used during debugging, to make TuxOnIce pause between
+ each step of the process. It is ignored when the nice display is on.
+
+ - post_wake_state:
+
+ TuxOnIce provides support for automatically waking after a user-selected
+ delay, and using a different powerdown method if the lid is still closed.
+ (Yes, we're assuming a laptop). This entry lets you choose what state
+ should be entered next. The values are those described under
+ powerdown_method, below. It can be used to suspend to RAM after hibernating,
+ then powerdown properly (say) 20 minutes later. It can also be used to power
+ down properly, then wake at (say) 6.30am and suspend to RAM until you're
+ ready to use the machine.
+
+ - powerdown_method:
+
+ Used to select a method by which TuxOnIce should powerdown after writing the
+ image. Currently:
+
+ 0: Don't use ACPI to power off.
+ 3: Attempt to enter Suspend-to-ram.
+ 4: Attempt to enter ACPI S4 mode.
+ 5: Attempt to power down via ACPI S5 mode.
+
+ Note that these options are highly dependent upon your hardware & software:
+
+ 3: When successful, your machine suspends to ram instead of powering off.
+    The advantage of using this mode is that it doesn't matter whether your
+    battery has enough charge to make it through to your next resume. If it
+    lasts, you will simply resume from suspend to ram (and the image on disk
+    will be discarded). If the battery runs out, you will resume from disk
+    instead. The disadvantage is that it takes longer than a normal
+    suspend-to-ram to enter the state, since the suspend-to-disk image needs
+    to be written first.
+ 4/5: When successful, your machine will be off and consume (almost) no power.
+    But it might still react to some external events like opening the lid or
+    traffic on a network or usb device. For the bios, resume is then the same
+    as warm boot, similar to a situation where you used the command `reboot'
+    to reboot your machine. If your machine has problems on warm boot or if
+    you want to protect your machine with the bios password, this is probably
+    not the right choice. Mode 4 may be necessary on some machines where ACPI
+    wake up methods need to be run to properly reinitialise hardware after a
+    hibernation cycle.
+ 0: Switch the machine completely off. The only possible wakeup is the power
+    button. For the bios, resume is then the same as a cold boot, in
+    particular you would have to provide your bios boot password if your
+    machine uses that feature for booting.
+
+ - progressbar_granularity_limit:
+
+ This option can be used to limit the granularity of the progress bar
+ displayed with a bootsplash screen. The value is the maximum number of
+ steps. That is, 10 will make the progress bar jump in 10% increments.
+
+ - reboot:
+
+ This option causes TuxOnIce to reboot rather than powering down
+ at the end of saving an image. It can be toggled during a cycle by pressing
+ 'R'.
+
+ - resume:
+
+ This sysfs entry can be used to read and set the location in which TuxOnIce
+ will look for the signature of an image - the value set using resume= at
+ boot time or CONFIG_PM_STD_PARTITION ("Default resume partition"). By
+ writing to this file as well as modifying your bootloader's configuration
+ file (eg menu.lst), you can set or reset the location of your image or the
+ method of storing the image without rebooting.
+
+ - replace_swsusp (CONFIG_TOI_REPLACE_SWSUSP):
+
+ This option makes
+
+	echo disk > /sys/power/state
+
+ activate TuxOnIce instead of swsusp. Regardless of whether this option is
+ enabled, any invocation of swsusp's resume time trigger will cause TuxOnIce
+ to check for an image too. This is due to the fact that at resume time, we
+ can't know whether this option was enabled until we see if an image is there
+ for us to resume from. (And when an image exists, we don't care whether we
+ did replace swsusp anyway - we just want to resume).
+
+ - resume_commandline:
+
+ This entry can be read after resuming to see the commandline that was used
+ when resuming began. You might use this to set up two bootloader entries
+ that are the same apart from the fact that one includes an extra append=
+ argument "at_work=1". You could then grep resume_commandline in your
+ post-resume scripts and configure networking (for example) differently
+ depending upon whether you're at home or work. resume_commandline can be
+ set to arbitrary text if you wish to remove sensitive contents.
+
+ - swap/swapfilename:
+
+ This entry is used to specify the swapfile or partition that
+ TuxOnIce will attempt to swapon/swapoff automatically.
+ Thus, if I normally use /dev/hda1 for swap, and want to use /dev/hda2
+ specifically for my hibernation image, I would
+
+	echo /dev/hda2 > /sys/power/tuxonice/swap/swapfile
+
+ /dev/hda2 would then be automatically swapon'd and swapoff'd. Note that the
+ swapon and swapoff occur while other processes are frozen (including kswapd)
+ so this swap file will not be used up when attempting to free memory. The
+ partition/file is also given the highest priority, so other swapfiles/partitions
+ will only be used to save the image when this one is filled.
+
+ The value of this file is used by headerlocations along with any currently
+ activated swapfiles/partitions.
+
+ - swap/headerlocations:
+
+ This option tells you the resume= options to use for swap devices you
+ currently have activated. It is particularly useful when you only want to
+ use a swap file to store your image. See above for further details.
+
+ - test_bio
+
+ This is a debugging option. When enabled, TuxOnIce will not hibernate.
+ Instead, when asked to write an image, it will skip the atomic copy,
+ just doing the writing of the image and then returning control to the
+ user at the point where it would have powered off. This is useful for
+ testing throughput in different configurations.
+
+ - test_filter_speed
+
+ This is a debugging option. When enabled, TuxOnIce will not hibernate.
+ Instead, when asked to write an image, it will not write anything or do
+ an atomic copy, but will only run any enabled compression algorithm on the
+ data that would have been written (the source pages of the atomic copy in
+ the case of pageset 1). This is useful for comparing the performance of
+ compression algorithms and for determining the extent to which an upgrade
+ to your storage method would improve hibernation speed.
+
+ - user_interface/debug_sections (CONFIG_PM_DEBUG):
+
+ This value, together with the console log level, controls what debugging
+ information is displayed. The console log level determines the level of
+ detail, and this value determines what detail is displayed. This value is
+ a bit vector, and the meaning of the bits can be found in the kernel tree
+ in include/linux/tuxonice.h. It can be overridden using the kernel's
+ command line option suspend_dbg.
+
+ - user_interface/default_console_level (CONFIG_PM_DEBUG):
+
+ This determines the value of the console log level at the start of a
+ hibernation cycle. If debugging is compiled in, the console log level can be
+ changed during a cycle by pressing the digit keys. Meanings are:
+
+ 0: Nice display.
+ 1: Nice display plus numerical progress.
+ 2: Errors only.
+ 3: Low level debugging info.
+ 4: Medium level debugging info.
+ 5: High level debugging info.
+ 6: Verbose debugging info.
+
+ - user_interface/enable_escape:
+
+ Setting this to "1" will enable you to abort a hibernation cycle or resuming
+ by pressing escape, "0" (default) disables this feature. Note that enabling
+ this option means that you cannot initiate a hibernation cycle and then walk
+ away from your computer, expecting it to be secure. With this feature
+ disabled, you can validly have this expectation once TuxOnIce begins to write
+ the image to disk. (Prior to this point, it is possible that TuxOnIce might
+ abort because of failure to freeze all processes or because constraints
+ on its ability to save the image are not met).
+
+ - user_interface/program
+
+ This entry is used to tell TuxOnIce what userspace program to use for
+ providing a user interface while hibernating.
+ The program uses a netlink
+ socket to pass messages back and forward to the kernel, allowing all of the
+ functions formerly implemented in the kernel user interface components.
+
+ - version:
+
+ The version of TuxOnIce you have compiled into the currently running kernel.
+
+ - wake_alarm_dir:
+
+ As mentioned above (post_wake_state), TuxOnIce supports automatically waking
+ after some delay. This entry allows you to select which wake alarm to use.
+ It should contain the value "rtc0" if you're wanting to use
+ /sys/class/rtc/rtc0.
+
+ - wake_delay:
+
+ This value determines the delay from the end of writing the image until the
+ wake alarm is triggered. You can set an absolute time by writing the desired
+ time into /sys/class/rtc//wakealarm and leaving these values
+ empty.
+
+ Note that for the wakeup to actually occur, you may need to modify entries
+ in /proc/acpi/wakeup. This is done by echoing the name of the button in the
+ first column (eg PBTN) into the file.
+
+7. How do you get support?
+
+ Glad you asked. TuxOnIce is being actively maintained and supported
+ by Nigel (the guy doing most of the kernel coding at the moment), Bernard
+ (who maintains the hibernate script and userspace user interface components)
+ and its users.
+
+ Resources available include HowTos, FAQs and a Wiki, all accessible via
+ tuxonice.net. You can find the mailing lists there.
+
+8. I think I've found a bug. What should I do?
+
+ By far and away, the most common problems people have with TuxOnIce
+ relate to drivers not having adequate power management support. In this
+ case, it is not a bug with TuxOnIce, but we can still help you. As we
+ mentioned above, such issues can usually be worked around by building the
+ functionality as modules and unloading them while hibernating. Please visit
+ the Wiki for up-to-date lists of known issues and workarounds.
+
+ If this information doesn't help, try running:
+
+	hibernate --bug-report
+
+ ..and sending the output to the users mailing list.
+
+ Good information on how to provide us with useful information from an
+ oops is found in the file REPORTING-BUGS, in the top level directory
+ of the kernel tree. If you get an oops, please especially note the
+ information about running what is printed on the screen through ksymoops.
+ The raw information is useless.
+
+9. When will XXX be supported?
+
+ If there's a feature missing from TuxOnIce that you'd like, feel free to
+ ask. We try to be obliging, within reason.
+
+ Patches are welcome. Please send to the list.
+
+10. How does it work?
+
+ TuxOnIce does its work in a number of steps.
+
+ a. Freezing system activity.
+
+ The first main stage in hibernating is to stop all other activity. This is
+ achieved in stages. Processes are considered in four groups, which we will
+ describe in reverse order for clarity's sake: Threads with the PF_NOFREEZE
+ flag, kernel threads without this flag, userspace processes with the
+ PF_SYNCTHREAD flag and all other processes. The first set (PF_NOFREEZE) are
+ untouched by the refrigerator code. They are allowed to run during hibernating
+ and resuming, and are used to support user interaction, storage access or the
+ like. Other kernel threads (those unneeded while hibernating) are frozen last.
+ This leaves us with userspace processes that need to be frozen. When a
+ process enters one of the *_sync system calls, we set a PF_SYNCTHREAD flag on
+ that process for the duration of that call.
Processes that have this flag are + frozen after processes without it, so that we can seek to ensure that dirty + data is synced to disk as quickly as possible in a situation where other + processes may be submitting writes at the same time. Freezing the processes + that are submitting data stops new I/O from being submitted. Syncthreads can + then cleanly finish their work. So the order is: + + - Userspace processes without PF_SYNCTHREAD or PF_NOFREEZE; + - Userspace processes with PF_SYNCTHREAD (they won't have NOFREEZE); + - Kernel processes without PF_NOFREEZE. + + b. Eating memory. + + For a successful hibernation cycle, you need to have enough disk space to store the + image and enough memory for the various limitations of TuxOnIce's + algorithm. You can also specify a maximum image size. In order to attain + to those constraints, TuxOnIce may 'eat' memory. If, after freezing + processes, the constraints aren't met, TuxOnIce will thaw all the + other processes and begin to eat memory until its calculations indicate + the constraints are met. It will then freeze processes again and recheck + its calculations. + + c. Allocation of storage. + + Next, TuxOnIce allocates the storage that will be used to save + the image. + + The core of TuxOnIce knows nothing about how or where pages are stored. We + therefore request the active allocator (remember you might have compiled in + more than one!) to allocate enough storage for our expect image size. If + this request cannot be fulfilled, we eat more memory and try again. If it + is fulfiled, we seek to allocate additional storage, just in case our + expected compression ratio (if any) isn't achieved. This time, however, we + just continue if we can't allocate enough storage. + + If these calls to our allocator change the characteristics of the image + such that we haven't allocated enough memory, we also loop. (The allocator + may well need to allocate space for its storage information). + + d. Write the first part of the image. + + TuxOnIce stores the image in two sets of pages called 'pagesets'. + Pageset 2 contains pages on the active and inactive lists; essentially + the page cache. Pageset 1 contains all other pages, including the kernel. + We use two pagesets for one important reason: We need to make an atomic copy + of the kernel to ensure consistency of the image. Without a second pageset, + that would limit us to an image that was at most half the amount of memory + available. Using two pagesets allows us to store a full image. Since pageset + 2 pages won't be needed in saving pageset 1, we first save pageset 2 pages. + We can then make our atomic copy of the remaining pages using both pageset 2 + pages and any other pages that are free. While saving both pagesets, we are + careful not to corrupt the image. Among other things, we use lowlevel block + I/O routines that don't change the pagecache contents. + + The next step, then, is writing pageset 2. + + e. Suspending drivers and storing processor context. + + Having written pageset2, TuxOnIce calls the power management functions to + notify drivers of the hibernation, and saves the processor state in preparation + for the atomic copy of memory we are about to make. + + f. Atomic copy. + + At this stage, everything else but the TuxOnIce code is halted. Processes + are frozen or idling, drivers are quiesced and have stored (ideally and where + necessary) their configuration in memory we are about to atomically copy. 
+
+ In our lowlevel architecture specific code, we have saved the CPU state.
+ We can therefore now do our atomic copy before resuming drivers etc.
+
+ g. Save the atomic copy (pageset 1).
+
+ TuxOnIce can then write the atomic copy of the remaining pages. Since we
+ have copied the pages into other locations, we can continue to use the
+ normal block I/O routines without fear of corrupting our image.
+
+ h. Save the image header.
+
+ Nearly there! We save our settings and other parameters needed for
+ reloading pageset 1 in an 'image header'. We also tell our allocator to
+ serialise its data at this stage, so that it can reread the image at resume
+ time.
+
+ i. Set the image header.
+
+ Finally, we edit the header at our resume= location. The signature is
+ changed by the allocator to reflect the fact that an image exists, and to
+ point to the start of that data if necessary (swap allocator).
+
+ j. Power down.
+
+ Or reboot if we're debugging and the appropriate option is selected.
+
+ Whew!
+
+ Reloading the image.
+ --------------------
+
+ Reloading the image is essentially the reverse of all the above. We load
+ our copy of pageset 1, being careful to choose locations that aren't going
+ to be overwritten as we copy it back (we start very early in the boot
+ process, so there are no other processes to quiesce here). We then copy
+ pageset 1 back to its original location in memory and restore the process
+ context. We are now running with the original kernel. Next, we reload the
+ pageset 2 pages, free the memory and swap used by TuxOnIce, restore
+ the pageset header and restart processes. Sounds easy in comparison to
+ hibernating, doesn't it!
+
+ There is of course more to TuxOnIce than this, but this explanation
+ should be a good start. If there's interest, I'll write further
+ documentation on range pages and the low level I/O.
+
+11. Who wrote TuxOnIce?
+
+ (Answer based on the writings of Florent Chabaud, credits in files and
+ Nigel's limited knowledge; apologies to anyone missed out!)
+
+ The main developers of TuxOnIce have been...
+
+ Gabor Kuti
+ Pavel Machek
+ Florent Chabaud
+ Bernard Blackham
+ Nigel Cunningham
+
+ Significant portions of swsusp, the code in the vanilla kernel which
+ TuxOnIce enhances, have been worked on by Rafael Wysocki. Thanks should
+ also be expressed to him.
+
+ The above-mentioned developers have been aided in their efforts by a host
+ of hundreds, if not thousands, of testers and people who have submitted bug
+ fixes & suggestions. Of special note are the efforts of Michael Frank, who
+ had his computers repetitively hibernate and resume for literally tens of
+ thousands of cycles and developed scripts to stress the system and test
+ TuxOnIce far beyond the point most of us (Nigel included!) would consider
+ testing. His efforts have contributed as much to TuxOnIce as any of the
+ names above.
diff --git b/Documentation/scheduler/sched-BFS.txt b/Documentation/scheduler/sched-BFS.txt
new file mode 100644
index 0000000..c028200
--- /dev/null
+++ b/Documentation/scheduler/sched-BFS.txt
@@ -0,0 +1,351 @@
+BFS - The Brain Fuck Scheduler by Con Kolivas.
+
+Goals.
+
+The goal of the Brain Fuck Scheduler, referred to as BFS from here on, is to
+completely do away with the complex designs of the past for the cpu process
+scheduler and instead implement one that is very simple in basic design.
+The main focus of BFS is to achieve excellent desktop interactivity and +responsiveness without heuristics and tuning knobs that are difficult to +understand, impossible to model and predict the effect of, and when tuned to +one workload cause massive detriment to another. + + +Design summary. + +BFS is best described as a single runqueue, O(n) lookup, earliest effective +virtual deadline first design, loosely based on EEVDF (earliest eligible virtual +deadline first) and my previous Staircase Deadline scheduler. Each component +shall be described in order to understand the significance of, and reasoning for +it. The codebase when the first stable version was released was approximately +9000 lines less code than the existing mainline linux kernel scheduler (in +2.6.31). This does not even take into account the removal of documentation and +the cgroups code that is not used. + +Design reasoning. + +The single runqueue refers to the queued but not running processes for the +entire system, regardless of the number of CPUs. The reason for going back to +a single runqueue design is that once multiple runqueues are introduced, +per-CPU or otherwise, there will be complex interactions as each runqueue will +be responsible for the scheduling latency and fairness of the tasks only on its +own runqueue, and to achieve fairness and low latency across multiple CPUs, any +advantage in throughput of having CPU local tasks causes other disadvantages. +This is due to requiring a very complex balancing system to at best achieve some +semblance of fairness across CPUs and can only maintain relatively low latency +for tasks bound to the same CPUs, not across them. To increase said fairness +and latency across CPUs, the advantage of local runqueue locking, which makes +for better scalability, is lost due to having to grab multiple locks. + +A significant feature of BFS is that all accounting is done purely based on CPU +used and nowhere is sleep time used in any way to determine entitlement or +interactivity. Interactivity "estimators" that use some kind of sleep/run +algorithm are doomed to fail to detect all interactive tasks, and to falsely tag +tasks that aren't interactive as being so. The reason for this is that it is +close to impossible to determine that when a task is sleeping, whether it is +doing it voluntarily, as in a userspace application waiting for input in the +form of a mouse click or otherwise, or involuntarily, because it is waiting for +another thread, process, I/O, kernel activity or whatever. Thus, such an +estimator will introduce corner cases, and more heuristics will be required to +cope with those corner cases, introducing more corner cases and failed +interactivity detection and so on. Interactivity in BFS is built into the design +by virtue of the fact that tasks that are waking up have not used up their quota +of CPU time, and have earlier effective deadlines, thereby making it very likely +they will preempt any CPU bound task of equivalent nice level. See below for +more information on the virtual deadline mechanism. Even if they do not preempt +a running task, because the rr interval is guaranteed to have a bound upper +limit on how long a task will wait for, it will be scheduled within a timeframe +that will not cause visible interface jitter. + + +Design details. + +Task insertion. + +BFS inserts tasks into each relevant queue as an O(1) insertion into a double +linked list. 
On insertion, *every* running queue is checked to see if the newly +queued task can run on any idle queue, or preempt the lowest running task on the +system. This is how the cross-CPU scheduling of BFS achieves significantly lower +latency per extra CPU the system has. In this case the lookup is, in the worst +case scenario, O(n) where n is the number of CPUs on the system. + +Data protection. + +BFS has one single lock protecting the process local data of every task in the +global queue. Thus every insertion, removal and modification of task data in the +global runqueue needs to grab the global lock. However, once a task is taken by +a CPU, the CPU has its own local data copy of the running process' accounting +information which only that CPU accesses and modifies (such as during a +timer tick) thus allowing the accounting data to be updated lockless. Once a +CPU has taken a task to run, it removes it from the global queue. Thus the +global queue only ever has, at most, + + (number of tasks requesting cpu time) - (number of logical CPUs) + 1 + +tasks in the global queue. This value is relevant for the time taken to look up +tasks during scheduling. This will increase if many tasks with CPU affinity set +in their policy to limit which CPUs they're allowed to run on if they outnumber +the number of CPUs. The +1 is because when rescheduling a task, the CPU's +currently running task is put back on the queue. Lookup will be described after +the virtual deadline mechanism is explained. + +Virtual deadline. + +The key to achieving low latency, scheduling fairness, and "nice level" +distribution in BFS is entirely in the virtual deadline mechanism. The one +tunable in BFS is the rr_interval, or "round robin interval". This is the +maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy) +tasks of the same nice level will be running for, or looking at it the other +way around, the longest duration two tasks of the same nice level will be +delayed for. When a task requests cpu time, it is given a quota (time_slice) +equal to the rr_interval and a virtual deadline. The virtual deadline is +offset from the current time in jiffies by this equation: + + jiffies + (prio_ratio * rr_interval) + +The prio_ratio is determined as a ratio compared to the baseline of nice -20 +and increases by 10% per nice level. The deadline is a virtual one only in that +no guarantee is placed that a task will actually be scheduled by this time, but +it is used to compare which task should go next. There are three components to +how a task is next chosen. First is time_slice expiration. If a task runs out +of its time_slice, it is descheduled, the time_slice is refilled, and the +deadline reset to that formula above. Second is sleep, where a task no longer +is requesting CPU for whatever reason. The time_slice and deadline are _not_ +adjusted in this case and are just carried over for when the task is next +scheduled. Third is preemption, and that is when a newly waking task is deemed +higher priority than a currently running task on any cpu by virtue of the fact +that it has an earlier virtual deadline than the currently running task. The +earlier deadline is the key to which task is next chosen for the first and +second cases. Once a task is descheduled, it is put back on the queue, and an +O(n) lookup of all queued-but-not-running tasks is done to determine which has +the earliest deadline and that task is chosen to receive CPU next. 
+ +The CPU proportion of different nice tasks works out to be approximately the + + (prio_ratio difference)^2 + +The reason it is squared is that a task's deadline does not change while it is +running unless it runs out of time_slice. Thus, even if the time actually +passes the deadline of another task that is queued, it will not get CPU time +unless the current running task deschedules, and the time "base" (jiffies) is +constantly moving. + +Task lookup. + +BFS has 103 priority queues. 100 of these are dedicated to the static priority +of realtime tasks, and the remaining 3 are, in order of best to worst priority, +SCHED_ISO (isochronous), SCHED_NORMAL, and SCHED_IDLEPRIO (idle priority +scheduling). When a task of these priorities is queued, a bitmap of running +priorities is set showing which of these priorities has tasks waiting for CPU +time. When a CPU is made to reschedule, the lookup for the next task to get +CPU time is performed in the following way: + +First the bitmap is checked to see what static priority tasks are queued. If +any realtime priorities are found, the corresponding queue is checked and the +first task listed there is taken (provided CPU affinity is suitable) and lookup +is complete. If the priority corresponds to a SCHED_ISO task, they are also +taken in FIFO order (as they behave like SCHED_RR). If the priority corresponds +to either SCHED_NORMAL or SCHED_IDLEPRIO, then the lookup becomes O(n). At this +stage, every task in the runlist that corresponds to that priority is checked +to see which has the earliest set deadline, and (provided it has suitable CPU +affinity) it is taken off the runqueue and given the CPU. If a task has an +expired deadline, it is taken and the rest of the lookup aborted (as they are +chosen in FIFO order). + +Thus, the lookup is O(n) in the worst case only, where n is as described +earlier, as tasks may be chosen before the whole task list is looked over. + + +Scalability. + +The major limitations of BFS will be that of scalability, as the separate +runqueue designs will have less lock contention as the number of CPUs rises. +However they do not scale linearly even with separate runqueues as multiple +runqueues will need to be locked concurrently on such designs to be able to +achieve fair CPU balancing, to try and achieve some sort of nice-level fairness +across CPUs, and to achieve low enough latency for tasks on a busy CPU when +other CPUs would be more suited. BFS has the advantage that it requires no +balancing algorithm whatsoever, as balancing occurs by proxy simply because +all CPUs draw off the global runqueue, in priority and deadline order. Despite +the fact that scalability is _not_ the prime concern of BFS, it both shows very +good scalability to smaller numbers of CPUs and is likely a more scalable design +at these numbers of CPUs. + +It also has some very low overhead scalability features built into the design +when it has been deemed their overhead is so marginal that they're worth adding. +The first is the local copy of the running process' data to the CPU it's running +on to allow that data to be updated lockless where possible. Then there is +deference paid to the last CPU a task was running on, by trying that CPU first +when looking for an idle CPU to use the next time it's scheduled. Finally there +is the notion of cache locality beyond the last running CPU. The sched_domains +information is used to determine the relative virtual "cache distance" that +other CPUs have from the last CPU a task was running on. 
CPUs with shared +caches, such as SMT siblings, or multicore CPUs with shared caches, are treated +as cache local. CPUs without shared caches are treated as not cache local, and +CPUs on different NUMA nodes are treated as very distant. This "relative cache +distance" is used by modifying the virtual deadline value when doing lookups. +Effectively, the deadline is unaltered between "cache local" CPUs, doubled for +"cache distant" CPUs, and quadrupled for "very distant" CPUs. The reasoning +behind the doubling of deadlines is as follows. The real cost of migrating a +task from one CPU to another is entirely dependant on the cache footprint of +the task, how cache intensive the task is, how long it's been running on that +CPU to take up the bulk of its cache, how big the CPU cache is, how fast and +how layered the CPU cache is, how fast a context switch is... and so on. In +other words, it's close to random in the real world where we do more than just +one sole workload. The only thing we can be sure of is that it's not free. So +BFS uses the principle that an idle CPU is a wasted CPU and utilising idle CPUs +is more important than cache locality, and cache locality only plays a part +after that. Doubling the effective deadline is based on the premise that the +"cache local" CPUs will tend to work on the same tasks up to double the number +of cache local CPUs, and once the workload is beyond that amount, it is likely +that none of the tasks are cache warm anywhere anyway. The quadrupling for NUMA +is a value I pulled out of my arse. + +When choosing an idle CPU for a waking task, the cache locality is determined +according to where the task last ran and then idle CPUs are ranked from best +to worst to choose the most suitable idle CPU based on cache locality, NUMA +node locality and hyperthread sibling business. They are chosen in the +following preference (if idle): + +* Same core, idle or busy cache, idle threads +* Other core, same cache, idle or busy cache, idle threads. +* Same node, other CPU, idle cache, idle threads. +* Same node, other CPU, busy cache, idle threads. +* Same core, busy threads. +* Other core, same cache, busy threads. +* Same node, other CPU, busy threads. +* Other node, other CPU, idle cache, idle threads. +* Other node, other CPU, busy cache, idle threads. +* Other node, other CPU, busy threads. + +This shows the SMT or "hyperthread" awareness in the design as well which will +choose a real idle core first before a logical SMT sibling which already has +tasks on the physical CPU. + +Early benchmarking of BFS suggested scalability dropped off at the 16 CPU mark. +However this benchmarking was performed on an earlier design that was far less +scalable than the current one so it's hard to know how scalable it is in terms +of both CPUs (due to the global runqueue) and heavily loaded machines (due to +O(n) lookup) at this stage. Note that in terms of scalability, the number of +_logical_ CPUs matters, not the number of _physical_ CPUs. Thus, a dual (2x) +quad core (4X) hyperthreaded (2X) machine is effectively a 16X. Newer benchmark +results are very promising indeed, without needing to tweak any knobs, features +or options. Benchmark contributions are most welcome. + + +Features + +As the initial prime target audience for BFS was the average desktop user, it +was designed to not need tweaking, tuning or have features set to obtain benefit +from it. 
Thus the number of knobs and features has been kept to an absolute +minimum and should not require extra user input for the vast majority of cases. +There are precisely 2 tunables, and 2 extra scheduling policies. The rr_interval +and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO policies. In addition +to this, BFS also uses sub-tick accounting. What BFS does _not_ now feature is +support for CGROUPS. The average user should neither need to know what these +are, nor should they need to be using them to have good desktop behaviour. + +rr_interval + +There is only one "scheduler" tunable, the round robin interval. This can be +accessed in + + /proc/sys/kernel/rr_interval + +The value is in milliseconds, and the default value is set to 6 on a +uniprocessor machine, and automatically set to a progressively higher value on +multiprocessor machines. The reasoning behind increasing the value on more CPUs +is that the effective latency is decreased by virtue of there being more CPUs on +BFS (for reasons explained above), and increasing the value allows for less +cache contention and more throughput. Valid values are from 1 to 1000 +Decreasing the value will decrease latencies at the cost of decreasing +throughput, while increasing it will improve throughput, but at the cost of +worsening latencies. The accuracy of the rr interval is limited by HZ resolution +of the kernel configuration. Thus, the worst case latencies are usually slightly +higher than this actual value. The default value of 6 is not an arbitrary one. +It is based on the fact that humans can detect jitter at approximately 7ms, so +aiming for much lower latencies is pointless under most circumstances. It is +worth noting this fact when comparing the latency performance of BFS to other +schedulers. Worst case latencies being higher than 7ms are far worse than +average latencies not being in the microsecond range. + +Isochronous scheduling. + +Isochronous scheduling is a unique scheduling policy designed to provide +near-real-time performance to unprivileged (ie non-root) users without the +ability to starve the machine indefinitely. Isochronous tasks (which means +"same time") are set using, for example, the schedtool application like so: + + schedtool -I -e amarok + +This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO works +is that it has a priority level between true realtime tasks and SCHED_NORMAL +which would allow them to preempt all normal tasks, in a SCHED_RR fashion (ie, +if multiple SCHED_ISO tasks are running, they purely round robin at rr_interval +rate). However if ISO tasks run for more than a tunable finite amount of time, +they are then demoted back to SCHED_NORMAL scheduling. This finite amount of +time is the percentage of _total CPU_ available across the machine, configurable +as a percentage in the following "resource handling" tunable (as opposed to a +scheduler tunable): + + /proc/sys/kernel/iso_cpu + +and is set to 70% by default. It is calculated over a rolling 5 second average +Because it is the total CPU available, it means that on a multi CPU machine, it +is possible to have an ISO task running as realtime scheduling indefinitely on +just one CPU, as the other CPUs will be available. Setting this to 100 is the +equivalent of giving all users SCHED_RR access and setting it to 0 removes the +ability to run any pseudo-realtime tasks. 
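The same effect as the schedtool example above can be had from C by calling
sched_setscheduler() directly. The sketch below is illustrative only: glibc
does not define SCHED_ISO, and the policy value 4 is an assumption taken from
the slot reserved for SCHED_ISO in the kernel's uapi headers, so verify it
against include/uapi/linux/sched.h in this tree before relying on it.

    /*
     * Minimal sketch: request SCHED_ISO for the current process without
     * schedtool.  SCHED_ISO is not exported by glibc; the value 4 below is
     * an assumption based on the policy slot reserved for it in the kernel
     * headers -- check include/uapi/linux/sched.h of the patched tree.
     */
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef SCHED_ISO
    #define SCHED_ISO 4     /* assumed value, see note above */
    #endif

    int main(int argc, char **argv)
    {
            struct sched_param sp = { .sched_priority = 0 };

            if (sched_setscheduler(0, SCHED_ISO, &sp) == -1) {
                    perror("sched_setscheduler(SCHED_ISO)");
                    return 1;
            }
            /* run the real workload under the ISO policy */
            if (argc > 1)
                    execvp(argv[1], argv + 1);
            return 0;
    }

Invoked as, say, "./isorun amarok" (the binary name is hypothetical), the
child inherits the ISO policy; if the iso_cpu limit is exceeded, the task is
simply demoted to SCHED_NORMAL as described above.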
+ +A feature of BFS is that it detects when an application tries to obtain a +realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the +appropriate privileges to use those policies. When it detects this, it will +give the task SCHED_ISO policy instead. Thus it is transparent to the user. +Because some applications constantly set their policy as well as their nice +level, there is potential for them to undo the override specified by the user +on the command line of setting the policy to SCHED_ISO. To counter this, once +a task has been set to SCHED_ISO policy, it needs superuser privileges to set +it back to SCHED_NORMAL. This will ensure the task remains ISO and all child +processes and threads will also inherit the ISO policy. + +Idleprio scheduling. + +Idleprio scheduling is a scheduling policy designed to give out CPU to a task +_only_ when the CPU would be otherwise idle. The idea behind this is to allow +ultra low priority tasks to be run in the background that have virtually no +effect on the foreground tasks. This is ideally suited to distributed computing +clients (like setiathome, folding, mprime etc) but can also be used to start +a video encode or so on without any slowdown of other tasks. To avoid this +policy from grabbing shared resources and holding them indefinitely, if it +detects a state where the task is waiting on I/O, the machine is about to +suspend to ram and so on, it will transiently schedule them as SCHED_NORMAL. As +per the Isochronous task management, once a task has been scheduled as IDLEPRIO, +it cannot be put back to SCHED_NORMAL without superuser privileges. Tasks can +be set to start as SCHED_IDLEPRIO with the schedtool command like so: + + schedtool -D -e ./mprime + +Subtick accounting. + +It is surprisingly difficult to get accurate CPU accounting, and in many cases, +the accounting is done by simply determining what is happening at the precise +moment a timer tick fires off. This becomes increasingly inaccurate as the +timer tick frequency (HZ) is lowered. It is possible to create an application +which uses almost 100% CPU, yet by being descheduled at the right time, records +zero CPU usage. While the main problem with this is that there are possible +security implications, it is also difficult to determine how much CPU a task +really does use. BFS tries to use the sub-tick accounting from the TSC clock, +where possible, to determine real CPU usage. This is not entirely reliable, but +is far more likely to produce accurate CPU usage data than the existing designs +and will not show tasks as consuming no CPU usage when they actually are. Thus, +the amount of CPU reported as being used by BFS will more accurately represent +how much CPU the task itself is using (as is shown for example by the 'time' +application), so the reported values may be quite different to other schedulers. +Values reported as the 'load' are more prone to problems with this design, but +per process values are closer to real usage. When comparing throughput of BFS +to other designs, it is important to compare the actual completed work in terms +of total wall clock time taken and total work done, rather than the reported +"cpu usage". + + +Con Kolivas Fri Aug 27 2010 diff --git b/Documentation/scheduler/sched-MuQSS.txt b/Documentation/scheduler/sched-MuQSS.txt new file mode 100644 index 0000000..7c18444 --- /dev/null +++ b/Documentation/scheduler/sched-MuQSS.txt @@ -0,0 +1,347 @@ +MuQSS - The Multiple Queue Skiplist Scheduler by Con Kolivas. 
+ +MuQSS is a per-cpu runqueue variant of the original BFS scheduler with +one 8 level skiplist per runqueue, and fine grained locking for much more +scalability. + + +Goals. + +The goal of the Multiple Queue Skiplist Scheduler, referred to as MuQSS from +here on (pronounced mux) is to completely do away with the complex designs of +the past for the cpu process scheduler and instead implement one that is very +simple in basic design. The main focus of MuQSS is to achieve excellent desktop +interactivity and responsiveness without heuristics and tuning knobs that are +difficult to understand, impossible to model and predict the effect of, and when +tuned to one workload cause massive detriment to another, while still being +scalable to many CPUs and processes. + + +Design summary. + +MuQSS is best described as per-cpu multiple runqueue, O(log n) insertion, O(1) +lookup, earliest effective virtual deadline first tickless design, loosely based +on EEVDF (earliest eligible virtual deadline first) and my previous Staircase +Deadline scheduler, and evolved from the single runqueue O(n) BFS scheduler. +Each component shall be described in order to understand the significance of, +and reasoning for it. + + +Design reasoning. + +In BFS, the use of a single runqueue across all CPUs meant that each CPU would +need to scan the entire runqueue looking for the process with the earliest +deadline and schedule that next, regardless of which CPU it originally came +from. This made BFS deterministic with respect to latency and provided +guaranteed latencies dependent on number of processes and CPUs. The single +runqueue, however, meant that all CPUs would compete for the single lock +protecting it, which would lead to increasing lock contention as the number of +CPUs rose and appeared to limit scalability of common workloads beyond 16 +logical CPUs. Additionally, the O(n) lookup of the runqueue list obviously +increased overhead proportionate to the number of queued proecesses and led to +cache thrashing while iterating over the linked list. + +MuQSS is an evolution of BFS, designed to maintain the same scheduling +decision mechanism and be virtually deterministic without relying on the +constrained design of the single runqueue by splitting out the single runqueue +to be per-CPU and use skiplists instead of linked lists. + +The original reason for going back to a single runqueue design for BFS was that +once multiple runqueues are introduced, per-CPU or otherwise, there will be +complex interactions as each runqueue will be responsible for the scheduling +latency and fairness of the tasks only on its own runqueue, and to achieve +fairness and low latency across multiple CPUs, any advantage in throughput of +having CPU local tasks causes other disadvantages. This is due to requiring a +very complex balancing system to at best achieve some semblance of fairness +across CPUs and can only maintain relatively low latency for tasks bound to the +same CPUs, not across them. To increase said fairness and latency across CPUs, +the advantage of local runqueue locking, which makes for better scalability, is +lost due to having to grab multiple locks. + +MuQSS works around the problems inherent in multiple runqueue designs by +making its skip lists priority ordered and through novel use of lockless +examination of each other runqueue it can decide if it should take the earliest +deadline task from another runqueue for latency reasons, or for CPU balancing +reasons. 
It still does not have a balancing system, choosing to allow the +next task scheduling decision and task wakeup CPU choice to allow balancing to +happen by virtue of its choices. + + +Design details. + +Custom skip list implementation: + +To avoid the overhead of building up and tearing down skip list structures, +the variant used by MuQSS has a number of optimisations making it specific for +its use case in the scheduler. It uses static arrays of 8 'levels' instead of +building up and tearing down structures dynamically. This makes each runqueue +only scale O(log N) up to 64k tasks. However as there is one runqueue per CPU +it means that it scales O(log N) up to 64k x number of logical CPUs which is +far beyond the realistic task limits each CPU could handle. By being 8 levels +it also makes the array exactly one cacheline in size. Additionally, each +skip list node is bidirectional making insertion and removal amortised O(1), +being O(k) where k is 1-8. Uniquely, we are only ever interested in the very +first entry in each list at all times with MuQSS, so there is never a need to +do a search and thus look up is always O(1). In interactive mode, the queues +will be searched beyond their first entry if the first task is not suitable +for affinity or SMT nice reasons. + +Task insertion: + +MuQSS inserts tasks into a per CPU runqueue as an O(log N) insertion into +a custom skip list as described above (based on the original design by William +Pugh). Insertion is ordered in such a way that there is never a need to do a +search by ordering tasks according to static priority primarily, and then +virtual deadline at the time of insertion. + +Niffies: + +Niffies are a monotonic forward moving timer not unlike the "jiffies" but are +of nanosecond resolution. Niffies are calculated per-runqueue from the high +resolution TSC timers, and in order to maintain fairness are synchronised +between CPUs whenever both runqueues are locked concurrently. + +Virtual deadline: + +The key to achieving low latency, scheduling fairness, and "nice level" +distribution in MuQSS is entirely in the virtual deadline mechanism. The one +tunable in MuQSS is the rr_interval, or "round robin interval". This is the +maximum time two SCHED_OTHER (or SCHED_NORMAL, the common scheduling policy) +tasks of the same nice level will be running for, or looking at it the other +way around, the longest duration two tasks of the same nice level will be +delayed for. When a task requests cpu time, it is given a quota (time_slice) +equal to the rr_interval and a virtual deadline. The virtual deadline is +offset from the current time in niffies by this equation: + + niffies + (prio_ratio * rr_interval) + +The prio_ratio is determined as a ratio compared to the baseline of nice -20 +and increases by 10% per nice level. The deadline is a virtual one only in that +no guarantee is placed that a task will actually be scheduled by this time, but +it is used to compare which task should go next. There are three components to +how a task is next chosen. First is time_slice expiration. If a task runs out +of its time_slice, it is descheduled, the time_slice is refilled, and the +deadline reset to that formula above. Second is sleep, where a task no longer +is requesting CPU for whatever reason. The time_slice and deadline are _not_ +adjusted in this case and are just carried over for when the task is next +scheduled. 
Third is preemption, and that is when a newly waking task is deemed +higher priority than a currently running task on any cpu by virtue of the fact +that it has an earlier virtual deadline than the currently running task. The +earlier deadline is the key to which task is next chosen for the first and +second cases. + +The CPU proportion of different nice tasks works out to be approximately the + + (prio_ratio difference)^2 + +The reason it is squared is that a task's deadline does not change while it is +running unless it runs out of time_slice. Thus, even if the time actually +passes the deadline of another task that is queued, it will not get CPU time +unless the current running task deschedules, and the time "base" (niffies) is +constantly moving. + +Task lookup: + +As tasks are already pre-ordered according to anticipated scheduling order in +the skip lists, lookup for the next suitable task per-runqueue is always a +matter of simply selecting the first task in the 0th level skip list entry. +In order to maintain optimal latency and fairness across CPUs, MuQSS does a +novel examination of every other runqueue in cache locality order, choosing the +best task across all runqueues. This provides near-determinism of how long any +task across the entire system may wait before receiving CPU time. The other +runqueues are first examine lockless and then trylocked to minimise the +potential lock contention if they are likely to have a suitable better task. +Each other runqueue lock is only held for as long as it takes to examine the +entry for suitability. In "interactive" mode, the default setting, MuQSS will +look for the best deadline task across all CPUs, while in !interactive mode, +it will only select a better deadline task from another CPU if it is more +heavily laden than the current one. + +Lookup is therefore O(k) where k is number of CPUs. + + +Latency. + +Through the use of virtual deadlines to govern the scheduling order of normal +tasks, queue-to-activation latency per runqueue is guaranteed to be bound by +the rr_interval tunable which is set to 6ms by default. This means that the +longest a CPU bound task will wait for more CPU is proportional to the number +of running tasks and in the common case of 0-2 running tasks per CPU, will be +under the 7ms threshold for human perception of jitter. Additionally, as newly +woken tasks will have an early deadline from their previous runtime, the very +tasks that are usually latency sensitive will have the shortest interval for +activation, usually preempting any existing CPU bound tasks. + +Tickless expiry: + +A feature of MuQSS is that it is not tied to the resolution of the chosen tick +rate in Hz, instead depending entirely on the high resolution timers where +possible for sub-millisecond accuracy on timeouts regarless of the underlying +tick rate. This allows MuQSS to be run with the low overhead of low Hz rates +such as 100 by default, benefiting from the improved throughput and lower +power usage it provides. Another advantage of this approach is that in +combination with the Full No HZ option, which disables ticks on running task +CPUs instead of just idle CPUs, the tick can be disabled at all times +regardless of how many tasks are running instead of being limited to just one +running task. Note that this option is NOT recommended for regular desktop +users. + + +Scalability and balancing. 
+ +Unlike traditional approaches where balancing is a combination of CPU selection +at task wakeup and intermittent balancing based on a vast array of rules set +according to architecture, busyness calculations and special case management, +MuQSS indirectly balances on the fly at task wakeup and next task selection. +During initialisation, MuQSS creates a cache coherency ordered list of CPUs for +each logical CPU and uses this to aid task/CPU selection when CPUs are busy. +Additionally it selects any idle CPUs, if they are available, at any time over +busy CPUs according to the following preference: + + * Same thread, idle or busy cache, idle or busy threads + * Other core, same cache, idle or busy cache, idle threads. + * Same node, other CPU, idle cache, idle threads. + * Same node, other CPU, busy cache, idle threads. + * Other core, same cache, busy threads. + * Same node, other CPU, busy threads. + * Other node, other CPU, idle cache, idle threads. + * Other node, other CPU, busy cache, idle threads. + * Other node, other CPU, busy threads. + +Mux is therefore SMT, MC and Numa aware without the need for extra +intermittent balancing to maintain CPUs busy and make the most of cache +coherency. + + +Features + +As the initial prime target audience for MuQSS was the average desktop user, it +was designed to not need tweaking, tuning or have features set to obtain benefit +from it. Thus the number of knobs and features has been kept to an absolute +minimum and should not require extra user input for the vast majority of cases. +There are 3 optional tunables, and 2 extra scheduling policies. The rr_interval, +interactive, and iso_cpu tunables, and the SCHED_ISO and SCHED_IDLEPRIO +policies. In addition to this, MuQSS also uses sub-tick accounting. What MuQSS +does _not_ now feature is support for CGROUPS. The average user should neither +need to know what these are, nor should they need to be using them to have good +desktop behaviour. However since some applications refuse to work without +cgroups, one can enable them with MuQSS as a stub and the filesystem will be +created which will allow the applications to work. + +rr_interval: + + /proc/sys/kernel/rr_interval + +The value is in milliseconds, and the default value is set to 6. Valid values +are from 1 to 1000 Decreasing the value will decrease latencies at the cost of +decreasing throughput, while increasing it will improve throughput, but at the +cost of worsening latencies. It is based on the fact that humans can detect +jitter at approximately 7ms, so aiming for much lower latencies is pointless +under most circumstances. It is worth noting this fact when comparing the +latency performance of MuQSS to other schedulers. Worst case latencies being +higher than 7ms are far worse than average latencies not being in the +microsecond range. + +interactive: + + /proc/sys/kernel/interactive + +The value is a simple boolean of 1 for on and 0 for off and is set to on by +default. Disabling this will disable the near-determinism of MuQSS when +selecting the next task by not examining all CPUs for the earliest deadline +task, or which CPU to wake to, instead prioritising CPU balancing for improved +throughput. Latency will still be bound by rr_interval, but on a per-CPU basis +instead of across the whole system. + +Isochronous scheduling: + +Isochronous scheduling is a unique scheduling policy designed to provide +near-real-time performance to unprivileged (ie non-root) users without the +ability to starve the machine indefinitely. 
Isochronous tasks (which means +"same time") are set using, for example, the schedtool application like so: + + schedtool -I -e amarok + +This will start the audio application "amarok" as SCHED_ISO. How SCHED_ISO works +is that it has a priority level between true realtime tasks and SCHED_NORMAL +which would allow them to preempt all normal tasks, in a SCHED_RR fashion (ie, +if multiple SCHED_ISO tasks are running, they purely round robin at rr_interval +rate). However if ISO tasks run for more than a tunable finite amount of time, +they are then demoted back to SCHED_NORMAL scheduling. This finite amount of +time is the percentage of CPU available per CPU, configurable as a percentage in +the following "resource handling" tunable (as opposed to a scheduler tunable): + +iso_cpu: + + /proc/sys/kernel/iso_cpu + +and is set to 70% by default. It is calculated over a rolling 5 second average +Because it is the total CPU available, it means that on a multi CPU machine, it +is possible to have an ISO task running as realtime scheduling indefinitely on +just one CPU, as the other CPUs will be available. Setting this to 100 is the +equivalent of giving all users SCHED_RR access and setting it to 0 removes the +ability to run any pseudo-realtime tasks. + +A feature of MuQSS is that it detects when an application tries to obtain a +realtime policy (SCHED_RR or SCHED_FIFO) and the caller does not have the +appropriate privileges to use those policies. When it detects this, it will +give the task SCHED_ISO policy instead. Thus it is transparent to the user. + + +Idleprio scheduling: + +Idleprio scheduling is a scheduling policy designed to give out CPU to a task +_only_ when the CPU would be otherwise idle. The idea behind this is to allow +ultra low priority tasks to be run in the background that have virtually no +effect on the foreground tasks. This is ideally suited to distributed computing +clients (like setiathome, folding, mprime etc) but can also be used to start a +video encode or so on without any slowdown of other tasks. To avoid this policy +from grabbing shared resources and holding them indefinitely, if it detects a +state where the task is waiting on I/O, the machine is about to suspend to ram +and so on, it will transiently schedule them as SCHED_NORMAL. Once a task has +been scheduled as IDLEPRIO, it cannot be put back to SCHED_NORMAL without +superuser privileges since it is effectively a lower scheduling policy. Tasks +can be set to start as SCHED_IDLEPRIO with the schedtool command like so: + +schedtool -D -e ./mprime + +Subtick accounting: + +It is surprisingly difficult to get accurate CPU accounting, and in many cases, +the accounting is done by simply determining what is happening at the precise +moment a timer tick fires off. This becomes increasingly inaccurate as the timer +tick frequency (HZ) is lowered. It is possible to create an application which +uses almost 100% CPU, yet by being descheduled at the right time, records zero +CPU usage. While the main problem with this is that there are possible security +implications, it is also difficult to determine how much CPU a task really does +use. Mux uses sub-tick accounting from the TSC clock to determine real CPU +usage. Thus, the amount of CPU reported as being used by MuQSS will more +accurately represent how much CPU the task itself is using (as is shown for +example by the 'time' application), so the reported values may be quite +different to other schedulers. 
When comparing throughput of MuQSS to other +designs, it is important to compare the actual completed work in terms of total +wall clock time taken and total work done, rather than the reported "cpu usage". + +Symmetric MultiThreading (SMT) aware nice: + +SMT, a.k.a. hyperthreading, is a very common feature on modern CPUs. While the +logical CPU count rises by adding thread units to each CPU core, allowing more +than one task to be run simultaneously on the same core, the disadvantage of it +is that the CPU power is shared between the tasks, not summating to the power +of two CPUs. The practical upshot of this is that two tasks running on +separate threads of the same core run significantly slower than if they had one +core each to run on. While smart CPU selection allows each task to have a core +to itself whenever available (as is done on MuQSS), it cannot offset the +slowdown that occurs when the cores are all loaded and only a thread is left. +Most of the time this is harmless as the CPU is effectively overloaded at this +point and the extra thread is of benefit. However when running a niced task in +the presence of an un-niced task (say nice 19 v nice 0), the nice task gets +precisely the same amount of CPU power as the unniced one. MuQSS has an +optional configuration feature known as SMT-NICE which selectively idles the +secondary niced thread for a period proportional to the nice difference, +allowing CPU distribution according to nice level to be maintained, at the +expense of a small amount of extra overhead. If this is configured in on a +machine without SMT threads, the overhead is minimal. + + +Con Kolivas Sat, 29th October 2016 diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt index 694968c..6c4a94a 100644 --- a/Documentation/sysctl/kernel.txt +++ b/Documentation/sysctl/kernel.txt @@ -39,6 +39,7 @@ show up in /proc/sys/kernel: - hung_task_timeout_secs - hung_task_warnings - kexec_load_disabled +- iso_cpu - kptr_restrict - l2cr [ PPC only ] - modprobe ==> Documentation/debugging-modules.txt @@ -73,6 +74,7 @@ show up in /proc/sys/kernel: - randomize_va_space - real-root-dev ==> Documentation/admin-guide/initrd.rst - reboot-cmd [ SPARC only ] +- rr_interval - rtsig-max - rtsig-nr - seccomp/ ==> Documentation/userspace-api/seccomp_filter.rst @@ -94,6 +96,7 @@ show up in /proc/sys/kernel: - unknown_nmi_panic - watchdog - watchdog_thresh +- yield_type - version ============================================================== @@ -396,6 +399,16 @@ When kptr_restrict is set to (2), kernel pointers printed using ============================================================== +iso_cpu: (MuQSS CPU scheduler only). + +This sets the percentage cpu that the unprivileged SCHED_ISO tasks can +run effectively at realtime priority, averaged over a rolling five +seconds over the -whole- system, meaning all cpus. + +Set to 70 (percent) by default. + +============================================================== + l2cr: (PPC only) This flag controls the L2 cache of G3 processor boards. If @@ -822,6 +835,20 @@ rebooting. ??? ============================================================== +rr_interval: (MuQSS CPU scheduler only) + +This is the smallest duration that any cpu process scheduling unit +will run for. Increasing this value can increase throughput of cpu +bound tasks substantially but at the expense of increased latencies +overall. Conversely decreasing it will decrease average and maximum +latencies but at the expense of throughput. 
This value is in +milliseconds and the default value chosen depends on the number of +cpus available at scheduler initialisation with a minimum of 6. + +Valid values are from 1-1000. + +============================================================== + rtsig-max & rtsig-nr: The file rtsig-max can be used to tune the maximum number @@ -1060,3 +1087,13 @@ The softlockup threshold is (2 * watchdog_thresh). Setting this tunable to zero will disable lockup detection altogether. ============================================================== + +yield_type: (MuQSS CPU scheduler only) + +This determines what type of yield calls to sched_yield will perform. + + 0: No yield. + 1: Yield only to better priority/deadline tasks. (default) + 2: Expire timeslice and recalculate deadline. + +============================================================== diff --git b/Documentation/tp_smapi.txt b/Documentation/tp_smapi.txt new file mode 100644 index 0000000..a249678 --- /dev/null +++ b/Documentation/tp_smapi.txt @@ -0,0 +1,275 @@ +tp_smapi version 0.42 +IBM ThinkPad hardware functions driver + +Author: Shem Multinymous +Project: http://sourceforge.net/projects/tpctl +Wiki: http://thinkwiki.org/wiki/tp_smapi +List: linux-thinkpad@linux-thinkpad.org + (http://mailman.linux-thinkpad.org/mailman/listinfo/linux-thinkpad) + +Description +----------- + +ThinkPad laptops include a proprietary interface called SMAPI BIOS +(System Management Application Program Interface) which provides some +hardware control functionality that is not accessible by other means. + +This driver exposes some features of the SMAPI BIOS through a sysfs +interface. It is suitable for newer models, on which SMAPI is invoked +through IO port writes. Older models use a different SMAPI interface; +for those, try the "thinkpad" module from the "tpctl" package. + +WARNING: +This driver uses undocumented features and direct hardware access. +It thus cannot be guaranteed to work, and may cause arbitrary damage +(especially on models it wasn't tested on). + + +Module parameters +----------------- + +thinkpad_ec module: + force_io=1 lets thinkpad_ec load on some recent ThinkPad models + (e.g., T400 and T500) whose BIOS's ACPI DSDT reserves the ports we need. +tp_smapi module: + debug=1 enables verbose dmesg output. + + +Usage +----- + +Control of battery charging thresholds (in percents of current full charge +capacity): + +# echo 40 > /sys/devices/platform/smapi/BAT0/start_charge_thresh +# echo 70 > /sys/devices/platform/smapi/BAT0/stop_charge_thresh +# cat /sys/devices/platform/smapi/BAT0/*_charge_thresh + + (This is useful since Li-Ion batteries wear out much faster at very + high or low charge levels. The driver will also keeps the thresholds + across suspend-to-disk with AC disconnected; this isn't done + automatically by the hardware.) + +Inhibiting battery charging for 17 minutes (overrides thresholds): + +# echo 17 > /sys/devices/platform/smapi/BAT0/inhibit_charge_minutes +# echo 0 > /sys/devices/platform/smapi/BAT0/inhibit_charge_minutes # stop +# cat /sys/devices/platform/smapi/BAT0/inhibit_charge_minutes + + (This can be used to control which battery is charged when using an + Ultrabay battery.) 
+ +Forcing battery discharging even if AC power available: + +# echo 1 > /sys/devices/platform/smapi/BAT0/force_discharge # start discharge +# echo 0 > /sys/devices/platform/smapi/BAT0/force_discharge # stop discharge +# cat /sys/devices/platform/smapi/BAT0/force_discharge + + (When AC is connected, forced discharging will automatically stop + when battery is fully depleted -- this is useful for calibration. + Also, this attribute can be used to control which battery is discharged + when both a system battery and an Ultrabay battery are connected.) + +Misc read-only battery status attributes (see note about HDAPS below): + +/sys/devices/platform/smapi/BAT0/installed # 0 or 1 +/sys/devices/platform/smapi/BAT0/state # idle/charging/discharging +/sys/devices/platform/smapi/BAT0/cycle_count # integer counter +/sys/devices/platform/smapi/BAT0/current_now # instantaneous current +/sys/devices/platform/smapi/BAT0/current_avg # last minute average +/sys/devices/platform/smapi/BAT0/power_now # instantaneous power +/sys/devices/platform/smapi/BAT0/power_avg # last minute average +/sys/devices/platform/smapi/BAT0/last_full_capacity # in mWh +/sys/devices/platform/smapi/BAT0/remaining_percent # remaining percent of energy (set by calibration) +/sys/devices/platform/smapi/BAT0/remaining_percent_error # error range of remaing_percent (not reset by calibration) +/sys/devices/platform/smapi/BAT0/remaining_running_time # in minutes, by last minute average power +/sys/devices/platform/smapi/BAT0/remaining_running_time_now # in minutes, by instantenous power +/sys/devices/platform/smapi/BAT0/remaining_charging_time # in minutes +/sys/devices/platform/smapi/BAT0/remaining_capacity # in mWh +/sys/devices/platform/smapi/BAT0/design_capacity # in mWh +/sys/devices/platform/smapi/BAT0/voltage # in mV +/sys/devices/platform/smapi/BAT0/design_voltage # in mV +/sys/devices/platform/smapi/BAT0/charging_max_current # max charging current +/sys/devices/platform/smapi/BAT0/charging_max_voltage # max charging voltage +/sys/devices/platform/smapi/BAT0/group{0,1,2,3}_voltage # see below +/sys/devices/platform/smapi/BAT0/manufacturer # string +/sys/devices/platform/smapi/BAT0/model # string +/sys/devices/platform/smapi/BAT0/barcoding # string +/sys/devices/platform/smapi/BAT0/chemistry # string +/sys/devices/platform/smapi/BAT0/serial # integer +/sys/devices/platform/smapi/BAT0/manufacture_date # YYYY-MM-DD +/sys/devices/platform/smapi/BAT0/first_use_date # YYYY-MM-DD +/sys/devices/platform/smapi/BAT0/temperature # in milli-Celsius +/sys/devices/platform/smapi/BAT0/dump # see below +/sys/devices/platform/smapi/ac_connected # 0 or 1 + +The BAT0/group{0,1,2,3}_voltage attribute refers to the separate cell groups +in each battery. For example, on the ThinkPad 600, X3x, T4x and R5x models, +the battery contains 3 cell groups in series, where each group consisting of 2 +or 3 cells connected in parallel. The voltage of each group is given by these +attributes, and their sum (roughly) equals the "voltage" attribute. +(The effective performance of the battery is determined by the weakest group, +i.e., the one those voltage changes most rapidly during dis/charging.) + +The "BAT0/dump" attribute gives a a hex dump of the raw status data, which +contains additional data now in the above (if you can figure it out). Some +unused values are autodetected and replaced by "--": + +In all of the above, replace BAT0 with BAT1 to address the 2nd battery (e.g. +in the UltraBay). 
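For monitoring, the read-only attributes above can be polled the same way.
This small sketch prints a few of them; only attributes documented in this
file are used, and the units are as given in the comments above:

    #include <stdio.h>

    /* print one BAT0 attribute, if present */
    static void show(const char *name)
    {
            char path[128], buf[64];
            FILE *f;

            snprintf(path, sizeof(path),
                     "/sys/devices/platform/smapi/BAT0/%s", name);
            f = fopen(path, "r");
            if (!f)
                    return;
            if (fgets(buf, sizeof(buf), f))
                    printf("%-20s %s", name, buf);
            fclose(f);
    }

    int main(void)
    {
            show("state");
            show("remaining_percent");
            show("temperature");   /* milli-Celsius, as noted above */
            return 0;
    }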
+ + +Raw SMAPI calls: + +/sys/devices/platform/smapi/smapi_request +This performs raw SMAPI calls. It uses a bad interface that cannot handle +multiple simultaneous access. Don't touch it, it's for development only. +If you did touch it, you would so something like +# echo '211a 100 0 0' > /sys/devices/platform/smapi/smapi_request +# cat /sys/devices/platform/smapi/smapi_request +and notice that in the output "211a 34b b2 0 0 0 'OK'", the "4b" in the 2nd +value, converted to decimal is 75: the current charge stop threshold. + + +Model-specific status +--------------------- + +Works (at least partially) on the following ThinkPad model: +* A30 +* G41 +* R40, R50p, R51, R52 +* T23, T40, T40p, T41, T41p, T42, T42p, T43, T43p, T60, T61, T400, T410, T420 (partially) +* X24, X31, X32, X40, X41, X60, X61, X200, X201, X220 (partially) +* Z60t, Z61m + +Does not work on: +* X230 and newer +* T430 and newer +* Any ThinkPad Edge +* Any ThinkPad Yoga +* Any ThinkPad L series +* Any ThinkPad P series + +Not all functions are available on all models; for detailed status, see: + http://thinkwiki.org/wiki/tp_smapi + +Please report success/failure by e-mail or on the Wiki. +If you get a "not implemented" or "not supported" message, your laptop +probably just can't do that (at least not via the SMAPI BIOS). +For negative reports, follow the bug reporting guidelines below. +If you send me the necessary technical data (i.e., SMAPI function +interfaces), I will support additional models. + + +Additional HDAPS features +------------------------- + +The modified hdaps driver has several improvements on the one in mainline +(beyond resolving the conflict with thinkpad_ec and tp_smapi): + +- Fixes reliability and improves support for recent ThinkPad models + (especially *60 and newer). Unlike the mainline driver, the modified hdaps + correctly follows the Embedded Controller communication protocol. + +- Extends the "invert" parameter to cover all possible axis orientations. + The possible values are as follows. + Let X,Y denote the hardware readouts. + Let R denote the laptop's roll (tilt left/right). + Let P denote the laptop's pitch (tilt forward/backward). + invert=0: R= X P= Y (same as mainline) + invert=1: R=-X P=-Y (same as mainline) + invert=2: R=-X P= Y (new) + invert=3: R= X P=-Y (new) + invert=4: R= Y P= X (new) + invert=5: R=-Y P=-X (new) + invert=6: R=-Y P= X (new) + invert=7: R= Y P=-X (new) + It's probably easiest to just try all 8 possibilities and see which yields + correct results (e.g., in the hdaps-gl visualisation). + +- Adds a whitelist which automatically sets the correct axis orientation for + some models. If the value for your model is wrong or missing, you can override + it using the "invert" parameter. Please also update the tables at + http://www.thinkwiki.org/wiki/tp_smapi and + http://www.thinkwiki.org/wiki/List_of_DMI_IDs + and submit a patch for the whitelist in hdaps.c. + +- Provides new attributes: + /sys/devices/platform/hdaps/sampling_rate: + This determines the frequency at which the host queries the embedded + controller for accelerometer data (and informs the hdaps input devices). + Default=50. + /sys/devices/platform/hdaps/oversampling_ratio: + When set to X, the embedded controller is told to do physical accelerometer + measurements at a rate that is X times higher than the rate at which + the driver reads those measurements (i.e., X*sampling_rate). 
This + makes the readouts from the embedded controller more fresh, and is also + useful for the running average filter (see next). Default=5 + /sys/devices/platform/hdaps/running_avg_filter_order: + When set to X, reported readouts will be the average of the last X physical + accelerometer measurements. Current firmware allows 1<=X<=8. Setting to a + high value decreases readout fluctuations. The averaging is handled by the + embedded controller, so no CPU resources are used. Higher values make the + readouts smoother, since it averages out both sensor noise (good) and abrupt + changes (bad). Default=2. + +- Provides a second input device, which publishes the raw accelerometer + measurements (without the fuzzing needed for joystick emulation). This input + device can be matched by a udev rule such as the following (all on one line): + KERNEL=="event[0-9]*", ATTRS{phys}=="hdaps/input1", + ATTRS{modalias}=="input:b0019v1014p5054e4801-*", + SYMLINK+="input/hdaps/accelerometer-event + +A new version of the hdapsd userspace daemon, which uses the input device +interface instead of polling sysfs, is available seprately. Using this reduces +the total interrupts per second generated by hdaps+hdapsd (on tickless kernels) +to 50, down from a value that fluctuates between 50 and 100. Set the +sampling_rate sysfs attribute to a lower value to further reduce interrupts, +at the expense of response latency. + +Licensing note: all my changes to the HDAPS driver are licensed under the +GPL version 2 or, at your option and to the extent allowed by derivation from +prior works, any later version. My version of hdaps is derived work from the +mainline version, which at the time of writing is available only under +GPL version 2. + +Bug reporting +------------- + +Mail . Please include: +* Details about your model, +* Relevant "dmesg" output. Make sure thinkpad_ec and tp_smapi are loaded with + the "debug=1" parameter (e.g., use "make load HDAPS=1 DEBUG=1"). +* Output of "dmidecode | grep -C5 Product" +* Does the failed functionality works under Windows? + + +More about SMAPI +---------------- + +For hints about what may be possible via the SMAPI BIOS and how, see: + +* IBM Technical Reference Manual for the ThinkPad 770 + (http://www-307.ibm.com/pc/support/site.wss/document.do?lndocid=PFAN-3TUQQD) +* Exported symbols in PWRMGRIF.DLL or TPPWRW32.DLL (e.g., use "objdump -x"). +* drivers/char/mwave/smapi.c in the Linux kernel tree.* +* The "thinkpad" SMAPI module (http://tpctl.sourceforge.net). +* The SMAPI_* constants in tp_smapi.c. + +Note that in the above Technical Reference and in the "thinkpad" module, +SMAPI is invoked through a function call to some physical address. However, +the interface used by tp_smapi and the above mwave drive, and apparently +required by newer ThinkPad, is different: you set the parameters up in the +CPU's registers and write to ports 0xB2 (the APM control port) and 0x4F; this +triggers an SMI (System Management Interrupt), causing the CPU to enter +SMM (System Management Mode) and run the BIOS firmware; the results are +returned in the CPU's registers. It is not clear what is the relation between +the two variants of SMAPI, though the assignment of error codes seems to be +similar. + +In addition, the embedded controller on ThinkPad laptops has a non-standard +interface at IO ports 0x1600-0x161F (mapped to LCP channel 3 of the H8S chip). +The interface provides various system management services (currently known: +battery information and accelerometer readouts). 
For more information see the +thinkpad_ec module and the H8S hardware documentation: +http://documentation.renesas.com/eng/products/mpumcu/rej09b0300_2140bhm.pdf diff --git a/Documentation/vm/00-INDEX b/Documentation/vm/00-INDEX index 11d3d8d..5269998 100644 --- a/Documentation/vm/00-INDEX +++ b/Documentation/vm/00-INDEX @@ -20,6 +20,8 @@ idle_page_tracking.txt - description of the idle page tracking feature. ksm.txt - how to use the Kernel Samepage Merging feature. +uksm.txt + - Introduction to Ultra KSM numa - information about NUMA specific code in the Linux vm. numa_memory_policy.txt diff --git b/Documentation/vm/uksm.txt b/Documentation/vm/uksm.txt new file mode 100644 index 0000000..be19a31 --- /dev/null +++ b/Documentation/vm/uksm.txt @@ -0,0 +1,61 @@ +The Ultra Kernel Samepage Merging feature +---------------------------------------------- +/* + * Ultra KSM. Copyright (C) 2011-2012 Nai Xia + * + * This is an improvement upon KSM. Some basic data structures and routines + * are borrowed from ksm.c . + * + * Its new features: + * 1. Full system scan: + * It automatically scans all user processes' anonymous VMAs. Kernel-user + * interaction to submit a memory area to KSM is no longer needed. + * + * 2. Rich area detection: + * It automatically detects rich areas containing abundant duplicated + * pages based. Rich areas are given a full scan speed. Poor areas are + * sampled at a reasonable speed with very low CPU consumption. + * + * 3. Ultra Per-page scan speed improvement: + * A new hash algorithm is proposed. As a result, on a machine with + * Core(TM)2 Quad Q9300 CPU in 32-bit mode and 800MHZ DDR2 main memory, it + * can scan memory areas that does not contain duplicated pages at speed of + * 627MB/sec ~ 2445MB/sec and can merge duplicated areas at speed of + * 477MB/sec ~ 923MB/sec. + * + * 4. Thrashing area avoidance: + * Thrashing area(an VMA that has frequent Ksm page break-out) can be + * filtered out. My benchmark shows it's more efficient than KSM's per-page + * hash value based volatile page detection. + * + * + * 5. Misc changes upon KSM: + * * It has a fully x86-opitmized memcmp dedicated for 4-byte-aligned page + * comparison. It's much faster than default C version on x86. + * * rmap_item now has an struct *page member to loosely cache a + * address-->page mapping, which reduces too much time-costly + * follow_page(). + * * The VMA creation/exit procedures are hooked to let the Ultra KSM know. + * * try_to_merge_two_pages() now can revert a pte if it fails. No break_ + * ksm is needed for this case. + * + * 6. Full Zero Page consideration(contributed by Figo Zhang) + * Now uksmd consider full zero pages as special pages and merge them to an + * special unswappable uksm zero page. + */ + +ChangeLog: + +2012-05-05 The creation of this Doc +2012-05-08 UKSM 0.1.1.1 libc crash bug fix, api clean up, doc clean up. +2012-05-28 UKSM 0.1.1.2 bug fix release +2012-06-26 UKSM 0.1.2-beta1 first beta release for 0.1.2 +2012-07-2 UKSM 0.1.2-beta2 +2012-07-10 UKSM 0.1.2-beta3 +2012-07-26 UKSM 0.1.2 Fine grained speed control, more scan optimization. +2012-10-13 UKSM 0.1.2.1 Bug fixes. +2012-12-31 UKSM 0.1.2.2 Minor bug fixes. +2014-07-02 UKSM 0.1.2.3 Fix a " __this_cpu_read() in preemptible bug". +2015-04-22 UKSM 0.1.2.4 Fix a race condition that can sometimes trigger anonying warnings. +2016-09-10 UKSM 0.1.2.5 Fix a bug in dedup ratio calculation. +2017-02-26 UKSM 0.1.2.6 Fix a bug in hugetlbpage handling and a race bug with page migration. 
diff --git a/MAINTAINERS b/MAINTAINERS index 2811a21..44d41ad 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -2465,6 +2465,19 @@ F: include/linux/audit.h F: include/uapi/linux/audit.h F: kernel/audit* +AUFS (advanced multi layered unification filesystem) FILESYSTEM +M: "J. R. Okajima" +L: linux-unionfs@vger.kernel.org +L: aufs-users@lists.sourceforge.net (members only) +W: http://aufs.sourceforge.net +T: git://github.com/sfjro/aufs4-linux.git +S: Supported +F: Documentation/filesystems/aufs/ +F: Documentation/ABI/testing/debugfs-aufs +F: Documentation/ABI/testing/sysfs-aufs +F: fs/aufs/ +F: include/uapi/linux/aufs_type.h + AUXILIARY DISPLAY DRIVERS M: Miguel Ojeda Sandonis W: http://miguelojeda.es/auxdisplay.htm @@ -13700,6 +13713,13 @@ S: Maintained F: drivers/tc/ F: include/linux/tc.h +TUXONICE (ENHANCED HIBERNATION) +P: Nigel Cunningham +M: nigel@nigelcunningham.com.au +L: tuxonice-devel@lists.nigelcunningham.com.au +W: http://tuxonice.net +S: Maintained + TW5864 VIDEO4LINUX DRIVER M: Bluecherry Maintainers M: Anton Sviridenko diff --git a/Makefile b/Makefile index 9b176d5..66a2c8c 100644 --- a/Makefile +++ b/Makefile @@ -640,12 +640,16 @@ ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE KBUILD_CFLAGS += $(call cc-option,-Oz,-Os) KBUILD_CFLAGS += $(call cc-disable-warning,maybe-uninitialized,) else +ifdef CONFIG_CC_OPTIMIZE_HARDER +KBUILD_CFLAGS += -O3 $(call cc-disable-warning,maybe-uninitialized,) +else ifdef CONFIG_PROFILE_ALL_BRANCHES KBUILD_CFLAGS += -O2 $(call cc-disable-warning,maybe-uninitialized,) else KBUILD_CFLAGS += -O2 endif endif +endif KBUILD_CFLAGS += $(call cc-ifversion, -lt, 0409, \ $(call cc-disable-warning,maybe-uninitialized,)) diff --git a/arch/powerpc/platforms/cell/spufs/sched.c b/arch/powerpc/platforms/cell/spufs/sched.c index 1fbb5da..29a929e 100644 --- a/arch/powerpc/platforms/cell/spufs/sched.c +++ b/arch/powerpc/platforms/cell/spufs/sched.c @@ -64,11 +64,6 @@ static struct task_struct *spusched_task; static struct timer_list spusched_timer; static struct timer_list spuloadavg_timer; -/* - * Priority of a normal, non-rt, non-niced'd process (aka nice level 0). - */ -#define NORMAL_PRIO 120 - /* * Frequency of the spu scheduler tick. By default we do one SPU scheduler * tick for every 10 CPU scheduler ticks. diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig index 9bceea6..2ca843e 100644 --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -947,10 +947,26 @@ config SCHED_SMT depends on SMP ---help--- SMT scheduler support improves the CPU scheduler's decision making - when dealing with Intel Pentium 4 chips with HyperThreading at a + when dealing with Intel P4/Core 2 chips with HyperThreading at a cost of slightly increased overhead in some places. If unsure say N here. +config SMT_NICE + bool "SMT (Hyperthreading) aware nice priority and policy support" + depends on SCHED_MUQSS && SCHED_SMT + default y + ---help--- + Enabling Hyperthreading on Intel CPUs decreases the effectiveness + of the use of 'nice' levels and different scheduling policies + (e.g. realtime) due to sharing of CPU power between hyperthreads. + SMT nice support makes each logical CPU aware of what is running on + its hyperthread siblings, maintaining appropriate distribution of + CPU according to nice levels and scheduling policies at the expense + of slightly increased overhead. + + If unsure say Y here. 
+ + config SCHED_MC def_bool y prompt "Multi-core scheduler support" diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu index 65a9a47..90ec5b9 100644 --- a/arch/x86/Kconfig.cpu +++ b/arch/x86/Kconfig.cpu @@ -116,6 +116,7 @@ config MPENTIUMM config MPENTIUM4 bool "Pentium-4/Celeron(P4-based)/Pentium-4 M/older Xeon" depends on X86_32 + select X86_P6_NOP ---help--- Select this for Intel Pentium 4 chips. This includes the Pentium 4, Pentium D, P4-based Celeron and Xeon, and @@ -148,9 +149,8 @@ config MPENTIUM4 -Paxville -Dempsey - config MK6 - bool "K6/K6-II/K6-III" + bool "AMD K6/K6-II/K6-III" depends on X86_32 ---help--- Select this for an AMD K6-family processor. Enables use of @@ -158,7 +158,7 @@ config MK6 flags to GCC. config MK7 - bool "Athlon/Duron/K7" + bool "AMD Athlon/Duron/K7" depends on X86_32 ---help--- Select this for an AMD Athlon K7-family processor. Enables use of @@ -166,12 +166,83 @@ config MK7 flags to GCC. config MK8 - bool "Opteron/Athlon64/Hammer/K8" + bool "AMD Opteron/Athlon64/Hammer/K8" ---help--- Select this for an AMD Opteron or Athlon64 Hammer-family processor. Enables use of some extended instructions, and passes appropriate optimization flags to GCC. +config MK8SSE3 + bool "AMD Opteron/Athlon64/Hammer/K8 with SSE3" + ---help--- + Select this for improved AMD Opteron or Athlon64 Hammer-family processors. + Enables use of some extended instructions, and passes appropriate + optimization flags to GCC. + +config MK10 + bool "AMD 61xx/7x50/PhenomX3/X4/II/K10" + ---help--- + Select this for an AMD 61xx Eight-Core Magny-Cours, Athlon X2 7x50, + Phenom X3/X4/II, Athlon II X2/X3/X4, or Turion II-family processor. + Enables use of some extended instructions, and passes appropriate + optimization flags to GCC. + +config MBARCELONA + bool "AMD Barcelona" + ---help--- + Select this for AMD Family 10h Barcelona processors. + + Enables -march=barcelona + +config MBOBCAT + bool "AMD Bobcat" + ---help--- + Select this for AMD Family 14h Bobcat processors. + + Enables -march=btver1 + +config MJAGUAR + bool "AMD Jaguar" + ---help--- + Select this for AMD Family 16h Jaguar processors. + + Enables -march=btver2 + +config MBULLDOZER + bool "AMD Bulldozer" + ---help--- + Select this for AMD Family 15h Bulldozer processors. + + Enables -march=bdver1 + +config MPILEDRIVER + bool "AMD Piledriver" + ---help--- + Select this for AMD Family 15h Piledriver processors. + + Enables -march=bdver2 + +config MSTEAMROLLER + bool "AMD Steamroller" + ---help--- + Select this for AMD Family 15h Steamroller processors. + + Enables -march=bdver3 + +config MEXCAVATOR + bool "AMD Excavator" + ---help--- + Select this for AMD Family 15h Excavator processors. + + Enables -march=bdver4 + +config MPCK + bool "AMD Zen" + ---help--- + Select this for AMD Family 17h Zen processors. + + Enables -march=znver1 + config MCRUSOE bool "Crusoe" depends on X86_32 @@ -253,6 +324,7 @@ config MVIAC7 config MPSC bool "Intel P4 / older Netburst based Xeon" + select X86_P6_NOP depends on X86_64 ---help--- Optimize for Intel Pentium 4, Pentium D and older Nocona/Dempsey @@ -262,8 +334,19 @@ config MPSC using the cpu family field in /proc/cpuinfo. Family 15 is an older Xeon, Family 6 a newer one. +config MATOM + bool "Intel Atom" + select X86_P6_NOP + ---help--- + + Select this for the Intel Atom platform. Intel Atom CPUs have an + in-order pipelining architecture and thus can benefit from + accordingly optimized code. Use a recent GCC with specific Atom + support in order to fully benefit from selecting this option. 
+ config MCORE2 - bool "Core 2/newer Xeon" + bool "Intel Core 2" + select X86_P6_NOP ---help--- Select this for Intel Core 2 and newer Core 2 Xeons (Xeon 51xx and @@ -271,14 +354,79 @@ config MCORE2 family in /proc/cpuinfo. Newer ones have 6 and older ones 15 (not a typo) -config MATOM - bool "Intel Atom" + Enables -march=core2 + +config MNEHALEM + bool "Intel Nehalem" + select X86_P6_NOP ---help--- - Select this for the Intel Atom platform. Intel Atom CPUs have an - in-order pipelining architecture and thus can benefit from - accordingly optimized code. Use a recent GCC with specific Atom - support in order to fully benefit from selecting this option. + Select this for 1st Gen Core processors in the Nehalem family. + + Enables -march=nehalem + +config MWESTMERE + bool "Intel Westmere" + select X86_P6_NOP + ---help--- + + Select this for the Intel Westmere formerly Nehalem-C family. + + Enables -march=westmere + +config MSILVERMONT + bool "Intel Silvermont" + select X86_P6_NOP + ---help--- + + Select this for the Intel Silvermont platform. + + Enables -march=silvermont + +config MSANDYBRIDGE + bool "Intel Sandy Bridge" + select X86_P6_NOP + ---help--- + + Select this for 2nd Gen Core processors in the Sandy Bridge family. + + Enables -march=sandybridge + +config MIVYBRIDGE + bool "Intel Ivy Bridge" + select X86_P6_NOP + ---help--- + + Select this for 3rd Gen Core processors in the Ivy Bridge family. + + Enables -march=ivybridge + +config MHASWELL + bool "Intel Haswell" + select X86_P6_NOP + ---help--- + + Select this for 4th Gen Core processors in the Haswell family. + + Enables -march=haswell + +config MBROADWELL + bool "Intel Broadwell" + select X86_P6_NOP + ---help--- + + Select this for 5th Gen Core processors in the Broadwell family. + + Enables -march=broadwell + +config MSKYLAKE + bool "Intel Skylake" + select X86_P6_NOP + ---help--- + + Select this for 6th Gen Core processors in the Skylake family. + + Enables -march=skylake config GENERIC_CPU bool "Generic-x86-64" @@ -287,6 +435,19 @@ config GENERIC_CPU Generic x86-64 CPU. Run equally well on all x86-64 CPUs. +config MNATIVE + bool "Native optimizations autodetected by GCC" + ---help--- + + GCC 4.2 and above support -march=native, which automatically detects + the optimum settings to use based on your processor. -march=native + also detects and applies additional settings beyond -march specific + to your CPU, (eg. -msse4). Unless you have a specific reason not to + (e.g. distcc cross-compiling), you should probably be using + -march=native rather than anything listed below. 
+ + Enables -march=native + endchoice config X86_GENERIC @@ -311,7 +472,7 @@ config X86_INTERNODE_CACHE_SHIFT config X86_L1_CACHE_SHIFT int default "7" if MPENTIUM4 || MPSC - default "6" if MK7 || MK8 || MPENTIUMM || MCORE2 || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU + default "6" if MK7 || MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MPCK || MJAGUAR || MPENTIUMM || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM || MVIAC7 || X86_GENERIC || GENERIC_CPU default "4" if MELAN || M486 || MGEODEGX1 default "5" if MWINCHIP3D || MWINCHIPC6 || MCRUSOE || MEFFICEON || MCYRIXIII || MK6 || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || M586 || MVIAC3_2 || MGEODE_LX @@ -342,45 +503,46 @@ config X86_ALIGNMENT_16 config X86_INTEL_USERCOPY def_bool y - depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK7 || MEFFICEON || MCORE2 + depends on MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M586MMX || X86_GENERIC || MK8 || MK8SSE3 || MK7 || MEFFICEON || MCORE2 || MK10 || MBARCELONA || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE config X86_USE_PPRO_CHECKSUM def_bool y - depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MATOM + depends on MWINCHIP3D || MWINCHIPC6 || MCYRIXIII || MK7 || MK6 || MK10 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MEFFICEON || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MATOM || MNATIVE config X86_USE_3DNOW def_bool y depends on (MCYRIXIII || MK7 || MGEODE_LX) && !UML -# -# P6_NOPs are a relatively minor optimization that require a family >= -# 6 processor, except that it is broken on certain VIA chips. -# Furthermore, AMD chips prefer a totally different sequence of NOPs -# (which work on all CPUs). In addition, it looks like Virtual PC -# does not understand them. -# -# As a result, disallow these if we're not compiling for X86_64 (these -# NOPs do work on all x86-64 capable chips); the list of processors in -# the right-hand clause are the cores that benefit from this optimization. -# config X86_P6_NOP - def_bool y - depends on X86_64 - depends on (MCORE2 || MPENTIUM4 || MPSC) + default n + bool "Support for P6_NOPs on Intel chips" + depends on (MCORE2 || MPENTIUM4 || MPSC || MATOM || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE) + ---help--- + P6_NOPs are a relatively minor optimization that require a family >= + 6 processor, except that it is broken on certain VIA chips. + Furthermore, AMD chips prefer a totally different sequence of NOPs + (which work on all CPUs). In addition, it looks like Virtual PC + does not understand them. + + As a result, disallow these if we're not compiling for X86_64 (these + NOPs do work on all x86-64 capable chips); the list of processors in + the right-hand clause are the cores that benefit from this optimization. + + Say Y if you have Intel CPU newer than Pentium Pro, N otherwise. 
config X86_TSC def_bool y - depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MATOM) || X86_64 + depends on (MWINCHIP3D || MCRUSOE || MEFFICEON || MCYRIXIII || MK7 || MK6 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || M586MMX || M586TSC || MK8 || MK8SSE3 || MVIAC3_2 || MVIAC7 || MGEODEGX1 || MGEODE_LX || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MNATIVE || MATOM) || X86_64 config X86_CMPXCHG64 def_bool y - depends on X86_PAE || X86_64 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM + depends on X86_PAE || X86_64 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MATOM || MNATIVE # this should be set for all -march=.. options where the compiler # generates cmov. config X86_CMOV def_bool y - depends on (MK8 || MK7 || MCORE2 || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MATOM || MGEODE_LX) + depends on (MK8 || MK8SSE3 || MK10 || MBARCELONA || MBOBCAT || MBULLDOZER || MPILEDRIVER || MSTEAMROLLER || MEXCAVATOR || MPCK || MJAGUAR || MK7 || MCORE2 || MNEHALEM || MWESTMERE || MSILVERMONT || MSANDYBRIDGE || MIVYBRIDGE || MHASWELL || MBROADWELL || MSKYLAKE || MPENTIUM4 || MPENTIUMM || MPENTIUMIII || MPENTIUMII || M686 || MVIAC3_2 || MVIAC7 || MCRUSOE || MEFFICEON || X86_64 || MNATIVE || MATOM || MGEODE_LX) config X86_MINIMUM_CPU_FAMILY int diff --git a/arch/x86/Makefile b/arch/x86/Makefile index a20eacd..0e66b52 100644 --- a/arch/x86/Makefile +++ b/arch/x86/Makefile @@ -124,13 +124,40 @@ else KBUILD_CFLAGS += $(call cc-option,-mskip-rax-setup) # FIXME - should be integrated in Makefile.cpu (Makefile_32.cpu) + cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native) cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8) + cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-mtune=k8) + cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10) + cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona) + cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1) + cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2) + cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1) + cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2) + cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3) + cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4) + cflags-$(CONFIG_MPCK) += $(call cc-option,-march=znver1) cflags-$(CONFIG_MPSC) += $(call cc-option,-march=nocona) cflags-$(CONFIG_MCORE2) += \ - $(call cc-option,-march=core2,$(call cc-option,-mtune=generic)) - cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom) \ - $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic)) + $(call cc-option,-march=core2,$(call cc-option,-mtune=core2)) + cflags-$(CONFIG_MNEHALEM) += \ + $(call cc-option,-march=nehalem,$(call cc-option,-mtune=nehalem)) + cflags-$(CONFIG_MWESTMERE) += \ + $(call cc-option,-march=westmere,$(call cc-option,-mtune=westmere)) + cflags-$(CONFIG_MSILVERMONT) += \ + $(call cc-option,-march=silvermont,$(call cc-option,-mtune=silvermont)) + cflags-$(CONFIG_MSANDYBRIDGE) += \ + $(call cc-option,-march=sandybridge,$(call 
cc-option,-mtune=sandybridge)) + cflags-$(CONFIG_MIVYBRIDGE) += \ + $(call cc-option,-march=ivybridge,$(call cc-option,-mtune=ivybridge)) + cflags-$(CONFIG_MHASWELL) += \ + $(call cc-option,-march=haswell,$(call cc-option,-mtune=haswell)) + cflags-$(CONFIG_MBROADWELL) += \ + $(call cc-option,-march=broadwell,$(call cc-option,-mtune=broadwell)) + cflags-$(CONFIG_MSKYLAKE) += \ + $(call cc-option,-march=skylake,$(call cc-option,-mtune=skylake)) + cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell) \ + $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic)) cflags-$(CONFIG_GENERIC_CPU) += $(call cc-option,-mtune=generic) KBUILD_CFLAGS += $(cflags-y) diff --git a/arch/x86/Makefile_32.cpu b/arch/x86/Makefile_32.cpu index 1f5faf8..92cf897 100644 --- a/arch/x86/Makefile_32.cpu +++ b/arch/x86/Makefile_32.cpu @@ -23,7 +23,18 @@ cflags-$(CONFIG_MK6) += -march=k6 # Please note, that patches that add -march=athlon-xp and friends are pointless. # They make zero difference whatsosever to performance at this time. cflags-$(CONFIG_MK7) += -march=athlon +cflags-$(CONFIG_MNATIVE) += $(call cc-option,-march=native) cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8,-march=athlon) +cflags-$(CONFIG_MK8SSE3) += $(call cc-option,-march=k8-sse3,-march=athlon) +cflags-$(CONFIG_MK10) += $(call cc-option,-march=amdfam10,-march=athlon) +cflags-$(CONFIG_MBARCELONA) += $(call cc-option,-march=barcelona,-march=athlon) +cflags-$(CONFIG_MBOBCAT) += $(call cc-option,-march=btver1,-march=athlon) +cflags-$(CONFIG_MJAGUAR) += $(call cc-option,-march=btver2,-march=athlon) +cflags-$(CONFIG_MBULLDOZER) += $(call cc-option,-march=bdver1,-march=athlon) +cflags-$(CONFIG_MPILEDRIVER) += $(call cc-option,-march=bdver2,-march=athlon) +cflags-$(CONFIG_MSTEAMROLLER) += $(call cc-option,-march=bdver3,-march=athlon) +cflags-$(CONFIG_MEXCAVATOR) += $(call cc-option,-march=bdver4,-march=athlon) +cflags-$(CONFIG_MPCK) += $(call cc-option,-march=znver1,-march=athlon) cflags-$(CONFIG_MCRUSOE) += -march=i686 -falign-functions=0 -falign-jumps=0 -falign-loops=0 cflags-$(CONFIG_MEFFICEON) += -march=i686 $(call tune,pentium3) -falign-functions=0 -falign-jumps=0 -falign-loops=0 cflags-$(CONFIG_MWINCHIPC6) += $(call cc-option,-march=winchip-c6,-march=i586) @@ -32,8 +43,16 @@ cflags-$(CONFIG_MCYRIXIII) += $(call cc-option,-march=c3,-march=i486) -falign-fu cflags-$(CONFIG_MVIAC3_2) += $(call cc-option,-march=c3-2,-march=i686) cflags-$(CONFIG_MVIAC7) += -march=i686 cflags-$(CONFIG_MCORE2) += -march=i686 $(call tune,core2) -cflags-$(CONFIG_MATOM) += $(call cc-option,-march=atom,$(call cc-option,-march=core2,-march=i686)) \ - $(call cc-option,-mtune=atom,$(call cc-option,-mtune=generic)) +cflags-$(CONFIG_MNEHALEM) += -march=i686 $(call tune,nehalem) +cflags-$(CONFIG_MWESTMERE) += -march=i686 $(call tune,westmere) +cflags-$(CONFIG_MSILVERMONT) += -march=i686 $(call tune,silvermont) +cflags-$(CONFIG_MSANDYBRIDGE) += -march=i686 $(call tune,sandybridge) +cflags-$(CONFIG_MIVYBRIDGE) += -march=i686 $(call tune,ivybridge) +cflags-$(CONFIG_MHASWELL) += -march=i686 $(call tune,haswell) +cflags-$(CONFIG_MBROADWELL) += -march=i686 $(call tune,broadwell) +cflags-$(CONFIG_MSKYLAKE) += -march=i686 $(call tune,skylake) +cflags-$(CONFIG_MATOM) += $(call cc-option,-march=bonnell,$(call cc-option,-march=core2,-march=i686)) \ + $(call cc-option,-mtune=bonnell,$(call cc-option,-mtune=generic)) # AMD Elan support cflags-$(CONFIG_MELAN) += -march=i486 diff --git a/arch/x86/include/asm/module.h b/arch/x86/include/asm/module.h index 8546faf..3d01189 100644 --- 
a/arch/x86/include/asm/module.h +++ b/arch/x86/include/asm/module.h @@ -25,6 +25,24 @@ struct mod_arch_specific { #define MODULE_PROC_FAMILY "586MMX " #elif defined CONFIG_MCORE2 #define MODULE_PROC_FAMILY "CORE2 " +#elif defined CONFIG_MNATIVE +#define MODULE_PROC_FAMILY "NATIVE " +#elif defined CONFIG_MNEHALEM +#define MODULE_PROC_FAMILY "NEHALEM " +#elif defined CONFIG_MWESTMERE +#define MODULE_PROC_FAMILY "WESTMERE " +#elif defined CONFIG_MSILVERMONT +#define MODULE_PROC_FAMILY "SILVERMONT " +#elif defined CONFIG_MSANDYBRIDGE +#define MODULE_PROC_FAMILY "SANDYBRIDGE " +#elif defined CONFIG_MIVYBRIDGE +#define MODULE_PROC_FAMILY "IVYBRIDGE " +#elif defined CONFIG_MHASWELL +#define MODULE_PROC_FAMILY "HASWELL " +#elif defined CONFIG_MBROADWELL +#define MODULE_PROC_FAMILY "BROADWELL " +#elif defined CONFIG_MSKYLAKE +#define MODULE_PROC_FAMILY "SKYLAKE " #elif defined CONFIG_MATOM #define MODULE_PROC_FAMILY "ATOM " #elif defined CONFIG_M686 @@ -43,6 +61,26 @@ struct mod_arch_specific { #define MODULE_PROC_FAMILY "K7 " #elif defined CONFIG_MK8 #define MODULE_PROC_FAMILY "K8 " +#elif defined CONFIG_MK8SSE3 +#define MODULE_PROC_FAMILY "K8SSE3 " +#elif defined CONFIG_MK10 +#define MODULE_PROC_FAMILY "K10 " +#elif defined CONFIG_MBARCELONA +#define MODULE_PROC_FAMILY "BARCELONA " +#elif defined CONFIG_MBOBCAT +#define MODULE_PROC_FAMILY "BOBCAT " +#elif defined CONFIG_MBULLDOZER +#define MODULE_PROC_FAMILY "BULLDOZER " +#elif defined CONFIG_MPILEDRIVER +#define MODULE_PROC_FAMILY "PILEDRIVER " +#elif defined CONFIG_MSTEAMROLLER +#define MODULE_PROC_FAMILY "STEAMROLLER " +#elif defined CONFIG_MJAGUAR +#define MODULE_PROC_FAMILY "JAGUAR " +#elif defined CONFIG_MEXCAVATOR +#define MODULE_PROC_FAMILY "EXCAVATOR " +#elif defined CONFIG_MPCK +#define MODULE_PROC_FAMILY "PCK " #elif defined CONFIG_MELAN #define MODULE_PROC_FAMILY "ELAN " #elif defined CONFIG_MCRUSOE diff --git a/arch/x86/kernel/espfix_64.c b/arch/x86/kernel/espfix_64.c index 9c4e7ba..3050e86 100644 --- a/arch/x86/kernel/espfix_64.c +++ b/arch/x86/kernel/espfix_64.c @@ -175,6 +175,7 @@ void init_espfix_ap(int cpu) struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0); pmd_p = (pmd_t *)page_address(page); + SetPageTOI_Untracked(virt_to_page(pmd_p)); pud = __pud(__pa(pmd_p) | (PGTABLE_PROT & ptemask)); paravirt_alloc_pmd(&init_mm, __pa(pmd_p) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PUD_CLONES; n++) @@ -187,6 +188,7 @@ void init_espfix_ap(int cpu) struct page *page = alloc_pages_node(node, PGALLOC_GFP, 0); pte_p = (pte_t *)page_address(page); + SetPageTOI_Untracked(virt_to_page(pte_p)); pmd = __pmd(__pa(pte_p) | (PGTABLE_PROT & ptemask)); paravirt_alloc_pte(&init_mm, __pa(pte_p) >> PAGE_SHIFT); for (n = 0; n < ESPFIX_PMD_CLONES; n++) @@ -195,6 +197,7 @@ void init_espfix_ap(int cpu) pte_p = pte_offset_kernel(&pmd, addr); stack_page = page_address(alloc_pages_node(node, GFP_KERNEL, 0)); + SetPageTOI_Untracked(virt_to_page(stack_page)); pte = __pte(__pa(stack_page) | ((__PAGE_KERNEL_RO | _PAGE_ENC) & ptemask)); for (n = 0; n < ESPFIX_PTE_CLONES; n++) set_pte(&pte_p[n*PTE_STRIDE], pte); diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c index ad2b925..a2529ae 100644 --- a/arch/x86/kernel/tsc.c +++ b/arch/x86/kernel/tsc.c @@ -13,6 +13,7 @@ #include #include #include +#include #include #include @@ -120,6 +121,10 @@ static void cyc2ns_init(int cpu) cyc2ns_data_init(&c2n->data[1]); seqcount_init(&c2n->seq); + + // Don't let TuxOnIce make data RO - a secondary CPU will cause a triple fault + // if it loads microcode, which then does a 
printk, which may end up invoking cycles_2_ns + SetPageTOI_Untracked(virt_to_page(c2n)); } static inline unsigned long long cycles_2_ns(unsigned long long cyc) diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c index b0ff378..8c6ef17 100644 --- a/arch/x86/mm/fault.c +++ b/arch/x86/mm/fault.c @@ -15,6 +15,7 @@ #include /* hstate_index_to_shift */ #include /* prefetchw */ #include /* exception_enter(), ... */ +#include /* incremental image support */ #include /* faulthandler_disabled() */ #include /* boot_cpu_has, ... */ @@ -762,6 +763,10 @@ no_context(struct pt_regs *regs, unsigned long error_code, unsigned long flags; int sig; + if (toi_make_writable(init_mm.pgd, address)) { + return; + } + /* Are we prepared to handle this kernel fault? */ if (fixup_exception(regs, X86_TRAP_PF)) { /* @@ -1083,10 +1088,101 @@ mm_fault_error(struct pt_regs *regs, unsigned long error_code, } } +#ifdef CONFIG_TOI_INCREMENTAL +/** + * _toi_do_cbw - Do a copy-before-write before letting the faulting process continue + */ +static void toi_do_cbw(struct page *page) +{ + struct toi_cbw_state *state = this_cpu_ptr(&toi_cbw_states); + + state->active = 1; + wmb(); + + if (state->enabled && state->next && PageTOI_CBW(page)) { + struct toi_cbw *this = state->next; + memcpy(this->virt, page_address(page), PAGE_SIZE); + this->pfn = page_to_pfn(page); + state->next = this->next; + } + + state->active = 0; +} + +/** + * _toi_make_writable - Defuse TOI's write protection + */ +int _toi_make_writable(pte_t *pte) +{ + struct page *page = pte_page(*pte); + if (PageTOI_RO(page)) { + pgd_t *pgd = __va(read_cr3()); + /* + * If this is a TuxOnIce caused fault, we may not have permission to + * write to a page needed to reset the permissions of the original + * page. Use swapper_pg_dir to get around this. + */ + load_cr3(swapper_pg_dir); + + set_pte_atomic(pte, pte_mkwrite(*pte)); + SetPageTOI_Dirty(page); + ClearPageTOI_RO(page); + + toi_do_cbw(page); + + load_cr3(pgd); + return 1; + } + return 0; +} + +/** + * toi_make_writable - Handle a (potential) fault caused by TOI's write protection + * + * Make a page writable that was protected. Might be because of a fault, or + * because we're allocating it and want it to be untracked. + * + * Note that in the fault handling case, we don't care about the error code. If + * called from the double fault handler, we won't have one. We just check to + * see if the page was made RO by TOI, and mark it dirty/release the protection + * if it was. + */ +int toi_make_writable(pgd_t *pgd, unsigned long address) +{ + pud_t *pud; + pmd_t *pmd; + pte_t *pte; + + pgd = pgd + pgd_index(address); + if (!pgd_present(*pgd)) + return 0; + + pud = pud_offset(pgd, address); + if (!pud_present(*pud)) + return 0; + + if (pud_large(*pud)) + return _toi_make_writable((pte_t *) pud); + + pmd = pmd_offset(pud, address); + if (!pmd_present(*pmd)) + return 0; + + if (pmd_large(*pmd)) + return _toi_make_writable((pte_t *) pmd); + + pte = pte_offset_kernel(pmd, address); + if (!pte_present(*pte)) + return 0; + + return _toi_make_writable(pte); +} +#endif + static int spurious_fault_check(unsigned long error_code, pte_t *pte) { if ((error_code & PF_WRITE) && !pte_write(*pte)) - return 0; + return 0; if ((error_code & PF_INSTR) && !pte_exec(*pte)) return 0; @@ -1280,6 +1376,15 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code, kmemcheck_hide(regs); prefetchw(&mm->mmap_sem); + /* + * Detect and handle page faults due to TuxOnIce making pages read-only + * so that it can create incremental images. 
+ * + * Do it early to avoid double faults. + */ + if (unlikely(toi_make_writable(init_mm.pgd, address))) + return; + if (unlikely(kmmio_fault(regs, address))) return; diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c index af5c1ed..77fe9c2 100644 --- a/arch/x86/mm/init.c +++ b/arch/x86/mm/init.c @@ -164,8 +164,8 @@ static int page_size_mask; static void __init probe_page_size_mask(void) { /* - * For CONFIG_KMEMCHECK or pagealloc debugging, identity mapping will - * use small pages. + * For CONFIG_KMEMCHECK, TuxOnIce's incremental image support or + * pagealloc debugging, identity mapping will use small pages. * This will simplify cpa(), which otherwise needs to support splitting * large pages into small in interrupt context, etc. */ diff --git a/block/Kconfig.iosched b/block/Kconfig.iosched index a4a8914..299a686 100644 --- a/block/Kconfig.iosched +++ b/block/Kconfig.iosched @@ -40,6 +40,26 @@ config CFQ_GROUP_IOSCHED ---help--- Enable group IO scheduling in CFQ. +config IOSCHED_BFQ_SQ + tristate "BFQ-SQ I/O scheduler" + default n + ---help--- + The BFQ-SQ I/O scheduler (for legacy blk: SQ stands for + SingleQueue) distributes bandwidth among all processes + according to their weights, regardless of the device + parameters and with any workload. It also guarantees a low + latency to interactive and soft real-time applications. + Details in Documentation/block/bfq-iosched.txt + +config BFQ_SQ_GROUP_IOSCHED + bool "BFQ-SQ hierarchical scheduling support" + depends on IOSCHED_BFQ_SQ && BLK_CGROUP + default n + ---help--- + + Enable hierarchical scheduling in BFQ-SQ, using the blkio + (cgroups-v1) or io (cgroups-v2) controller. + choice prompt "Default I/O scheduler" @@ -54,6 +74,16 @@ choice config DEFAULT_CFQ bool "CFQ" if IOSCHED_CFQ=y + config DEFAULT_BFQ_SQ + bool "BFQ-SQ" if IOSCHED_BFQ_SQ=y + help + Selects BFQ-SQ as the default I/O scheduler which will be + used by default for all block devices. + The BFQ-SQ I/O scheduler aims at distributing the bandwidth + as desired, independently of the disk parameters and with + any workload. It also tries to guarantee low latency to + interactive and soft real-time applications. + config DEFAULT_NOOP bool "No-op" @@ -63,8 +93,28 @@ config DEFAULT_IOSCHED string default "deadline" if DEFAULT_DEADLINE default "cfq" if DEFAULT_CFQ + default "bfq-sq" if DEFAULT_BFQ_SQ default "noop" if DEFAULT_NOOP +config MQ_IOSCHED_BFQ + tristate "BFQ-MQ I/O Scheduler" + default y + ---help--- + BFQ I/O scheduler for BLK-MQ. BFQ-MQ distributes bandwidth + among all processes according to their weights, regardless of + the device parameters and with any workload. It also + guarantees a low latency to interactive and soft real-time + applications. Details in Documentation/block/bfq-iosched.txt + +config MQ_BFQ_GROUP_IOSCHED + bool "BFQ-MQ hierarchical scheduling support" + depends on MQ_IOSCHED_BFQ && BLK_CGROUP + default n + ---help--- + + Enable hierarchical scheduling in BFQ-MQ, using the blkio + (cgroups-v1) or io (cgroups-v2) controller. 
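The Kconfig entries above only build the bfq-sq and bfq-mq elevators (and optionally make bfq-sq the boot-time default); on a running system the scheduler is still chosen per block device through the usual sysfs attribute. A minimal sketch, not part of the patch, assuming the elevator name "bfq-sq" shown in the Kconfig above ("bfq-mq" for blk-mq queues) and an example device name passed on the command line:

/*
 * Minimal sketch, not part of the patch: select one of the BFQ elevators
 * added above for a block device at run time. Equivalent to
 * `echo bfq-sq > /sys/block/sda/queue/scheduler` as root. The device
 * name "sda" is only an example.
 */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *dev = (argc > 1) ? argv[1] : "sda";       /* e.g. sda, vdb */
	const char *sched = (argc > 2) ? argv[2] : "bfq-sq";  /* or "bfq-mq"   */
	char path[128];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/block/%s/queue/scheduler", dev);
	f = fopen(path, "w");
	if (!f) {
		perror(path);
		return EXIT_FAILURE;
	}
	/* The block layer parses the written name and switches the elevator. */
	if (fprintf(f, "%s\n", sched) < 0 || fclose(f) == EOF) {
		perror("write");
		return EXIT_FAILURE;
	}
	return EXIT_SUCCESS;
}

Reading the same sysfs file lists the available elevators with the active one in brackets; when BFQ_SQ_GROUP_IOSCHED or MQ_BFQ_GROUP_IOSCHED is enabled, per-group weights are then set through the bfq-sq.weight / bfq-mq.weight cgroup files defined later in bfq-cgroup-included.c.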
+ config MQ_IOSCHED_DEADLINE tristate "MQ deadline I/O scheduler" default y diff --git a/block/Makefile b/block/Makefile index 6a56303..0934daf 100644 --- a/block/Makefile +++ b/block/Makefile @@ -7,7 +7,7 @@ obj-$(CONFIG_BLOCK) := bio.o elevator.o blk-core.o blk-tag.o blk-sysfs.o \ blk-flush.o blk-settings.o blk-ioc.o blk-map.o \ blk-exec.o blk-merge.o blk-softirq.o blk-timeout.o \ blk-lib.o blk-mq.o blk-mq-tag.o blk-stat.o \ - blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o \ + blk-mq-sysfs.o blk-mq-cpumap.o blk-mq-sched.o ioctl.o uuid.o \ genhd.o partition-generic.o ioprio.o \ badblocks.o partitions/ @@ -24,6 +24,8 @@ obj-$(CONFIG_MQ_IOSCHED_DEADLINE) += mq-deadline.o obj-$(CONFIG_MQ_IOSCHED_KYBER) += kyber-iosched.o bfq-y := bfq-iosched.o bfq-wf2q.o bfq-cgroup.o obj-$(CONFIG_IOSCHED_BFQ) += bfq.o +obj-$(CONFIG_IOSCHED_BFQ_SQ) += bfq-sq-iosched.o +obj-$(CONFIG_MQ_IOSCHED_BFQ) += bfq-mq-iosched.o obj-$(CONFIG_BLOCK_COMPAT) += compat_ioctl.o obj-$(CONFIG_BLK_CMDLINE_PARSER) += cmdline-parser.o diff --git b/block/bfq-cgroup-included.c b/block/bfq-cgroup-included.c new file mode 100644 index 0000000..562b0ce --- /dev/null +++ b/block/bfq-cgroup-included.c @@ -0,0 +1,1346 @@ +/* + * BFQ: CGROUPS support. + * + * Based on ideas and code from CFQ: + * Copyright (C) 2003 Jens Axboe + * + * Copyright (C) 2008 Fabio Checconi + * Paolo Valente + * + * Copyright (C) 2015 Paolo Valente + * + * Copyright (C) 2016 Paolo Valente + * + * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ + * file. + */ + +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + +/* bfqg stats flags */ +enum bfqg_stats_flags { + BFQG_stats_waiting = 0, + BFQG_stats_idling, + BFQG_stats_empty, +}; + +#define BFQG_FLAG_FNS(name) \ +static void bfqg_stats_mark_##name(struct bfqg_stats *stats) \ +{ \ + stats->flags |= (1 << BFQG_stats_##name); \ +} \ +static void bfqg_stats_clear_##name(struct bfqg_stats *stats) \ +{ \ + stats->flags &= ~(1 << BFQG_stats_##name); \ +} \ +static int bfqg_stats_##name(struct bfqg_stats *stats) \ +{ \ + return (stats->flags & (1 << BFQG_stats_##name)) != 0; \ +} \ + +BFQG_FLAG_FNS(waiting) +BFQG_FLAG_FNS(idling) +BFQG_FLAG_FNS(empty) +#undef BFQG_FLAG_FNS + +#ifdef BFQ_MQ +/* This should be called with the scheduler lock held. */ +#else +/* This should be called with the queue_lock held. */ +#endif +static void bfqg_stats_update_group_wait_time(struct bfqg_stats *stats) +{ + unsigned long long now; + + if (!bfqg_stats_waiting(stats)) + return; + + now = sched_clock(); + if (time_after64(now, stats->start_group_wait_time)) + blkg_stat_add(&stats->group_wait_time, + now - stats->start_group_wait_time); + bfqg_stats_clear_waiting(stats); +} + +#ifdef BFQ_MQ +/* This should be called with the scheduler lock held. */ +#else +/* This should be called with the queue_lock held. */ +#endif +static void bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg, + struct bfq_group *curr_bfqg) +{ + struct bfqg_stats *stats = &bfqg->stats; + + if (bfqg_stats_waiting(stats)) + return; + if (bfqg == curr_bfqg) + return; + stats->start_group_wait_time = sched_clock(); + bfqg_stats_mark_waiting(stats); +} + +#ifdef BFQ_MQ +/* This should be called with the scheduler lock held. */ +#else +/* This should be called with the queue_lock held. 
*/ +#endif +static void bfqg_stats_end_empty_time(struct bfqg_stats *stats) +{ + unsigned long long now; + + if (!bfqg_stats_empty(stats)) + return; + + now = sched_clock(); + if (time_after64(now, stats->start_empty_time)) + blkg_stat_add(&stats->empty_time, + now - stats->start_empty_time); + bfqg_stats_clear_empty(stats); +} + +static void bfqg_stats_update_dequeue(struct bfq_group *bfqg) +{ + blkg_stat_add(&bfqg->stats.dequeue, 1); +} + +static void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) +{ + struct bfqg_stats *stats = &bfqg->stats; + + if (blkg_rwstat_total(&stats->queued)) + return; + + /* + * group is already marked empty. This can happen if bfqq got new + * request in parent group and moved to this group while being added + * to service tree. Just ignore the event and move on. + */ + if (bfqg_stats_empty(stats)) + return; + + stats->start_empty_time = sched_clock(); + bfqg_stats_mark_empty(stats); +} + +static void bfqg_stats_update_idle_time(struct bfq_group *bfqg) +{ + struct bfqg_stats *stats = &bfqg->stats; + + if (bfqg_stats_idling(stats)) { + unsigned long long now = sched_clock(); + + if (time_after64(now, stats->start_idle_time)) + blkg_stat_add(&stats->idle_time, + now - stats->start_idle_time); + bfqg_stats_clear_idling(stats); + } +} + +static void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) +{ + struct bfqg_stats *stats = &bfqg->stats; + + stats->start_idle_time = sched_clock(); + bfqg_stats_mark_idling(stats); +} + +static void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) +{ + struct bfqg_stats *stats = &bfqg->stats; + + blkg_stat_add(&stats->avg_queue_size_sum, + blkg_rwstat_total(&stats->queued)); + blkg_stat_add(&stats->avg_queue_size_samples, 1); + bfqg_stats_update_group_wait_time(stats); +} + +static void bfqg_stats_update_io_add(struct bfq_group *bfqg, + struct bfq_queue *bfqq, unsigned int op) +{ + blkg_rwstat_add(&bfqg->stats.queued, op, 1); + bfqg_stats_end_empty_time(&bfqg->stats); + if (!(bfqq == ((struct bfq_data *)bfqg->bfqd)->in_service_queue)) + bfqg_stats_set_start_group_wait_time(bfqg, bfqq_group(bfqq)); +} + +static void bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) +{ + blkg_rwstat_add(&bfqg->stats.queued, op, -1); +} + +static void bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) +{ + blkg_rwstat_add(&bfqg->stats.merged, op, 1); +} + +static void bfqg_stats_update_completion(struct bfq_group *bfqg, + uint64_t start_time, uint64_t io_start_time, unsigned int op) +{ + struct bfqg_stats *stats = &bfqg->stats; + unsigned long long now = sched_clock(); + + if (time_after64(now, io_start_time)) + blkg_rwstat_add(&stats->service_time, op, + now - io_start_time); + if (time_after64(io_start_time, start_time)) + blkg_rwstat_add(&stats->wait_time, op, + io_start_time - start_time); +} + +#else /* BFQ_GROUP_IOSCHED_ENABLED && CONFIG_DEBUG_BLK_CGROUP */ + +static inline void bfqg_stats_update_io_add(struct bfq_group *bfqg, + struct bfq_queue *bfqq, unsigned int op) { } +static inline void +bfqg_stats_update_io_remove(struct bfq_group *bfqg, unsigned int op) { } +static inline void +bfqg_stats_update_io_merged(struct bfq_group *bfqg, unsigned int op) { } +static inline void bfqg_stats_update_completion(struct bfq_group *bfqg, + uint64_t start_time, uint64_t io_start_time, + unsigned int op) { } +static inline void +bfqg_stats_set_start_group_wait_time(struct bfq_group *bfqg, + struct bfq_group *curr_bfqg) { } +static inline void bfqg_stats_end_empty_time(struct bfqg_stats *stats) 
{ } +static inline void bfqg_stats_update_dequeue(struct bfq_group *bfqg) { } +static inline void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg) { } +static inline void bfqg_stats_update_idle_time(struct bfq_group *bfqg) { } +static inline void bfqg_stats_set_start_idle_time(struct bfq_group *bfqg) { } +static inline void bfqg_stats_update_avg_queue_size(struct bfq_group *bfqg) { } + +#endif /* BFQ_GROUP_IOSCHED_ENABLED && CONFIG_DEBUG_BLK_CGROUP */ + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct blkcg_policy blkcg_policy_bfq; + +/* + * blk-cgroup policy-related handlers + * The following functions help in converting between blk-cgroup + * internal structures and BFQ-specific structures. + */ + +static struct bfq_group *pd_to_bfqg(struct blkg_policy_data *pd) +{ + return pd ? container_of(pd, struct bfq_group, pd) : NULL; +} + +static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg) +{ + return pd_to_blkg(&bfqg->pd); +} + +static struct bfq_group *blkg_to_bfqg(struct blkcg_gq *blkg) +{ + struct blkg_policy_data *pd = blkg_to_pd(blkg, &blkcg_policy_bfq); + + return pd_to_bfqg(pd); +} + +/* + * bfq_group handlers + * The following functions help in navigating the bfq_group hierarchy + * by allowing to find the parent of a bfq_group or the bfq_group + * associated to a bfq_queue. + */ + +static struct bfq_group *bfqg_parent(struct bfq_group *bfqg) +{ + struct blkcg_gq *pblkg = bfqg_to_blkg(bfqg)->parent; + + return pblkg ? blkg_to_bfqg(pblkg) : NULL; +} + +static struct bfq_group *bfqq_group(struct bfq_queue *bfqq) +{ + struct bfq_entity *group_entity = bfqq->entity.parent; + + return group_entity ? container_of(group_entity, struct bfq_group, + entity) : + bfqq->bfqd->root_group; +} + +/* + * The following two functions handle get and put of a bfq_group by + * wrapping the related blk-cgroup hooks. 
+ */ + +static void bfqg_get(struct bfq_group *bfqg) +{ +#ifdef BFQ_MQ + bfqg->ref++; +#else + blkg_get(bfqg_to_blkg(bfqg)); +#endif +} + +static void bfqg_put(struct bfq_group *bfqg) +{ +#ifdef BFQ_MQ + bfqg->ref--; + + BUG_ON(bfqg->ref < 0); + if (bfqg->ref == 0) + kfree(bfqg); +#else + blkg_put(bfqg_to_blkg(bfqg)); +#endif +} + +#ifdef BFQ_MQ +static void bfqg_and_blkg_get(struct bfq_group *bfqg) +{ + /* see comments in bfq_bic_update_cgroup for why refcounting bfqg */ + bfqg_get(bfqg); + + blkg_get(bfqg_to_blkg(bfqg)); +} + +static void bfqg_and_blkg_put(struct bfq_group *bfqg) +{ + bfqg_put(bfqg); + + blkg_put(bfqg_to_blkg(bfqg)); +} +#endif + +/* @stats = 0 */ +static void bfqg_stats_reset(struct bfqg_stats *stats) +{ +#ifdef CONFIG_DEBUG_BLK_CGROUP + /* queued stats shouldn't be cleared */ + blkg_rwstat_reset(&stats->merged); + blkg_rwstat_reset(&stats->service_time); + blkg_rwstat_reset(&stats->wait_time); + blkg_stat_reset(&stats->time); + blkg_stat_reset(&stats->avg_queue_size_sum); + blkg_stat_reset(&stats->avg_queue_size_samples); + blkg_stat_reset(&stats->dequeue); + blkg_stat_reset(&stats->group_wait_time); + blkg_stat_reset(&stats->idle_time); + blkg_stat_reset(&stats->empty_time); +#endif +} + +/* @to += @from */ +static void bfqg_stats_add_aux(struct bfqg_stats *to, struct bfqg_stats *from) +{ + if (!to || !from) + return; + +#ifdef CONFIG_DEBUG_BLK_CGROUP + /* queued stats shouldn't be cleared */ + blkg_rwstat_add_aux(&to->merged, &from->merged); + blkg_rwstat_add_aux(&to->service_time, &from->service_time); + blkg_rwstat_add_aux(&to->wait_time, &from->wait_time); + blkg_stat_add_aux(&from->time, &from->time); + blkg_stat_add_aux(&to->avg_queue_size_sum, &from->avg_queue_size_sum); + blkg_stat_add_aux(&to->avg_queue_size_samples, + &from->avg_queue_size_samples); + blkg_stat_add_aux(&to->dequeue, &from->dequeue); + blkg_stat_add_aux(&to->group_wait_time, &from->group_wait_time); + blkg_stat_add_aux(&to->idle_time, &from->idle_time); + blkg_stat_add_aux(&to->empty_time, &from->empty_time); +#endif +} + +/* + * Transfer @bfqg's stats to its parent's dead_stats so that the ancestors' + * recursive stats can still account for the amount used by this bfqg after + * it's gone. + */ +static void bfqg_stats_xfer_dead(struct bfq_group *bfqg) +{ + struct bfq_group *parent; + + if (!bfqg) /* root_group */ + return; + + parent = bfqg_parent(bfqg); + + lockdep_assert_held(bfqg_to_blkg(bfqg)->q->queue_lock); + + if (unlikely(!parent)) + return; + + bfqg_stats_add_aux(&parent->stats, &bfqg->stats); + bfqg_stats_reset(&bfqg->stats); +} + +static void bfq_init_entity(struct bfq_entity *entity, + struct bfq_group *bfqg) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + entity->weight = entity->new_weight; + entity->orig_weight = entity->new_weight; + if (bfqq) { + bfqq->ioprio = bfqq->new_ioprio; + bfqq->ioprio_class = bfqq->new_ioprio_class; +#ifdef BFQ_MQ + /* + * Make sure that bfqg and its associated blkg do not + * disappear before entity. 
+ */ + bfqg_and_blkg_get(bfqg); +#else + bfqg_get(bfqg); +#endif + } + entity->parent = bfqg->my_entity; /* NULL for root group */ + entity->sched_data = &bfqg->sched_data; +} + +static void bfqg_stats_exit(struct bfqg_stats *stats) +{ +#ifdef CONFIG_DEBUG_BLK_CGROUP + blkg_rwstat_exit(&stats->merged); + blkg_rwstat_exit(&stats->service_time); + blkg_rwstat_exit(&stats->wait_time); + blkg_rwstat_exit(&stats->queued); + blkg_stat_exit(&stats->time); + blkg_stat_exit(&stats->avg_queue_size_sum); + blkg_stat_exit(&stats->avg_queue_size_samples); + blkg_stat_exit(&stats->dequeue); + blkg_stat_exit(&stats->group_wait_time); + blkg_stat_exit(&stats->idle_time); + blkg_stat_exit(&stats->empty_time); +#endif +} + +static int bfqg_stats_init(struct bfqg_stats *stats, gfp_t gfp) +{ +#ifdef CONFIG_DEBUG_BLK_CGROUP + if (blkg_rwstat_init(&stats->merged, gfp) || + blkg_rwstat_init(&stats->service_time, gfp) || + blkg_rwstat_init(&stats->wait_time, gfp) || + blkg_rwstat_init(&stats->queued, gfp) || + blkg_stat_init(&stats->time, gfp) || + blkg_stat_init(&stats->avg_queue_size_sum, gfp) || + blkg_stat_init(&stats->avg_queue_size_samples, gfp) || + blkg_stat_init(&stats->dequeue, gfp) || + blkg_stat_init(&stats->group_wait_time, gfp) || + blkg_stat_init(&stats->idle_time, gfp) || + blkg_stat_init(&stats->empty_time, gfp)) { + bfqg_stats_exit(stats); + return -ENOMEM; + } +#endif + + return 0; +} + +static struct bfq_group_data *cpd_to_bfqgd(struct blkcg_policy_data *cpd) +{ + return cpd ? container_of(cpd, struct bfq_group_data, pd) : NULL; +} + +static struct bfq_group_data *blkcg_to_bfqgd(struct blkcg *blkcg) +{ + return cpd_to_bfqgd(blkcg_to_cpd(blkcg, &blkcg_policy_bfq)); +} + +static struct blkcg_policy_data *bfq_cpd_alloc(gfp_t gfp) +{ + struct bfq_group_data *bgd; + + bgd = kzalloc(sizeof(*bgd), gfp); + if (!bgd) + return NULL; + return &bgd->pd; +} + +static void bfq_cpd_init(struct blkcg_policy_data *cpd) +{ + struct bfq_group_data *d = cpd_to_bfqgd(cpd); + + d->weight = cgroup_subsys_on_dfl(io_cgrp_subsys) ? 
+ CGROUP_WEIGHT_DFL : BFQ_WEIGHT_LEGACY_DFL; +} + +static void bfq_cpd_free(struct blkcg_policy_data *cpd) +{ + kfree(cpd_to_bfqgd(cpd)); +} + +static struct blkg_policy_data *bfq_pd_alloc(gfp_t gfp, int node) +{ + struct bfq_group *bfqg; + + bfqg = kzalloc_node(sizeof(*bfqg), gfp, node); + if (!bfqg) + return NULL; + + if (bfqg_stats_init(&bfqg->stats, gfp)) { + kfree(bfqg); + return NULL; + } + +#ifdef BFQ_MQ + /* see comments in bfq_bic_update_cgroup for why refcounting */ + bfqg_get(bfqg); +#endif + return &bfqg->pd; +} + +static void bfq_pd_init(struct blkg_policy_data *pd) +{ + struct blkcg_gq *blkg; + struct bfq_group *bfqg; + struct bfq_data *bfqd; + struct bfq_entity *entity; + struct bfq_group_data *d; + + blkg = pd_to_blkg(pd); + BUG_ON(!blkg); + bfqg = blkg_to_bfqg(blkg); + bfqd = blkg->q->elevator->elevator_data; + BUG_ON(bfqg == bfqd->root_group); + entity = &bfqg->entity; + d = blkcg_to_bfqgd(blkg->blkcg); + + entity->orig_weight = entity->weight = entity->new_weight = d->weight; + entity->my_sched_data = &bfqg->sched_data; + bfqg->my_entity = entity; /* + * the root_group's will be set to NULL + * in bfq_init_queue() + */ + bfqg->bfqd = bfqd; + bfqg->active_entities = 0; + bfqg->rq_pos_tree = RB_ROOT; +} + +static void bfq_pd_free(struct blkg_policy_data *pd) +{ + struct bfq_group *bfqg = pd_to_bfqg(pd); + + bfqg_stats_exit(&bfqg->stats); +#ifdef BFQ_MQ + bfqg_put(bfqg); +#else + kfree(bfqg); +#endif +} + +static void bfq_pd_reset_stats(struct blkg_policy_data *pd) +{ + struct bfq_group *bfqg = pd_to_bfqg(pd); + + bfqg_stats_reset(&bfqg->stats); +} + +static void bfq_group_set_parent(struct bfq_group *bfqg, + struct bfq_group *parent) +{ + struct bfq_entity *entity; + + BUG_ON(!parent); + BUG_ON(!bfqg); + BUG_ON(bfqg == parent); + + entity = &bfqg->entity; + entity->parent = parent->my_entity; + entity->sched_data = &parent->sched_data; +} + +static struct bfq_group *bfq_lookup_bfqg(struct bfq_data *bfqd, + struct blkcg *blkcg) +{ + struct blkcg_gq *blkg; + + blkg = blkg_lookup(blkcg, bfqd->queue); + if (likely(blkg)) + return blkg_to_bfqg(blkg); + return NULL; +} + +static struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, + struct blkcg *blkcg) +{ + struct bfq_group *bfqg, *parent; + struct bfq_entity *entity; + + bfqg = bfq_lookup_bfqg(bfqd, blkcg); + + if (unlikely(!bfqg)) + return NULL; + + /* + * Update chain of bfq_groups as we might be handling a leaf group + * which, along with some of its relatives, has not been hooked yet + * to the private hierarchy of BFQ. + */ + entity = &bfqg->entity; + for_each_entity(entity) { + bfqg = container_of(entity, struct bfq_group, entity); + BUG_ON(!bfqg); + if (bfqg != bfqd->root_group) { + parent = bfqg_parent(bfqg); + if (!parent) + parent = bfqd->root_group; + BUG_ON(!parent); + bfq_group_set_parent(bfqg, parent); + } + } + + return bfqg; +} + +static void bfq_pos_tree_add_move(struct bfq_data *bfqd, + struct bfq_queue *bfqq); + +static void bfq_bfqq_expire(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool compensate, + enum bfqq_expiration reason); + +/** + * bfq_bfqq_move - migrate @bfqq to @bfqg. + * @bfqd: queue descriptor. + * @bfqq: the queue to move. + * @bfqg: the group to move to. + * + * Move @bfqq to @bfqg, deactivating it from its old group and reactivating + * it on the new one. Avoid putting the entity on the old group idle tree. 
+ * +#ifdef BFQ_MQ + * Must be called under the scheduler lock, to make sure that the blkg + * owning @bfqg does not disappear (see comments in + * bfq_bic_update_cgroup on guaranteeing the consistency of blkg + * objects). +#else + * Must be called under the queue lock; the cgroup owning @bfqg must + * not disappear (by now this just means that we are called under + * rcu_read_lock()). +#endif + */ +static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct bfq_group *bfqg) +{ + struct bfq_entity *entity = &bfqq->entity; + + BUG_ON(!bfq_bfqq_busy(bfqq) && !RB_EMPTY_ROOT(&bfqq->sort_list)); + BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list) && !entity->on_st); + BUG_ON(bfq_bfqq_busy(bfqq) && RB_EMPTY_ROOT(&bfqq->sort_list) + && entity->on_st && + bfqq != bfqd->in_service_queue); + BUG_ON(!bfq_bfqq_busy(bfqq) && bfqq == bfqd->in_service_queue); + + /* If bfqq is empty, then bfq_bfqq_expire also invokes + * bfq_del_bfqq_busy, thereby removing bfqq and its entity + * from data structures related to current group. Otherwise we + * need to remove bfqq explicitly with bfq_deactivate_bfqq, as + * we do below. + */ + if (bfqq == bfqd->in_service_queue) + bfq_bfqq_expire(bfqd, bfqd->in_service_queue, + false, BFQ_BFQQ_PREEMPTED); + + BUG_ON(entity->on_st && !bfq_bfqq_busy(bfqq) + && &bfq_entity_service_tree(entity)->idle != + entity->tree); + + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_busy(bfqq)); + + if (bfq_bfqq_busy(bfqq)) + bfq_deactivate_bfqq(bfqd, bfqq, false, false); + else if (entity->on_st) { + BUG_ON(&bfq_entity_service_tree(entity)->idle != + entity->tree); + bfq_put_idle_entity(bfq_entity_service_tree(entity), entity); + } +#ifdef BFQ_MQ + bfqg_and_blkg_put(bfqq_group(bfqq)); +#else + bfqg_put(bfqq_group(bfqq)); +#endif + + entity->parent = bfqg->my_entity; + entity->sched_data = &bfqg->sched_data; +#ifdef BFQ_MQ + /* pin down bfqg and its associated blkg */ + bfqg_and_blkg_get(bfqg); +#else + bfqg_get(bfqg); +#endif + + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_busy(bfqq)); + if (bfq_bfqq_busy(bfqq)) { + bfq_pos_tree_add_move(bfqd, bfqq); + bfq_activate_bfqq(bfqd, bfqq); + } + + if (!bfqd->in_service_queue && !bfqd->rq_in_driver) + bfq_schedule_dispatch(bfqd); + BUG_ON(entity->on_st && !bfq_bfqq_busy(bfqq) + && &bfq_entity_service_tree(entity)->idle != + entity->tree); +} + +/** + * __bfq_bic_change_cgroup - move @bic to @cgroup. + * @bfqd: the queue descriptor. + * @bic: the bic to move. + * @blkcg: the blk-cgroup to move to. + * +#ifdef BFQ_MQ + * Move bic to blkcg, assuming that bfqd->lock is held; which makes + * sure that the reference to cgroup is valid across the call (see + * comments in bfq_bic_update_cgroup on this issue) +#else + * Move bic to blkcg, assuming that bfqd->queue is locked; the caller + * has to make sure that the reference to cgroup is valid across the call. +#endif + * + * NOTE: an alternative approach might have been to store the current + * cgroup in bfqq and getting a reference to it, reducing the lookup + * time here, at the price of slightly more complex code. 
+ */ +static struct bfq_group *__bfq_bic_change_cgroup(struct bfq_data *bfqd, + struct bfq_io_cq *bic, + struct blkcg *blkcg) +{ + struct bfq_queue *async_bfqq = bic_to_bfqq(bic, 0); + struct bfq_queue *sync_bfqq = bic_to_bfqq(bic, 1); + struct bfq_group *bfqg; + struct bfq_entity *entity; + + bfqg = bfq_find_set_group(bfqd, blkcg); + + if (unlikely(!bfqg)) + bfqg = bfqd->root_group; + + if (async_bfqq) { + entity = &async_bfqq->entity; + + if (entity->sched_data != &bfqg->sched_data) { + bic_set_bfqq(bic, NULL, 0); + bfq_log_bfqq(bfqd, async_bfqq, + "bic_change_group: %p %d", + async_bfqq, + async_bfqq->ref); + bfq_put_queue(async_bfqq); + } + } + + if (sync_bfqq) { + entity = &sync_bfqq->entity; + if (entity->sched_data != &bfqg->sched_data) + bfq_bfqq_move(bfqd, sync_bfqq, bfqg); + } + + return bfqg; +} + +static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) +{ + struct bfq_data *bfqd = bic_to_bfqd(bic); + struct bfq_group *bfqg = NULL; + uint64_t serial_nr; + + rcu_read_lock(); + serial_nr = bio_blkcg(bio)->css.serial_nr; + + /* + * Check whether blkcg has changed. The condition may trigger + * spuriously on a newly created cic but there's no harm. + */ + if (unlikely(!bfqd) || likely(bic->blkcg_serial_nr == serial_nr)) + goto out; + + bfqg = __bfq_bic_change_cgroup(bfqd, bic, bio_blkcg(bio)); +#ifdef BFQ_MQ + /* + * Update blkg_path for bfq_log_* functions. We cache this + * path, and update it here, for the following + * reasons. Operations on blkg objects in blk-cgroup are + * protected with the request_queue lock, and not with the + * lock that protects the instances of this scheduler + * (bfqd->lock). This exposes BFQ to the following sort of + * race. + * + * The blkg_lookup performed in bfq_get_queue, protected + * through rcu, may happen to return the address of a copy of + * the original blkg. If this is the case, then the + * bfqg_and_blkg_get performed in bfq_get_queue, to pin down + * the blkg, is useless: it does not prevent blk-cgroup code + * from destroying both the original blkg and all objects + * directly or indirectly referred by the copy of the + * blkg. + * + * On the bright side, destroy operations on a blkg invoke, as + * a first step, hooks of the scheduler associated with the + * blkg. And these hooks are executed with bfqd->lock held for + * BFQ. As a consequence, for any blkg associated with the + * request queue this instance of the scheduler is attached + * to, we are guaranteed that such a blkg is not destroyed, and + * that all the pointers it contains are consistent, while we + * are holding bfqd->lock. A blkg_lookup performed with + * bfqd->lock held then returns a fully consistent blkg, which + * remains consistent until this lock is held. + * + * Thanks to the last fact, and to the fact that: (1) bfqg has + * been obtained through a blkg_lookup in the above + * assignment, and (2) bfqd->lock is being held, here we can + * safely use the policy data for the involved blkg (i.e., the + * field bfqg->pd) to get to the blkg associated with bfqg, + * and then we can safely use any field of blkg. After we + * release bfqd->lock, even just getting blkg through this + * bfqg may cause dangling references to be traversed, as + * bfqg->pd may not exist any more. + * + * In view of the above facts, here we cache, in the bfqg, any + * blkg data we may need for this bic, and for its associated + * bfq_queue. As of now, we need to cache only the path of the + * blkg, which is used in the bfq_log_* functions. 
+ * + * Finally, note that bfqg itself needs to be protected from + * destruction on the blkg_free of the original blkg (which + * invokes bfq_pd_free). We use an additional private + * refcounter for bfqg, to let it disappear only after no + * bfq_queue refers to it any longer. + */ + blkg_path(bfqg_to_blkg(bfqg), bfqg->blkg_path, sizeof(bfqg->blkg_path)); +#endif + bic->blkcg_serial_nr = serial_nr; +out: + rcu_read_unlock(); +} + +/** + * bfq_flush_idle_tree - deactivate any entity on the idle tree of @st. + * @st: the service tree being flushed. + */ +static void bfq_flush_idle_tree(struct bfq_service_tree *st) +{ + struct bfq_entity *entity = st->first_idle; + + for (; entity ; entity = st->first_idle) + __bfq_deactivate_entity(entity, false); +} + +/** + * bfq_reparent_leaf_entity - move leaf entity to the root_group. + * @bfqd: the device data structure with the root group. + * @entity: the entity to move. + */ +static void bfq_reparent_leaf_entity(struct bfq_data *bfqd, + struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + BUG_ON(!bfqq); + bfq_bfqq_move(bfqd, bfqq, bfqd->root_group); +} + +/** + * bfq_reparent_active_entities - move to the root group all active + * entities. + * @bfqd: the device data structure with the root group. + * @bfqg: the group to move from. + * @st: the service tree with the entities. + */ +static void bfq_reparent_active_entities(struct bfq_data *bfqd, + struct bfq_group *bfqg, + struct bfq_service_tree *st) +{ + struct rb_root *active = &st->active; + struct bfq_entity *entity = NULL; + + if (!RB_EMPTY_ROOT(&st->active)) + entity = bfq_entity_of(rb_first(active)); + + for (; entity ; entity = bfq_entity_of(rb_first(active))) + bfq_reparent_leaf_entity(bfqd, entity); + + if (bfqg->sched_data.in_service_entity) + bfq_reparent_leaf_entity(bfqd, + bfqg->sched_data.in_service_entity); +} + +/** + * bfq_pd_offline - deactivate the entity associated with @pd, + * and reparent its children entities. + * @pd: descriptor of the policy going offline. + * + * blkio already grabs the queue_lock for us, so no need to use + * RCU-based magic + */ +static void bfq_pd_offline(struct blkg_policy_data *pd) +{ + struct bfq_service_tree *st; + struct bfq_group *bfqg; + struct bfq_data *bfqd; + struct bfq_entity *entity; +#ifdef BFQ_MQ + unsigned long flags; +#endif + int i; + + BUG_ON(!pd); + bfqg = pd_to_bfqg(pd); + BUG_ON(!bfqg); + bfqd = bfqg->bfqd; + BUG_ON(bfqd && !bfqd->root_group); + + entity = bfqg->my_entity; + + if (!entity) /* root group */ + return; + +#ifdef BFQ_MQ + spin_lock_irqsave(&bfqd->lock, flags); +#endif + + /* + * Empty all service_trees belonging to this group before + * deactivating the group itself. + */ + for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) { + BUG_ON(!bfqg->sched_data.service_tree); + st = bfqg->sched_data.service_tree + i; + /* + * The idle tree may still contain bfq_queues belonging + * to exited task because they never migrated to a different + * cgroup from the one being destroyed now. + */ + bfq_flush_idle_tree(st); + + /* + * It may happen that some queues are still active + * (busy) upon group destruction (if the corresponding + * processes have been forced to terminate). We move + * all the leaf entities corresponding to these queues + * to the root_group. + * Also, it may happen that the group has an entity + * in service, which is disconnected from the active + * tree: it must be moved, too. + * There is no need to put the sync queues, as the + * scheduler has taken no reference. 
+ */ + bfq_reparent_active_entities(bfqd, bfqg, st); + BUG_ON(!RB_EMPTY_ROOT(&st->active)); + BUG_ON(!RB_EMPTY_ROOT(&st->idle)); + } + BUG_ON(bfqg->sched_data.next_in_service); + BUG_ON(bfqg->sched_data.in_service_entity); + + __bfq_deactivate_entity(entity, false); + bfq_put_async_queues(bfqd, bfqg); + +#ifdef BFQ_MQ + spin_unlock_irqrestore(&bfqd->lock, flags); +#endif + /* + * @blkg is going offline and will be ignored by + * blkg_[rw]stat_recursive_sum(). Transfer stats to the parent so + * that they don't get lost. If IOs complete after this point, the + * stats for them will be lost. Oh well... + */ + bfqg_stats_xfer_dead(bfqg); +} + +static void bfq_end_wr_async(struct bfq_data *bfqd) +{ + struct blkcg_gq *blkg; + + list_for_each_entry(blkg, &bfqd->queue->blkg_list, q_node) { + struct bfq_group *bfqg = blkg_to_bfqg(blkg); + BUG_ON(!bfqg); + + bfq_end_wr_async_queues(bfqd, bfqg); + } + bfq_end_wr_async_queues(bfqd, bfqd->root_group); +} + +static int bfq_io_show_weight(struct seq_file *sf, void *v) +{ + struct blkcg *blkcg = css_to_blkcg(seq_css(sf)); + struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg); + unsigned int val = 0; + + if (bfqgd) + val = bfqgd->weight; + + seq_printf(sf, "%u\n", val); + + return 0; +} + +static int bfq_io_set_weight_legacy(struct cgroup_subsys_state *css, + struct cftype *cftype, + u64 val) +{ + struct blkcg *blkcg = css_to_blkcg(css); + struct bfq_group_data *bfqgd = blkcg_to_bfqgd(blkcg); + struct blkcg_gq *blkg; + int ret = -ERANGE; + + if (val < BFQ_MIN_WEIGHT || val > BFQ_MAX_WEIGHT) + return ret; + + ret = 0; + spin_lock_irq(&blkcg->lock); + bfqgd->weight = (unsigned short)val; + hlist_for_each_entry(blkg, &blkcg->blkg_list, blkcg_node) { + struct bfq_group *bfqg = blkg_to_bfqg(blkg); + + if (!bfqg) + continue; + /* + * Setting the prio_changed flag of the entity + * to 1 with new_weight == weight would re-set + * the value of the weight to its ioprio mapping. + * Set the flag only if necessary. + */ + if ((unsigned short)val != bfqg->entity.new_weight) { + bfqg->entity.new_weight = (unsigned short)val; + /* + * Make sure that the above new value has been + * stored in bfqg->entity.new_weight before + * setting the prio_changed flag. In fact, + * this flag may be read asynchronously (in + * critical sections protected by a different + * lock than that held here), and finding this + * flag set may cause the execution of the code + * for updating parameters whose value may + * depend also on bfqg->entity.new_weight (in + * __bfq_entity_update_weight_prio). + * This barrier makes sure that the new value + * of bfqg->entity.new_weight is correctly + * seen in that code. 
+ */ + smp_wmb(); + bfqg->entity.prio_changed = 1; + } + } + spin_unlock_irq(&blkcg->lock); + + return ret; +} + +static ssize_t bfq_io_set_weight(struct kernfs_open_file *of, + char *buf, size_t nbytes, + loff_t off) +{ + u64 weight; + /* First unsigned long found in the file is used */ + int ret = kstrtoull(strim(buf), 0, &weight); + + if (ret) + return ret; + + return bfq_io_set_weight_legacy(of_css(of), NULL, weight); +} + +#ifdef CONFIG_DEBUG_BLK_CGROUP +static int bfqg_print_stat(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_stat, + &blkcg_policy_bfq, seq_cft(sf)->private, false); + return 0; +} + +static int bfqg_print_rwstat(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), blkg_prfill_rwstat, + &blkcg_policy_bfq, seq_cft(sf)->private, true); + return 0; +} + +static u64 bfqg_prfill_stat_recursive(struct seq_file *sf, + struct blkg_policy_data *pd, int off) +{ + u64 sum = blkg_stat_recursive_sum(pd_to_blkg(pd), + &blkcg_policy_bfq, off); + return __blkg_prfill_u64(sf, pd, sum); +} + +static u64 bfqg_prfill_rwstat_recursive(struct seq_file *sf, + struct blkg_policy_data *pd, int off) +{ + struct blkg_rwstat sum = blkg_rwstat_recursive_sum(pd_to_blkg(pd), + &blkcg_policy_bfq, + off); + return __blkg_prfill_rwstat(sf, pd, &sum); +} + +static int bfqg_print_stat_recursive(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), + bfqg_prfill_stat_recursive, &blkcg_policy_bfq, + seq_cft(sf)->private, false); + return 0; +} + +static int bfqg_print_rwstat_recursive(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), + bfqg_prfill_rwstat_recursive, &blkcg_policy_bfq, + seq_cft(sf)->private, true); + return 0; +} + +static u64 bfqg_prfill_sectors(struct seq_file *sf, struct blkg_policy_data *pd, + int off) +{ + u64 sum = blkg_rwstat_total(&pd->blkg->stat_bytes); + + return __blkg_prfill_u64(sf, pd, sum >> 9); +} + +static int bfqg_print_stat_sectors(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), + bfqg_prfill_sectors, &blkcg_policy_bfq, 0, false); + return 0; +} + +static u64 bfqg_prfill_sectors_recursive(struct seq_file *sf, + struct blkg_policy_data *pd, int off) +{ + struct blkg_rwstat tmp = blkg_rwstat_recursive_sum(pd->blkg, NULL, + offsetof(struct blkcg_gq, stat_bytes)); + u64 sum = atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_READ]) + + atomic64_read(&tmp.aux_cnt[BLKG_RWSTAT_WRITE]); + + return __blkg_prfill_u64(sf, pd, sum >> 9); +} + +static int bfqg_print_stat_sectors_recursive(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), + bfqg_prfill_sectors_recursive, &blkcg_policy_bfq, 0, + false); + return 0; +} + + +static u64 bfqg_prfill_avg_queue_size(struct seq_file *sf, + struct blkg_policy_data *pd, int off) +{ + struct bfq_group *bfqg = pd_to_bfqg(pd); + u64 samples = blkg_stat_read(&bfqg->stats.avg_queue_size_samples); + u64 v = 0; + + if (samples) { + v = blkg_stat_read(&bfqg->stats.avg_queue_size_sum); + v = div64_u64(v, samples); + } + __blkg_prfill_u64(sf, pd, v); + return 0; +} + +/* print avg_queue_size */ +static int bfqg_print_avg_queue_size(struct seq_file *sf, void *v) +{ + blkcg_print_blkgs(sf, css_to_blkcg(seq_css(sf)), + bfqg_prfill_avg_queue_size, &blkcg_policy_bfq, + 0, false); + return 0; +} +#endif /* CONFIG_DEBUG_BLK_CGROUP */ + +static struct bfq_group * +bfq_create_group_hierarchy(struct bfq_data *bfqd, int node) +{ + int ret; + + ret = 
blkcg_activate_policy(bfqd->queue, &blkcg_policy_bfq); + if (ret) + return NULL; + + return blkg_to_bfqg(bfqd->queue->root_blkg); +} + +#ifdef BFQ_MQ +#define BFQ_CGROUP_FNAME(param) "bfq-mq."#param +#else +#define BFQ_CGROUP_FNAME(param) "bfq-sq."#param +#endif + +static struct cftype bfq_blkcg_legacy_files[] = { + { + .name = BFQ_CGROUP_FNAME(weight), + .flags = CFTYPE_NOT_ON_ROOT, + .seq_show = bfq_io_show_weight, + .write_u64 = bfq_io_set_weight_legacy, + }, + + /* statistics, covers only the tasks in the bfqg */ + { + .name = BFQ_CGROUP_FNAME(io_service_bytes), + .private = (unsigned long)&blkcg_policy_bfq, + .seq_show = blkg_print_stat_bytes, + }, + { + .name = BFQ_CGROUP_FNAME(io_serviced), + .private = (unsigned long)&blkcg_policy_bfq, + .seq_show = blkg_print_stat_ios, + }, +#ifdef CONFIG_DEBUG_BLK_CGROUP + { + .name = BFQ_CGROUP_FNAME(time), + .private = offsetof(struct bfq_group, stats.time), + .seq_show = bfqg_print_stat, + }, + { + .name = BFQ_CGROUP_FNAME(sectors), + .seq_show = bfqg_print_stat_sectors, + }, + { + .name = BFQ_CGROUP_FNAME(io_service_time), + .private = offsetof(struct bfq_group, stats.service_time), + .seq_show = bfqg_print_rwstat, + }, + { + .name = BFQ_CGROUP_FNAME(io_wait_time), + .private = offsetof(struct bfq_group, stats.wait_time), + .seq_show = bfqg_print_rwstat, + }, + { + .name = BFQ_CGROUP_FNAME(io_merged), + .private = offsetof(struct bfq_group, stats.merged), + .seq_show = bfqg_print_rwstat, + }, + { + .name = BFQ_CGROUP_FNAME(io_queued), + .private = offsetof(struct bfq_group, stats.queued), + .seq_show = bfqg_print_rwstat, + }, +#endif /* CONFIG_DEBUG_BLK_CGROUP */ + + /* the same statictics which cover the bfqg and its descendants */ + { + .name = BFQ_CGROUP_FNAME(io_service_bytes_recursive), + .private = (unsigned long)&blkcg_policy_bfq, + .seq_show = blkg_print_stat_bytes_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(io_serviced_recursive), + .private = (unsigned long)&blkcg_policy_bfq, + .seq_show = blkg_print_stat_ios_recursive, + }, +#ifdef CONFIG_DEBUG_BLK_CGROUP + { + .name = BFQ_CGROUP_FNAME(time_recursive), + .private = offsetof(struct bfq_group, stats.time), + .seq_show = bfqg_print_stat_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(sectors_recursive), + .seq_show = bfqg_print_stat_sectors_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(io_service_time_recursive), + .private = offsetof(struct bfq_group, stats.service_time), + .seq_show = bfqg_print_rwstat_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(io_wait_time_recursive), + .private = offsetof(struct bfq_group, stats.wait_time), + .seq_show = bfqg_print_rwstat_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(io_merged_recursive), + .private = offsetof(struct bfq_group, stats.merged), + .seq_show = bfqg_print_rwstat_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(io_queued_recursive), + .private = offsetof(struct bfq_group, stats.queued), + .seq_show = bfqg_print_rwstat_recursive, + }, + { + .name = BFQ_CGROUP_FNAME(avg_queue_size), + .seq_show = bfqg_print_avg_queue_size, + }, + { + .name = BFQ_CGROUP_FNAME(group_wait_time), + .private = offsetof(struct bfq_group, stats.group_wait_time), + .seq_show = bfqg_print_stat, + }, + { + .name = BFQ_CGROUP_FNAME(idle_time), + .private = offsetof(struct bfq_group, stats.idle_time), + .seq_show = bfqg_print_stat, + }, + { + .name = BFQ_CGROUP_FNAME(empty_time), + .private = offsetof(struct bfq_group, stats.empty_time), + .seq_show = bfqg_print_stat, + }, + { + .name = BFQ_CGROUP_FNAME(dequeue), + .private = offsetof(struct bfq_group, 
stats.dequeue), + .seq_show = bfqg_print_stat, + }, +#endif /* CONFIG_DEBUG_BLK_CGROUP */ + { } /* terminate */ +}; + +static struct cftype bfq_blkg_files[] = { + { + .name = BFQ_CGROUP_FNAME(weight), + .flags = CFTYPE_NOT_ON_ROOT, + .seq_show = bfq_io_show_weight, + .write = bfq_io_set_weight, + }, + {} /* terminate */ +}; + +#undef BFQ_CGROUP_FNAME + +#else /* BFQ_GROUP_IOSCHED_ENABLED */ + +static void bfq_bfqq_move(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct bfq_group *bfqg) {} + +static void bfq_init_entity(struct bfq_entity *entity, + struct bfq_group *bfqg) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + entity->weight = entity->new_weight; + entity->orig_weight = entity->new_weight; + if (bfqq) { + bfqq->ioprio = bfqq->new_ioprio; + bfqq->ioprio_class = bfqq->new_ioprio_class; + } + entity->sched_data = &bfqg->sched_data; +} + +static void bfq_bic_update_cgroup(struct bfq_io_cq *bic, struct bio *bio) {} + +static void bfq_end_wr_async(struct bfq_data *bfqd) +{ + bfq_end_wr_async_queues(bfqd, bfqd->root_group); +} + +static struct bfq_group *bfq_find_set_group(struct bfq_data *bfqd, + struct blkcg *blkcg) +{ + return bfqd->root_group; +} + +static struct bfq_group *bfqq_group(struct bfq_queue *bfqq) +{ + return bfqq->bfqd->root_group; +} + +static struct bfq_group * +bfq_create_group_hierarchy(struct bfq_data *bfqd, int node) +{ + struct bfq_group *bfqg; + int i; + + bfqg = kmalloc_node(sizeof(*bfqg), GFP_KERNEL | __GFP_ZERO, node); + if (!bfqg) + return NULL; + + for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) + bfqg->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT; + + return bfqg; +} +#endif diff --git b/block/bfq-ioc.c b/block/bfq-ioc.c new file mode 100644 index 0000000..fb7bb8f --- /dev/null +++ b/block/bfq-ioc.c @@ -0,0 +1,36 @@ +/* + * BFQ: I/O context handling. + * + * Based on ideas and code from CFQ: + * Copyright (C) 2003 Jens Axboe + * + * Copyright (C) 2008 Fabio Checconi + * Paolo Valente + * + * Copyright (C) 2010 Paolo Valente + */ + +/** + * icq_to_bic - convert iocontext queue structure to bfq_io_cq. + * @icq: the iocontext queue. + */ +static struct bfq_io_cq *icq_to_bic(struct io_cq *icq) +{ + /* bic->icq is the first member, %NULL will convert to %NULL */ + return container_of(icq, struct bfq_io_cq, icq); +} + +/** + * bfq_bic_lookup - search into @ioc a bic associated to @bfqd. + * @bfqd: the lookup key. + * @ioc: the io_context of the process doing I/O. + * + * Queue lock must be held. + */ +static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd, + struct io_context *ioc) +{ + if (ioc) + return icq_to_bic(ioc_lookup_icq(ioc, bfqd->queue)); + return NULL; +} diff --git b/block/bfq-mq-iosched.c b/block/bfq-mq-iosched.c new file mode 100644 index 0000000..0fc757a --- /dev/null +++ b/block/bfq-mq-iosched.c @@ -0,0 +1,5887 @@ +/* + * Budget Fair Queueing (BFQ) I/O scheduler. + * + * Based on ideas and code from CFQ: + * Copyright (C) 2003 Jens Axboe + * + * Copyright (C) 2008 Fabio Checconi + * Paolo Valente + * + * Copyright (C) 2015 Paolo Valente + * + * Copyright (C) 2017 Paolo Valente + * + * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ + * file. + * + * BFQ is a proportional-share I/O scheduler, with some extra + * low-latency capabilities. BFQ also supports full hierarchical + * scheduling through cgroups. Next paragraphs provide an introduction + * on BFQ inner workings. Details on BFQ benefits and usage can be + * found in Documentation/block/bfq-iosched.txt. 
+ * + * BFQ is a proportional-share storage-I/O scheduling algorithm based + * on the slice-by-slice service scheme of CFQ. But BFQ assigns + * budgets, measured in number of sectors, to processes instead of + * time slices. The device is not granted to the in-service process + * for a given time slice, but until it has exhausted its assigned + * budget. This change from the time to the service domain enables BFQ + * to distribute the device throughput among processes as desired, + * without any distortion due to throughput fluctuations, or to device + * internal queueing. BFQ uses an ad hoc internal scheduler, called + * B-WF2Q+, to schedule processes according to their budgets. More + * precisely, BFQ schedules queues associated with processes. Thanks to + * the accurate policy of B-WF2Q+, BFQ can afford to assign high + * budgets to I/O-bound processes issuing sequential requests (to + * boost the throughput), and yet guarantee a low latency to + * interactive and soft real-time applications. + * + * NOTE: if the main or only goal, with a given device, is to achieve + * the maximum-possible throughput at all times, then do switch off + * all low-latency heuristics for that device, by setting low_latency + * to 0. + * + * BFQ is described in [1], where also a reference to the initial, more + * theoretical paper on BFQ can be found. The interested reader can find + * in the latter paper full details on the main algorithm, as well as + * formulas of the guarantees and formal proofs of all the properties. + * With respect to the version of BFQ presented in these papers, this + * implementation adds a few more heuristics, such as the one that + * guarantees a low latency to soft real-time applications, and a + * hierarchical extension based on H-WF2Q+. + * + * B-WF2Q+ is based on WF2Q+, that is described in [2], together with + * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N) + * complexity derives from the one introduced with EEVDF in [3]. + * + * [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O + * Scheduler", Proceedings of the First Workshop on Mobile System + * Technologies (MST-2015), May 2015. + * http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf + * + * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf + * + * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing + * Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689, + * Oct 1997. + * + * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz + * + * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline + * First: A Flexible and Accurate Mechanism for Proportional Share + * Resource Allocation,'' technical report. + * + * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "blk.h" +#include "blk-mq.h" +#include "blk-mq-tag.h" +#include "blk-mq-sched.h" +#include "bfq-mq.h" +#include "blk-wbt.h" + +/* Expiration time of sync (0) and async (1) requests, in ns. */ +static const u64 bfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 }; + +/* Maximum backwards seek, in KiB. */ +static const int bfq_back_max = (16 * 1024); + +/* Penalty of a backwards seek, in number of sectors. */ +static const int bfq_back_penalty = 2; + +/* Idling period duration, in ns. 
*/ +static u32 bfq_slice_idle = (NSEC_PER_SEC / 125); + +/* Minimum number of assigned budgets for which stats are safe to compute. */ +static const int bfq_stats_min_budgets = 194; + +/* Default maximum budget values, in sectors and number of requests. */ +static const int bfq_default_max_budget = (16 * 1024); + +/* + * Async to sync throughput distribution is controlled as follows: + * when an async request is served, the entity is charged the number + * of sectors of the request, multiplied by the factor below + */ +static const int bfq_async_charge_factor = 10; + +/* Default timeout values, in jiffies, approximating CFQ defaults. */ +static const int bfq_timeout = (HZ / 8); + +static struct kmem_cache *bfq_pool; + +/* Below this threshold (in ns), we consider thinktime immediate. */ +#define BFQ_MIN_TT (2 * NSEC_PER_MSEC) + +/* hw_tag detection: parallel requests threshold and min samples needed. */ +#define BFQ_HW_QUEUE_THRESHOLD 4 +#define BFQ_HW_QUEUE_SAMPLES 32 + +#define BFQQ_SEEK_THR (sector_t)(8 * 100) +#define BFQQ_SECT_THR_NONROT (sector_t)(2 * 32) +#define BFQQ_CLOSE_THR (sector_t)(8 * 1024) +#define BFQQ_SEEKY(bfqq) (hweight32(bfqq->seek_history) > 32/8) + +/* Min number of samples required to perform peak-rate update */ +#define BFQ_RATE_MIN_SAMPLES 32 +/* Min observation time interval required to perform a peak-rate update (ns) */ +#define BFQ_RATE_MIN_INTERVAL (300*NSEC_PER_MSEC) +/* Target observation time interval for a peak-rate update (ns) */ +#define BFQ_RATE_REF_INTERVAL NSEC_PER_SEC + +/* Shift used for peak rate fixed precision calculations. */ +#define BFQ_RATE_SHIFT 16 + +/* + * By default, BFQ computes the duration of the weight raising for + * interactive applications automatically, using the following formula: + * duration = (R / r) * T, where r is the peak rate of the device, and + * R and T are two reference parameters. + * In particular, R is the peak rate of the reference device (see below), + * and T is a reference time: given the systems that are likely to be + * installed on the reference device according to its speed class, T is + * about the maximum time needed, under BFQ and while reading two files in + * parallel, to load typical large applications on these systems. + * In practice, the slower/faster the device at hand is, the more/less it + * takes to load applications with respect to the reference device. + * Accordingly, the longer/shorter BFQ grants weight raising to interactive + * applications. + * + * BFQ uses four different reference pairs (R, T), depending on: + * . whether the device is rotational or non-rotational; + * . whether the device is slow, such as old or portable HDDs, as well as + * SD cards, or fast, such as newer HDDs and SSDs. + * + * The device's speed class is dynamically (re)detected in + * bfq_update_peak_rate() every time the estimated peak rate is updated. + * + * In the following definitions, R_slow[0]/R_fast[0] and + * T_slow[0]/T_fast[0] are the reference values for a slow/fast + * rotational device, whereas R_slow[1]/R_fast[1] and + * T_slow[1]/T_fast[1] are the reference values for a slow/fast + * non-rotational device. Finally, device_speed_thresh are the + * thresholds used to switch between speed classes. The reference + * rates are not the actual peak rates of the devices used as a + * reference, but slightly lower values. 
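To make the duration = (R / r) * T rule above concrete, here is a small user-space sketch with made-up numbers; the reference rate, reference time and measured peak rate are assumptions chosen only for illustration, not the values the scheduler derives at initialization:

#include <stdio.h>

int main(void)
{
	/* all three values below are assumed, for the example only */
	double R_ref = 14000.0;    /* reference peak rate               */
	double T_ref_ms = 6000.0;  /* reference application-load time   */
	double r = 28000.0;        /* peak rate measured on this device */

	/*
	 * duration = (R / r) * T: a device twice as fast as the
	 * reference gets half the reference weight-raising duration.
	 */
	double dur_ms = (R_ref / r) * T_ref_ms;

	/* same 3 s .. 13 s clamp that bfq_wr_duration() applies below */
	if (dur_ms > 13000.0)
		dur_ms = 13000.0;
	else if (dur_ms < 3000.0)
		dur_ms = 3000.0;

	printf("interactive weight-raising duration: %.0f ms\n", dur_ms);
	return 0;
}

The faster the device, the shorter the raising period, which is exactly the slower/faster trade-off stated in the comment above.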
The reason for using these + * slightly lower values is that the peak-rate estimator tends to + * yield slightly lower values than the actual peak rate (it can yield + * the actual peak rate only if there is only one process doing I/O, + * and the process does sequential I/O). + * + * Both the reference peak rates and the thresholds are measured in + * sectors/usec, left-shifted by BFQ_RATE_SHIFT. + */ +static int R_slow[2] = {1000, 10700}; +static int R_fast[2] = {14000, 33000}; +/* + * To improve readability, a conversion function is used to initialize the + * following arrays, which entails that they can be initialized only in a + * function. + */ +static int T_slow[2]; +static int T_fast[2]; +static int device_speed_thresh[2]; + +#define BFQ_SERVICE_TREE_INIT ((struct bfq_service_tree) \ + { RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 }) + +#define RQ_BIC(rq) ((struct bfq_io_cq *) (rq)->elv.priv[0]) +#define RQ_BFQQ(rq) ((rq)->elv.priv[1]) + +/** + * icq_to_bic - convert iocontext queue structure to bfq_io_cq. + * @icq: the iocontext queue. + */ +static struct bfq_io_cq *icq_to_bic(struct io_cq *icq) +{ + /* bic->icq is the first member, %NULL will convert to %NULL */ + return container_of(icq, struct bfq_io_cq, icq); +} + +/** + * bfq_bic_lookup - search into @ioc a bic associated to @bfqd. + * @bfqd: the lookup key. + * @ioc: the io_context of the process doing I/O. + * @q: the request queue. + */ +static struct bfq_io_cq *bfq_bic_lookup(struct bfq_data *bfqd, + struct io_context *ioc, + struct request_queue *q) +{ + if (ioc) { + unsigned long flags; + struct bfq_io_cq *icq; + + spin_lock_irqsave(q->queue_lock, flags); + icq = icq_to_bic(ioc_lookup_icq(ioc, q)); + spin_unlock_irqrestore(q->queue_lock, flags); + + return icq; + } + + return NULL; +} + +/* + * Scheduler run of queue, if there are requests pending and no one in the + * driver that will restart queueing. + */ +static void bfq_schedule_dispatch(struct bfq_data *bfqd) +{ + if (bfqd->queued != 0) { + bfq_log(bfqd, "schedule dispatch"); + blk_mq_run_hw_queues(bfqd->queue, true); + } +} + +#define BFQ_MQ +#include "bfq-sched.c" +#include "bfq-cgroup-included.c" + +#define bfq_class_idle(bfqq) ((bfqq)->ioprio_class == IOPRIO_CLASS_IDLE) +#define bfq_class_rt(bfqq) ((bfqq)->ioprio_class == IOPRIO_CLASS_RT) + +#define bfq_sample_valid(samples) ((samples) > 80) + +/* + * Lifted from AS - choose which of rq1 and rq2 that is best served now. + * We choose the request that is closesr to the head right now. Distance + * behind the head is penalized and only allowed to a certain extent. + */ +static struct request *bfq_choose_req(struct bfq_data *bfqd, + struct request *rq1, + struct request *rq2, + sector_t last) +{ + sector_t s1, s2, d1 = 0, d2 = 0; + unsigned long back_max; +#define BFQ_RQ1_WRAP 0x01 /* request 1 wraps */ +#define BFQ_RQ2_WRAP 0x02 /* request 2 wraps */ + unsigned int wrap = 0; /* bit mask: requests behind the disk head? */ + + if (!rq1 || rq1 == rq2) + return rq2; + if (!rq2) + return rq1; + + if (rq_is_sync(rq1) && !rq_is_sync(rq2)) + return rq1; + else if (rq_is_sync(rq2) && !rq_is_sync(rq1)) + return rq2; + if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META)) + return rq1; + else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META)) + return rq2; + + s1 = blk_rq_pos(rq1); + s2 = blk_rq_pos(rq2); + + /* + * By definition, 1KiB is 2 sectors. 
+ */ + back_max = bfqd->bfq_back_max * 2; + + /* + * Strict one way elevator _except_ in the case where we allow + * short backward seeks which are biased as twice the cost of a + * similar forward seek. + */ + if (s1 >= last) + d1 = s1 - last; + else if (s1 + back_max >= last) + d1 = (last - s1) * bfqd->bfq_back_penalty; + else + wrap |= BFQ_RQ1_WRAP; + + if (s2 >= last) + d2 = s2 - last; + else if (s2 + back_max >= last) + d2 = (last - s2) * bfqd->bfq_back_penalty; + else + wrap |= BFQ_RQ2_WRAP; + + /* Found required data */ + + /* + * By doing switch() on the bit mask "wrap" we avoid having to + * check two variables for all permutations: --> faster! + */ + switch (wrap) { + case 0: /* common case for CFQ: rq1 and rq2 not wrapped */ + if (d1 < d2) + return rq1; + else if (d2 < d1) + return rq2; + + if (s1 >= s2) + return rq1; + else + return rq2; + + case BFQ_RQ2_WRAP: + return rq1; + case BFQ_RQ1_WRAP: + return rq2; + case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */ + default: + /* + * Since both rqs are wrapped, + * start with the one that's further behind head + * (--> only *one* back seek required), + * since back seek takes more time than forward. + */ + if (s1 <= s2) + return rq1; + else + return rq2; + } +} + +static struct bfq_queue * +bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root, + sector_t sector, struct rb_node **ret_parent, + struct rb_node ***rb_link) +{ + struct rb_node **p, *parent; + struct bfq_queue *bfqq = NULL; + + parent = NULL; + p = &root->rb_node; + while (*p) { + struct rb_node **n; + + parent = *p; + bfqq = rb_entry(parent, struct bfq_queue, pos_node); + + /* + * Sort strictly based on sector. Smallest to the left, + * largest to the right. + */ + if (sector > blk_rq_pos(bfqq->next_rq)) + n = &(*p)->rb_right; + else if (sector < blk_rq_pos(bfqq->next_rq)) + n = &(*p)->rb_left; + else + break; + p = n; + bfqq = NULL; + } + + *ret_parent = parent; + if (rb_link) + *rb_link = p; + + bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d", + (unsigned long long) sector, + bfqq ? bfqq->pid : 0); + + return bfqq; +} + +static void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct rb_node **p, *parent; + struct bfq_queue *__bfqq; + + if (bfqq->pos_root) { + rb_erase(&bfqq->pos_node, bfqq->pos_root); + bfqq->pos_root = NULL; + } + + if (bfq_class_idle(bfqq)) + return; + if (!bfqq->next_rq) + return; + + bfqq->pos_root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree; + __bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root, + blk_rq_pos(bfqq->next_rq), &parent, &p); + if (!__bfqq) { + rb_link_node(&bfqq->pos_node, parent, p); + rb_insert_color(&bfqq->pos_node, bfqq->pos_root); + } else + bfqq->pos_root = NULL; +} + +/* + * Tell whether there are active queues or groups with differentiated weights. + */ +static bool bfq_differentiated_weights(struct bfq_data *bfqd) +{ + /* + * For weights to differ, at least one of the trees must contain + * at least two nodes. 
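The head-distance classification used by bfq_choose_req() above can be exercised in isolation. The user-space sketch below mirrors its forward, short-backward and wrapped cases; the sector values and the simplified tie handling in main() are illustrative only:

#include <stdio.h>
#include <stdbool.h>

typedef unsigned long long sector_t;

/*
 * Classify one candidate position against the current head position
 * the way bfq_choose_req() does: forward seeks count at face value,
 * short backward seeks are allowed but cost back_penalty times the
 * distance, anything farther behind the head "wraps".
 */
static bool seek_distance(sector_t s, sector_t last, sector_t back_max,
			  unsigned int back_penalty, sector_t *dist)
{
	if (s >= last) {
		*dist = s - last;			/* forward seek */
		return true;
	}
	if (s + back_max >= last) {
		*dist = (last - s) * back_penalty;	/* short backward seek */
		return true;
	}
	return false;					/* wrapped */
}

int main(void)
{
	sector_t back_max = 16 * 1024 * 2;	/* bfq_back_max KiB -> sectors */
	unsigned int back_penalty = 2;
	sector_t last = 1000000;		/* assumed head position */
	sector_t s1 = 1000800, s2 = 999700, d1, d2;

	if (seek_distance(s1, last, back_max, back_penalty, &d1) &&
	    seek_distance(s2, last, back_max, back_penalty, &d2))
		printf("pick rq%c (d1=%llu, d2=%llu)\n",
		       d1 <= d2 ? '1' : '2', d1, d2);
	return 0;
}

Here the backward candidate still wins: 300 sectors behind the head, doubled by the penalty to 600, beats 800 sectors ahead.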
+ */ + return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) && + (bfqd->queue_weights_tree.rb_node->rb_left || + bfqd->queue_weights_tree.rb_node->rb_right) +#ifdef BFQ_GROUP_IOSCHED_ENABLED + ) || + (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) && + (bfqd->group_weights_tree.rb_node->rb_left || + bfqd->group_weights_tree.rb_node->rb_right) +#endif + ); +} + +/* + * The following function returns true if every queue must receive the + * same share of the throughput (this condition is used when deciding + * whether idling may be disabled, see the comments in the function + * bfq_bfqq_may_idle()). + * + * Such a scenario occurs when: + * 1) all active queues have the same weight, + * 2) all active groups at the same level in the groups tree have the same + * weight, + * 3) all active groups at the same level in the groups tree have the same + * number of children. + * + * Unfortunately, keeping the necessary state for evaluating exactly the + * above symmetry conditions would be quite complex and time-consuming. + * Therefore this function evaluates, instead, the following stronger + * sub-conditions, for which it is much easier to maintain the needed + * state: + * 1) all active queues have the same weight, + * 2) all active groups have the same weight, + * 3) all active groups have at most one active child each. + * In particular, the last two conditions are always true if hierarchical + * support and the cgroups interface are not enabled, thus no state needs + * to be maintained in this case. + */ +static bool bfq_symmetric_scenario(struct bfq_data *bfqd) +{ + return !bfq_differentiated_weights(bfqd); +} + +/* + * If the weight-counter tree passed as input contains no counter for + * the weight of the input entity, then add that counter; otherwise just + * increment the existing counter. + * + * Note that weight-counter trees contain few nodes in mostly symmetric + * scenarios. For example, if all queues have the same weight, then the + * weight-counter tree for the queues may contain at most one node. + * This holds even if low_latency is on, because weight-raised queues + * are not inserted in the tree. + * In most scenarios, the rate at which nodes are created/destroyed + * should be low too. + */ +static void bfq_weights_tree_add(struct bfq_data *bfqd, + struct bfq_entity *entity, + struct rb_root *root) +{ + struct rb_node **new = &(root->rb_node), *parent = NULL; + + /* + * Do not insert if the entity is already associated with a + * counter, which happens if: + * 1) the entity is associated with a queue, + * 2) a request arrival has caused the queue to become both + * non-weight-raised, and hence change its weight, and + * backlogged; in this respect, each of the two events + * causes an invocation of this function, + * 3) this is the invocation of this function caused by the + * second event. This second invocation is actually useless, + * and we handle this fact by exiting immediately. More + * efficient or clearer solutions might possibly be adopted. 
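As an aside, the weight-counter bookkeeping that this function implements on an rb-tree can be pictured with a much simpler structure. The user-space sketch below uses a flat array and illustrative names; removal, which would decrement num_active and free the counter when it reaches zero as bfq_weights_tree_remove() does further down, is omitted for brevity:

#include <stdio.h>
#include <stdbool.h>

/* one counter per distinct active weight, as in the kernel's trees */
struct weight_counter {
	unsigned int weight;
	unsigned int num_active;
};

#define MAX_COUNTERS 16
static struct weight_counter counters[MAX_COUNTERS];
static unsigned int nr_counters;

/* reuse an existing counter if the weight is already present,
 * otherwise create a new one (mirrors bfq_weights_tree_add()) */
static void weights_add(unsigned int weight)
{
	for (unsigned int i = 0; i < nr_counters; i++) {
		if (counters[i].weight == weight) {
			counters[i].num_active++;
			return;
		}
	}
	if (nr_counters < MAX_COUNTERS)
		counters[nr_counters++] =
			(struct weight_counter){ weight, 1 };
}

/* "differentiated weights": more than one distinct active weight */
static bool differentiated_weights(void)
{
	return nr_counters > 1;
}

int main(void)
{
	weights_add(100);
	weights_add(100);	/* same weight: no new node is created */
	printf("symmetric: %s\n", differentiated_weights() ? "no" : "yes");

	weights_add(300);	/* a second distinct weight appears */
	printf("symmetric: %s\n", differentiated_weights() ? "no" : "yes");
	return 0;
}

With every queue at the same weight the map never grows past one entry, which matches the observation above that these trees contain few nodes in mostly symmetric scenarios.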
+ */ + if (entity->weight_counter) + return; + + while (*new) { + struct bfq_weight_counter *__counter = container_of(*new, + struct bfq_weight_counter, + weights_node); + parent = *new; + + if (entity->weight == __counter->weight) { + entity->weight_counter = __counter; + goto inc_counter; + } + if (entity->weight < __counter->weight) + new = &((*new)->rb_left); + else + new = &((*new)->rb_right); + } + + entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter), + GFP_ATOMIC); + + /* + * In the unlucky event of an allocation failure, we just + * exit. This will cause the weight of entity to not be + * considered in bfq_differentiated_weights, which, in its + * turn, causes the scenario to be deemed wrongly symmetric in + * case entity's weight would have been the only weight making + * the scenario asymmetric. On the bright side, no unbalance + * will however occur when entity becomes inactive again (the + * invocation of this function is triggered by an activation + * of entity). In fact, bfq_weights_tree_remove does nothing + * if !entity->weight_counter. + */ + if (unlikely(!entity->weight_counter)) + return; + + entity->weight_counter->weight = entity->weight; + rb_link_node(&entity->weight_counter->weights_node, parent, new); + rb_insert_color(&entity->weight_counter->weights_node, root); + +inc_counter: + entity->weight_counter->num_active++; +} + +/* + * Decrement the weight counter associated with the entity, and, if the + * counter reaches 0, remove the counter from the tree. + * See the comments to the function bfq_weights_tree_add() for considerations + * about overhead. + */ +static void bfq_weights_tree_remove(struct bfq_data *bfqd, + struct bfq_entity *entity, + struct rb_root *root) +{ + if (!entity->weight_counter) + return; + + BUG_ON(RB_EMPTY_ROOT(root)); + BUG_ON(entity->weight_counter->weight != entity->weight); + + BUG_ON(!entity->weight_counter->num_active); + entity->weight_counter->num_active--; + if (entity->weight_counter->num_active > 0) + goto reset_entity_pointer; + + rb_erase(&entity->weight_counter->weights_node, root); + kfree(entity->weight_counter); + +reset_entity_pointer: + entity->weight_counter = NULL; +} + +/* + * Return expired entry, or NULL to just start from scratch in rbtree. + */ +static struct request *bfq_check_fifo(struct bfq_queue *bfqq, + struct request *last) +{ + struct request *rq; + + if (bfq_bfqq_fifo_expire(bfqq)) + return NULL; + + bfq_mark_bfqq_fifo_expire(bfqq); + + rq = rq_entry_fifo(bfqq->fifo.next); + + if (rq == last || ktime_get_ns() < rq->fifo_time) + return NULL; + + bfq_log_bfqq(bfqq->bfqd, bfqq, "check_fifo: returned %p", rq); + BUG_ON(RB_EMPTY_NODE(&rq->rb_node)); + return rq; +} + +static struct request *bfq_find_next_rq(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + struct request *last) +{ + struct rb_node *rbnext = rb_next(&last->rb_node); + struct rb_node *rbprev = rb_prev(&last->rb_node); + struct request *next, *prev = NULL; + + BUG_ON(list_empty(&bfqq->fifo)); + + /* Follow expired path, else get first next available. 
*/ + next = bfq_check_fifo(bfqq, last); + if (next) { + BUG_ON(next == last); + return next; + } + + BUG_ON(RB_EMPTY_NODE(&last->rb_node)); + + if (rbprev) + prev = rb_entry_rq(rbprev); + + if (rbnext) + next = rb_entry_rq(rbnext); + else { + rbnext = rb_first(&bfqq->sort_list); + if (rbnext && rbnext != &last->rb_node) + next = rb_entry_rq(rbnext); + } + + return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last)); +} + +/* see the definition of bfq_async_charge_factor for details */ +static unsigned long bfq_serv_to_charge(struct request *rq, + struct bfq_queue *bfqq) +{ + if (bfq_bfqq_sync(bfqq) || bfqq->wr_coeff > 1) + return blk_rq_sectors(rq); + + /* + * If there are no weight-raised queues, then amplify service + * by just the async charge factor; otherwise amplify service + * by twice the async charge factor, to further reduce latency + * for weight-raised queues. + */ + if (bfqq->bfqd->wr_busy_queues == 0) + return blk_rq_sectors(rq) * bfq_async_charge_factor; + + return blk_rq_sectors(rq) * 2 * bfq_async_charge_factor; +} + +/** + * bfq_updated_next_req - update the queue after a new next_rq selection. + * @bfqd: the device data the queue belongs to. + * @bfqq: the queue to update. + * + * If the first request of a queue changes we make sure that the queue + * has enough budget to serve at least its first request (if the + * request has grown). We do this because if the queue has not enough + * budget for its first request, it has to go through two dispatch + * rounds to actually get it dispatched. + */ +static void bfq_updated_next_req(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + struct bfq_service_tree *st = bfq_entity_service_tree(entity); + struct request *next_rq = bfqq->next_rq; + unsigned long new_budget; + + if (!next_rq) + return; + + if (bfqq == bfqd->in_service_queue) + /* + * In order not to break guarantees, budgets cannot be + * changed after an entity has been selected. + */ + return; + + BUG_ON(entity->tree != &st->active); + BUG_ON(entity == entity->sched_data->in_service_entity); + + new_budget = max_t(unsigned long, bfqq->max_budget, + bfq_serv_to_charge(next_rq, bfqq)); + if (entity->budget != new_budget) { + entity->budget = new_budget; + bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu", + new_budget); + bfq_requeue_bfqq(bfqd, bfqq, false); + } +} + +static unsigned int bfq_wr_duration(struct bfq_data *bfqd) +{ + u64 dur; + + if (bfqd->bfq_wr_max_time > 0) + return bfqd->bfq_wr_max_time; + + dur = bfqd->RT_prod; + do_div(dur, bfqd->peak_rate); + + /* + * Limit duration between 3 and 13 seconds. Tests show that + * higher values than 13 seconds often yield the opposite of + * the desired result, i.e., worsen responsiveness by letting + * non-interactive and non-soft-real-time applications + * preserve weight raising for a too long time interval. + * + * On the other end, lower values than 3 seconds make it + * difficult for most interactive tasks to complete their jobs + * before weight-raising finishes. 
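Stepping back to bfq_serv_to_charge() above, the async amplification is easy to see with numbers. A user-space sketch, using an assumed request size and the charge factor defined earlier in this file:

#include <stdio.h>
#include <stdbool.h>

static const unsigned long async_charge_factor = 10; /* as defined above */

/*
 * Simplified mirror of bfq_serv_to_charge(): sync or weight-raised
 * queues are charged the real request size; async queues are charged
 * an amplified size, and the amplification doubles when weight-raised
 * queues are busy, to protect their latency even more.
 */
static unsigned long serv_to_charge(unsigned long sectors, bool sync,
				    bool weight_raised, int wr_busy_queues)
{
	if (sync || weight_raised)
		return sectors;
	if (wr_busy_queues == 0)
		return sectors * async_charge_factor;
	return sectors * 2 * async_charge_factor;
}

int main(void)
{
	/* a 512-sector request, i.e. 256 KiB (size assumed for the demo) */
	printf("sync queue:       charged %lu sectors\n",
	       serv_to_charge(512, true, false, 0));
	printf("async queue:      charged %lu sectors\n",
	       serv_to_charge(512, false, false, 0));
	printf("async, wr busy:   charged %lu sectors\n",
	       serv_to_charge(512, false, false, 2));
	return 0;
}

Charging async I/O ten or twenty times its real size shrinks the share of the device budget that async queues can consume, which is how the async/sync throughput distribution described near the top of the file is enforced.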
+ */ + if (dur > msecs_to_jiffies(13000)) + dur = msecs_to_jiffies(13000); + else if (dur < msecs_to_jiffies(3000)) + dur = msecs_to_jiffies(3000); + + return dur; +} + +/* switch back from soft real-time to interactive weight raising */ +static void switch_back_to_interactive_wr(struct bfq_queue *bfqq, + struct bfq_data *bfqd) +{ + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + bfqq->last_wr_start_finish = bfqq->wr_start_at_switch_to_srt; +} + +static void +bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd, + struct bfq_io_cq *bic, bool bfq_already_existing) +{ + unsigned int old_wr_coeff; + bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq); + + if (bic->saved_has_short_ttime) + bfq_mark_bfqq_has_short_ttime(bfqq); + else + bfq_clear_bfqq_has_short_ttime(bfqq); + + if (bic->saved_IO_bound) + bfq_mark_bfqq_IO_bound(bfqq); + else + bfq_clear_bfqq_IO_bound(bfqq); + + if (unlikely(busy)) + old_wr_coeff = bfqq->wr_coeff; + + bfqq->ttime = bic->saved_ttime; + bfqq->wr_coeff = bic->saved_wr_coeff; + bfqq->wr_start_at_switch_to_srt = bic->saved_wr_start_at_switch_to_srt; + BUG_ON(time_is_after_jiffies(bfqq->wr_start_at_switch_to_srt)); + bfqq->last_wr_start_finish = bic->saved_last_wr_start_finish; + bfqq->wr_cur_max_time = bic->saved_wr_cur_max_time; + BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish)); + + bfq_log_bfqq(bfqq->bfqd, bfqq, + "[%s] bic %p wr_coeff %d start_finish %lu max_time %lu", + __func__, + bic, bfqq->wr_coeff, bfqq->last_wr_start_finish, + bfqq->wr_cur_max_time); + + if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) || + time_is_before_jiffies(bfqq->last_wr_start_finish + + bfqq->wr_cur_max_time))) { + if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && + !bfq_bfqq_in_large_burst(bfqq) && + time_is_after_eq_jiffies(bfqq->wr_start_at_switch_to_srt + + bfq_wr_duration(bfqd))) { + switch_back_to_interactive_wr(bfqq, bfqd); + bfq_log_bfqq(bfqq->bfqd, bfqq, + "resume state: switching back to interactive"); + } else { + bfqq->wr_coeff = 1; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "resume state: switching off wr (%lu + %lu < %lu)", + bfqq->last_wr_start_finish, bfqq->wr_cur_max_time, + jiffies); + } + } + + /* make sure weight will be updated, however we got here */ + bfqq->entity.prio_changed = 1; + + if (likely(!busy)) + return; + + if (old_wr_coeff == 1 && bfqq->wr_coeff > 1) { + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + } else if (old_wr_coeff > 1 && bfqq->wr_coeff == 1) { + bfqd->wr_busy_queues--; + BUG_ON(bfqd->wr_busy_queues < 0); + } +} + +static int bfqq_process_refs(struct bfq_queue *bfqq) +{ + int process_refs, io_refs; + + lockdep_assert_held(&bfqq->bfqd->lock); + + io_refs = bfqq->allocated; + process_refs = bfqq->ref - io_refs - bfqq->entity.on_st; + BUG_ON(process_refs < 0); + return process_refs; +} + +/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */ +static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct bfq_queue *item; + struct hlist_node *n; + + hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node) + hlist_del_init(&item->burst_list_node); + hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list); + bfqd->burst_size = 1; + bfqd->burst_parent_entity = bfqq->entity.parent; +} + +/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */ +static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + /* Increment burst size to take into 
account also bfqq */ + bfqd->burst_size++; + + bfq_log_bfqq(bfqd, bfqq, "add_to_burst %d", bfqd->burst_size); + + BUG_ON(bfqd->burst_size > bfqd->bfq_large_burst_thresh); + + if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) { + struct bfq_queue *pos, *bfqq_item; + struct hlist_node *n; + + /* + * Enough queues have been activated shortly after each + * other to consider this burst as large. + */ + bfqd->large_burst = true; + bfq_log_bfqq(bfqd, bfqq, "add_to_burst: large burst started"); + + /* + * We can now mark all queues in the burst list as + * belonging to a large burst. + */ + hlist_for_each_entry(bfqq_item, &bfqd->burst_list, + burst_list_node) { + bfq_mark_bfqq_in_large_burst(bfqq_item); + bfq_log_bfqq(bfqd, bfqq_item, "marked in large burst"); + } + bfq_mark_bfqq_in_large_burst(bfqq); + bfq_log_bfqq(bfqd, bfqq, "marked in large burst"); + + /* + * From now on, and until the current burst finishes, any + * new queue being activated shortly after the last queue + * was inserted in the burst can be immediately marked as + * belonging to a large burst. So the burst list is not + * needed any more. Remove it. + */ + hlist_for_each_entry_safe(pos, n, &bfqd->burst_list, + burst_list_node) + hlist_del_init(&pos->burst_list_node); + } else /* + * Burst not yet large: add bfqq to the burst list. Do + * not increment the ref counter for bfqq, because bfqq + * is removed from the burst list before freeing bfqq + * in put_queue. + */ + hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list); +} + +/* + * If many queues belonging to the same group happen to be created + * shortly after each other, then the processes associated with these + * queues have typically a common goal. In particular, bursts of queue + * creations are usually caused by services or applications that spawn + * many parallel threads/processes. Examples are systemd during boot, + * or git grep. To help these processes get their job done as soon as + * possible, it is usually better to not grant either weight-raising + * or device idling to their queues. + * + * In this comment we describe, firstly, the reasons why this fact + * holds, and, secondly, the next function, which implements the main + * steps needed to properly mark these queues so that they can then be + * treated in a different way. + * + * The above services or applications benefit mostly from a high + * throughput: the quicker the requests of the activated queues are + * cumulatively served, the sooner the target job of these queues gets + * completed. As a consequence, weight-raising any of these queues, + * which also implies idling the device for it, is almost always + * counterproductive. In most cases it just lowers throughput. + * + * On the other hand, a burst of queue creations may be caused also by + * the start of an application that does not consist of a lot of + * parallel I/O-bound threads. In fact, with a complex application, + * several short processes may need to be executed to start-up the + * application. In this respect, to start an application as quickly as + * possible, the best thing to do is in any case to privilege the I/O + * related to the application with respect to all other + * I/O. Therefore, the best strategy to start as quickly as possible + * an application that causes a burst of queue creations is to + * weight-raise all the queues created during the burst. This is the + * exact opposite of the best strategy for the other type of bursts. 
+ * + * In the end, to take the best action for each of the two cases, the + * two types of bursts need to be distinguished. Fortunately, this + * seems relatively easy, by looking at the sizes of the bursts. In + * particular, we found a threshold such that only bursts with a + * larger size than that threshold are apparently caused by + * services or commands such as systemd or git grep. For brevity, + * hereafter we call just 'large' these bursts. BFQ *does not* + * weight-raise queues whose creation occurs in a large burst. In + * addition, for each of these queues BFQ performs or does not perform + * idling depending on which choice boosts the throughput more. The + * exact choice depends on the device and request pattern at + * hand. + * + * Unfortunately, false positives may occur while an interactive task + * is starting (e.g., an application is being started). The + * consequence is that the queues associated with the task do not + * enjoy weight raising as expected. Fortunately these false positives + * are very rare. They typically occur if some service happens to + * start doing I/O exactly when the interactive task starts. + * + * Turning back to the next function, it implements all the steps + * needed to detect the occurrence of a large burst and to properly + * mark all the queues belonging to it (so that they can then be + * treated in a different way). This goal is achieved by maintaining a + * "burst list" that holds, temporarily, the queues that belong to the + * burst in progress. The list is then used to mark these queues as + * belonging to a large burst if the burst does become large. The main + * steps are the following. + * + * . when the very first queue is created, the queue is inserted into the + * list (as it could be the first queue in a possible burst) + * + * . if the current burst has not yet become large, and a queue Q that does + * not yet belong to the burst is activated shortly after the last time + * at which a new queue entered the burst list, then the function appends + * Q to the burst list + * + * . if, as a consequence of the previous step, the burst size reaches + * the large-burst threshold, then + * + * . all the queues in the burst list are marked as belonging to a + * large burst + * + * . the burst list is deleted; in fact, the burst list already served + * its purpose (keeping temporarily track of the queues in a burst, + * so as to be able to mark them as belonging to a large burst in the + * previous sub-step), and now is not needed any more + * + * . the device enters a large-burst mode + * + * . if a queue Q that does not belong to the burst is created while + * the device is in large-burst mode and shortly after the last time + * at which a queue either entered the burst list or was marked as + * belonging to the current large burst, then Q is immediately marked + * as belonging to a large burst. + * + * . if a queue Q that does not belong to the burst is created a while + * later, i.e., not shortly after, than the last time at which a queue + * either entered the burst list or was marked as belonging to the + * current large burst, then the current burst is deemed as finished and: + * + * . the large-burst mode is reset if set + * + * . the burst list is emptied + * + * . Q is inserted in the burst list, as Q may be the first queue + * in a possible new burst (then the burst list contains just Q + * after this step). 
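The steps listed above amount to a small state machine. The sketch below simulates it in user space for a sequence of queue-creation times; the threshold and interval values are assumptions for the example (the real ones live in struct bfq_data), and the burst list itself is reduced to a plain counter:

#include <stdio.h>
#include <stdbool.h>

#define BURST_INTERVAL_MS	180	/* "shortly after" (assumed)           */
#define LARGE_BURST_THRESH	11	/* size that counts as large (assumed) */

static unsigned int burst_size;
static bool large_burst;
static long last_ins_ms = -1;

/* condensed version of the steps above: extend the current burst,
 * mark it (and every new queue) as large, or start a new burst */
static void handle_queue_creation(long now_ms)
{
	if (last_ins_ms < 0 || now_ms - last_ins_ms > BURST_INTERVAL_MS) {
		/* created "a while later": the previous burst is over */
		large_burst = false;
		burst_size = 1;
	} else if (large_burst) {
		/* already in large-burst mode: just mark the new queue */
	} else if (++burst_size >= LARGE_BURST_THRESH) {
		large_burst = true;	/* burst list would be dropped here */
	}
	last_ins_ms = now_ms;
	printf("t=%5ld ms  burst_size=%2u  large=%d\n",
	       now_ms, burst_size, large_burst);
}

int main(void)
{
	/* twelve queues created back to back (e.g. a parallel startup) ... */
	for (long t = 0; t < 120; t += 10)
		handle_queue_creation(t);
	/* ... then one isolated creation much later: the state is reset */
	handle_queue_creation(5000);
	return 0;
}

The last, isolated creation falls in the "a while later" case of the description above and starts a fresh, one-queue burst.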
+ */ +static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + /* + * If bfqq is already in the burst list or is part of a large + * burst, or finally has just been split, then there is + * nothing else to do. + */ + if (!hlist_unhashed(&bfqq->burst_list_node) || + bfq_bfqq_in_large_burst(bfqq) || + time_is_after_eq_jiffies(bfqq->split_time + + msecs_to_jiffies(10))) + return; + + /* + * If bfqq's creation happens late enough, or bfqq belongs to + * a different group than the burst group, then the current + * burst is finished, and related data structures must be + * reset. + * + * In this respect, consider the special case where bfqq is + * the very first queue created after BFQ is selected for this + * device. In this case, last_ins_in_burst and + * burst_parent_entity are not yet significant when we get + * here. But it is easy to verify that, whether or not the + * following condition is true, bfqq will end up being + * inserted into the burst list. In particular the list will + * happen to contain only bfqq. And this is exactly what has + * to happen, as bfqq may be the first queue of the first + * burst. + */ + if (time_is_before_jiffies(bfqd->last_ins_in_burst + + bfqd->bfq_burst_interval) || + bfqq->entity.parent != bfqd->burst_parent_entity) { + bfqd->large_burst = false; + bfq_reset_burst_list(bfqd, bfqq); + bfq_log_bfqq(bfqd, bfqq, + "handle_burst: late activation or different group"); + goto end; + } + + /* + * If we get here, then bfqq is being activated shortly after the + * last queue. So, if the current burst is also large, we can mark + * bfqq as belonging to this large burst immediately. + */ + if (bfqd->large_burst) { + bfq_log_bfqq(bfqd, bfqq, "handle_burst: marked in burst"); + bfq_mark_bfqq_in_large_burst(bfqq); + goto end; + } + + /* + * If we get here, then a large-burst state has not yet been + * reached, but bfqq is being activated shortly after the last + * queue. Then we add bfqq to the burst. + */ + bfq_add_to_burst(bfqd, bfqq); +end: + /* + * At this point, bfqq either has been added to the current + * burst or has caused the current burst to terminate and a + * possible new burst to start. In particular, in the second + * case, bfqq has become the first queue in the possible new + * burst. In both cases last_ins_in_burst needs to be moved + * forward. + */ + bfqd->last_ins_in_burst = jiffies; + +} + +static int bfq_bfqq_budget_left(struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + + return entity->budget - entity->service; +} + +/* + * If enough samples have been computed, return the current max budget + * stored in bfqd, which is dynamically updated according to the + * estimated disk peak rate; otherwise return the default max budget + */ +static int bfq_max_budget(struct bfq_data *bfqd) +{ + if (bfqd->budgets_assigned < bfq_stats_min_budgets) + return bfq_default_max_budget; + else + return bfqd->bfq_max_budget; +} + +/* + * Return min budget, which is a fraction of the current or default + * max budget (trying with 1/32) + */ +static int bfq_min_budget(struct bfq_data *bfqd) +{ + if (bfqd->budgets_assigned < bfq_stats_min_budgets) + return bfq_default_max_budget / 32; + else + return bfqd->bfq_max_budget / 32; +} + +static void bfq_bfqq_expire(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool compensate, + enum bfqq_expiration reason); + +/* + * The next function, invoked after the input queue bfqq switches from + * idle to busy, updates the budget of bfqq. 
The function also tells + * whether the in-service queue should be expired, by returning + * true. The purpose of expiring the in-service queue is to give bfqq + * the chance to possibly preempt the in-service queue, and the reason + * for preempting the in-service queue is to achieve one of the two + * goals below. + * + * 1. Guarantee to bfqq its reserved bandwidth even if bfqq has + * expired because it has remained idle. In particular, bfqq may have + * expired for one of the following two reasons: + * + * - BFQ_BFQQ_NO_MORE_REQUEST bfqq did not enjoy any device idling and + * did not make it to issue a new request before its last request + * was served; + * + * - BFQ_BFQQ_TOO_IDLE bfqq did enjoy device idling, but did not issue + * a new request before the expiration of the idling-time. + * + * Even if bfqq has expired for one of the above reasons, the process + * associated with the queue may be however issuing requests greedily, + * and thus be sensitive to the bandwidth it receives (bfqq may have + * remained idle for other reasons: CPU high load, bfqq not enjoying + * idling, I/O throttling somewhere in the path from the process to + * the I/O scheduler, ...). But if, after every expiration for one of + * the above two reasons, bfqq has to wait for the service of at least + * one full budget of another queue before being served again, then + * bfqq is likely to get a much lower bandwidth or resource time than + * its reserved ones. To address this issue, two countermeasures need + * to be taken. + * + * First, the budget and the timestamps of bfqq need to be updated in + * a special way on bfqq reactivation: they need to be updated as if + * bfqq did not remain idle and did not expire. In fact, if they are + * computed as if bfqq expired and remained idle until reactivation, + * then the process associated with bfqq is treated as if, instead of + * being greedy, it stopped issuing requests when bfqq remained idle, + * and restarts issuing requests only on this reactivation. In other + * words, the scheduler does not help the process recover the "service + * hole" between bfqq expiration and reactivation. As a consequence, + * the process receives a lower bandwidth than its reserved one. In + * contrast, to recover this hole, the budget must be updated as if + * bfqq was not expired at all before this reactivation, i.e., it must + * be set to the value of the remaining budget when bfqq was + * expired. Along the same line, timestamps need to be assigned the + * value they had the last time bfqq was selected for service, i.e., + * before last expiration. Thus timestamps need to be back-shifted + * with respect to their normal computation (see [1] for more details + * on this tricky aspect). + * + * Secondly, to allow the process to recover the hole, the in-service + * queue must be expired too, to give bfqq the chance to preempt it + * immediately. In fact, if bfqq has to wait for a full budget of the + * in-service queue to be completed, then it may become impossible to + * let the process recover the hole, even if the back-shifted + * timestamps of bfqq are lower than those of the in-service queue. If + * this happens for most or all of the holes, then the process may not + * receive its reserved bandwidth. In this respect, it is worth noting + * that, being the service of outstanding requests unpreemptible, a + * little fraction of the holes may however be unrecoverable, thereby + * causing a little loss of bandwidth. 
+ * + * The last important point is detecting whether bfqq does need this + * bandwidth recovery. In this respect, the next function deems the + * process associated with bfqq greedy, and thus allows it to recover + * the hole, if: 1) the process is waiting for the arrival of a new + * request (which implies that bfqq expired for one of the above two + * reasons), and 2) such a request has arrived soon. The first + * condition is controlled through the flag non_blocking_wait_rq, + * while the second through the flag arrived_in_time. If both + * conditions hold, then the function computes the budget in the + * above-described special way, and signals that the in-service queue + * should be expired. Timestamp back-shifting is done later in + * __bfq_activate_entity. + * + * 2. Reduce latency. Even if timestamps are not backshifted to let + * the process associated with bfqq recover a service hole, bfqq may + * however happen to have, after being (re)activated, a lower finish + * timestamp than the in-service queue. That is, the next budget of + * bfqq may have to be completed before the one of the in-service + * queue. If this is the case, then preempting the in-service queue + * allows this goal to be achieved, apart from the unpreemptible, + * outstanding requests mentioned above. + * + * Unfortunately, regardless of which of the above two goals one wants + * to achieve, service trees need first to be updated to know whether + * the in-service queue must be preempted. To have service trees + * correctly updated, the in-service queue must be expired and + * rescheduled, and bfqq must be scheduled too. This is one of the + * most costly operations (in future versions, the scheduling + * mechanism may be re-designed in such a way to make it possible to + * know whether preemption is needed without needing to update service + * trees). In addition, queue preemptions almost always cause random + * I/O, and thus loss of throughput. Because of these facts, the next + * function adopts the following simple scheme to avoid both costly + * operations and too frequent preemptions: it requests the expiration + * of the in-service queue (unconditionally) only for queues that need + * to recover a hole, or that either are weight-raised or deserve to + * be weight-raised. + */ +static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool arrived_in_time, + bool wr_or_deserves_wr) +{ + struct bfq_entity *entity = &bfqq->entity; + + if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time) { + /* + * We do not clear the flag non_blocking_wait_rq here, as + * the latter is used in bfq_activate_bfqq to signal + * that timestamps need to be back-shifted (and is + * cleared right after). + */ + + /* + * In next assignment we rely on that either + * entity->service or entity->budget are not updated + * on expiration if bfqq is empty (see + * __bfq_bfqq_recalc_budget). Thus both quantities + * remain unchanged after such an expiration, and the + * following statement therefore assigns to + * entity->budget the remaining budget on such an + * expiration. For clarity, entity->service is not + * updated on expiration in any case, and, in normal + * operation, is reset only when bfqq is selected for + * service (see bfq_get_next_queue). 
+ */ + BUG_ON(bfqq->max_budget < 0); + entity->budget = min_t(unsigned long, + bfq_bfqq_budget_left(bfqq), + bfqq->max_budget); + + BUG_ON(entity->budget < 0); + return true; + } + + BUG_ON(bfqq->max_budget < 0); + entity->budget = max_t(unsigned long, bfqq->max_budget, + bfq_serv_to_charge(bfqq->next_rq, bfqq)); + BUG_ON(entity->budget < 0); + + bfq_clear_bfqq_non_blocking_wait_rq(bfqq); + return wr_or_deserves_wr; +} + +/* + * Return the farthest future time instant according to jiffies + * macros. + */ +static unsigned long bfq_greatest_from_now(void) +{ + return jiffies + MAX_JIFFY_OFFSET; +} + +/* + * Return the farthest past time instant according to jiffies + * macros. + */ +static unsigned long bfq_smallest_from_now(void) +{ + return jiffies - MAX_JIFFY_OFFSET; +} + +static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + unsigned int old_wr_coeff, + bool wr_or_deserves_wr, + bool interactive, + bool in_burst, + bool soft_rt) +{ + if (old_wr_coeff == 1 && wr_or_deserves_wr) { + /* start a weight-raising period */ + if (interactive) { + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + } else { + /* + * No interactive weight raising in progress + * here: assign minus infinity to + * wr_start_at_switch_to_srt, to make sure + * that, at the end of the soft-real-time + * weight raising periods that is starting + * now, no interactive weight-raising period + * may be wrongly considered as still in + * progress (and thus actually started by + * mistake). + */ + bfqq->wr_start_at_switch_to_srt = + bfq_smallest_from_now(); + bfqq->wr_coeff = bfqd->bfq_wr_coeff * + BFQ_SOFTRT_WEIGHT_FACTOR; + bfqq->wr_cur_max_time = + bfqd->bfq_wr_rt_max_time; + } + /* + * If needed, further reduce budget to make sure it is + * close to bfqq's backlog, so as to reduce the + * scheduling-error component due to a too large + * budget. Do not care about throughput consequences, + * but only about latency. Finally, do not assign a + * too small budget either, to avoid increasing + * latency by causing too frequent expirations. + */ + bfqq->entity.budget = min_t(unsigned long, + bfqq->entity.budget, + 2 * bfq_min_budget(bfqd)); + + bfq_log_bfqq(bfqd, bfqq, + "wrais starting at %lu, rais_max_time %u", + jiffies, + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } else if (old_wr_coeff > 1) { + if (interactive) { /* update wr coeff and duration */ + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + } else if (in_burst) { + bfqq->wr_coeff = 1; + bfq_log_bfqq(bfqd, bfqq, + "wrais ending at %lu, rais_max_time %u", + jiffies, + jiffies_to_msecs(bfqq-> + wr_cur_max_time)); + } else if (soft_rt) { + /* + * The application is now or still meeting the + * requirements for being deemed soft rt. We + * can then correctly and safely (re)charge + * the weight-raising duration for the + * application with the weight-raising + * duration for soft rt applications. 
+ * + * In particular, doing this recharge now, i.e., + * before the weight-raising period for the + * application finishes, reduces the probability + * of the following negative scenario: + * 1) the weight of a soft rt application is + * raised at startup (as for any newly + * created application), + * 2) since the application is not interactive, + * at a certain time weight-raising is + * stopped for the application, + * 3) at that time the application happens to + * still have pending requests, and hence + * is destined to not have a chance to be + * deemed soft rt before these requests are + * completed (see the comments to the + * function bfq_bfqq_softrt_next_start() + * for details on soft rt detection), + * 4) these pending requests experience a high + * latency because the application is not + * weight-raised while they are pending. + */ + if (bfqq->wr_cur_max_time != + bfqd->bfq_wr_rt_max_time) { + bfqq->wr_start_at_switch_to_srt = + bfqq->last_wr_start_finish; + BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish)); + + bfqq->wr_cur_max_time = + bfqd->bfq_wr_rt_max_time; + bfqq->wr_coeff = bfqd->bfq_wr_coeff * + BFQ_SOFTRT_WEIGHT_FACTOR; + bfq_log_bfqq(bfqd, bfqq, + "switching to soft_rt wr"); + } else + bfq_log_bfqq(bfqd, bfqq, + "moving forward soft_rt wr duration"); + bfqq->last_wr_start_finish = jiffies; + } + } +} + +static bool bfq_bfqq_idle_for_long_time(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + return bfqq->dispatched == 0 && + time_is_before_jiffies( + bfqq->budget_timeout + + bfqd->bfq_wr_min_idle_time); +} + +static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + int old_wr_coeff, + struct request *rq, + bool *interactive) +{ + bool soft_rt, in_burst, wr_or_deserves_wr, + bfqq_wants_to_preempt, + idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq), + /* + * See the comments on + * bfq_bfqq_update_budg_for_activation for + * details on the usage of the next variable. + */ + arrived_in_time = ktime_get_ns() <= + bfqq->ttime.last_end_request + + bfqd->bfq_slice_idle * 3; + + bfq_log_bfqq(bfqd, bfqq, + "bfq_add_request non-busy: " + "jiffies %lu, in_time %d, idle_long %d busyw %d " + "wr_coeff %u", + jiffies, arrived_in_time, + idle_for_long_time, + bfq_bfqq_non_blocking_wait_rq(bfqq), + old_wr_coeff); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + BUG_ON(bfqq == bfqd->in_service_queue); + + /* + * bfqq deserves to be weight-raised if: + * - it is sync, + * - it does not belong to a large burst, + * - it has been idle for enough time or is soft real-time, + * - is linked to a bfq_io_cq (it is not shared in any sense) + */ + in_burst = bfq_bfqq_in_large_burst(bfqq); + soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 && + !in_burst && + time_is_before_jiffies(bfqq->soft_rt_next_start); + *interactive = + !in_burst && + idle_for_long_time; + wr_or_deserves_wr = bfqd->low_latency && + (bfqq->wr_coeff > 1 || + (bfq_bfqq_sync(bfqq) && + bfqq->bic && (*interactive || soft_rt))); + + bfq_log_bfqq(bfqd, bfqq, + "bfq_add_request: " + "in_burst %d, " + "soft_rt %d (next %lu), inter %d, bic %p", + bfq_bfqq_in_large_burst(bfqq), soft_rt, + bfqq->soft_rt_next_start, + *interactive, + bfqq->bic); + + /* + * Using the last flag, update budget and check whether bfqq + * may want to preempt the in-service queue. 
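The eligibility conditions listed a few lines above collapse into the single boolean wr_or_deserves_wr computed in this function. A user-space restatement, with the large-burst exclusion already folded into the interactive and soft_rt inputs, as it is in the code:

#include <stdio.h>
#include <stdbool.h>

/*
 * Condensed form of the test: a queue either is already weight-raised,
 * or it must be sync, backed by a bfq_io_cq, and either interactive
 * (long idle) or soft real-time; low_latency gates the whole thing.
 */
static bool deserves_weight_raising(bool low_latency, bool already_raised,
				    bool sync, bool has_bic,
				    bool interactive, bool soft_rt)
{
	return low_latency &&
	       (already_raised ||
		(sync && has_bic && (interactive || soft_rt)));
}

int main(void)
{
	/* an interactive sync reader vs. a plain async writeback queue */
	printf("interactive sync queue: %d\n",
	       deserves_weight_raising(true, false, true, true, true, false));
	printf("async writeback queue:  %d\n",
	       deserves_weight_raising(true, false, false, false, false, false));
	return 0;
}

A plain async queue fails every branch and is never weight-raised here, while a sync queue that has been idle long enough qualifies as interactive.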
+ */ + bfqq_wants_to_preempt = + bfq_bfqq_update_budg_for_activation(bfqd, bfqq, + arrived_in_time, + wr_or_deserves_wr); + + /* + * If bfqq happened to be activated in a burst, but has been + * idle for much more than an interactive queue, then we + * assume that, in the overall I/O initiated in the burst, the + * I/O associated with bfqq is finished. So bfqq does not need + * to be treated as a queue belonging to a burst + * anymore. Accordingly, we reset bfqq's in_large_burst flag + * if set, and remove bfqq from the burst list if it's + * there. We do not decrement burst_size, because the fact + * that bfqq does not need to belong to the burst list any + * more does not invalidate the fact that bfqq was created in + * a burst. + */ + if (likely(!bfq_bfqq_just_created(bfqq)) && + idle_for_long_time && + time_is_before_jiffies( + bfqq->budget_timeout + + msecs_to_jiffies(10000))) { + hlist_del_init(&bfqq->burst_list_node); + bfq_clear_bfqq_in_large_burst(bfqq); + } + + bfq_clear_bfqq_just_created(bfqq); + + if (!bfq_bfqq_IO_bound(bfqq)) { + if (arrived_in_time) { + bfqq->requests_within_timer++; + if (bfqq->requests_within_timer >= + bfqd->bfq_requests_within_timer) + bfq_mark_bfqq_IO_bound(bfqq); + } else + bfqq->requests_within_timer = 0; + bfq_log_bfqq(bfqd, bfqq, "requests in time %d", + bfqq->requests_within_timer); + } + + if (bfqd->low_latency) { + if (unlikely(time_is_after_jiffies(bfqq->split_time))) + /* wraparound */ + bfqq->split_time = + jiffies - bfqd->bfq_wr_min_idle_time - 1; + + if (time_is_before_jiffies(bfqq->split_time + + bfqd->bfq_wr_min_idle_time)) { + bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq, + old_wr_coeff, + wr_or_deserves_wr, + *interactive, + in_burst, + soft_rt); + + if (old_wr_coeff != bfqq->wr_coeff) + bfqq->entity.prio_changed = 1; + } + } + + bfqq->last_idle_bklogged = jiffies; + bfqq->service_from_backlogged = 0; + bfq_clear_bfqq_softrt_update(bfqq); + + bfq_add_bfqq_busy(bfqd, bfqq); + + /* + * Expire in-service queue only if preemption may be needed + * for guarantees. In this respect, the function + * next_queue_may_preempt just checks a simple, necessary + * condition, and not a sufficient condition based on + * timestamps. In fact, for the latter condition to be + * evaluated, timestamps would need first to be updated, and + * this operation is quite costly (see the comments on the + * function bfq_bfqq_update_budg_for_activation). + */ + if (bfqd->in_service_queue && bfqq_wants_to_preempt && + bfqd->in_service_queue->wr_coeff < bfqq->wr_coeff && + next_queue_may_preempt(bfqd)) { + struct bfq_queue *in_serv = + bfqd->in_service_queue; + BUG_ON(in_serv == bfqq); + + bfq_bfqq_expire(bfqd, bfqd->in_service_queue, + false, BFQ_BFQQ_PREEMPTED); + } +} + +static void bfq_add_request(struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + struct bfq_data *bfqd = bfqq->bfqd; + struct request *next_rq, *prev; + unsigned int old_wr_coeff = bfqq->wr_coeff; + bool interactive = false; + + bfq_log_bfqq(bfqd, bfqq, "add_request: size %u %s", + blk_rq_sectors(rq), rq_is_sync(rq) ? 
"S" : "A"); + + if (bfqq->wr_coeff > 1) /* queue is being weight-raised */ + bfq_log_bfqq(bfqd, bfqq, + "raising period dur %u/%u msec, old coeff %u, w %d(%d)", + jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time), + bfqq->wr_coeff, + bfqq->entity.weight, bfqq->entity.orig_weight); + + bfqq->queued[rq_is_sync(rq)]++; + bfqd->queued++; + + BUG_ON(!RQ_BFQQ(rq)); + BUG_ON(RQ_BFQQ(rq) != bfqq); + WARN_ON(blk_rq_sectors(rq) == 0); + + elv_rb_add(&bfqq->sort_list, rq); + + /* + * Check if this request is a better next-to-serve candidate. + */ + prev = bfqq->next_rq; + next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position); + BUG_ON(!next_rq); + BUG_ON(!RQ_BFQQ(next_rq)); + BUG_ON(RQ_BFQQ(next_rq) != bfqq); + bfqq->next_rq = next_rq; + + /* + * Adjust priority tree position, if next_rq changes. + */ + if (prev != bfqq->next_rq) + bfq_pos_tree_add_move(bfqd, bfqq); + + if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */ + bfq_bfqq_handle_idle_busy_switch(bfqd, bfqq, old_wr_coeff, + rq, &interactive); + else { + if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) && + time_is_before_jiffies( + bfqq->last_wr_start_finish + + bfqd->bfq_wr_min_inter_arr_async)) { + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqd, bfqq, + "non-idle wrais starting, " + "wr_max_time %u wr_busy %d", + jiffies_to_msecs(bfqq->wr_cur_max_time), + bfqd->wr_busy_queues); + } + if (prev != bfqq->next_rq) + bfq_updated_next_req(bfqd, bfqq); + } + + /* + * Assign jiffies to last_wr_start_finish in the following + * cases: + * + * . if bfqq is not going to be weight-raised, because, for + * non weight-raised queues, last_wr_start_finish stores the + * arrival time of the last request; as of now, this piece + * of information is used only for deciding whether to + * weight-raise async queues + * + * . if bfqq is not weight-raised, because, if bfqq is now + * switching to weight-raised, then last_wr_start_finish + * stores the time when weight-raising starts + * + * . if bfqq is interactive, because, regardless of whether + * bfqq is currently weight-raised, the weight-raising + * period must start or restart (this case is considered + * separately because it is not detected by the above + * conditions, if bfqq is already weight-raised) + * + * last_wr_start_finish has to be updated also if bfqq is soft + * real-time, because the weight-raising period is constantly + * restarted on idle-to-busy transitions for these queues, but + * this is already done in bfq_bfqq_handle_idle_busy_switch if + * needed. 
+ */ + if (bfqd->low_latency && + (old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive)) + bfqq->last_wr_start_finish = jiffies; +} + +static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd, + struct bio *bio, + struct request_queue *q) +{ + struct bfq_queue *bfqq = bfqd->bio_bfqq; + + BUG_ON(!bfqd->bio_bfqq_set); + + if (bfqq) + return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio)); + + return NULL; +} + +static sector_t get_sdist(sector_t last_pos, struct request *rq) +{ + sector_t sdist = 0; + + if (last_pos) { + if (last_pos < blk_rq_pos(rq)) + sdist = blk_rq_pos(rq) - last_pos; + else + sdist = last_pos - blk_rq_pos(rq); + } + + return sdist; +} + +#if 0 /* Still not clear if we can do without next two functions */ +static void bfq_activate_request(struct request_queue *q, struct request *rq) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + bfqd->rq_in_driver++; +} + +static void bfq_deactivate_request(struct request_queue *q, struct request *rq) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + + BUG_ON(bfqd->rq_in_driver == 0); + bfqd->rq_in_driver--; +} +#endif + +static void bfq_remove_request(struct request_queue *q, + struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + struct bfq_data *bfqd = bfqq->bfqd; + const int sync = rq_is_sync(rq); + + BUG_ON(bfqq->entity.service > bfqq->entity.budget && + bfqq == bfqd->in_service_queue); + + if (bfqq->next_rq == rq) { + bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq); + if (bfqq->next_rq && !RQ_BFQQ(bfqq->next_rq)) { + pr_crit("no bfqq! for next rq %p bfqq %p\n", + bfqq->next_rq, bfqq); + } + + BUG_ON(bfqq->next_rq && !RQ_BFQQ(bfqq->next_rq)); + if (bfqq->next_rq && RQ_BFQQ(bfqq->next_rq) != bfqq) { + pr_crit( + "wrong bfqq! for next rq %p, rq_bfqq %p bfqq %p\n", + bfqq->next_rq, RQ_BFQQ(bfqq->next_rq), bfqq); + } + BUG_ON(bfqq->next_rq && RQ_BFQQ(bfqq->next_rq) != bfqq); + + bfq_updated_next_req(bfqd, bfqq); + } + + if (rq->queuelist.prev != &rq->queuelist) + list_del_init(&rq->queuelist); + BUG_ON(bfqq->queued[sync] == 0); + bfqq->queued[sync]--; + bfqd->queued--; + elv_rb_del(&bfqq->sort_list, rq); + + elv_rqhash_del(q, rq); + if (q->last_merge == rq) + q->last_merge = NULL; + + if (RB_EMPTY_ROOT(&bfqq->sort_list)) { + bfqq->next_rq = NULL; + + BUG_ON(bfqq->entity.budget < 0); + + if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue) { + BUG_ON(bfqq->ref < 2); /* referred by rq and on tree */ + bfq_del_bfqq_busy(bfqd, bfqq, false); + /* + * bfqq emptied. In normal operation, when + * bfqq is empty, bfqq->entity.service and + * bfqq->entity.budget must contain, + * respectively, the service received and the + * budget used last time bfqq emptied. These + * facts do not hold in this case, as at least + * this last removal occurred while bfqq is + * not in service. To avoid inconsistencies, + * reset both bfqq->entity.service and + * bfqq->entity.budget, if bfqq has still a + * process that may issue I/O requests to it. + */ + bfqq->entity.budget = bfqq->entity.service = 0; + } + + /* + * Remove queue from request-position tree as it is empty. 
+ */ + if (bfqq->pos_root) { + rb_erase(&bfqq->pos_node, bfqq->pos_root); + bfqq->pos_root = NULL; + } + } + + if (rq->cmd_flags & REQ_META) { + BUG_ON(bfqq->meta_pending == 0); + bfqq->meta_pending--; + } +} + +static bool bfq_bio_merge(struct blk_mq_hw_ctx *hctx, struct bio *bio) +{ + struct request_queue *q = hctx->queue; + struct bfq_data *bfqd = q->elevator->elevator_data; + struct request *free = NULL; + /* + * bfq_bic_lookup grabs the queue_lock: invoke it now and + * store its return value for later use, to avoid nesting + * queue_lock inside the bfqd->lock. We assume that the bic + * returned by bfq_bic_lookup does not go away before + * bfqd->lock is taken. + */ + struct bfq_io_cq *bic = bfq_bic_lookup(bfqd, current->io_context, q); + bool ret; + + spin_lock_irq(&bfqd->lock); + + if (bic) + bfqd->bio_bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf)); + else + bfqd->bio_bfqq = NULL; + bfqd->bio_bic = bic; + /* Set next flag just for testing purposes */ + bfqd->bio_bfqq_set = true; + + ret = blk_mq_sched_try_merge(q, bio, &free); + + /* + * XXX Not yet freeing without lock held, to avoid an + * inconsistency with respect to the lock-protected invocation + * of blk_mq_sched_try_insert_merge in bfq_bio_merge. Waiting + * for clarifications from Jens. + */ + if (free) + blk_mq_free_request(free); + bfqd->bio_bfqq_set = false; + spin_unlock_irq(&bfqd->lock); + + return ret; +} + +static int bfq_request_merge(struct request_queue *q, struct request **req, + struct bio *bio) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + struct request *__rq; + + __rq = bfq_find_rq_fmerge(bfqd, bio, q); + if (__rq && elv_bio_merge_ok(__rq, bio)) { + *req = __rq; + bfq_log(bfqd, "request_merge: req %p", __rq); + + return ELEVATOR_FRONT_MERGE; + } + + return ELEVATOR_NO_MERGE; +} + +static void bfq_request_merged(struct request_queue *q, struct request *req, + enum elv_merge type) +{ + BUG_ON(req->rq_flags & RQF_DISP_LIST); + + if (type == ELEVATOR_FRONT_MERGE && + rb_prev(&req->rb_node) && + blk_rq_pos(req) < + blk_rq_pos(container_of(rb_prev(&req->rb_node), + struct request, rb_node))) { + struct bfq_queue *bfqq = RQ_BFQQ(req); + struct bfq_data *bfqd = bfqq->bfqd; + struct request *prev, *next_rq; + + /* Reposition request in its sort_list */ + elv_rb_del(&bfqq->sort_list, req); + BUG_ON(!RQ_BFQQ(req)); + BUG_ON(RQ_BFQQ(req) != bfqq); + elv_rb_add(&bfqq->sort_list, req); + + /* Choose next request to be served for bfqq */ + prev = bfqq->next_rq; + next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req, + bfqd->last_position); + BUG_ON(!next_rq); + + bfqq->next_rq = next_rq; + + bfq_log_bfqq(bfqd, bfqq, + "request_merged: req %p prev %p next_rq %p bfqq %p", + req, prev, next_rq, bfqq); + + /* + * If next_rq changes, update both the queue's budget to + * fit the new request and the queue's position in its + * rq_pos_tree. 
+ */ + if (prev != bfqq->next_rq) { + bfq_updated_next_req(bfqd, bfqq); + bfq_pos_tree_add_move(bfqd, bfqq); + } + } +} + +static void bfq_requests_merged(struct request_queue *q, struct request *rq, + struct request *next) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next); + + BUG_ON(!RQ_BFQQ(rq)); + BUG_ON(!RQ_BFQQ(next)); + BUG_ON(rq->rq_flags & RQF_DISP_LIST); + BUG_ON(next->rq_flags & RQF_DISP_LIST); + + if (!RB_EMPTY_NODE(&rq->rb_node)) + goto end; + + bfq_log_bfqq(bfqq->bfqd, bfqq, + "requests_merged: rq %p next %p bfqq %p next_bfqq %p", + rq, next, bfqq, next_bfqq); + + spin_lock_irq(&bfqq->bfqd->lock); + + /* + * If next and rq belong to the same bfq_queue and next is older + * than rq, then reposition rq in the fifo (by substituting next + * with rq). Otherwise, if next and rq belong to different + * bfq_queues, never reposition rq: in fact, we would have to + * reposition it with respect to next's position in its own fifo, + * which would most certainly be too expensive with respect to + * the benefits. + */ + if (bfqq == next_bfqq && + !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) && + next->fifo_time < rq->fifo_time) { + list_del_init(&rq->queuelist); + list_replace_init(&next->queuelist, &rq->queuelist); + rq->fifo_time = next->fifo_time; + } + + if (bfqq->next_rq == next) + bfqq->next_rq = rq; + + bfq_remove_request(q, next); + bfqg_stats_update_io_remove(bfqq_group(bfqq), next->cmd_flags); + + spin_unlock_irq(&bfqq->bfqd->lock); +end: + bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags); +} + +/* Must be called with bfqq != NULL */ +static void bfq_bfqq_end_wr(struct bfq_queue *bfqq) +{ + BUG_ON(!bfqq); + + if (bfq_bfqq_busy(bfqq)) { + bfqq->bfqd->wr_busy_queues--; + BUG_ON(bfqq->bfqd->wr_busy_queues < 0); + } + bfqq->wr_coeff = 1; + bfqq->wr_cur_max_time = 0; + bfqq->last_wr_start_finish = jiffies; + /* + * Trigger a weight change on the next invocation of + * __bfq_entity_update_weight_prio. 
+ */ + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "end_wr: wrais ending at %lu, rais_max_time %u", + bfqq->last_wr_start_finish, + jiffies_to_msecs(bfqq->wr_cur_max_time)); + bfq_log_bfqq(bfqq->bfqd, bfqq, "end_wr: wr_busy %d", + bfqq->bfqd->wr_busy_queues); +} + +static void bfq_end_wr_async_queues(struct bfq_data *bfqd, + struct bfq_group *bfqg) +{ + int i, j; + + for (i = 0; i < 2; i++) + for (j = 0; j < IOPRIO_BE_NR; j++) + if (bfqg->async_bfqq[i][j]) + bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]); + if (bfqg->async_idle_bfqq) + bfq_bfqq_end_wr(bfqg->async_idle_bfqq); +} + +static void bfq_end_wr(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq; + + spin_lock_irq(&bfqd->lock); + + list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) + bfq_bfqq_end_wr(bfqq); + list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) + bfq_bfqq_end_wr(bfqq); + bfq_end_wr_async(bfqd); + + spin_unlock_irq(&bfqd->lock); +} + +static sector_t bfq_io_struct_pos(void *io_struct, bool request) +{ + if (request) + return blk_rq_pos(io_struct); + else + return ((struct bio *)io_struct)->bi_iter.bi_sector; +} + +static int bfq_rq_close_to_sector(void *io_struct, bool request, + sector_t sector) +{ + return abs(bfq_io_struct_pos(io_struct, request) - sector) <= + BFQQ_CLOSE_THR; +} + +static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + sector_t sector) +{ + struct rb_root *root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree; + struct rb_node *parent, *node; + struct bfq_queue *__bfqq; + + if (RB_EMPTY_ROOT(root)) + return NULL; + + /* + * First, if we find a request starting at the end of the last + * request, choose it. + */ + __bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL); + if (__bfqq) + return __bfqq; + + /* + * If the exact sector wasn't found, the parent of the NULL leaf + * will contain the closest sector (rq_pos_tree sorted by + * next_request position). + */ + __bfqq = rb_entry(parent, struct bfq_queue, pos_node); + if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector)) + return __bfqq; + + if (blk_rq_pos(__bfqq->next_rq) < sector) + node = rb_next(&__bfqq->pos_node); + else + node = rb_prev(&__bfqq->pos_node); + if (!node) + return NULL; + + __bfqq = rb_entry(node, struct bfq_queue, pos_node); + if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector)) + return __bfqq; + + return NULL; +} + +static struct bfq_queue *bfq_find_close_cooperator(struct bfq_data *bfqd, + struct bfq_queue *cur_bfqq, + sector_t sector) +{ + struct bfq_queue *bfqq; + + /* + * We shall notice if some of the queues are cooperating, + * e.g., working closely on the same area of the device. In + * that case, we can group them together and: 1) don't waste + * time idling, and 2) serve the union of their requests in + * the best possible order for throughput. + */ + bfqq = bfqq_find_close(bfqd, cur_bfqq, sector); + if (!bfqq || bfqq == cur_bfqq) + return NULL; + + return bfqq; +} + +static struct bfq_queue * +bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) +{ + int process_refs, new_process_refs; + struct bfq_queue *__bfqq; + + /* + * If there are no process references on the new_bfqq, then it is + * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain + * may have dropped their last reference (not just their last process + * reference). + */ + if (!bfqq_process_refs(new_bfqq)) + return NULL; + + /* Avoid a circular list and skip interim queue merges. 
*/ + while ((__bfqq = new_bfqq->new_bfqq)) { + if (__bfqq == bfqq) + return NULL; + new_bfqq = __bfqq; + } + + process_refs = bfqq_process_refs(bfqq); + new_process_refs = bfqq_process_refs(new_bfqq); + /* + * If the process for the bfqq has gone away, there is no + * sense in merging the queues. + */ + if (process_refs == 0 || new_process_refs == 0) + return NULL; + + bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d", + new_bfqq->pid); + + /* + * Merging is just a redirection: the requests of the process + * owning one of the two queues are redirected to the other queue. + * The latter queue, in its turn, is set as shared if this is the + * first time that the requests of some process are redirected to + * it. + * + * We redirect bfqq to new_bfqq and not the opposite, because + * we are in the context of the process owning bfqq, thus we + * have the io_cq of this process. So we can immediately + * configure this io_cq to redirect the requests of the + * process to new_bfqq. In contrast, the io_cq of new_bfqq is + * not available any more (new_bfqq->bic == NULL). + * + * Anyway, even in case new_bfqq coincides with the in-service + * queue, redirecting requests the in-service queue is the + * best option, as we feed the in-service queue with new + * requests close to the last request served and, by doing so, + * are likely to increase the throughput. + */ + bfqq->new_bfqq = new_bfqq; + new_bfqq->ref += process_refs; + return new_bfqq; +} + +static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq, + struct bfq_queue *new_bfqq) +{ + if (bfq_class_idle(bfqq) || bfq_class_idle(new_bfqq) || + (bfqq->ioprio_class != new_bfqq->ioprio_class)) + return false; + + /* + * If either of the queues has already been detected as seeky, + * then merging it with the other queue is unlikely to lead to + * sequential I/O. + */ + if (BFQQ_SEEKY(bfqq) || BFQQ_SEEKY(new_bfqq)) + return false; + + /* + * Interleaved I/O is known to be done by (some) applications + * only for reads, so it does not make sense to merge async + * queues. + */ + if (!bfq_bfqq_sync(bfqq) || !bfq_bfqq_sync(new_bfqq)) + return false; + + return true; +} + +/* + * If this function returns true, then bfqq cannot be merged. The idea + * is that true cooperation happens very early after processes start + * to do I/O. Usually, late cooperations are just accidental false + * positives. In case bfqq is weight-raised, such false positives + * would evidently degrade latency guarantees for bfqq. + */ +static bool wr_from_too_long(struct bfq_queue *bfqq) +{ + return bfqq->wr_coeff > 1 && + time_is_before_jiffies(bfqq->last_wr_start_finish + + msecs_to_jiffies(100)); +} + +/* + * Attempt to schedule a merge of bfqq with the currently in-service + * queue or with a close queue among the scheduled queues. Return + * NULL if no merge was scheduled, a pointer to the shared bfq_queue + * structure otherwise. + * + * The OOM queue is not allowed to participate to cooperation: in fact, since + * the requests temporarily redirected to the OOM queue could be redirected + * again to dedicated queues at any time, the state needed to correctly + * handle merging with the OOM queue would be quite complex and expensive + * to maintain. Besides, in such a critical condition as an out of memory, + * the benefits of queue merging may be little relevant, or even negligible. + * + * Weight-raised queues can be merged only if their weight-raising + * period has just started. In fact cooperating processes are usually + * started together. 
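The chain walk at the top of bfq_setup_merge() above (follow the new_bfqq pointers to the final merge target, bail out on a cycle, and give up if either side has lost its process references) can be sketched in isolation. The structure below is a deliberately stripped-down stand-in for bfq_queue, and the reference accounting is simplified; it only shows the shape of the walk, not the real bookkeeping.

/* Stand-alone sketch of the ->new_bfqq chain walk with cycle avoidance.
 * struct q is a simplified stand-in for struct bfq_queue. */
#include <stddef.h>
#include <stdio.h>

struct q {
	int pid;
	int process_refs;
	struct q *new_q;		/* analogous to bfqq->new_bfqq */
};

static struct q *setup_merge(struct q *q, struct q *new_q)
{
	struct q *it;

	if (!new_q->process_refs)
		return NULL;

	/* Avoid a circular chain and skip interim merge targets. */
	while ((it = new_q->new_q)) {
		if (it == q)
			return NULL;
		new_q = it;
	}

	if (!q->process_refs || !new_q->process_refs)
		return NULL;

	q->new_q = new_q;	/* redirect q to the final target; the real
				 * code bumps new_bfqq->ref by q's refs */
	new_q->process_refs += q->process_refs;
	return new_q;
}

int main(void)
{
	struct q a = { 1, 1, NULL }, b = { 2, 1, NULL }, c = { 3, 1, NULL };

	b.new_q = &c;		/* b is already scheduled to merge into c */
	struct q *t = setup_merge(&a, &b);

	printf("a merges into pid %d\n", t ? t->pid : -1);	/* -> 3 */
	return 0;
}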
Thus, with this filter we avoid false positives + * that would jeopardize low-latency guarantees. + * + * WARNING: queue merging may impair fairness among non-weight raised + * queues, for at least two reasons: 1) the original weight of a + * merged queue may change during the merged state, 2) even being the + * weight the same, a merged queue may be bloated with many more + * requests than the ones produced by its originally-associated + * process. + */ +static struct bfq_queue * +bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq, + void *io_struct, bool request) +{ + struct bfq_queue *in_service_bfqq, *new_bfqq; + + if (bfqq->new_bfqq) + return bfqq->new_bfqq; + + if (io_struct && wr_from_too_long(bfqq) && + likely(bfqq != &bfqd->oom_bfqq)) + bfq_log_bfqq(bfqd, bfqq, + "would have looked for coop, but bfq%d wr", + bfqq->pid); + + if (!io_struct || + wr_from_too_long(bfqq) || + unlikely(bfqq == &bfqd->oom_bfqq)) + return NULL; + + /* If there is only one backlogged queue, don't search. */ + if (bfqd->busy_queues == 1) + return NULL; + + in_service_bfqq = bfqd->in_service_queue; + + if (in_service_bfqq && in_service_bfqq != bfqq && + wr_from_too_long(in_service_bfqq) + && likely(in_service_bfqq == &bfqd->oom_bfqq)) + bfq_log_bfqq(bfqd, bfqq, + "would have tried merge with in-service-queue, but wr"); + + if (!in_service_bfqq || in_service_bfqq == bfqq + || wr_from_too_long(in_service_bfqq) || + unlikely(in_service_bfqq == &bfqd->oom_bfqq)) + goto check_scheduled; + + if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) && + bfqq->entity.parent == in_service_bfqq->entity.parent && + bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) { + new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq); + if (new_bfqq) + return new_bfqq; + } + /* + * Check whether there is a cooperator among currently scheduled + * queues. The only thing we need is that the bio/request is not + * NULL, as we need it to establish whether a cooperator exists. + */ +check_scheduled: + new_bfqq = bfq_find_close_cooperator(bfqd, bfqq, + bfq_io_struct_pos(io_struct, request)); + + BUG_ON(new_bfqq && bfqq->entity.parent != new_bfqq->entity.parent); + + if (new_bfqq && wr_from_too_long(new_bfqq) && + likely(new_bfqq != &bfqd->oom_bfqq) && + bfq_may_be_close_cooperator(bfqq, new_bfqq)) + bfq_log_bfqq(bfqd, bfqq, + "would have merged with bfq%d, but wr", + new_bfqq->pid); + + if (new_bfqq && !wr_from_too_long(new_bfqq) && + likely(new_bfqq != &bfqd->oom_bfqq) && + bfq_may_be_close_cooperator(bfqq, new_bfqq)) + return bfq_setup_merge(bfqq, new_bfqq); + + return NULL; +} + +static void bfq_bfqq_save_state(struct bfq_queue *bfqq) +{ + struct bfq_io_cq *bic = bfqq->bic; + + /* + * If !bfqq->bic, the queue is already shared or its requests + * have already been redirected to a shared queue; both idle window + * and weight raising state have already been saved. Do nothing. + */ + if (!bic) + return; + + bic->saved_ttime = bfqq->ttime; + bic->saved_has_short_ttime = bfq_bfqq_has_short_ttime(bfqq); + bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq); + bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq); + bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node); + if (unlikely(bfq_bfqq_just_created(bfqq) && + !bfq_bfqq_in_large_burst(bfqq))) { + /* + * bfqq being merged ritgh after being created: bfqq + * would have deserved interactive weight raising, but + * did not make it to be set in a weight-raised state, + * because of this early merge. 
Store directly the + * weight-raising state that would have been assigned + * to bfqq, so that to avoid that bfqq unjustly fails + * to enjoy weight raising if split soon. + */ + bic->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff; + bic->saved_wr_cur_max_time = bfq_wr_duration(bfqq->bfqd); + bic->saved_last_wr_start_finish = jiffies; + } else { + bic->saved_wr_coeff = bfqq->wr_coeff; + bic->saved_wr_start_at_switch_to_srt = + bfqq->wr_start_at_switch_to_srt; + bic->saved_last_wr_start_finish = bfqq->last_wr_start_finish; + bic->saved_wr_cur_max_time = bfqq->wr_cur_max_time; + } + BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish)); + bfq_log_bfqq(bfqq->bfqd, bfqq, + "[%s] bic %p wr_coeff %d start_finish %lu max_time %lu", + __func__, + bic, bfqq->wr_coeff, bfqq->last_wr_start_finish, + bfqq->wr_cur_max_time); +} + +static void +bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic, + struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) +{ + bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu", + (unsigned long) new_bfqq->pid); + BUG_ON(bfqq->bic && bfqq->bic == new_bfqq->bic); + /* Save weight raising and idle window of the merged queues */ + bfq_bfqq_save_state(bfqq); + bfq_bfqq_save_state(new_bfqq); + + if (bfq_bfqq_IO_bound(bfqq)) + bfq_mark_bfqq_IO_bound(new_bfqq); + bfq_clear_bfqq_IO_bound(bfqq); + + /* + * If bfqq is weight-raised, then let new_bfqq inherit + * weight-raising. To reduce false positives, neglect the case + * where bfqq has just been created, but has not yet made it + * to be weight-raised (which may happen because EQM may merge + * bfqq even before bfq_add_request is executed for the first + * time for bfqq). Handling this case would however be very + * easy, thanks to the flag just_created. + */ + if (new_bfqq->wr_coeff == 1 && bfqq->wr_coeff > 1) { + new_bfqq->wr_coeff = bfqq->wr_coeff; + new_bfqq->wr_cur_max_time = bfqq->wr_cur_max_time; + new_bfqq->last_wr_start_finish = bfqq->last_wr_start_finish; + new_bfqq->wr_start_at_switch_to_srt = + bfqq->wr_start_at_switch_to_srt; + if (bfq_bfqq_busy(new_bfqq)) { + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + } + + new_bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqd, new_bfqq, + "wr start after merge with %d, rais_max_time %u", + bfqq->pid, + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } + + if (bfqq->wr_coeff > 1) { /* bfqq has given its wr to new_bfqq */ + bfqq->wr_coeff = 1; + bfqq->entity.prio_changed = 1; + if (bfq_bfqq_busy(bfqq)) { + bfqd->wr_busy_queues--; + BUG_ON(bfqd->wr_busy_queues < 0); + } + + } + + bfq_log_bfqq(bfqd, new_bfqq, "merge_bfqqs: wr_busy %d", + bfqd->wr_busy_queues); + + /* + * Merge queues (that is, let bic redirect its requests to new_bfqq) + */ + bic_set_bfqq(bic, new_bfqq, 1); + bfq_mark_bfqq_coop(new_bfqq); + /* + * new_bfqq now belongs to at least two bics (it is a shared queue): + * set new_bfqq->bic to NULL. bfqq either: + * - does not belong to any bic any more, and hence bfqq->bic must + * be set to NULL, or + * - is a queue whose owning bics have already been redirected to a + * different queue, hence the queue is destined to not belong to + * any bic soon and bfqq->bic is already NULL (therefore the next + * assignment causes no harm). 
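The save/restore idea behind bfq_bfqq_save_state() above is that per-process scheduling hints (weight-raising state, think-time flags, and so on) are parked in the per-process context before a merge so they can be re-applied if the shared queue is later split. A reduced sketch, with simplified stand-in structures and only the weight-raising fields, might look like this:

/* Reduced sketch of saving weight-raising state before a queue merge and
 * restoring it on a later split. Structures are simplified stand-ins. */
#include <stdbool.h>
#include <stdio.h>

struct wr_state {
	unsigned int wr_coeff;
	unsigned long wr_cur_max_time;
	unsigned long last_wr_start_finish;
};

struct queue {
	struct wr_state wr;
	bool shared;
};

struct io_ctx {				/* plays the role of bfq_io_cq */
	struct wr_state saved;
};

static void save_state(struct io_ctx *ic, const struct queue *q)
{
	ic->saved = q->wr;	/* park the state in the per-process context */
}

static void restore_state(const struct io_ctx *ic, struct queue *q)
{
	q->wr = ic->saved;	/* re-apply it to the queue obtained on split */
}

int main(void)
{
	struct queue q = { { 30, 8000, 12345 }, false };
	struct io_ctx ic;

	save_state(&ic, &q);	/* before merging q into a shared queue */
	q.wr.wr_coeff = 1;	/* the shared queue then evolves on its own */

	struct queue split_q = { { 1, 0, 0 }, false };
	restore_state(&ic, &split_q);
	printf("restored wr_coeff %u\n", split_q.wr.wr_coeff);	/* -> 30 */
	return 0;
}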
+ */ + new_bfqq->bic = NULL; + bfqq->bic = NULL; + /* release process reference to bfqq */ + bfq_put_queue(bfqq); +} + +static bool bfq_allow_bio_merge(struct request_queue *q, struct request *rq, + struct bio *bio) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + bool is_sync = op_is_sync(bio->bi_opf); + struct bfq_queue *bfqq = bfqd->bio_bfqq, *new_bfqq; + + assert_spin_locked(&bfqd->lock); + /* + * Disallow merge of a sync bio into an async request. + */ + if (is_sync && !rq_is_sync(rq)) + return false; + + /* + * Lookup the bfqq that this bio will be queued with. Allow + * merge only if rq is queued there. + */ + BUG_ON(!bfqd->bio_bfqq_set); + if (!bfqq) + return false; + + /* + * We take advantage of this function to perform an early merge + * of the queues of possible cooperating processes. + */ + new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false); + BUG_ON(new_bfqq == bfqq); + if (new_bfqq) { + /* + * bic still points to bfqq, then it has not yet been + * redirected to some other bfq_queue, and a queue + * merge beween bfqq and new_bfqq can be safely + * fulfillled, i.e., bic can be redirected to new_bfqq + * and bfqq can be put. + */ + bfq_merge_bfqqs(bfqd, bfqd->bio_bic, bfqq, + new_bfqq); + /* + * If we get here, bio will be queued into new_queue, + * so use new_bfqq to decide whether bio and rq can be + * merged. + */ + bfqq = new_bfqq; + + /* + * Change also bqfd->bio_bfqq, as + * bfqd->bio_bic now points to new_bfqq, and + * this function may be invoked again (and then may + * use again bqfd->bio_bfqq). + */ + bfqd->bio_bfqq = bfqq; + } + return bfqq == RQ_BFQQ(rq); +} + +/* + * Set the maximum time for the in-service queue to consume its + * budget. This prevents seeky processes from lowering the throughput. + * In practice, a time-slice service scheme is used with seeky + * processes. + */ +static void bfq_set_budget_timeout(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + unsigned int timeout_coeff; + + if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time) + timeout_coeff = 1; + else + timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight; + + bfqd->last_budget_start = ktime_get(); + + bfqq->budget_timeout = jiffies + + bfqd->bfq_timeout * timeout_coeff; + + bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u", + jiffies_to_msecs(bfqd->bfq_timeout * timeout_coeff)); +} + +static void __bfq_set_in_service_queue(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + if (bfqq) { + bfq_clear_bfqq_fifo_expire(bfqq); + + bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8; + + BUG_ON(bfqq == bfqd->in_service_queue); + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + + if (time_is_before_jiffies(bfqq->last_wr_start_finish) && + bfqq->wr_coeff > 1 && + bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && + time_is_before_jiffies(bfqq->budget_timeout)) { + /* + * For soft real-time queues, move the start + * of the weight-raising period forward by the + * time the queue has not received any + * service. Otherwise, a relatively long + * service delay is likely to cause the + * weight-raising period of the queue to end, + * because of the short duration of the + * weight-raising period of a soft real-time + * queue. It is worth noting that this move + * is not so dangerous for the other queues, + * because soft real-time queues are not + * greedy. 
+ * + * To not add a further variable, we use the + * overloaded field budget_timeout to + * determine for how long the queue has not + * received service, i.e., how much time has + * elapsed since the queue expired. However, + * this is a little imprecise, because + * budget_timeout is set to jiffies if bfqq + * not only expires, but also remains with no + * request. + */ + if (time_after(bfqq->budget_timeout, + bfqq->last_wr_start_finish)) + bfqq->last_wr_start_finish += + jiffies - bfqq->budget_timeout; + else + bfqq->last_wr_start_finish = jiffies; + + if (time_is_after_jiffies(bfqq->last_wr_start_finish)) { + pr_crit( + "BFQ WARNING:last %lu budget %lu jiffies %lu", + bfqq->last_wr_start_finish, + bfqq->budget_timeout, + jiffies); + pr_crit("diff %lu", jiffies - + max_t(unsigned long, + bfqq->last_wr_start_finish, + bfqq->budget_timeout)); + bfqq->last_wr_start_finish = jiffies; + } + } + + bfq_set_budget_timeout(bfqd, bfqq); + bfq_log_bfqq(bfqd, bfqq, + "set_in_service_queue, cur-budget = %d", + bfqq->entity.budget); + } else + bfq_log(bfqd, "set_in_service_queue: NULL"); + + bfqd->in_service_queue = bfqq; +} + +/* + * Get and set a new queue for service. + */ +static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq = bfq_get_next_queue(bfqd); + + __bfq_set_in_service_queue(bfqd, bfqq); + return bfqq; +} + +static void bfq_arm_slice_timer(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq = bfqd->in_service_queue; + u32 sl; + + BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list)); + + bfq_mark_bfqq_wait_request(bfqq); + + /* + * We don't want to idle for seeks, but we do want to allow + * fair distribution of slice time for a process doing back-to-back + * seeks. So allow a little bit of time for him to submit a new rq. + * + * To prevent processes with (partly) seeky workloads from + * being too ill-treated, grant them a small fraction of the + * assigned budget before reducing the waiting time to + * BFQ_MIN_TT. This happened to help reduce latency. + */ + sl = bfqd->bfq_slice_idle; + /* + * Unless the queue is being weight-raised or the scenario is + * asymmetric, grant only minimum idle time if the queue + * is seeky. A long idling is preserved for a weight-raised + * queue, or, more in general, in an asymemtric scenario, + * because a long idling is needed for guaranteeing to a queue + * its reserved share of the throughput (in particular, it is + * needed if the queue has a higher weight than some other + * queue). + */ + if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 && + bfq_symmetric_scenario(bfqd)) + sl = min_t(u32, sl, BFQ_MIN_TT); + + bfqd->last_idling_start = ktime_get(); + hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl), + HRTIMER_MODE_REL); + bfqg_stats_set_start_idle_time(bfqq_group(bfqq)); + bfq_log(bfqd, "arm idle: %ld/%ld ms", + sl / NSEC_PER_MSEC, bfqd->bfq_slice_idle / NSEC_PER_MSEC); +} + +/* + * In autotuning mode, max_budget is dynamically recomputed as the + * amount of sectors transferred in timeout at the estimated peak + * rate. This enables BFQ to utilize a full timeslice with a full + * budget, even if the in-service queue is served at peak rate. And + * this maximises throughput with sequential workloads. + */ +static unsigned long bfq_calc_max_budget(struct bfq_data *bfqd) +{ + return (u64)bfqd->peak_rate * USEC_PER_MSEC * + jiffies_to_msecs(bfqd->bfq_timeout)>>BFQ_RATE_SHIFT; +} + +/* + * Update parameters related to throughput and responsiveness, as a + * function of the estimated peak rate. 
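To make the autotuned budget formula of bfq_calc_max_budget() above concrete, the sketch below redoes the arithmetic in user space: a peak rate kept in fixed point (sectors per usec, left-shifted by the rate shift) is multiplied by the budget timeout expressed in usec and shifted back. The shift width of 16 and the sample figures are assumptions chosen for illustration only.

/* Worked example of the max_budget arithmetic used by bfq_calc_max_budget():
 * max_budget = peak_rate(fixed point, sects/usec) * timeout_usec >> RATE_SHIFT.
 * RATE_SHIFT = 16 and the sample figures are assumptions for illustration. */
#include <stdint.h>
#include <stdio.h>

#define RATE_SHIFT	16
#define USEC_PER_MSEC	1000ULL

int main(void)
{
	/* Suppose the device sustains ~200 MB/s, i.e. ~400000 sectors/s
	 * = 0.4 sectors/usec, stored in fixed point: */
	uint64_t peak_rate = (uint64_t)(0.4 * (1 << RATE_SHIFT));
	unsigned int timeout_ms = 125;	/* hypothetical budget timeout in ms */

	uint64_t max_budget =
		(peak_rate * USEC_PER_MSEC * timeout_ms) >> RATE_SHIFT;

	/* ~0.4 sect/usec * 125000 usec ~= 50000 sectors (~25 MB) */
	printf("max_budget = %llu sectors\n",
	       (unsigned long long)max_budget);
	return 0;
}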
See comments on + * bfq_calc_max_budget(), and on T_slow and T_fast arrays. + */ +static void update_thr_responsiveness_params(struct bfq_data *bfqd) +{ + int dev_type = blk_queue_nonrot(bfqd->queue); + + if (bfqd->bfq_user_max_budget == 0) { + bfqd->bfq_max_budget = + bfq_calc_max_budget(bfqd); + BUG_ON(bfqd->bfq_max_budget < 0); + bfq_log(bfqd, "new max_budget = %d", + bfqd->bfq_max_budget); + } + + if (bfqd->device_speed == BFQ_BFQD_FAST && + bfqd->peak_rate < device_speed_thresh[dev_type]) { + bfqd->device_speed = BFQ_BFQD_SLOW; + bfqd->RT_prod = R_slow[dev_type] * + T_slow[dev_type]; + } else if (bfqd->device_speed == BFQ_BFQD_SLOW && + bfqd->peak_rate > device_speed_thresh[dev_type]) { + bfqd->device_speed = BFQ_BFQD_FAST; + bfqd->RT_prod = R_fast[dev_type] * + T_fast[dev_type]; + } + + bfq_log(bfqd, +"dev_type %s dev_speed_class = %s (%llu sects/sec), thresh %llu sects/sec", + dev_type == 0 ? "ROT" : "NONROT", + bfqd->device_speed == BFQ_BFQD_FAST ? "FAST" : "SLOW", + bfqd->device_speed == BFQ_BFQD_FAST ? + (USEC_PER_SEC*(u64)R_fast[dev_type])>>BFQ_RATE_SHIFT : + (USEC_PER_SEC*(u64)R_slow[dev_type])>>BFQ_RATE_SHIFT, + (USEC_PER_SEC*(u64)device_speed_thresh[dev_type])>> + BFQ_RATE_SHIFT); +} + +static void bfq_reset_rate_computation(struct bfq_data *bfqd, struct request *rq) +{ + if (rq != NULL) { /* new rq dispatch now, reset accordingly */ + bfqd->last_dispatch = bfqd->first_dispatch = ktime_get_ns(); + bfqd->peak_rate_samples = 1; + bfqd->sequential_samples = 0; + bfqd->tot_sectors_dispatched = bfqd->last_rq_max_size = + blk_rq_sectors(rq); + } else /* no new rq dispatched, just reset the number of samples */ + bfqd->peak_rate_samples = 0; /* full re-init on next disp. */ + + bfq_log(bfqd, + "reset_rate_computation at end, sample %u/%u tot_sects %llu", + bfqd->peak_rate_samples, bfqd->sequential_samples, + bfqd->tot_sectors_dispatched); +} + +static void bfq_update_rate_reset(struct bfq_data *bfqd, struct request *rq) +{ + u32 rate, weight, divisor; + + /* + * For the convergence property to hold (see comments on + * bfq_update_peak_rate()) and for the assessment to be + * reliable, a minimum number of samples must be present, and + * a minimum amount of time must have elapsed. If not so, do + * not compute new rate. Just reset parameters, to get ready + * for a new evaluation attempt. + */ + if (bfqd->peak_rate_samples < BFQ_RATE_MIN_SAMPLES || + bfqd->delta_from_first < BFQ_RATE_MIN_INTERVAL) { + bfq_log(bfqd, + "update_rate_reset: only resetting, delta_first %lluus samples %d", + bfqd->delta_from_first>>10, bfqd->peak_rate_samples); + goto reset_computation; + } + + /* + * If a new request completion has occurred after last + * dispatch, then, to approximate the rate at which requests + * have been served by the device, it is more precise to + * extend the observation interval to the last completion. + */ + bfqd->delta_from_first = + max_t(u64, bfqd->delta_from_first, + bfqd->last_completion - bfqd->first_dispatch); + + BUG_ON(bfqd->delta_from_first == 0); + /* + * Rate computed in sects/usec, and not sects/nsec, for + * precision issues.
+ */
+	rate = div64_ul(bfqd->tot_sectors_dispatched<<BFQ_RATE_SHIFT,
+			div_u64(bfqd->delta_from_first, NSEC_PER_USEC));
+
+	bfq_log(bfqd,
+"update_rate_reset: tot_sects %llu delta_first %lluus rate %llu sects/s (%d)",
+		bfqd->tot_sectors_dispatched, bfqd->delta_from_first>>10,
+		((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
+		rate > 20<<BFQ_RATE_SHIFT);
+
+	/*
+	 * Peak rate not updated if:
+	 * - the percentage of sequential dispatches is below 3/4 of the
+	 *   total, and rate is below the current estimated peak rate
+	 * - rate is unreasonably high (> 20M sectors/sec)
+	 */
+	if ((bfqd->sequential_samples < (3 * bfqd->peak_rate_samples)>>2 &&
+	     rate <= bfqd->peak_rate) ||
+		rate > 20<<BFQ_RATE_SHIFT) {
+		bfq_log(bfqd,
+	"update_rate_reset: goto reset, samples %u/%u rate/peak %llu/%llu",
+			bfqd->peak_rate_samples, bfqd->sequential_samples,
+			((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
+			((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
+		goto reset_computation;
+	} else {
+		bfq_log(bfqd,
+	"update_rate_reset: do update, samples %u/%u rate/peak %llu/%llu",
+			bfqd->peak_rate_samples, bfqd->sequential_samples,
+			((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
+			((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
+	}
+
+	/*
+	 * We have to update the peak rate, at last! To this purpose,
+	 * we use a low-pass filter. We compute the smoothing constant
+	 * of the filter as a function of the 'weight' of the new
+	 * measured rate.
+	 *
+	 * As can be seen in the next formulas, we define this weight as a
+	 * quantity proportional to how sequential the workload is,
+	 * and to how long the observation time interval is.
+	 *
+	 * The weight runs from 0 to 8. The maximum value of the
+	 * weight, 8, yields the minimum value for the smoothing
+	 * constant. At this minimum value for the smoothing constant,
+	 * the measured rate contributes for half of the next value of
+	 * the estimated peak rate.
+	 *
+	 * So, the first step is to compute the weight as a function
+	 * of how sequential the workload is. Note that the weight
+	 * cannot reach 9, because bfqd->sequential_samples cannot
+	 * become equal to bfqd->peak_rate_samples, which, in its
+	 * turn, holds true because bfqd->sequential_samples is not
+	 * incremented for the first sample.
+	 */
+	weight = (9 * bfqd->sequential_samples) / bfqd->peak_rate_samples;
+
+	/*
+	 * Second step: further refine the weight as a function of the
+	 * duration of the observation interval.
+	 */
+	weight = min_t(u32, 8,
+		       div_u64(weight * bfqd->delta_from_first,
+			       BFQ_RATE_REF_INTERVAL));
+
+	/*
+	 * Divisor ranging from 10, for minimum weight, to 2, for
+	 * maximum weight.
+	 */
+	divisor = 10 - weight;
+	BUG_ON(divisor == 0);
+
+	/*
+	 * Finally, update peak rate:
+	 *
+	 * peak_rate = peak_rate * (divisor-1) / divisor + rate / divisor
+	 */
+	bfqd->peak_rate *= divisor-1;
+	bfqd->peak_rate /= divisor;
+	rate /= divisor; /* smoothing constant alpha = 1/divisor */
+
+	bfq_log(bfqd,
+		"update_rate_reset: divisor %d tmp_peak_rate %llu tmp_rate %u",
+		divisor,
+		((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT),
+		(u32)((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT));
+
+	BUG_ON(bfqd->peak_rate == 0);
+	BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
+
+	bfqd->peak_rate += rate;
+	update_thr_responsiveness_params(bfqd);
+	BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
+
+reset_computation:
+	bfq_reset_rate_computation(bfqd, rq);
+}
+
+/*
+ * Update the read/write peak rate (the main quantity used for
+ * auto-tuning, see update_thr_responsiveness_params()).
+ */
+static void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq)
+{
+	u64 now_ns = ktime_get_ns();
+
+	if (bfqd->peak_rate_samples == 0) { /* first dispatch */
+		bfq_log(bfqd,
+			"update_peak_rate: goto reset, samples %d",
+			bfqd->peak_rate_samples);
+		bfq_reset_rate_computation(bfqd, rq);
+		goto update_last_values; /* will add one sample */
+	}
+
+	/*
+	 * Device idle for very long: the observation interval lasting
+	 * up to this dispatch cannot be a valid observation interval
+	 * for computing a new peak rate (similarly to the late-
+	 * completion event in bfq_completed_request()).
Go to + * update_rate_and_reset to have the following three steps + * taken: + * - close the observation interval at the last (previous) + * request dispatch or completion + * - compute rate, if possible, for that observation interval + * - start a new observation interval with this dispatch + */ + if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC && + bfqd->rq_in_driver == 0) { + bfq_log(bfqd, +"update_peak_rate: jumping to updating&resetting delta_last %lluus samples %d", + (now_ns - bfqd->last_dispatch)>>10, + bfqd->peak_rate_samples) ; + goto update_rate_and_reset; + } + + /* Update sampling information */ + bfqd->peak_rate_samples++; + + if ((bfqd->rq_in_driver > 0 || + now_ns - bfqd->last_completion < BFQ_MIN_TT) + && get_sdist(bfqd->last_position, rq) < BFQQ_SEEK_THR) + bfqd->sequential_samples++; + + bfqd->tot_sectors_dispatched += blk_rq_sectors(rq); + + /* Reset max observed rq size every 32 dispatches */ + if (likely(bfqd->peak_rate_samples % 32)) + bfqd->last_rq_max_size = + max_t(u32, blk_rq_sectors(rq), bfqd->last_rq_max_size); + else + bfqd->last_rq_max_size = blk_rq_sectors(rq); + + bfqd->delta_from_first = now_ns - bfqd->first_dispatch; + + bfq_log(bfqd, + "update_peak_rate: added samples %u/%u tot_sects %llu delta_first %lluus", + bfqd->peak_rate_samples, bfqd->sequential_samples, + bfqd->tot_sectors_dispatched, + bfqd->delta_from_first>>10); + + /* Target observation interval not yet reached, go on sampling */ + if (bfqd->delta_from_first < BFQ_RATE_REF_INTERVAL) + goto update_last_values; + +update_rate_and_reset: + bfq_update_rate_reset(bfqd, rq); +update_last_values: + bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq); + bfqd->last_dispatch = now_ns; + + bfq_log(bfqd, + "update_peak_rate: delta_first %lluus last_pos %llu peak_rate %llu", + (now_ns - bfqd->first_dispatch)>>10, + (unsigned long long) bfqd->last_position, + ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT)); + bfq_log(bfqd, + "update_peak_rate: samples at end %d", bfqd->peak_rate_samples); +} + +/* + * Remove request from internal lists. + */ +static void bfq_dispatch_remove(struct request_queue *q, struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + + /* + * For consistency, the next instruction should have been + * executed after removing the request from the queue and + * dispatching it. We execute instead this instruction before + * bfq_remove_request() (and hence introduce a temporary + * inconsistency), for efficiency. In fact, should this + * dispatch occur for a non in-service bfqq, this anticipated + * increment prevents two counters related to bfqq->dispatched + * from risking to be, first, uselessly decremented, and then + * incremented again when the (new) value of bfqq->dispatched + * happens to be taken into account. + */ + bfqq->dispatched++; + bfq_update_peak_rate(q->elevator->elevator_data, rq); + + bfq_remove_request(q, rq); +} + +static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + BUG_ON(bfqq != bfqd->in_service_queue); + + /* + * If this bfqq is shared between multiple processes, check + * to make sure that those processes are still issuing I/Os + * within the mean seek distance. If not, it may be time to + * break the queues apart again. 
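Stepping back to the peak-rate update a little above, the low-pass filter can be exercised on its own to see how the divisor (10 down to 2) controls how quickly a new measurement pulls the estimate. The sketch below feeds a constant measured rate into the filter; the units are arbitrary and the observation-interval scaling of the weight is omitted for brevity.

/* Stand-alone run of the peak-rate low-pass filter described above:
 * divisor = 10 - weight (weight in 0..8), and
 * peak = peak * (divisor - 1) / divisor + rate / divisor.
 * Units are arbitrary; the interval-based refinement of the weight
 * is left out here. */
#include <stdint.h>
#include <stdio.h>

static uint64_t filter(uint64_t peak, uint64_t rate, unsigned int weight)
{
	unsigned int divisor = 10 - weight;	/* 10 (weight 0) .. 2 (weight 8) */

	peak = peak * (divisor - 1) / divisor;
	return peak + rate / divisor;
}

int main(void)
{
	uint64_t peak = 1000, rate = 2000;

	/* With weight 8 (fully sequential, long interval) the new sample
	 * contributes half of the next estimate, matching the "contributes
	 * for half" remark in the comment above; with weight 0 only 1/10. */
	for (int step = 0; step < 5; step++) {
		peak = filter(peak, rate, 8);
		printf("step %d: peak ~ %llu\n",
		       step, (unsigned long long)peak);
	}
	return 0;
}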
+ */ + if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq)) + bfq_mark_bfqq_split_coop(bfqq); + + if (RB_EMPTY_ROOT(&bfqq->sort_list)) { + if (bfqq->dispatched == 0) + /* + * Overloading budget_timeout field to store + * the time at which the queue remains with no + * backlog and no outstanding request; used by + * the weight-raising mechanism. + */ + bfqq->budget_timeout = jiffies; + + bfq_del_bfqq_busy(bfqd, bfqq, true); + } else { + bfq_requeue_bfqq(bfqd, bfqq, true); + /* + * Resort priority tree of potential close cooperators. + */ + bfq_pos_tree_add_move(bfqd, bfqq); + } + + /* + * All in-service entities must have been properly deactivated + * or requeued before executing the next function, which + * resets all in-service entites as no more in service. + */ + __bfq_bfqd_reset_in_service(bfqd); +} + +/** + * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior. + * @bfqd: device data. + * @bfqq: queue to update. + * @reason: reason for expiration. + * + * Handle the feedback on @bfqq budget at queue expiration. + * See the body for detailed comments. + */ +static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + enum bfqq_expiration reason) +{ + struct request *next_rq; + int budget, min_budget; + + BUG_ON(bfqq != bfqd->in_service_queue); + + min_budget = bfq_min_budget(bfqd); + + if (bfqq->wr_coeff == 1) + budget = bfqq->max_budget; + else /* + * Use a constant, low budget for weight-raised queues, + * to help achieve a low latency. Keep it slightly higher + * than the minimum possible budget, to cause a little + * bit fewer expirations. + */ + budget = 2 * min_budget; + + bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d", + bfqq->entity.budget, bfq_bfqq_budget_left(bfqq)); + bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %d, min budg %d", + budget, bfq_min_budget(bfqd)); + bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d", + bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue)); + + if (bfq_bfqq_sync(bfqq) && bfqq->wr_coeff == 1) { + switch (reason) { + /* + * Caveat: in all the following cases we trade latency + * for throughput. + */ + case BFQ_BFQQ_TOO_IDLE: + /* + * This is the only case where we may reduce + * the budget: if there is no request of the + * process still waiting for completion, then + * we assume (tentatively) that the timer has + * expired because the batch of requests of + * the process could have been served with a + * smaller budget. Hence, betting that + * process will behave in the same way when it + * becomes backlogged again, we reduce its + * next budget. As long as we guess right, + * this budget cut reduces the latency + * experienced by the process. + * + * However, if there are still outstanding + * requests, then the process may have not yet + * issued its next request just because it is + * still waiting for the completion of some of + * the still outstanding ones. So in this + * subcase we do not reduce its budget, on the + * contrary we increase it to possibly boost + * the throughput, as discussed in the + * comments to the BUDGET_TIMEOUT case. 
+ */ + if (bfqq->dispatched > 0) /* still outstanding reqs */ + budget = min(budget * 2, bfqd->bfq_max_budget); + else { + if (budget > 5 * min_budget) + budget -= 4 * min_budget; + else + budget = min_budget; + } + break; + case BFQ_BFQQ_BUDGET_TIMEOUT: + /* + * We double the budget here because it gives + * the chance to boost the throughput if this + * is not a seeky process (and has bumped into + * this timeout because of, e.g., ZBR). + */ + budget = min(budget * 2, bfqd->bfq_max_budget); + break; + case BFQ_BFQQ_BUDGET_EXHAUSTED: + /* + * The process still has backlog, and did not + * let either the budget timeout or the disk + * idling timeout expire. Hence it is not + * seeky, has a short thinktime and may be + * happy with a higher budget too. So + * definitely increase the budget of this good + * candidate to boost the disk throughput. + */ + budget = min(budget * 4, bfqd->bfq_max_budget); + break; + case BFQ_BFQQ_NO_MORE_REQUESTS: + /* + * For queues that expire for this reason, it + * is particularly important to keep the + * budget close to the actual service they + * need. Doing so reduces the timestamp + * misalignment problem described in the + * comments in the body of + * __bfq_activate_entity. In fact, suppose + * that a queue systematically expires for + * BFQ_BFQQ_NO_MORE_REQUESTS and presents a + * new request in time to enjoy timestamp + * back-shifting. The larger the budget of the + * queue is with respect to the service the + * queue actually requests in each service + * slot, the more times the queue can be + * reactivated with the same virtual finish + * time. It follows that, even if this finish + * time is pushed to the system virtual time + * to reduce the consequent timestamp + * misalignment, the queue unjustly enjoys for + * many re-activations a lower finish time + * than all newly activated queues. + * + * The service needed by bfqq is measured + * quite precisely by bfqq->entity.service. + * Since bfqq does not enjoy device idling, + * bfqq->entity.service is equal to the number + * of sectors that the process associated with + * bfqq requested to read/write before waiting + * for request completions, or blocking for + * other reasons. + */ + budget = max_t(int, bfqq->entity.service, min_budget); + break; + default: + return; + } + } else if (!bfq_bfqq_sync(bfqq)) + /* + * Async queues get always the maximum possible + * budget, as for them we do not care about latency + * (in addition, their ability to dispatch is limited + * by the charging factor). + */ + budget = bfqd->bfq_max_budget; + + bfqq->max_budget = budget; + + if (bfqd->budgets_assigned >= bfq_stats_min_budgets && + !bfqd->bfq_user_max_budget) + bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget); + + /* + * If there is still backlog, then assign a new budget, making + * sure that it is large enough for the next request. Since + * the finish time of bfqq must be kept in sync with the + * budget, be sure to call __bfq_bfqq_expire() *after* this + * update. + * + * If there is no backlog, then no need to update the budget; + * it will be updated on the arrival of a new request. + */ + next_rq = bfqq->next_rq; + if (next_rq) { + BUG_ON(reason == BFQ_BFQQ_TOO_IDLE || + reason == BFQ_BFQQ_NO_MORE_REQUESTS); + bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget, + bfq_serv_to_charge(next_rq, bfqq)); + BUG_ON(!bfq_bfqq_busy(bfqq)); + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + } + + bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d", + next_rq ? 
blk_rq_sectors(next_rq) : 0, + bfqq->entity.budget); +} + +/* + * Return true if the process associated with bfqq is "slow". The slow + * flag is used, in addition to the budget timeout, to reduce the + * amount of service provided to seeky processes, and thus reduce + * their chances to lower the throughput. More details in the comments + * on the function bfq_bfqq_expire(). + * + * An important observation is in order: as discussed in the comments + * on the function bfq_update_peak_rate(), with devices with internal + * queues, it is hard if ever possible to know when and for how long + * an I/O request is processed by the device (apart from the trivial + * I/O pattern where a new request is dispatched only after the + * previous one has been completed). This makes it hard to evaluate + * the real rate at which the I/O requests of each bfq_queue are + * served. In fact, for an I/O scheduler like BFQ, serving a + * bfq_queue means just dispatching its requests during its service + * slot (i.e., until the budget of the queue is exhausted, or the + * queue remains idle, or, finally, a timeout fires). But, during the + * service slot of a bfq_queue, around 100 ms at most, the device may + * be even still processing requests of bfq_queues served in previous + * service slots. On the opposite end, the requests of the in-service + * bfq_queue may be completed after the service slot of the queue + * finishes. + * + * Anyway, unless more sophisticated solutions are used + * (where possible), the sum of the sizes of the requests dispatched + * during the service slot of a bfq_queue is probably the only + * approximation available for the service received by the bfq_queue + * during its service slot. And this sum is the quantity used in this + * function to evaluate the I/O speed of a process. + */ +static bool bfq_bfqq_is_slow(struct bfq_data *bfqd, struct bfq_queue *bfqq, + bool compensate, enum bfqq_expiration reason, + unsigned long *delta_ms) +{ + ktime_t delta_ktime; + u32 delta_usecs; + bool slow = BFQQ_SEEKY(bfqq); /* if delta too short, use seekyness */ + + if (!bfq_bfqq_sync(bfqq)) + return false; + + if (compensate) + delta_ktime = bfqd->last_idling_start; + else + delta_ktime = ktime_get(); + delta_ktime = ktime_sub(delta_ktime, bfqd->last_budget_start); + delta_usecs = ktime_to_us(delta_ktime); + + /* don't use too short time intervals */ + if (delta_usecs < 1000) { + if (blk_queue_nonrot(bfqd->queue)) + /* + * give same worst-case guarantees as idling + * for seeky + */ + *delta_ms = BFQ_MIN_TT / NSEC_PER_MSEC; + else /* charge at least one seek */ + *delta_ms = bfq_slice_idle / NSEC_PER_MSEC; + + bfq_log(bfqd, "bfq_bfqq_is_slow: too short %u", delta_usecs); + + return slow; + } + + *delta_ms = delta_usecs / USEC_PER_MSEC; + + /* + * Use only long (> 20ms) intervals to filter out excessive + * spikes in service rate estimation. + */ + if (delta_usecs > 20000) { + /* + * Caveat for rotational devices: processes doing I/O + * in the slower disk zones tend to be slow(er) even + * if not seeky. In this respect, the estimated peak + * rate is likely to be an average over the disk + * surface. Accordingly, to not be too harsh with + * unlucky processes, a process is deemed slow only if + * its rate has been lower than half of the estimated + * peak rate. 
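The per-reason budget feedback of __bfq_bfqq_recalc_budget() above condenses into a small pure function. The sketch below mirrors the four cases shown there; the minimum and maximum budgets are placeholder values, and the sync/weight-raising gating around the switch is left out.

/* Condensed sketch of the per-expiration-reason budget feedback shown in
 * __bfq_bfqq_recalc_budget(). min_budget/max_budget are placeholders. */
#include <stdio.h>

enum reason { TOO_IDLE, BUDGET_TIMEOUT, BUDGET_EXHAUSTED, NO_MORE_REQUESTS };

static int next_budget(enum reason r, int budget, int service, int dispatched,
		       int min_budget, int max_budget)
{
	switch (r) {
	case TOO_IDLE:
		if (dispatched > 0)		/* still outstanding requests */
			budget = budget * 2;
		else if (budget > 5 * min_budget)
			budget -= 4 * min_budget; /* bet on a smaller need */
		else
			budget = min_budget;
		break;
	case BUDGET_TIMEOUT:
		budget = budget * 2;		/* maybe just a slow zone (ZBR) */
		break;
	case BUDGET_EXHAUSTED:
		budget = budget * 4;		/* clearly throughput-hungry */
		break;
	case NO_MORE_REQUESTS:
		budget = service > min_budget ? service : min_budget;
		break;
	}
	return budget < max_budget ? budget : max_budget;
}

int main(void)
{
	int b = 8192;

	b = next_budget(BUDGET_EXHAUSTED, b, 8192, 0, 1024, 65536);
	printf("after exhaustion: %d\n", b);	/* -> 32768 */
	b = next_budget(TOO_IDLE, b, 512, 0, 1024, 65536);
	printf("after idling out: %d\n", b);	/* -> 28672 */
	return 0;
}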
+ */ + slow = bfqq->entity.service < bfqd->bfq_max_budget / 2; + bfq_log(bfqd, "bfq_bfqq_is_slow: relative rate %d/%d", + bfqq->entity.service, bfqd->bfq_max_budget); + } + + bfq_log_bfqq(bfqd, bfqq, "bfq_bfqq_is_slow: slow %d", slow); + + return slow; +} + +/* + * To be deemed as soft real-time, an application must meet two + * requirements. First, the application must not require an average + * bandwidth higher than the approximate bandwidth required to playback or + * record a compressed high-definition video. + * The next function is invoked on the completion of the last request of a + * batch, to compute the next-start time instant, soft_rt_next_start, such + * that, if the next request of the application does not arrive before + * soft_rt_next_start, then the above requirement on the bandwidth is met. + * + * The second requirement is that the request pattern of the application is + * isochronous, i.e., that, after issuing a request or a batch of requests, + * the application stops issuing new requests until all its pending requests + * have been completed. After that, the application may issue a new batch, + * and so on. + * For this reason the next function is invoked to compute + * soft_rt_next_start only for applications that meet this requirement, + * whereas soft_rt_next_start is set to infinity for applications that do + * not. + * + * Unfortunately, even a greedy application may happen to behave in an + * isochronous way if the CPU load is high. In fact, the application may + * stop issuing requests while the CPUs are busy serving other processes, + * then restart, then stop again for a while, and so on. In addition, if + * the disk achieves a low enough throughput with the request pattern + * issued by the application (e.g., because the request pattern is random + * and/or the device is slow), then the application may meet the above + * bandwidth requirement too. To prevent such a greedy application to be + * deemed as soft real-time, a further rule is used in the computation of + * soft_rt_next_start: soft_rt_next_start must be higher than the current + * time plus the maximum time for which the arrival of a request is waited + * for when a sync queue becomes idle, namely bfqd->bfq_slice_idle. + * This filters out greedy applications, as the latter issue instead their + * next request as soon as possible after the last one has been completed + * (in contrast, when a batch of requests is completed, a soft real-time + * application spends some time processing data). + * + * Unfortunately, the last filter may easily generate false positives if + * only bfqd->bfq_slice_idle is used as a reference time interval and one + * or both the following cases occur: + * 1) HZ is so low that the duration of a jiffy is comparable to or higher + * than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with + * HZ=100. + * 2) jiffies, instead of increasing at a constant rate, may stop increasing + * for a while, then suddenly 'jump' by several units to recover the lost + * increments. This seems to happen, e.g., inside virtual machines. + * To address this issue, we do not use as a reference time interval just + * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In + * particular we add the minimum number of jiffies for which the filter + * seems to be quite precise also in embedded systems and KVM/QEMU virtual + * machines. 
+ */ +static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + bfq_log_bfqq(bfqd, bfqq, +"softrt_next_start: service_blkg %lu soft_rate %u sects/sec interval %u", + bfqq->service_from_backlogged, + bfqd->bfq_wr_max_softrt_rate, + jiffies_to_msecs(HZ * bfqq->service_from_backlogged / + bfqd->bfq_wr_max_softrt_rate)); + + return max(bfqq->last_idle_bklogged + + HZ * bfqq->service_from_backlogged / + bfqd->bfq_wr_max_softrt_rate, + jiffies + nsecs_to_jiffies(bfqq->bfqd->bfq_slice_idle) + 4); +} + +/** + * bfq_bfqq_expire - expire a queue. + * @bfqd: device owning the queue. + * @bfqq: the queue to expire. + * @compensate: if true, compensate for the time spent idling. + * @reason: the reason causing the expiration. + * + * If the process associated with bfqq does slow I/O (e.g., because it + * issues random requests), we charge bfqq with the time it has been + * in service instead of the service it has received (see + * bfq_bfqq_charge_time for details on how this goal is achieved). As + * a consequence, bfqq will typically get higher timestamps upon + * reactivation, and hence it will be rescheduled as if it had + * received more service than what it has actually received. In the + * end, bfqq receives less service in proportion to how slowly its + * associated process consumes its budgets (and hence how seriously it + * tends to lower the throughput). In addition, this time-charging + * strategy guarantees time fairness among slow processes. In + * contrast, if the process associated with bfqq is not slow, we + * charge bfqq exactly with the service it has received. + * + * Charging time to the first type of queues and the exact service to + * the other has the effect of using the WF2Q+ policy to schedule the + * former on a timeslice basis, without violating service domain + * guarantees among the latter. + */ +static void bfq_bfqq_expire(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool compensate, + enum bfqq_expiration reason) +{ + bool slow; + unsigned long delta = 0; + struct bfq_entity *entity = &bfqq->entity; + int ref; + + BUG_ON(bfqq != bfqd->in_service_queue); + + /* + * Check whether the process is slow (see bfq_bfqq_is_slow). + */ + slow = bfq_bfqq_is_slow(bfqd, bfqq, compensate, reason, &delta); + + /* + * Increase service_from_backlogged before next statement, + * because the possible next invocation of + * bfq_bfqq_charge_time would likely inflate + * entity->service. In contrast, service_from_backlogged must + * contain real service, to enable the soft real-time + * heuristic to correctly compute the bandwidth consumed by + * bfqq. + */ + bfqq->service_from_backlogged += entity->service; + + /* + * As above explained, charge slow (typically seeky) and + * timed-out queues with the time and not the service + * received, to favor sequential workloads. + * + * Processes doing I/O in the slower disk zones will tend to + * be slow(er) even if not seeky. Therefore, since the + * estimated peak rate is actually an average over the disk + * surface, these processes may timeout just for bad luck. To + * avoid punishing them, do not charge time to processes that + * succeeded in consuming at least 2/3 of their budget. This + * allows BFQ to preserve enough elasticity to still perform + * bandwidth, and not time, distribution with little unlucky + * or quasi-sequential processes. 
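The rule implemented by bfq_bfqq_softrt_next_start() above amounts to: take the earliest instant at which issuing another batch would still keep the queue within the soft real-time bandwidth, but never earlier than now plus the idle window plus a small jiffy margin. A user-space rendition follows, with HZ, the rate threshold and the sample numbers as illustrative assumptions.

/* User-space rendition of the soft_rt_next_start rule described above.
 * HZ, the soft real-time rate threshold and the jiffies values are
 * illustrative assumptions. */
#include <stdio.h>

#define HZ 250UL	/* assumed tick rate */

static unsigned long softrt_next_start(unsigned long jiffies_now,
				       unsigned long last_idle_bklogged,
				       unsigned long service_sectors,
				       unsigned long max_softrt_rate, /* sect/s */
				       unsigned long slice_idle_jiffies)
{
	/* Time by which the service received so far stays within the
	 * allowed soft real-time bandwidth... */
	unsigned long bw_bound = last_idle_bklogged +
				 HZ * service_sectors / max_softrt_rate;
	/* ...but never earlier than now + idle window + a small margin,
	 * to filter out greedy (non-isochronous) applications. */
	unsigned long greedy_bound = jiffies_now + slice_idle_jiffies + 4;

	return bw_bound > greedy_bound ? bw_bound : greedy_bound;
}

int main(void)
{
	/* 7000 sectors served since the last idle period, 14000 sect/s
	 * allowed: the bandwidth bound lands HZ/2 ticks after
	 * last_idle_bklogged. */
	unsigned long next = softrt_next_start(1000, 1000, 7000, 14000, 2);

	printf("next start at jiffy %lu (now = 1000)\n", next);	/* -> 1125 */
	return 0;
}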
+ */ + if (bfqq->wr_coeff == 1 && + (slow || + (reason == BFQ_BFQQ_BUDGET_TIMEOUT && + bfq_bfqq_budget_left(bfqq) >= entity->budget / 3))) + bfq_bfqq_charge_time(bfqd, bfqq, delta); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + if (reason == BFQ_BFQQ_TOO_IDLE && + entity->service <= 2 * entity->budget / 10) + bfq_clear_bfqq_IO_bound(bfqq); + + if (bfqd->low_latency && bfqq->wr_coeff == 1) + bfqq->last_wr_start_finish = jiffies; + + if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 && + RB_EMPTY_ROOT(&bfqq->sort_list)) { + /* + * If we get here, and there are no outstanding + * requests, then the request pattern is isochronous + * (see the comments on the function + * bfq_bfqq_softrt_next_start()). Thus we can compute + * soft_rt_next_start. If, instead, the queue still + * has outstanding requests, then we have to wait for + * the completion of all the outstanding requests to + * discover whether the request pattern is actually + * isochronous. + */ + BUG_ON(bfqd->busy_queues < 1); + if (bfqq->dispatched == 0) { + bfqq->soft_rt_next_start = + bfq_bfqq_softrt_next_start(bfqd, bfqq); + bfq_log_bfqq(bfqd, bfqq, "new soft_rt_next %lu", + bfqq->soft_rt_next_start); + } else { + /* + * The application is still waiting for the + * completion of one or more requests: + * prevent it from possibly being incorrectly + * deemed as soft real-time by setting its + * soft_rt_next_start to infinity. In fact, + * without this assignment, the application + * would be incorrectly deemed as soft + * real-time if: + * 1) it issued a new request before the + * completion of all its in-flight + * requests, and + * 2) at that time, its soft_rt_next_start + * happened to be in the past. + */ + bfqq->soft_rt_next_start = + bfq_greatest_from_now(); + /* + * Schedule an update of soft_rt_next_start to when + * the task may be discovered to be isochronous. + */ + bfq_mark_bfqq_softrt_update(bfqq); + } + } + + bfq_log_bfqq(bfqd, bfqq, + "expire (%d, slow %d, num_disp %d, short_ttime %d, weight %d)", + reason, slow, bfqq->dispatched, + bfq_bfqq_has_short_ttime(bfqq), entity->weight); + + /* + * Increase, decrease or leave budget unchanged according to + * reason. + */ + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + __bfq_bfqq_recalc_budget(bfqd, bfqq, reason); + BUG_ON(bfqq->next_rq == NULL && + bfqq->entity.budget < bfqq->entity.service); + ref = bfqq->ref; + __bfq_bfqq_expire(bfqd, bfqq); + + BUG_ON(ref > 1 && + !bfq_bfqq_busy(bfqq) && reason == BFQ_BFQQ_BUDGET_EXHAUSTED && + !bfq_class_idle(bfqq)); + + /* mark bfqq as waiting a request only if a bic still points to it */ + if (ref > 1 && !bfq_bfqq_busy(bfqq) && + reason != BFQ_BFQQ_BUDGET_TIMEOUT && + reason != BFQ_BFQQ_BUDGET_EXHAUSTED) + bfq_mark_bfqq_non_blocking_wait_rq(bfqq); +} + +/* + * Budget timeout is not implemented through a dedicated timer, but + * just checked on request arrivals and completions, as well as on + * idle timer expirations. + */ +static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq) +{ + return time_is_before_eq_jiffies(bfqq->budget_timeout); +} + +/* + * If we expire a queue that is actively waiting (i.e., with the + * device idled) for the arrival of a new request, then we may incur + * the timestamp misalignment problem described in the body of the + * function __bfq_activate_entity. Hence we return true only if this + * condition does not hold, or if the queue is slow enough to deserve + * only to be kicked off for preserving a high throughput. 
+ */ +static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq) +{ + bfq_log_bfqq(bfqq->bfqd, bfqq, + "may_budget_timeout: wait_request %d left %d timeout %d", + bfq_bfqq_wait_request(bfqq), + bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3, + bfq_bfqq_budget_timeout(bfqq)); + + return (!bfq_bfqq_wait_request(bfqq) || + bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3) + && + bfq_bfqq_budget_timeout(bfqq); +} + +/* + * For a queue that becomes empty, device idling is allowed only if + * this function returns true for that queue. As a consequence, since + * device idling plays a critical role for both throughput boosting + * and service guarantees, the return value of this function plays a + * critical role as well. + * + * In a nutshell, this function returns true only if idling is + * beneficial for throughput or, even if detrimental for throughput, + * idling is however necessary to preserve service guarantees (low + * latency, desired throughput distribution, ...). In particular, on + * NCQ-capable devices, this function tries to return false, so as to + * help keep the drives' internal queues full, whenever this helps the + * device boost the throughput without causing any service-guarantee + * issue. + * + * In more detail, the return value of this function is obtained by, + * first, computing a number of boolean variables that take into + * account throughput and service-guarantee issues, and, then, + * combining these variables in a logical expression. Most of the + * issues taken into account are not trivial. We discuss these issues + * while introducing the variables. + */ +static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq) +{ + struct bfq_data *bfqd = bfqq->bfqd; + bool rot_without_queueing = + !blk_queue_nonrot(bfqd->queue) && !bfqd->hw_tag, + bfqq_sequential_and_IO_bound, + idling_boosts_thr, idling_boosts_thr_without_issues, + idling_needed_for_service_guarantees, + asymmetric_scenario; + + if (bfqd->strict_guarantees) + return true; + + /* + * Idling is performed only if slice_idle > 0. In addition, we + * do not idle if + * (a) bfqq is async + * (b) bfqq is in the idle io prio class: in this case we do + * not idle because we want to minimize the bandwidth that + * queues in this class can steal to higher-priority queues + */ + if (bfqd->bfq_slice_idle == 0 || !bfq_bfqq_sync(bfqq) || + bfq_class_idle(bfqq)) + return false; + + bfqq_sequential_and_IO_bound = !BFQQ_SEEKY(bfqq) && + bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_has_short_ttime(bfqq); + /* + * The next variable takes into account the cases where idling + * boosts the throughput. + * + * The value of the variable is computed considering, first, that + * idling is virtually always beneficial for the throughput if: + * (a) the device is not NCQ-capable and rotational, or + * (b) regardless of the presence of NCQ, the device is rotational and + * the request pattern for bfqq is I/O-bound and sequential, or + * (c) regardless of whether it is rotational, the device is + * not NCQ-capable and the request pattern for bfqq is + * I/O-bound and sequential. + * + * Secondly, and in contrast to the above item (b), idling an + * NCQ-capable flash-based device would not boost the + * throughput even with sequential I/O; rather it would lower + * the throughput in proportion to how fast the device + * is. Accordingly, the next variable is true if any of the + * above conditions (a), (b) or (c) is true, and, in + * particular, happens to be false if bfqd is an NCQ-capable + * flash-based device. 
+ */ + idling_boosts_thr = rot_without_queueing || + ((!blk_queue_nonrot(bfqd->queue) || !bfqd->hw_tag) && + bfqq_sequential_and_IO_bound); + + /* + * The value of the next variable, + * idling_boosts_thr_without_issues, is equal to that of + * idling_boosts_thr, unless a special case holds. In this + * special case, described below, idling may cause problems to + * weight-raised queues. + * + * When the request pool is saturated (e.g., in the presence + * of write hogs), if the processes associated with + * non-weight-raised queues ask for requests at a lower rate, + * then processes associated with weight-raised queues have a + * higher probability to get a request from the pool + * immediately (or at least soon) when they need one. Thus + * they have a higher probability to actually get a fraction + * of the device throughput proportional to their high + * weight. This is especially true with NCQ-capable drives, + * which enqueue several requests in advance, and further + * reorder internally-queued requests. + * + * For this reason, we force to false the value of + * idling_boosts_thr_without_issues if there are weight-raised + * busy queues. In this case, and if bfqq is not weight-raised, + * this guarantees that the device is not idled for bfqq (if, + * instead, bfqq is weight-raised, then idling will be + * guaranteed by another variable, see below). Combined with + * the timestamping rules of BFQ (see [1] for details), this + * behavior causes bfqq, and hence any sync non-weight-raised + * queue, to get a lower number of requests served, and thus + * to ask for a lower number of requests from the request + * pool, before the busy weight-raised queues get served + * again. This often mitigates starvation problems in the + * presence of heavy write workloads and NCQ, thereby + * guaranteeing a higher application and system responsiveness + * in these hostile scenarios. + */ + idling_boosts_thr_without_issues = idling_boosts_thr && + bfqd->wr_busy_queues == 0; + + /* + * There is then a case where idling must be performed not + * for throughput concerns, but to preserve service + * guarantees. + * + * To introduce this case, we can note that allowing the drive + * to enqueue more than one request at a time, and hence + * delegating de facto final scheduling decisions to the + * drive's internal scheduler, entails loss of control on the + * actual request service order. In particular, the critical + * situation is when requests from different processes happen + * to be present, at the same time, in the internal queue(s) + * of the drive. In such a situation, the drive, by deciding + * the service order of the internally-queued requests, does + * determine also the actual throughput distribution among + * these processes. But the drive typically has no notion or + * concern about per-process throughput distribution, and + * makes its decisions only on a per-request basis. Therefore, + * the service distribution enforced by the drive's internal + * scheduler is likely to coincide with the desired + * device-throughput distribution only in a completely + * symmetric scenario where: + * (i) each of these processes must get the same throughput as + * the others; + * (ii) all these processes have the same I/O pattern + * (either sequential or random). 
+ * In fact, in such a scenario, the drive will tend to treat + * the requests of each of these processes in about the same + * way as the requests of the others, and thus to provide + * each of these processes with about the same throughput + * (which is exactly the desired throughput distribution). In + * contrast, in any asymmetric scenario, device idling is + * certainly needed to guarantee that bfqq receives its + * assigned fraction of the device throughput (see [1] for + * details). + * + * We address this issue by controlling, actually, only the + * symmetry sub-condition (i), i.e., provided that + * sub-condition (i) holds, idling is not performed, + * regardless of whether sub-condition (ii) holds. In other + * words, only if sub-condition (i) holds, then idling is + * allowed, and the device tends to be prevented from queueing + * many requests, possibly of several processes. The reason + * for not controlling also sub-condition (ii) is that we + * exploit preemption to preserve guarantees in case of + * symmetric scenarios, even if (ii) does not hold, as + * explained in the next two paragraphs. + * + * Even if a queue, say Q, is expired when it remains idle, Q + * can still preempt the new in-service queue if the next + * request of Q arrives soon (see the comments on + * bfq_bfqq_update_budg_for_activation). If all queues and + * groups have the same weight, this form of preemption, + * combined with the hole-recovery heuristic described in the + * comments on function bfq_bfqq_update_budg_for_activation, + * are enough to preserve a correct bandwidth distribution in + * the mid term, even without idling. In fact, even if not + * idling allows the internal queues of the device to contain + * many requests, and thus to reorder requests, we can rather + * safely assume that the internal scheduler still preserves a + * minimum of mid-term fairness. The motivation for using + * preemption instead of idling is that, by not idling, + * service guarantees are preserved without minimally + * sacrificing throughput. In other words, both a high + * throughput and its desired distribution are obtained. + * + * More precisely, this preemption-based, idleless approach + * provides fairness in terms of IOPS, and not sectors per + * second. This can be seen with a simple example. Suppose + * that there are two queues with the same weight, but that + * the first queue receives requests of 8 sectors, while the + * second queue receives requests of 1024 sectors. In + * addition, suppose that each of the two queues contains at + * most one request at a time, which implies that each queue + * always remains idle after it is served. Finally, after + * remaining idle, each queue receives very quickly a new + * request. It follows that the two queues are served + * alternatively, preempting each other if needed. This + * implies that, although both queues have the same weight, + * the queue with large requests receives a service that is + * 1024/8 times as high as the service received by the other + * queue. + * + * On the other hand, device idling is performed, and thus + * pure sector-domain guarantees are provided, for the + * following queues, which are likely to need stronger + * throughput guarantees: weight-raised queues, and queues + * with a higher weight than other queues. When such queues + * are active, sub-condition (i) is false, which triggers + * device idling. + * + * According to the above considerations, the next variable is + * true (only) if sub-condition (i) holds. 
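(Illustrative aside, not part of the patch: a tiny standalone program for the 8-sector vs. 1024-sector example above. With strict request-by-request alternation the two queues get identical IOPS, but the sector-domain split is 1:128, which is why queues needing sector-domain guarantees are given idling instead. All names below are made up for the sketch.)

#include <stdio.h>

int main(void)
{
        const unsigned long rq_sectors[2] = { 8, 1024 }; /* request sizes */
        unsigned long served[2] = { 0, 0 };
        const unsigned long rounds = 1000;  /* alternating dispatch rounds */
        unsigned long r;

        for (r = 0; r < rounds; r++) {
                served[0] += rq_sectors[0]; /* one request from queue 0 */
                served[1] += rq_sectors[1]; /* one request from queue 1 */
        }

        printf("requests served: %lu vs %lu (equal IOPS)\n", rounds, rounds);
        printf("sectors served : %lu vs %lu (ratio 1:%lu)\n",
               served[0], served[1], served[1] / served[0]);
        return 0;
}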
To compute the + * value of this variable, we not only use the return value of + * the function bfq_symmetric_scenario(), but also check + * whether bfqq is being weight-raised, because + * bfq_symmetric_scenario() does not take into account also + * weight-raised queues (see comments on + * bfq_weights_tree_add()). + * + * As a side note, it is worth considering that the above + * device-idling countermeasures may however fail in the + * following unlucky scenario: if idling is (correctly) + * disabled in a time period during which all symmetry + * sub-conditions hold, and hence the device is allowed to + * enqueue many requests, but at some later point in time some + * sub-condition stops to hold, then it may become impossible + * to let requests be served in the desired order until all + * the requests already queued in the device have been served. + */ + asymmetric_scenario = bfqq->wr_coeff > 1 || + !bfq_symmetric_scenario(bfqd); + + /* + * Finally, there is a case where maximizing throughput is the + * best choice even if it may cause unfairness toward + * bfqq. Such a case is when bfqq became active in a burst of + * queue activations. Queues that became active during a large + * burst benefit only from throughput, as discussed in the + * comments on bfq_handle_burst. Thus, if bfqq became active + * in a burst and not idling the device maximizes throughput, + * then the device must no be idled, because not idling the + * device provides bfqq and all other queues in the burst with + * maximum benefit. Combining this and the above case, we can + * now establish when idling is actually needed to preserve + * service guarantees. + */ + idling_needed_for_service_guarantees = + asymmetric_scenario && !bfq_bfqq_in_large_burst(bfqq); + + /* + * We have now all the components we need to compute the + * return value of the function, which is true only if idling + * either boosts the throughput (without issues), or is + * necessary to preserve service guarantees. + */ + bfq_log_bfqq(bfqd, bfqq, "may_idle: sync %d idling_boosts_thr %d", + bfq_bfqq_sync(bfqq), idling_boosts_thr); + + bfq_log_bfqq(bfqd, bfqq, + "may_idle: wr_busy %d boosts %d IO-bound %d guar %d", + bfqd->wr_busy_queues, + idling_boosts_thr_without_issues, + bfq_bfqq_IO_bound(bfqq), + idling_needed_for_service_guarantees); + + return idling_boosts_thr_without_issues || + idling_needed_for_service_guarantees; +} + +/* + * If the in-service queue is empty but the function bfq_bfqq_may_idle + * returns true, then: + * 1) the queue must remain in service and cannot be expired, and + * 2) the device must be idled to wait for the possible arrival of a new + * request for the queue. + * See the comments on the function bfq_bfqq_may_idle for the reasons + * why performing device idling is the best choice to boost the throughput + * and preserve service guarantees when bfq_bfqq_may_idle itself + * returns true. + */ +static bool bfq_bfqq_must_idle(struct bfq_queue *bfqq) +{ + return RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_may_idle(bfqq); +} + +/* + * Select a queue for service. If we have a current queue in service, + * check whether to continue servicing it, or retrieve and set a new one. 
+ */ +static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq; + struct request *next_rq; + enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT; + + bfqq = bfqd->in_service_queue; + if (!bfqq) + goto new_queue; + + bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue"); + + if (bfq_may_expire_for_budg_timeout(bfqq) && + !bfq_bfqq_wait_request(bfqq) && + !bfq_bfqq_must_idle(bfqq)) + goto expire; + +check_queue: + /* + * This loop is rarely executed more than once. Even when it + * happens, it is much more convenient to re-execute this loop + * than to return NULL and trigger a new dispatch to get a + * request served. + */ + next_rq = bfqq->next_rq; + /* + * If bfqq has requests queued and it has enough budget left to + * serve them, keep the queue, otherwise expire it. + */ + if (next_rq) { + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + + if (bfq_serv_to_charge(next_rq, bfqq) > + bfq_bfqq_budget_left(bfqq)) { + /* + * Expire the queue for budget exhaustion, + * which makes sure that the next budget is + * enough to serve the next request, even if + * it comes from the fifo expired path. + */ + reason = BFQ_BFQQ_BUDGET_EXHAUSTED; + goto expire; + } else { + /* + * The idle timer may be pending because we may + * not disable disk idling even when a new request + * arrives. + */ + if (bfq_bfqq_wait_request(bfqq)) { + /* + * If we get here: 1) at least a new request + * has arrived but we have not disabled the + * timer because the request was too small, + * 2) then the block layer has unplugged + * the device, causing the dispatch to be + * invoked. + * + * Since the device is unplugged, now the + * requests are probably large enough to + * provide a reasonable throughput. + * So we disable idling. + */ + bfq_clear_bfqq_wait_request(bfqq); + hrtimer_try_to_cancel(&bfqd->idle_slice_timer); + } + goto keep_queue; + } + } + + /* + * No requests pending. However, if the in-service queue is idling + * for a new request, or has requests waiting for a completion and + * may idle after their completion, then keep it anyway. 
+ */ + if (bfq_bfqq_wait_request(bfqq) || + (bfqq->dispatched != 0 && bfq_bfqq_may_idle(bfqq))) { + bfqq = NULL; + goto keep_queue; + } + + reason = BFQ_BFQQ_NO_MORE_REQUESTS; +expire: + bfq_bfqq_expire(bfqd, bfqq, false, reason); +new_queue: + bfqq = bfq_set_in_service_queue(bfqd); + if (bfqq) { + bfq_log_bfqq(bfqd, bfqq, "select_queue: checking new queue"); + goto check_queue; + } +keep_queue: + if (bfqq) + bfq_log_bfqq(bfqd, bfqq, "select_queue: returned this queue"); + else + bfq_log(bfqd, "select_queue: no queue returned"); + + return bfqq; +} + +static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + + if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */ + BUG_ON(bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && + time_is_after_jiffies(bfqq->last_wr_start_finish)); + + bfq_log_bfqq(bfqd, bfqq, + "raising period dur %u/%u msec, old coeff %u, w %d(%d)", + jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time), + bfqq->wr_coeff, + bfqq->entity.weight, bfqq->entity.orig_weight); + + BUG_ON(bfqq != bfqd->in_service_queue && entity->weight != + entity->orig_weight * bfqq->wr_coeff); + if (entity->prio_changed) + bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change"); + + /* + * If the queue was activated in a burst, or too much + * time has elapsed from the beginning of this + * weight-raising period, then end weight raising. + */ + if (bfq_bfqq_in_large_burst(bfqq)) + bfq_bfqq_end_wr(bfqq); + else if (time_is_before_jiffies(bfqq->last_wr_start_finish + + bfqq->wr_cur_max_time)) { + if (bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time || + time_is_before_jiffies(bfqq->wr_start_at_switch_to_srt + + bfq_wr_duration(bfqd))) + bfq_bfqq_end_wr(bfqq); + else { + switch_back_to_interactive_wr(bfqq, bfqd); + BUG_ON(time_is_after_jiffies( + bfqq->last_wr_start_finish)); + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqd, bfqq, + "back to interactive wr"); + } + } + } + /* + * To improve latency (for this or other queues), immediately + * update weight both if it must be raised and if it must be + * lowered. Since, entity may be on some active tree here, and + * might have a pending change of its ioprio class, invoke + * next function with the last parameter unset (see the + * comments on the function). + */ + if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1)) + __bfq_entity_update_weight_prio(bfq_entity_service_tree(entity), + entity, false); +} + +/* + * Dispatch next request from bfqq. + */ +static struct request *bfq_dispatch_rq_from_bfqq(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + struct request *rq = bfqq->next_rq; + unsigned long service_to_charge; + + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + BUG_ON(!rq); + service_to_charge = bfq_serv_to_charge(rq, bfqq); + + BUG_ON(service_to_charge > bfq_bfqq_budget_left(bfqq)); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + bfq_bfqq_served(bfqq, service_to_charge); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + bfq_dispatch_remove(bfqd->queue, rq); + + /* + * If weight raising has to terminate for bfqq, then next + * function causes an immediate update of bfqq's weight, + * without waiting for next activation. As a consequence, on + * expiration, bfqq will be timestamped as if has never been + * weight-raised during this service slot, even if it has + * received part or even most of the service as a + * weight-raised queue. 
This inflates bfqq's timestamps, which + * is beneficial, as bfqq is then more willing to leave the + * device immediately to possible other weight-raised queues. + */ + bfq_update_wr_data(bfqd, bfqq); + + bfq_log_bfqq(bfqd, bfqq, + "dispatched %u sec req (%llu), budg left %d, new disp_nr %d", + blk_rq_sectors(rq), + (unsigned long long) blk_rq_pos(rq), + bfq_bfqq_budget_left(bfqq), + bfqq->dispatched); + + /* + * Expire bfqq, pretending that its budget expired, if bfqq + * belongs to CLASS_IDLE and other queues are waiting for + * service. + */ + if (bfqd->busy_queues > 1 && bfq_class_idle(bfqq)) + goto expire; + + return rq; + +expire: + bfq_bfqq_expire(bfqd, bfqq, false, BFQ_BFQQ_BUDGET_EXHAUSTED); + return rq; +} + +static bool bfq_has_work(struct blk_mq_hw_ctx *hctx) +{ + struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; + + bfq_log(bfqd, "has_work, dispatch_non_empty %d busy_queues %d", + !list_empty_careful(&bfqd->dispatch), bfqd->busy_queues > 0); + + /* + * Avoiding lock: a race on bfqd->busy_queues should cause at + * most a call to dispatch for nothing + */ + return !list_empty_careful(&bfqd->dispatch) || + bfqd->busy_queues > 0; +} + +static struct request *__bfq_dispatch_request(struct blk_mq_hw_ctx *hctx) +{ + struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; + struct request *rq = NULL; + struct bfq_queue *bfqq = NULL; + + if (!list_empty(&bfqd->dispatch)) { + rq = list_first_entry(&bfqd->dispatch, struct request, + queuelist); + list_del_init(&rq->queuelist); + rq->rq_flags &= ~RQF_DISP_LIST; + + bfq_log(bfqd, + "dispatch requests: picked %p from dispatch list", rq); + bfqq = RQ_BFQQ(rq); + + if (bfqq) { + /* + * Increment counters here, because this + * dispatch does not follow the standard + * dispatch flow (where counters are + * incremented) + */ + bfqq->dispatched++; + + /* + * TESTING: reset DISP_LIST flag, because: 1) + * this rq this request has passed through + * get_rq_private, 2) then it will have + * put_rq_private invoked on it, and 3) in + * put_rq_private we use this flag to check + * that put_rq_private is not invoked on + * requests for which get_rq_private has been + * invoked. + */ + rq->rq_flags &= ~RQF_DISP_LIST; + goto inc_in_driver_start_rq; + } + + /* + * We exploit the put_rq_private hook to decrement + * rq_in_driver, but put_rq_private will not be + * invoked on this request. So, to avoid unbalance, + * just start this request, without incrementing + * rq_in_driver. As a negative consequence, + * rq_in_driver is deceptively lower than it should be + * while this request is in service. This may cause + * bfq_schedule_dispatch to be invoked uselessly. + * + * As for implementing an exact solution, the + * put_request hook, if defined, is probably invoked + * also on this request. So, by exploiting this hook, + * we could 1) increment rq_in_driver here, and 2) + * decrement it in put_request. Such a solution would + * let the value of the counter be always accurate, + * but it would entail using an extra interface + * function. This cost seems higher than the benefit, + * being the frequency of non-elevator-private + * requests very low. + */ + goto start_rq; + } + + bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues); + + if (bfqd->busy_queues == 0) + goto exit; + + /* + * Force device to serve one request at a time if + * strict_guarantees is true. 
Forcing this service scheme is + * currently the ONLY way to guarantee that the request + * service order enforced by the scheduler is respected by a + * queueing device. Otherwise the device is free even to make + * some unlucky request wait for as long as the device + * wishes. + * + * Of course, serving one request at at time may cause loss of + * throughput. + */ + if (bfqd->strict_guarantees && bfqd->rq_in_driver > 0) + goto exit; + + bfqq = bfq_select_queue(bfqd); + if (!bfqq) + goto exit; + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + BUG_ON(bfq_bfqq_wait_request(bfqq)); + + rq = bfq_dispatch_rq_from_bfqq(bfqd, bfqq); + + BUG_ON(bfqq->next_rq == NULL && + bfqq->entity.budget < bfqq->entity.service); + + if (rq) { + inc_in_driver_start_rq: + bfqd->rq_in_driver++; + start_rq: + rq->rq_flags |= RQF_STARTED; + if (bfqq) + bfq_log_bfqq(bfqd, bfqq, + "dispatched %s request %p, rq_in_driver %d", + bfq_bfqq_sync(bfqq) ? "sync" : "async", + rq, + bfqd->rq_in_driver); + else + bfq_log(bfqd, + "dispatched request %p from dispatch list, rq_in_driver %d", + rq, bfqd->rq_in_driver); + } else + bfq_log(bfqd, + "returned NULL request, rq_in_driver %d", + bfqd->rq_in_driver); + +exit: + return rq; +} + +static struct request *bfq_dispatch_request(struct blk_mq_hw_ctx *hctx) +{ + struct bfq_data *bfqd = hctx->queue->elevator->elevator_data; + struct request *rq; +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + struct bfq_queue *in_serv_queue, *bfqq; + bool waiting_rq, idle_timer_disabled; +#endif + + spin_lock_irq(&bfqd->lock); + +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + in_serv_queue = bfqd->in_service_queue; + waiting_rq = in_serv_queue && bfq_bfqq_wait_request(in_serv_queue); + + rq = __bfq_dispatch_request(hctx); + + idle_timer_disabled = + waiting_rq && !bfq_bfqq_wait_request(in_serv_queue); + +#else + rq = __bfq_dispatch_request(hctx); +#endif + spin_unlock_irq(&bfqd->lock); + +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + bfqq = rq ? RQ_BFQQ(rq) : NULL; + if (!idle_timer_disabled && !bfqq) + return rq; + + /* + * rq and bfqq are guaranteed to exist until this function + * ends, for the following reasons. First, rq can be + * dispatched to the device, and then can be completed and + * freed, only after this function ends. Second, rq cannot be + * merged (and thus freed because of a merge) any longer, + * because it has already started. Thus rq cannot be freed + * before this function ends, and, since rq has a reference to + * bfqq, the same guarantee holds for bfqq too. + * + * In addition, the following queue lock guarantees that + * bfqq_group(bfqq) exists as well. + */ + spin_lock_irq(hctx->queue->queue_lock); + if (idle_timer_disabled) + /* + * Since the idle timer has been disabled, + * in_serv_queue contained some request when + * __bfq_dispatch_request was invoked above, which + * implies that rq was picked exactly from + * in_serv_queue. Thus in_serv_queue == bfqq, and is + * therefore guaranteed to exist because of the above + * arguments. + */ + bfqg_stats_update_idle_time(bfqq_group(in_serv_queue)); + if (bfqq) { + struct bfq_group *bfqg = bfqq_group(bfqq); + + bfqg_stats_update_avg_queue_size(bfqg); + bfqg_stats_set_start_empty_time(bfqg); + bfqg_stats_update_io_remove(bfqg, rq->cmd_flags); + } + spin_unlock_irq(hctx->queue->queue_lock); +#endif + + return rq; +} + +/* + * Task holds one reference to the queue, dropped when task exits. 
Each rq
+ * in-flight on this queue also holds a reference, dropped when rq is freed.
+ *
+ * Scheduler lock must be held here. Recall not to use bfqq after calling
+ * this function on it.
+ */
+static void bfq_put_queue(struct bfq_queue *bfqq)
+{
+#ifdef BFQ_GROUP_IOSCHED_ENABLED
+ struct bfq_group *bfqg = bfqq_group(bfqq);
+#endif
+
+ assert_spin_locked(&bfqq->bfqd->lock);
+
+ BUG_ON(bfqq->ref <= 0);
+
+ if (bfqq->bfqd)
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p %d", bfqq, bfqq->ref);
+
+ bfqq->ref--;
+ if (bfqq->ref)
+ return;
+
+ BUG_ON(rb_first(&bfqq->sort_list));
+ BUG_ON(bfqq->allocated != 0);
+ BUG_ON(bfqq->entity.tree);
+ BUG_ON(bfq_bfqq_busy(bfqq));
+
+ if (!hlist_unhashed(&bfqq->burst_list_node)) {
+ hlist_del_init(&bfqq->burst_list_node);
+ /*
+ * Decrement also burst size after the removal, if the
+ * process associated with bfqq is exiting, and thus
+ * does not contribute to the burst any longer. This
+ * decrement helps filter out false positives of large
+ * bursts, when some short-lived process (often due to
+ * the execution of commands by some service) happens
+ * to start and exit while a complex application is
+ * starting, and thus spawning several processes that
+ * do I/O (and that *must not* be treated as a large
+ * burst, see comments on bfq_handle_burst).
+ *
+ * In particular, the decrement is performed only if:
+ * 1) bfqq is not a merged queue, because, if it is,
+ * then this free of bfqq is not triggered by the exit
+ * of the process bfqq is associated with, but exactly
+ * by the fact that bfqq has just been merged.
+ * 2) burst_size is greater than 0, to handle
+ * unbalanced decrements. Unbalanced decrements may
+ * happen in the following case: bfqq is inserted into
+ * the current burst list--without incrementing
+ * burst_size--because of a split, but the current
+ * burst list is not the burst list bfqq belonged to
+ * (see comments on the case of a split in
+ * bfq_set_request).
+ */
+ if (bfqq->bic && bfqq->bfqd->burst_size > 0)
+ bfqq->bfqd->burst_size--;
+ }
+
+ if (bfqq->bfqd)
+ bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p freed", bfqq);
+
+ kmem_cache_free(bfq_pool, bfqq);
+#ifdef BFQ_GROUP_IOSCHED_ENABLED
+ bfqg_and_blkg_put(bfqg);
+#endif
+}
+
+static void bfq_put_cooperator(struct bfq_queue *bfqq)
+{
+ struct bfq_queue *__bfqq, *next;
+
+ /*
+ * If this queue was scheduled to merge with another queue, be
+ * sure to drop the reference taken on that queue (and others in
+ * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs.
+ */ + __bfqq = bfqq->new_bfqq; + while (__bfqq) { + if (__bfqq == bfqq) + break; + next = __bfqq->new_bfqq; + bfq_put_queue(__bfqq); + __bfqq = next; + } +} + +static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + if (bfqq == bfqd->in_service_queue) { + __bfq_bfqq_expire(bfqd, bfqq); + bfq_schedule_dispatch(bfqd); + } + + bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref); + + bfq_put_cooperator(bfqq); + + bfq_put_queue(bfqq); /* release process reference */ +} + +static void bfq_exit_icq_bfqq(struct bfq_io_cq *bic, bool is_sync) +{ + struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync); + struct bfq_data *bfqd; + + if (bfqq) + bfqd = bfqq->bfqd; /* NULL if scheduler already exited */ + + if (bfqq && bfqd) { + unsigned long flags; + + spin_lock_irqsave(&bfqd->lock, flags); + bfq_exit_bfqq(bfqd, bfqq); + bic_set_bfqq(bic, NULL, is_sync); + spin_unlock_irqrestore(&bfqd->lock, flags); + } +} + +static void bfq_exit_icq(struct io_cq *icq) +{ + struct bfq_io_cq *bic = icq_to_bic(icq); + + BUG_ON(!bic); + bfq_exit_icq_bfqq(bic, true); + bfq_exit_icq_bfqq(bic, false); +} + +/* + * Update the entity prio values; note that the new values will not + * be used until the next (re)activation. + */ +static void bfq_set_next_ioprio_data(struct bfq_queue *bfqq, + struct bfq_io_cq *bic) +{ + struct task_struct *tsk = current; + int ioprio_class; + struct bfq_data *bfqd = bfqq->bfqd; + + WARN_ON(!bfqd); + if (!bfqd) + return; + + ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); + switch (ioprio_class) { + default: + dev_err(bfqq->bfqd->queue->backing_dev_info->dev, + "bfq: bad prio class %d\n", ioprio_class); + case IOPRIO_CLASS_NONE: + /* + * No prio set, inherit CPU scheduling settings. + */ + bfqq->new_ioprio = task_nice_ioprio(tsk); + bfqq->new_ioprio_class = task_nice_ioclass(tsk); + break; + case IOPRIO_CLASS_RT: + bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio); + bfqq->new_ioprio_class = IOPRIO_CLASS_RT; + break; + case IOPRIO_CLASS_BE: + bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio); + bfqq->new_ioprio_class = IOPRIO_CLASS_BE; + break; + case IOPRIO_CLASS_IDLE: + bfqq->new_ioprio_class = IOPRIO_CLASS_IDLE; + bfqq->new_ioprio = 7; + break; + } + + if (bfqq->new_ioprio >= IOPRIO_BE_NR) { + pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n", + bfqq->new_ioprio); + BUG(); + } + + bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio); + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "set_next_ioprio_data: bic_class %d prio %d class %d", + ioprio_class, bfqq->new_ioprio, bfqq->new_ioprio_class); +} + +static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio) +{ + struct bfq_data *bfqd = bic_to_bfqd(bic); + struct bfq_queue *bfqq; + unsigned long uninitialized_var(flags); + int ioprio = bic->icq.ioc->ioprio; + + /* + * This condition may trigger on a newly created bic, be sure to + * drop the lock before returning. 
+ */ + if (unlikely(!bfqd) || likely(bic->ioprio == ioprio)) + return; + + bic->ioprio = ioprio; + + bfqq = bic_to_bfqq(bic, false); + if (bfqq) { + /* release process reference on this queue */ + bfq_put_queue(bfqq); + bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic); + bic_set_bfqq(bic, bfqq, false); + bfq_log_bfqq(bfqd, bfqq, + "check_ioprio_change: bfqq %p %d", + bfqq, bfqq->ref); + } + + bfqq = bic_to_bfqq(bic, true); + if (bfqq) + bfq_set_next_ioprio_data(bfqq, bic); +} + +static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct bfq_io_cq *bic, pid_t pid, int is_sync) +{ + RB_CLEAR_NODE(&bfqq->entity.rb_node); + INIT_LIST_HEAD(&bfqq->fifo); + INIT_HLIST_NODE(&bfqq->burst_list_node); + BUG_ON(!hlist_unhashed(&bfqq->burst_list_node)); + + bfqq->ref = 0; + bfqq->bfqd = bfqd; + + if (bic) + bfq_set_next_ioprio_data(bfqq, bic); + + if (is_sync) { + /* + * No need to mark as has_short_ttime if in + * idle_class, because no device idling is performed + * for queues in idle class + */ + if (!bfq_class_idle(bfqq)) + /* tentatively mark as has_short_ttime */ + bfq_mark_bfqq_has_short_ttime(bfqq); + bfq_mark_bfqq_sync(bfqq); + bfq_mark_bfqq_just_created(bfqq); + } else + bfq_clear_bfqq_sync(bfqq); + + bfqq->ttime.last_end_request = ktime_get_ns() - (1ULL<<32); + + bfq_mark_bfqq_IO_bound(bfqq); + + /* Tentative initial value to trade off between thr and lat */ + bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3; + bfqq->pid = pid; + + bfqq->wr_coeff = 1; + bfqq->last_wr_start_finish = jiffies; + bfqq->wr_start_at_switch_to_srt = bfq_smallest_from_now(); + bfqq->budget_timeout = bfq_smallest_from_now(); + bfqq->split_time = bfq_smallest_from_now(); + + /* + * Set to the value for which bfqq will not be deemed as + * soft rt when it becomes backlogged. + */ + bfqq->soft_rt_next_start = bfq_greatest_from_now(); + + /* first request is almost certainly seeky */ + bfqq->seek_history = 1; +} + +static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd, + struct bfq_group *bfqg, + int ioprio_class, int ioprio) +{ + switch (ioprio_class) { + case IOPRIO_CLASS_RT: + return &bfqg->async_bfqq[0][ioprio]; + case IOPRIO_CLASS_NONE: + ioprio = IOPRIO_NORM; + /* fall through */ + case IOPRIO_CLASS_BE: + return &bfqg->async_bfqq[1][ioprio]; + case IOPRIO_CLASS_IDLE: + return &bfqg->async_idle_bfqq; + default: + BUG(); + } +} + +static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, + struct bio *bio, bool is_sync, + struct bfq_io_cq *bic) +{ + const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio); + const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); + struct bfq_queue **async_bfqq = NULL; + struct bfq_queue *bfqq; + struct bfq_group *bfqg; + + rcu_read_lock(); + + bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio)); + if (!bfqg) { + bfqq = &bfqd->oom_bfqq; + goto out; + } + + if (!is_sync) { + async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class, + ioprio); + bfqq = *async_bfqq; + if (bfqq) + goto out; + } + + bfqq = kmem_cache_alloc_node(bfq_pool, + GFP_NOWAIT | __GFP_ZERO | __GFP_NOWARN, + bfqd->queue->node); + + if (bfqq) { + bfq_init_bfqq(bfqd, bfqq, bic, current->pid, + is_sync); + bfq_init_entity(&bfqq->entity, bfqg); + bfq_log_bfqq(bfqd, bfqq, "allocated"); + } else { + bfqq = &bfqd->oom_bfqq; + bfq_log_bfqq(bfqd, bfqq, "using oom bfqq"); + goto out; + } + + /* + * Pin the queue now that it's allocated, scheduler exit will + * prune it. + */ + if (async_bfqq) { + bfqq->ref++; /* + * Extra group reference, w.r.t. sync + * queue. 
This extra reference is removed + * only if bfqq->bfqg disappears, to + * guarantee that this queue is not freed + * until its group goes away. + */ + bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d", + bfqq, bfqq->ref); + *async_bfqq = bfqq; + } + +out: + bfqq->ref++; /* get a process reference to this queue */ + bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref); + rcu_read_unlock(); + return bfqq; +} + +static void bfq_update_io_thinktime(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + struct bfq_ttime *ttime = &bfqq->ttime; + u64 elapsed = ktime_get_ns() - bfqq->ttime.last_end_request; + + elapsed = min_t(u64, elapsed, 2 * bfqd->bfq_slice_idle); + + ttime->ttime_samples = (7*bfqq->ttime.ttime_samples + 256) / 8; + ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed, 8); + ttime->ttime_mean = div64_ul(ttime->ttime_total + 128, + ttime->ttime_samples); +} + +static void +bfq_update_io_seektime(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct request *rq) +{ + bfqq->seek_history <<= 1; + bfqq->seek_history |= + get_sdist(bfqq->last_request_pos, rq) > BFQQ_SEEK_THR && + (!blk_queue_nonrot(bfqd->queue) || + blk_rq_sectors(rq) < BFQQ_SECT_THR_NONROT); +} + +static void bfq_update_has_short_ttime(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + struct bfq_io_cq *bic) +{ + bool has_short_ttime = true; + + /* + * No need to update has_short_ttime if bfqq is async or in + * idle io prio class, or if bfq_slice_idle is zero, because + * no device idling is performed for bfqq in this case. + */ + if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq) || + bfqd->bfq_slice_idle == 0) + return; + + /* Idle window just restored, statistics are meaningless. */ + if (time_is_after_eq_jiffies(bfqq->split_time + + bfqd->bfq_wr_min_idle_time)) + return; + + /* Think time is infinite if no process is linked to + * bfqq. Otherwise check average think time to + * decide whether to mark as has_short_ttime + */ + if (atomic_read(&bic->icq.ioc->active_ref) == 0 || + (bfq_sample_valid(bfqq->ttime.ttime_samples) && + bfqq->ttime.ttime_mean > bfqd->bfq_slice_idle)) + has_short_ttime = false; + + bfq_log_bfqq(bfqd, bfqq, "update_has_short_ttime: has_short_ttime %d", + has_short_ttime); + + if (has_short_ttime) + bfq_mark_bfqq_has_short_ttime(bfqq); + else + bfq_clear_bfqq_has_short_ttime(bfqq); +} + +/* + * Called when a new fs request (rq) is added to bfqq. Check if there's + * something we should do about it. + */ +static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct request *rq) +{ + struct bfq_io_cq *bic = RQ_BIC(rq); + + if (rq->cmd_flags & REQ_META) + bfqq->meta_pending++; + + bfq_update_io_thinktime(bfqd, bfqq); + bfq_update_has_short_ttime(bfqd, bfqq, bic); + bfq_update_io_seektime(bfqd, bfqq, rq); + + bfq_log_bfqq(bfqd, bfqq, + "rq_enqueued: has_short_ttime=%d (seeky %d)", + bfq_bfqq_has_short_ttime(bfqq), BFQQ_SEEKY(bfqq)); + + bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq); + + if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) { + bool small_req = bfqq->queued[rq_is_sync(rq)] == 1 && + blk_rq_sectors(rq) < 32; + bool budget_timeout = bfq_bfqq_budget_timeout(bfqq); + + /* + * There is just this request queued: if the request + * is small and the queue is not to be expired, then + * just exit. + * + * In this way, if the device is being idled to wait + * for a new request from the in-service queue, we + * avoid unplugging the device and committing the + * device to serve just a small request. 
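(Illustrative aside, not part of the patch: the decayed think-time statistics updated by bfq_update_io_thinktime() earlier in this hunk can be reproduced in userspace with the same arithmetic. The struct and function names below are made up for the sketch, and time values are in arbitrary units; the point is only to show how the mean tracks recent inter-request gaps.)

#include <stdio.h>
#include <stdint.h>

struct ttime_sketch {
        uint64_t samples; /* fixed-point sample weight, converges to 256 */
        uint64_t total;   /* decayed sum of think times, scaled by 256 */
        uint64_t mean;    /* decayed average think time */
};

/* Same 7/8 decay as bfq_update_io_thinktime() above. */
static void ttime_update(struct ttime_sketch *t, uint64_t elapsed)
{
        t->samples = (7 * t->samples + 256) / 8;
        t->total = (7 * t->total + 256 * elapsed) / 8;
        t->mean = (t->total + 128) / t->samples;
}

int main(void)
{
        struct ttime_sketch t = { 0, 0, 0 };
        int i;

        for (i = 0; i < 20; i++)
                ttime_update(&t, 1000);          /* steady gaps of 1000 */
        printf("mean after steady phase : %llu\n",
               (unsigned long long)t.mean);      /* close to 1000 */

        ttime_update(&t, 9000);                  /* one long gap */
        printf("mean after one long gap : %llu\n",
               (unsigned long long)t.mean);      /* pulled up by ~1/8 of the excess */
        return 0;
}

In the scheduler, the resulting ttime_mean is then compared with bfq_slice_idle in bfq_update_has_short_ttime() to decide whether the queue still deserves device idling.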
On the + * contrary, we wait for the block layer to decide + * when to unplug the device: hopefully, new requests + * will be merged to this one quickly, then the device + * will be unplugged and larger requests will be + * dispatched. + */ + if (small_req && !budget_timeout) + return; + + /* + * A large enough request arrived, or the queue is to + * be expired: in both cases disk idling is to be + * stopped, so clear wait_request flag and reset + * timer. + */ + bfq_clear_bfqq_wait_request(bfqq); + hrtimer_try_to_cancel(&bfqd->idle_slice_timer); + + /* + * The queue is not empty, because a new request just + * arrived. Hence we can safely expire the queue, in + * case of budget timeout, without risking that the + * timestamps of the queue are not updated correctly. + * See [1] for more details. + */ + if (budget_timeout) + bfq_bfqq_expire(bfqd, bfqq, false, + BFQ_BFQQ_BUDGET_TIMEOUT); + } +} + +/* returns true if it causes the idle timer to be disabled */ +static bool __bfq_insert_request(struct bfq_data *bfqd, struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq; + bool waiting, idle_timer_disabled = false; + BUG_ON(!bfqq); + + assert_spin_locked(&bfqd->lock); + + bfq_log_bfqq(bfqd, bfqq, "__insert_req: rq %p bfqq %p", rq, bfqq); + + /* + * An unplug may trigger a requeue of a request from the device + * driver: make sure we are in process context while trying to + * merge two bfq_queues. + */ + if (!in_interrupt()) { + new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true); + if (new_bfqq) { + if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq) + new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1); + /* + * Release the request's reference to the old bfqq + * and make sure one is taken to the shared queue. + */ + new_bfqq->allocated++; + bfqq->allocated--; + bfq_log_bfqq(bfqd, bfqq, + "insert_request: new allocated %d", bfqq->allocated); + bfq_log_bfqq(bfqd, new_bfqq, + "insert_request: new_bfqq new allocated %d", + bfqq->allocated); + + new_bfqq->ref++; + /* + * If the bic associated with the process + * issuing this request still points to bfqq + * (and thus has not been already redirected + * to new_bfqq or even some other bfq_queue), + * then complete the merge and redirect it to + * new_bfqq. 
+ */ + if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq) + bfq_merge_bfqqs(bfqd, RQ_BIC(rq), + bfqq, new_bfqq); + + bfq_clear_bfqq_just_created(bfqq); + /* + * rq is about to be enqueued into new_bfqq, + * release rq reference on bfqq + */ + bfq_put_queue(bfqq); + rq->elv.priv[1] = new_bfqq; + bfqq = new_bfqq; + } + } + + waiting = bfqq && bfq_bfqq_wait_request(bfqq); + bfq_add_request(rq); + idle_timer_disabled = waiting && !bfq_bfqq_wait_request(bfqq); + + rq->fifo_time = ktime_get_ns() + bfqd->bfq_fifo_expire[rq_is_sync(rq)]; + list_add_tail(&rq->queuelist, &bfqq->fifo); + + bfq_rq_enqueued(bfqd, bfqq, rq); + + return idle_timer_disabled; +} + +static void bfq_insert_request(struct blk_mq_hw_ctx *hctx, struct request *rq, + bool at_head) +{ + struct request_queue *q = hctx->queue; + struct bfq_data *bfqd = q->elevator->elevator_data; + struct bfq_queue *bfqq = RQ_BFQQ(rq); +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + bool idle_timer_disabled = false; + unsigned int cmd_flags; +#endif + + spin_lock_irq(&bfqd->lock); + if (blk_mq_sched_try_insert_merge(q, rq)) { + spin_unlock_irq(&bfqd->lock); + return; + } + + spin_unlock_irq(&bfqd->lock); + + blk_mq_sched_request_inserted(rq); + + spin_lock_irq(&bfqd->lock); + if (at_head || blk_rq_is_passthrough(rq)) { + if (at_head) + list_add(&rq->queuelist, &bfqd->dispatch); + else + list_add_tail(&rq->queuelist, &bfqd->dispatch); + + rq->rq_flags |= RQF_DISP_LIST; + if (bfqq) + bfq_log_bfqq(bfqd, bfqq, + "insert_request %p in disp: at_head %d", + rq, at_head); + else + bfq_log(bfqd, + "insert_request %p in disp: at_head %d", + rq, at_head); + } else { + BUG_ON(!(rq->rq_flags & RQF_GOT)); + rq->rq_flags &= ~RQF_GOT; + +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + idle_timer_disabled = __bfq_insert_request(bfqd, rq); + /* + * Update bfqq, because, if a queue merge has occurred + * in __bfq_insert_request, then rq has been + * redirected into a new queue. + */ + bfqq = RQ_BFQQ(rq); +#else + __bfq_insert_request(bfqd, rq); +#endif + + if (rq_mergeable(rq)) { + elv_rqhash_add(q, rq); + if (!q->last_merge) + q->last_merge = rq; + } + } +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + /* + * Cache cmd_flags before releasing scheduler lock, because rq + * may disappear afterwards (for example, because of a request + * merge). + */ + cmd_flags = rq->cmd_flags; +#endif + spin_unlock_irq(&bfqd->lock); +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + if (!bfqq) + return; + /* + * bfqq still exists, because it can disappear only after + * either it is merged with another queue, or the process it + * is associated with exits. But both actions must be taken by + * the same process currently executing this flow of + * instruction. + * + * In addition, the following queue lock guarantees that + * bfqq_group(bfqq) exists as well. 
+ */
+ spin_lock_irq(q->queue_lock);
+ bfqg_stats_update_io_add(bfqq_group(bfqq), bfqq, cmd_flags);
+ if (idle_timer_disabled)
+ bfqg_stats_update_idle_time(bfqq_group(bfqq));
+ spin_unlock_irq(q->queue_lock);
+#endif
+}
+
+static void bfq_insert_requests(struct blk_mq_hw_ctx *hctx,
+ struct list_head *list, bool at_head)
+{
+ while (!list_empty(list)) {
+ struct request *rq;
+
+ rq = list_first_entry(list, struct request, queuelist);
+ list_del_init(&rq->queuelist);
+ bfq_insert_request(hctx, rq, at_head);
+ }
+}
+
+static void bfq_update_hw_tag(struct bfq_data *bfqd)
+{
+ bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver,
+ bfqd->rq_in_driver);
+
+ if (bfqd->hw_tag == 1)
+ return;
+
+ /*
+ * This sample is valid if the number of outstanding requests
+ * is large enough to allow a queueing behavior. Note that the
+ * sum is not exact, as it's not taking into account deactivated
+ * requests.
+ */
+ if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD)
+ return;
+
+ if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES)
+ return;
+
+ bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD;
+ bfqd->max_rq_in_driver = 0;
+ bfqd->hw_tag_samples = 0;
+}
+
+static void bfq_completed_request(struct bfq_queue *bfqq, struct bfq_data *bfqd)
+{
+ u64 now_ns;
+ u32 delta_us;
+
+ bfq_update_hw_tag(bfqd);
+
+ BUG_ON(!bfqd->rq_in_driver);
+ BUG_ON(!bfqq->dispatched);
+ bfqd->rq_in_driver--;
+ bfqq->dispatched--;
+
+ bfq_log_bfqq(bfqd, bfqq,
+ "completed_requests: new disp %d, new rq_in_driver %d",
+ bfqq->dispatched, bfqd->rq_in_driver);
+
+ if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) {
+ BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list));
+ /*
+ * Set budget_timeout (which we overload to store the
+ * time at which the queue remains with no backlog and
+ * no outstanding request; used by the weight-raising
+ * mechanism).
+ */
+ bfqq->budget_timeout = jiffies;
+
+ bfq_weights_tree_remove(bfqd, &bfqq->entity,
+ &bfqd->queue_weights_tree);
+ }
+
+ now_ns = ktime_get_ns();
+
+ bfqq->ttime.last_end_request = now_ns;
+
+ /*
+ * Using us instead of ns, to get a reasonable precision in
+ * computing rate in next check.
+ */
+ delta_us = div_u64(now_ns - bfqd->last_completion, NSEC_PER_USEC);
+
+ bfq_log_bfqq(bfqd, bfqq,
+ "rq_completed: delta %uus/%luus max_size %u rate %llu/%llu",
+ delta_us, BFQ_MIN_TT/NSEC_PER_USEC, bfqd->last_rq_max_size,
+ (USEC_PER_SEC*
+ (u64)((bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us))
+ >>BFQ_RATE_SHIFT,
+ (USEC_PER_SEC*(u64)(1UL<<(BFQ_RATE_SHIFT-10)))>>BFQ_RATE_SHIFT);
+
+ /*
+ * If the request took rather long to complete, and, according
+ * to the maximum request size recorded, this completion latency
+ * implies that the request was certainly served at a very low
+ * rate (less than 1M sectors/sec), then the whole observation
+ * interval that lasts up to this time instant cannot be a
+ * valid time interval for computing a new peak rate.
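(Illustrative aside, not part of the patch: the low-rate test described above and applied just below works in fixed point. Assuming BFQ_RATE_SHIFT is 16, as in the BFQ sources this patch derives from, the threshold 1UL<<(BFQ_RATE_SHIFT-10) corresponds to 2^-10 sectors per microsecond, i.e. roughly 1M sectors per second. The macro and function names below are made up for the sketch.)

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_RATE_SHIFT 16 /* assumed value of BFQ_RATE_SHIFT */

/*
 * Rate is (sectors << SKETCH_RATE_SHIFT) / microseconds, compared with
 * 1 << (SKETCH_RATE_SHIFT - 10), i.e. 2^-10 sectors/us, which is about
 * 1M sectors/s (~500 MB/s with 512-byte sectors).
 */
static bool served_at_very_low_rate(uint32_t max_size_sectors, uint32_t delta_us)
{
        uint64_t rate = ((uint64_t)max_size_sectors << SKETCH_RATE_SHIFT) / delta_us;

        return rate < (1ULL << (SKETCH_RATE_SHIFT - 10));
}

int main(void)
{
        /* an 8-sector request completed after 10 ms: ~800 sectors/s */
        printf("slow completion -> %d\n", served_at_very_low_rate(8, 10000));
        /* a 1024-sector request completed in 500 us: ~2M sectors/s */
        printf("fast completion -> %d\n", served_at_very_low_rate(1024, 500));
        return 0;
}

In the patch, the check additionally requires delta_us to exceed BFQ_MIN_TT expressed in microseconds before the observation interval is discarded.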
Invoke
+ * bfq_update_rate_reset to have the following three steps
+ * taken:
+ * - close the observation interval at the last (previous)
+ * request dispatch or completion
+ * - compute rate, if possible, for that observation interval
+ * - reset to zero samples, which will trigger a proper
+ * re-initialization of the observation interval on next
+ * dispatch
+ */
+ if (delta_us > BFQ_MIN_TT/NSEC_PER_USEC &&
+ (bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us <
+ 1UL<<(BFQ_RATE_SHIFT - 10))
+ bfq_update_rate_reset(bfqd, NULL);
+ bfqd->last_completion = now_ns;
+
+ /*
+ * If we are waiting to discover whether the request pattern
+ * of the task associated with the queue is actually
+ * isochronous, and both requisites for this condition to hold
+ * are now satisfied, then compute soft_rt_next_start (see the
+ * comments on the function bfq_bfqq_softrt_next_start()). We
+ * schedule this delayed check when bfqq expires, if it still
+ * has in-flight requests.
+ */
+ if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 &&
+ RB_EMPTY_ROOT(&bfqq->sort_list))
+ bfqq->soft_rt_next_start =
+ bfq_bfqq_softrt_next_start(bfqd, bfqq);
+
+ /*
+ * If this is the in-service queue, check if it needs to be expired,
+ * or if we want to idle in case it has no pending requests.
+ */
+ if (bfqd->in_service_queue == bfqq) {
+ if (bfqq->dispatched == 0 && bfq_bfqq_must_idle(bfqq)) {
+ bfq_arm_slice_timer(bfqd);
+ return;
+ } else if (bfq_may_expire_for_budg_timeout(bfqq))
+ bfq_bfqq_expire(bfqd, bfqq, false,
+ BFQ_BFQQ_BUDGET_TIMEOUT);
+ else if (RB_EMPTY_ROOT(&bfqq->sort_list) &&
+ (bfqq->dispatched == 0 ||
+ !bfq_bfqq_may_idle(bfqq)))
+ bfq_bfqq_expire(bfqd, bfqq, false,
+ BFQ_BFQQ_NO_MORE_REQUESTS);
+ }
+}
+
+static void bfq_put_rq_priv_body(struct bfq_queue *bfqq)
+{
+ bfq_log_bfqq(bfqq->bfqd, bfqq,
+ "put_request_body: allocated %d", bfqq->allocated);
+ BUG_ON(!bfqq->allocated);
+ bfqq->allocated--;
+
+ bfq_put_queue(bfqq);
+}
+
+static void bfq_finish_request(struct request *rq)
+{
+ struct bfq_queue *bfqq;
+ struct bfq_data *bfqd;
+ struct bfq_io_cq *bic;
+
+ BUG_ON(!rq);
+
+ if (!rq->elv.icq)
+ return;
+
+ bfqq = RQ_BFQQ(rq);
+ BUG_ON(!bfqq);
+
+ bic = RQ_BIC(rq);
+ BUG_ON(!bic);
+
+ bfqd = bfqq->bfqd;
+ BUG_ON(!bfqd);
+
+ if (rq->rq_flags & RQF_DISP_LIST) {
+ pr_crit("putting disp rq %p for %d", rq, bfqq->pid);
+ BUG();
+ }
+ BUG_ON(rq->rq_flags & RQF_QUEUED);
+ BUG_ON(!(rq->rq_flags & RQF_ELVPRIV));
+
+ bfq_log_bfqq(bfqd, bfqq,
+ "putting rq %p with %u sects left, STARTED %d",
+ rq, blk_rq_sectors(rq),
+ rq->rq_flags & RQF_STARTED);
+
+ if (rq->rq_flags & RQF_STARTED)
+ bfqg_stats_update_completion(bfqq_group(bfqq),
+ rq_start_time_ns(rq),
+ rq_io_start_time_ns(rq),
+ rq->cmd_flags);
+
+ WARN_ON(blk_rq_sectors(rq) == 0 && !(rq->rq_flags & RQF_STARTED));
+
+ if (likely(rq->rq_flags & RQF_STARTED)) {
+ unsigned long flags;
+
+ spin_lock_irqsave(&bfqd->lock, flags);
+
+ bfq_completed_request(bfqq, bfqd);
+ bfq_put_rq_priv_body(bfqq);
+
+ spin_unlock_irqrestore(&bfqd->lock, flags);
+ } else {
+ /*
+ * Request rq may be still/already in the scheduler,
+ * in which case we need to remove it. And we cannot
+ * defer such a check and removal, to avoid
+ * inconsistencies in the time interval from the end
+ * of this function to the start of the deferred work.
+ * This situation seems to occur only in process
+ * context, as a consequence of a merge. In the
+ * current version of the code, this implies that the
+ * lock is held.
+ */ + BUG_ON(in_interrupt()); + + assert_spin_locked(&bfqd->lock); + if (!RB_EMPTY_NODE(&rq->rb_node)) { + bfq_remove_request(rq->q, rq); + bfqg_stats_update_io_remove(bfqq_group(bfqq), + rq->cmd_flags); + } + bfq_put_rq_priv_body(bfqq); + } + + rq->elv.priv[0] = NULL; + rq->elv.priv[1] = NULL; +} + +/* + * Returns NULL if a new bfqq should be allocated, or the old bfqq if this + * was the last process referring to that bfqq. + */ +static struct bfq_queue * +bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq) +{ + bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue"); + + if (bfqq_process_refs(bfqq) == 1) { + bfqq->pid = current->pid; + bfq_clear_bfqq_coop(bfqq); + bfq_clear_bfqq_split_coop(bfqq); + return bfqq; + } + + bic_set_bfqq(bic, NULL, 1); + + bfq_put_cooperator(bfqq); + + bfq_put_queue(bfqq); + return NULL; +} + +static struct bfq_queue *bfq_get_bfqq_handle_split(struct bfq_data *bfqd, + struct bfq_io_cq *bic, + struct bio *bio, + bool split, bool is_sync, + bool *new_queue) +{ + struct bfq_queue *bfqq = bic_to_bfqq(bic, is_sync); + + if (likely(bfqq && bfqq != &bfqd->oom_bfqq)) + return bfqq; + + if (new_queue) + *new_queue = true; + + if (bfqq) + bfq_put_queue(bfqq); + bfqq = bfq_get_queue(bfqd, bio, is_sync, bic); + BUG_ON(!hlist_unhashed(&bfqq->burst_list_node)); + + bic_set_bfqq(bic, bfqq, is_sync); + if (split && is_sync) { + bfq_log_bfqq(bfqd, bfqq, + "get_request: was_in_list %d " + "was_in_large_burst %d " + "large burst in progress %d", + bic->was_in_burst_list, + bic->saved_in_large_burst, + bfqd->large_burst); + + if ((bic->was_in_burst_list && bfqd->large_burst) || + bic->saved_in_large_burst) { + bfq_log_bfqq(bfqd, bfqq, + "get_request: marking in " + "large burst"); + bfq_mark_bfqq_in_large_burst(bfqq); + } else { + bfq_log_bfqq(bfqd, bfqq, + "get_request: clearing in " + "large burst"); + bfq_clear_bfqq_in_large_burst(bfqq); + if (bic->was_in_burst_list) + /* + * If bfqq was in the current + * burst list before being + * merged, then we have to add + * it back. And we do not need + * to increase burst_size, as + * we did not decrement + * burst_size when we removed + * bfqq from the burst list as + * a consequence of a merge + * (see comments in + * bfq_put_queue). In this + * respect, it would be rather + * costly to know whether the + * current burst list is still + * the same burst list from + * which bfqq was removed on + * the merge. To avoid this + * cost, if bfqq was in a + * burst list, then we add + * bfqq to the current burst + * list without any further + * check. This can cause + * inappropriate insertions, + * but rarely enough to not + * harm the detection of large + * bursts significantly. + */ + hlist_add_head(&bfqq->burst_list_node, + &bfqd->burst_list); + } + bfqq->split_time = jiffies; + } + + return bfqq; +} + +/* + * Allocate bfq data structures associated with this request. 
+ */ +static void bfq_prepare_request(struct request *rq, struct bio *bio) +{ + struct request_queue *q = rq->q; + struct bfq_data *bfqd = q->elevator->elevator_data; + struct bfq_io_cq *bic; + const int is_sync = rq_is_sync(rq); + struct bfq_queue *bfqq; + bool bfqq_already_existing = false, split = false; + bool new_queue = false; + + if (!rq->elv.icq) + return; + bic = icq_to_bic(rq->elv.icq); + + spin_lock_irq(&bfqd->lock); + + bfq_check_ioprio_change(bic, bio); + + bfq_bic_update_cgroup(bic, bio); + + bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, false, is_sync, + &new_queue); + + if (likely(!new_queue)) { + /* If the queue was seeky for too long, break it apart. */ + if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) { + BUG_ON(!is_sync); + bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq"); + + /* Update bic before losing reference to bfqq */ + if (bfq_bfqq_in_large_burst(bfqq)) + bic->saved_in_large_burst = true; + + bfqq = bfq_split_bfqq(bic, bfqq); + split = true; + + if (!bfqq) + bfqq = bfq_get_bfqq_handle_split(bfqd, bic, bio, + true, is_sync, + NULL); + else + bfqq_already_existing = true; + + BUG_ON(!bfqq); + BUG_ON(bfqq == &bfqd->oom_bfqq); + } + } + + bfqq->allocated++; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "get_request: new allocated %d", bfqq->allocated); + + bfqq->ref++; + bfq_log_bfqq(bfqd, bfqq, "get_request %p: bfqq %p, %d", rq, bfqq, bfqq->ref); + + rq->elv.priv[0] = bic; + rq->elv.priv[1] = bfqq; + rq->rq_flags &= ~RQF_DISP_LIST; + + /* + * If a bfq_queue has only one process reference, it is owned + * by only this bic: we can then set bfqq->bic = bic. in + * addition, if the queue has also just been split, we have to + * resume its state. + */ + if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) { + bfqq->bic = bic; + if (split) { + /* + * The queue has just been split from a shared + * queue: restore the idle window and the + * possible weight raising period. + */ + bfq_bfqq_resume_state(bfqq, bfqd, bic, + bfqq_already_existing); + } + } + + if (unlikely(bfq_bfqq_just_created(bfqq))) + bfq_handle_burst(bfqd, bfqq); + + rq->rq_flags |= RQF_GOT; + spin_unlock_irq(&bfqd->lock); +} + +static void bfq_idle_slice_timer_body(struct bfq_queue *bfqq) +{ + struct bfq_data *bfqd = bfqq->bfqd; + enum bfqq_expiration reason; + unsigned long flags; + + BUG_ON(!bfqd); + spin_lock_irqsave(&bfqd->lock, flags); + + bfq_log_bfqq(bfqd, bfqq, "handling slice_timer expiration"); + bfq_clear_bfqq_wait_request(bfqq); + + if (bfqq != bfqd->in_service_queue) { + spin_unlock_irqrestore(&bfqd->lock, flags); + return; + } + + if (bfq_bfqq_budget_timeout(bfqq)) + /* + * Also here the queue can be safely expired + * for budget timeout without wasting + * guarantees + */ + reason = BFQ_BFQQ_BUDGET_TIMEOUT; + else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0) + /* + * The queue may not be empty upon timer expiration, + * because we may not disable the timer when the + * first request of the in-service queue arrives + * during disk idling. + */ + reason = BFQ_BFQQ_TOO_IDLE; + else + goto schedule_dispatch; + + bfq_bfqq_expire(bfqd, bfqq, true, reason); + +schedule_dispatch: + spin_unlock_irqrestore(&bfqd->lock, flags); + bfq_schedule_dispatch(bfqd); +} + +/* + * Handler of the expiration of the timer running if the in-service queue + * is idling inside its time slice. 
+ */ +static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer) +{ + struct bfq_data *bfqd = container_of(timer, struct bfq_data, + idle_slice_timer); + struct bfq_queue *bfqq = bfqd->in_service_queue; + + bfq_log(bfqd, "slice_timer expired"); + + /* + * Theoretical race here: the in-service queue can be NULL or + * different from the queue that was idling if a new request + * arrives for the current queue and there is a full dispatch + * cycle that changes the in-service queue. This can hardly + * happen, but in the worst case we just expire a queue too + * early. + */ + if (bfqq) + bfq_idle_slice_timer_body(bfqq); + + return HRTIMER_NORESTART; +} + +static void __bfq_put_async_bfqq(struct bfq_data *bfqd, + struct bfq_queue **bfqq_ptr) +{ + struct bfq_group *root_group = bfqd->root_group; + struct bfq_queue *bfqq = *bfqq_ptr; + + bfq_log(bfqd, "put_async_bfqq: %p", bfqq); + if (bfqq) { + bfq_bfqq_move(bfqd, bfqq, root_group); + bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d", + bfqq, bfqq->ref); + bfq_put_queue(bfqq); + *bfqq_ptr = NULL; + } +} + +/* + * Release all the bfqg references to its async queues. If we are + * deallocating the group these queues may still contain requests, so + * we reparent them to the root cgroup (i.e., the only one that will + * exist for sure until all the requests on a device are gone). + */ +static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg) +{ + int i, j; + + for (i = 0; i < 2; i++) + for (j = 0; j < IOPRIO_BE_NR; j++) + __bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]); + + __bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq); +} + +static void bfq_exit_queue(struct elevator_queue *e) +{ + struct bfq_data *bfqd = e->elevator_data; + struct bfq_queue *bfqq, *n; + + bfq_log(bfqd, "exit_queue: starting ..."); + + hrtimer_cancel(&bfqd->idle_slice_timer); + + BUG_ON(bfqd->in_service_queue); + BUG_ON(!list_empty(&bfqd->active_list)); + + spin_lock_irq(&bfqd->lock); + list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list) + bfq_deactivate_bfqq(bfqd, bfqq, false, false); + spin_unlock_irq(&bfqd->lock); + + hrtimer_cancel(&bfqd->idle_slice_timer); + + BUG_ON(hrtimer_active(&bfqd->idle_slice_timer)); + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + blkcg_deactivate_policy(bfqd->queue, &blkcg_policy_bfq); +#else + spin_lock_irq(&bfqd->lock); + bfq_put_async_queues(bfqd, bfqd->root_group); + kfree(bfqd->root_group); + spin_unlock_irq(&bfqd->lock); +#endif + + bfq_log(bfqd, "exit_queue: finished ..."); + kfree(bfqd); +} + +static void bfq_init_root_group(struct bfq_group *root_group, + struct bfq_data *bfqd) +{ + int i; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + root_group->entity.parent = NULL; + root_group->my_entity = NULL; + root_group->bfqd = bfqd; +#endif + root_group->rq_pos_tree = RB_ROOT; + for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) + root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT; + root_group->sched_data.bfq_class_idle_last_service = jiffies; +} + +static int bfq_init_queue(struct request_queue *q, struct elevator_type *e) +{ + struct bfq_data *bfqd; + struct elevator_queue *eq; + + eq = elevator_alloc(q, e); + if (!eq) + return -ENOMEM; + + bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node); + if (!bfqd) { + kobject_put(&eq->kobj); + return -ENOMEM; + } + eq->elevator_data = bfqd; + + spin_lock_irq(q->queue_lock); + q->elevator = eq; + spin_unlock_irq(q->queue_lock); + + /* + * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues. 
+ * Grab a permanent reference to it, so that the normal code flow + * will not attempt to free it. + */ + bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0); + bfqd->oom_bfqq.ref++; + bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO; + bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE; + bfqd->oom_bfqq.entity.new_weight = + bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio); + + /* oom_bfqq does not participate to bursts */ + bfq_clear_bfqq_just_created(&bfqd->oom_bfqq); + /* + * Trigger weight initialization, according to ioprio, at the + * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio + * class won't be changed any more. + */ + bfqd->oom_bfqq.entity.prio_changed = 1; + + bfqd->queue = q; + INIT_LIST_HEAD(&bfqd->dispatch); + + hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC, + HRTIMER_MODE_REL); + bfqd->idle_slice_timer.function = bfq_idle_slice_timer; + + bfqd->queue_weights_tree = RB_ROOT; + bfqd->group_weights_tree = RB_ROOT; + + INIT_LIST_HEAD(&bfqd->active_list); + INIT_LIST_HEAD(&bfqd->idle_list); + INIT_HLIST_HEAD(&bfqd->burst_list); + + bfqd->hw_tag = -1; + + bfqd->bfq_max_budget = bfq_default_max_budget; + + bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0]; + bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1]; + bfqd->bfq_back_max = bfq_back_max; + bfqd->bfq_back_penalty = bfq_back_penalty; + bfqd->bfq_slice_idle = bfq_slice_idle; + bfqd->bfq_timeout = bfq_timeout; + + bfqd->bfq_requests_within_timer = 120; + + bfqd->bfq_large_burst_thresh = 8; + bfqd->bfq_burst_interval = msecs_to_jiffies(180); + + bfqd->low_latency = true; + + /* + * Trade-off between responsiveness and fairness. + */ + bfqd->bfq_wr_coeff = 30; + bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300); + bfqd->bfq_wr_max_time = 0; + bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000); + bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500); + bfqd->bfq_wr_max_softrt_rate = 7000; /* + * Approximate rate required + * to playback or record a + * high-definition compressed + * video. + */ + bfqd->wr_busy_queues = 0; + + /* + * Begin by assuming, optimistically, that the device is a + * high-speed one, and that its peak rate is equal to 2/3 of + * the highest reference rate. + */ + bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] * + T_fast[blk_queue_nonrot(bfqd->queue)]; + bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)] * 2 / 3; + bfqd->device_speed = BFQ_BFQD_FAST; + + spin_lock_init(&bfqd->lock); + + /* + * The invocation of the next bfq_create_group_hierarchy + * function is the head of a chain of function calls + * (bfq_create_group_hierarchy->blkcg_activate_policy-> + * blk_mq_freeze_queue) that may lead to the invocation of the + * has_work hook function. For this reason, + * bfq_create_group_hierarchy is invoked only after all + * scheduler data has been initialized, apart from the fields + * that can be initialized only after invoking + * bfq_create_group_hierarchy. This, in particular, enables + * has_work to correctly return false. Of course, to avoid + * other inconsistencies, the blk-mq stack must then refrain + * from invoking further scheduler hooks before this init + * function is finished. 
+ */ + bfqd->root_group = bfq_create_group_hierarchy(bfqd, q->node); + if (!bfqd->root_group) + goto out_free; + bfq_init_root_group(bfqd->root_group, bfqd); + bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group); + + wbt_disable_default(q); + return 0; + +out_free: + kfree(bfqd); + kobject_put(&eq->kobj); + return -ENOMEM; +} + +static void bfq_slab_kill(void) +{ + kmem_cache_destroy(bfq_pool); +} + +static int __init bfq_slab_setup(void) +{ + bfq_pool = KMEM_CACHE(bfq_queue, 0); + if (!bfq_pool) + return -ENOMEM; + return 0; +} + +static ssize_t bfq_var_show(unsigned int var, char *page) +{ + return sprintf(page, "%u\n", var); +} + +static ssize_t bfq_var_store(unsigned long *var, const char *page, + size_t count) +{ + unsigned long new_val; + int ret = kstrtoul(page, 10, &new_val); + + if (ret == 0) + *var = new_val; + + return count; +} + +static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page) +{ + struct bfq_data *bfqd = e->elevator_data; + + return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ? + jiffies_to_msecs(bfqd->bfq_wr_max_time) : + jiffies_to_msecs(bfq_wr_duration(bfqd))); +} + +static ssize_t bfq_weights_show(struct elevator_queue *e, char *page) +{ + struct bfq_queue *bfqq; + struct bfq_data *bfqd = e->elevator_data; + ssize_t num_char = 0; + + num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n", + bfqd->queued); + + spin_lock_irq(&bfqd->lock); + + num_char += sprintf(page + num_char, "Active:\n"); + list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) { + num_char += sprintf(page + num_char, + "pid%d: weight %hu, nr_queued %d %d, ", + bfqq->pid, + bfqq->entity.weight, + bfqq->queued[0], + bfqq->queued[1]); + num_char += sprintf(page + num_char, + "dur %d/%u\n", + jiffies_to_msecs( + jiffies - + bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } + + num_char += sprintf(page + num_char, "Idle:\n"); + list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) { + num_char += sprintf(page + num_char, + "pid%d: weight %hu, dur %d/%u\n", + bfqq->pid, + bfqq->entity.weight, + jiffies_to_msecs(jiffies - + bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } + + spin_unlock_irq(&bfqd->lock); + + return num_char; +} + +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \ +static ssize_t __FUNC(struct elevator_queue *e, char *page) \ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + u64 __data = __VAR; \ + if (__CONV == 1) \ + __data = jiffies_to_msecs(__data); \ + else if (__CONV == 2) \ + __data = div_u64(__data, NSEC_PER_MSEC); \ + return bfq_var_show(__data, (page)); \ +} +SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 2); +SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 2); +SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0); +SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0); +SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 2); +SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0); +SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout, 1); +SHOW_FUNCTION(bfq_strict_guarantees_show, bfqd->strict_guarantees, 0); +SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0); +SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0); +SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1); +SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1); +SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async, + 1); +SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, 
bfqd->bfq_wr_max_softrt_rate, 0); +#undef SHOW_FUNCTION + +#define USEC_SHOW_FUNCTION(__FUNC, __VAR) \ +static ssize_t __FUNC(struct elevator_queue *e, char *page) \ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + u64 __data = __VAR; \ + __data = div_u64(__data, NSEC_PER_USEC); \ + return bfq_var_show(__data, (page)); \ +} +USEC_SHOW_FUNCTION(bfq_slice_idle_us_show, bfqd->bfq_slice_idle); +#undef USEC_SHOW_FUNCTION + +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ +static ssize_t \ +__FUNC(struct elevator_queue *e, const char *page, size_t count) \ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + unsigned long uninitialized_var(__data); \ + int ret = bfq_var_store(&__data, (page), count); \ + if (__data < (MIN)) \ + __data = (MIN); \ + else if (__data > (MAX)) \ + __data = (MAX); \ + if (__CONV == 1) \ + *(__PTR) = msecs_to_jiffies(__data); \ + else if (__CONV == 2) \ + *(__PTR) = (u64)__data * NSEC_PER_MSEC; \ + else \ + *(__PTR) = __data; \ + return ret; \ +} +STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1, + INT_MAX, 2); +STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1, + INT_MAX, 2); +STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0); +STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1, + INT_MAX, 0); +STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 2); +STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0); +STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1); +STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX, + 1); +STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0, + INT_MAX, 1); +STORE_FUNCTION(bfq_wr_min_inter_arr_async_store, + &bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1); +STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0, + INT_MAX, 0); +#undef STORE_FUNCTION + +#define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \ +static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + unsigned long uninitialized_var(__data); \ + int ret = bfq_var_store(&__data, (page), count); \ + if (__data < (MIN)) \ + __data = (MIN); \ + else if (__data > (MAX)) \ + __data = (MAX); \ + *(__PTR) = (u64)__data * NSEC_PER_USEC; \ + return ret; \ +} +USEC_STORE_FUNCTION(bfq_slice_idle_us_store, &bfqd->bfq_slice_idle, 0, + UINT_MAX); +#undef USEC_STORE_FUNCTION + +/* do nothing for the moment */ +static ssize_t bfq_weights_store(struct elevator_queue *e, + const char *page, size_t count) +{ + return count; +} + +static ssize_t bfq_max_budget_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data == 0) + bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd); + else { + if (__data > INT_MAX) + __data = INT_MAX; + bfqd->bfq_max_budget = __data; + } + + bfqd->bfq_user_max_budget = __data; + + return ret; +} + +/* + * Leaving this name to preserve name compatibility with cfq + * parameters, but this timeout is used for both sync and async. 
+ */ +static ssize_t bfq_timeout_sync_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data < 1) + __data = 1; + else if (__data > INT_MAX) + __data = INT_MAX; + + bfqd->bfq_timeout = msecs_to_jiffies(__data); + if (bfqd->bfq_user_max_budget == 0) + bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd); + + return ret; +} + +static ssize_t bfq_strict_guarantees_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data > 1) + __data = 1; + if (!bfqd->strict_guarantees && __data == 1 + && bfqd->bfq_slice_idle < 8 * NSEC_PER_MSEC) + bfqd->bfq_slice_idle = 8 * NSEC_PER_MSEC; + + bfqd->strict_guarantees = __data; + + return ret; +} + +static ssize_t bfq_low_latency_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data > 1) + __data = 1; + if (__data == 0 && bfqd->low_latency != 0) + bfq_end_wr(bfqd); + bfqd->low_latency = __data; + + return ret; +} + +#define BFQ_ATTR(name) \ + __ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store) + +static struct elv_fs_entry bfq_attrs[] = { + BFQ_ATTR(fifo_expire_sync), + BFQ_ATTR(fifo_expire_async), + BFQ_ATTR(back_seek_max), + BFQ_ATTR(back_seek_penalty), + BFQ_ATTR(slice_idle), + BFQ_ATTR(slice_idle_us), + BFQ_ATTR(max_budget), + BFQ_ATTR(timeout_sync), + BFQ_ATTR(strict_guarantees), + BFQ_ATTR(low_latency), + BFQ_ATTR(wr_coeff), + BFQ_ATTR(wr_max_time), + BFQ_ATTR(wr_rt_max_time), + BFQ_ATTR(wr_min_idle_time), + BFQ_ATTR(wr_min_inter_arr_async), + BFQ_ATTR(wr_max_softrt_rate), + BFQ_ATTR(weights), + __ATTR_NULL +}; + +static struct elevator_type iosched_bfq_mq = { + .ops.mq = { + .prepare_request = bfq_prepare_request, + .finish_request = bfq_finish_request, + .exit_icq = bfq_exit_icq, + .insert_requests = bfq_insert_requests, + .dispatch_request = bfq_dispatch_request, + .next_request = elv_rb_latter_request, + .former_request = elv_rb_former_request, + .allow_merge = bfq_allow_bio_merge, + .bio_merge = bfq_bio_merge, + .request_merge = bfq_request_merge, + .requests_merged = bfq_requests_merged, + .request_merged = bfq_request_merged, + .has_work = bfq_has_work, + .init_sched = bfq_init_queue, + .exit_sched = bfq_exit_queue, + }, + + .uses_mq = true, + .icq_size = sizeof(struct bfq_io_cq), + .icq_align = __alignof__(struct bfq_io_cq), + .elevator_attrs = bfq_attrs, + .elevator_name = "bfq-mq", + .elevator_owner = THIS_MODULE, +}; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct blkcg_policy blkcg_policy_bfq = { + .dfl_cftypes = bfq_blkg_files, + .legacy_cftypes = bfq_blkcg_legacy_files, + + .cpd_alloc_fn = bfq_cpd_alloc, + .cpd_init_fn = bfq_cpd_init, + .cpd_bind_fn = bfq_cpd_init, + .cpd_free_fn = bfq_cpd_free, + + .pd_alloc_fn = bfq_pd_alloc, + .pd_init_fn = bfq_pd_init, + .pd_offline_fn = bfq_pd_offline, + .pd_free_fn = bfq_pd_free, + .pd_reset_stats_fn = bfq_pd_reset_stats, +}; +#endif + +static int __init bfq_init(void) +{ + int ret; + char msg[60] = "BFQ I/O-scheduler: v8r12"; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + ret = blkcg_policy_register(&blkcg_policy_bfq); + if (ret) + return ret; +#endif + + ret = -ENOMEM; + if (bfq_slab_setup()) + goto 
err_pol_unreg; + + /* + * Times to load large popular applications for the typical + * systems installed on the reference devices (see the + * comments before the definitions of the next two + * arrays). Actually, we use slightly slower values, as the + * estimated peak rate tends to be smaller than the actual + * peak rate. The reason for this last fact is that estimates + * are computed over much shorter time intervals than the long + * intervals typically used for benchmarking. Why? First, to + * adapt more quickly to variations. Second, because an I/O + * scheduler cannot rely on a peak-rate-evaluation workload to + * be run for a long time. + */ + T_slow[0] = msecs_to_jiffies(3500); /* actually 4 sec */ + T_slow[1] = msecs_to_jiffies(6000); /* actually 6.5 sec */ + T_fast[0] = msecs_to_jiffies(7000); /* actually 8 sec */ + T_fast[1] = msecs_to_jiffies(2500); /* actually 3 sec */ + + /* + * Thresholds that determine the switch between speed classes + * (see the comments before the definition of the array + * device_speed_thresh). These thresholds are biased towards + * transitions to the fast class. This is safer than the + * opposite bias. In fact, a wrong transition to the slow + * class results in short weight-raising periods, because the + * speed of the device then tends to be higher that the + * reference peak rate. On the opposite end, a wrong + * transition to the fast class tends to increase + * weight-raising periods, because of the opposite reason. + */ + device_speed_thresh[0] = (4 * R_slow[0]) / 3; + device_speed_thresh[1] = (4 * R_slow[1]) / 3; + + ret = elv_register(&iosched_bfq_mq); + if (ret) + goto err_pol_unreg; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + strcat(msg, " (with cgroups support)"); +#endif + pr_info("%s", msg); + + return 0; + +err_pol_unreg: +#ifdef BFQ_GROUP_IOSCHED_ENABLED + blkcg_policy_unregister(&blkcg_policy_bfq); +#endif + return ret; +} + +static void __exit bfq_exit(void) +{ + elv_unregister(&iosched_bfq_mq); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + blkcg_policy_unregister(&blkcg_policy_bfq); +#endif + bfq_slab_kill(); +} + +module_init(bfq_init); +module_exit(bfq_exit); + +MODULE_AUTHOR("Paolo Valente"); +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("MQ Budget Fair Queueing I/O Scheduler"); diff --git b/block/bfq-mq.h b/block/bfq-mq.h new file mode 100644 index 0000000..1cb05bb --- /dev/null +++ b/block/bfq-mq.h @@ -0,0 +1,987 @@ +/* + * BFQ v8r12 for 4.11.0: data structures and common functions prototypes. + * + * Based on ideas and code from CFQ: + * Copyright (C) 2003 Jens Axboe + * + * Copyright (C) 2008 Fabio Checconi + * Paolo Valente + * + * Copyright (C) 2015 Paolo Valente + * + * Copyright (C) 2017 Paolo Valente + */ + +#ifndef _BFQ_H +#define _BFQ_H + +#include +#include + +/* see comments on CONFIG_BFQ_GROUP_IOSCHED in bfq.h */ +#ifdef CONFIG_MQ_BFQ_GROUP_IOSCHED +#define BFQ_GROUP_IOSCHED_ENABLED +#endif + +#define BFQ_IOPRIO_CLASSES 3 +#define BFQ_CL_IDLE_TIMEOUT (HZ/5) + +#define BFQ_MIN_WEIGHT 1 +#define BFQ_MAX_WEIGHT 1000 +#define BFQ_WEIGHT_CONVERSION_COEFF 10 + +#define BFQ_DEFAULT_QUEUE_IOPRIO 4 + +#define BFQ_WEIGHT_LEGACY_DFL 100 +#define BFQ_DEFAULT_GRP_IOPRIO 0 +#define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE + +/* + * Soft real-time applications are extremely more latency sensitive + * than interactive ones. Over-raise the weight of the former to + * privilege them against the latter. + */ +#define BFQ_SOFTRT_WEIGHT_FACTOR 100 + +struct bfq_entity; + +/** + * struct bfq_service_tree - per ioprio_class service tree. 
+ * + * Each service tree represents a B-WF2Q+ scheduler on its own. Each + * ioprio_class has its own independent scheduler, and so its own + * bfq_service_tree. All the fields are protected by the queue lock + * of the containing bfqd. + */ +struct bfq_service_tree { + /* tree for active entities (i.e., those backlogged) */ + struct rb_root active; + /* tree for idle entities (i.e., not backlogged, with V <= F_i)*/ + struct rb_root idle; + + struct bfq_entity *first_idle; /* idle entity with minimum F_i */ + struct bfq_entity *last_idle; /* idle entity with maximum F_i */ + + u64 vtime; /* scheduler virtual time */ + /* scheduler weight sum; active and idle entities contribute to it */ + unsigned long wsum; +}; + +/** + * struct bfq_sched_data - multi-class scheduler. + * + * bfq_sched_data is the basic scheduler queue. It supports three + * ioprio_classes, and can be used either as a toplevel queue or as an + * intermediate queue in a hierarchical setup. + * + * The supported ioprio_classes are the same as in CFQ, in descending + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE. + * Requests from higher priority queues are served before all the + * requests from lower priority queues; among requests of the same + * queue requests are served according to B-WF2Q+. + * + * The schedule is implemented by the service trees, plus the field + * @next_in_service, which points to the entity on the active trees + * that will be served next, if 1) no changes in the schedule occurs + * before the current in-service entity is expired, 2) the in-service + * queue becomes idle when it expires, and 3) if the entity pointed by + * in_service_entity is not a queue, then the in-service child entity + * of the entity pointed by in_service_entity becomes idle on + * expiration. This peculiar definition allows for the following + * optimization, not yet exploited: while a given entity is still in + * service, we already know which is the best candidate for next + * service among the other active entitities in the same parent + * entity. We can then quickly compare the timestamps of the + * in-service entity with those of such best candidate. + * + * All the fields are protected by the queue lock of the containing + * bfqd. + */ +struct bfq_sched_data { + struct bfq_entity *in_service_entity; /* entity in service */ + /* head-of-the-line entity in the scheduler (see comments above) */ + struct bfq_entity *next_in_service; + /* array of service trees, one per ioprio_class */ + struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES]; + /* last time CLASS_IDLE was served */ + unsigned long bfq_class_idle_last_service; + +}; + +/** + * struct bfq_weight_counter - counter of the number of all active entities + * with a given weight. + */ +struct bfq_weight_counter { + unsigned int weight; /* weight of the entities this counter refers to */ + unsigned int num_active; /* nr of active entities with this weight */ + /* + * Weights tree member (see bfq_data's @queue_weights_tree and + * @group_weights_tree) + */ + struct rb_node weights_node; +}; + +/** + * struct bfq_entity - schedulable entity. + * + * A bfq_entity is used to represent either a bfq_queue (leaf node in the + * cgroup hierarchy) or a bfq_group into the upper level scheduler. Each + * entity belongs to the sched_data of the parent group in the cgroup + * hierarchy. Non-leaf entities have also their own sched_data, stored + * in @my_sched_data. 
+ * + * Each entity stores independently its priority values; this would + * allow different weights on different devices, but this + * functionality is not exported to userspace by now. Priorities and + * weights are updated lazily, first storing the new values into the + * new_* fields, then setting the @prio_changed flag. As soon as + * there is a transition in the entity state that allows the priority + * update to take place the effective and the requested priority + * values are synchronized. + * + * Unless cgroups are used, the weight value is calculated from the + * ioprio to export the same interface as CFQ. When dealing with + * ``well-behaved'' queues (i.e., queues that do not spend too much + * time to consume their budget and have true sequential behavior, and + * when there are no external factors breaking anticipation) the + * relative weights at each level of the cgroups hierarchy should be + * guaranteed. All the fields are protected by the queue lock of the + * containing bfqd. + */ +struct bfq_entity { + struct rb_node rb_node; /* service_tree member */ + /* pointer to the weight counter associated with this entity */ + struct bfq_weight_counter *weight_counter; + + /* + * Flag, true if the entity is on a tree (either the active or + * the idle one of its service_tree) or is in service. + */ + bool on_st; + + u64 finish; /* B-WF2Q+ finish timestamp (aka F_i) */ + u64 start; /* B-WF2Q+ start timestamp (aka S_i) */ + + /* tree the entity is enqueued into; %NULL if not on a tree */ + struct rb_root *tree; + + /* + * minimum start time of the (active) subtree rooted at this + * entity; used for O(log N) lookups into active trees + */ + u64 min_start; + + /* amount of service received during the last service slot */ + int service; + + /* budget, used also to calculate F_i: F_i = S_i + @budget / @weight */ + int budget; + + unsigned int weight; /* weight of the queue */ + unsigned int new_weight; /* next weight if a change is in progress */ + + /* original weight, used to implement weight boosting */ + unsigned int orig_weight; + + /* parent entity, for hierarchical scheduling */ + struct bfq_entity *parent; + + /* + * For non-leaf nodes in the hierarchy, the associated + * scheduler queue, %NULL on leaf nodes. + */ + struct bfq_sched_data *my_sched_data; + /* the scheduler queue this entity belongs to */ + struct bfq_sched_data *sched_data; + + /* flag, set to request a weight, ioprio or ioprio_class change */ + int prio_changed; +}; + +struct bfq_group; + +/** + * struct bfq_ttime - per process thinktime stats. + */ +struct bfq_ttime { + u64 last_end_request; /* completion time of last request */ + + u64 ttime_total; /* total process thinktime */ + unsigned long ttime_samples; /* number of thinktime samples */ + u64 ttime_mean; /* average process thinktime */ + +}; + +/** + * struct bfq_queue - leaf schedulable entity. + * + * A bfq_queue is a leaf request queue; it can be associated with an + * io_context or more, if it is async or shared between cooperating + * processes. @cgroup holds a reference to the cgroup, to be sure that it + * does not disappear while a bfqq still references it (mostly to avoid + * races between request issuing and task migration followed by cgroup + * destruction). + * All the fields are protected by the queue lock of the containing bfqd. 
+ */ +struct bfq_queue { + /* reference counter */ + int ref; + /* parent bfq_data */ + struct bfq_data *bfqd; + + /* current ioprio and ioprio class */ + unsigned short ioprio, ioprio_class; + /* next ioprio and ioprio class if a change is in progress */ + unsigned short new_ioprio, new_ioprio_class; + + /* + * Shared bfq_queue if queue is cooperating with one or more + * other queues. + */ + struct bfq_queue *new_bfqq; + /* request-position tree member (see bfq_group's @rq_pos_tree) */ + struct rb_node pos_node; + /* request-position tree root (see bfq_group's @rq_pos_tree) */ + struct rb_root *pos_root; + + /* sorted list of pending requests */ + struct rb_root sort_list; + /* if fifo isn't expired, next request to serve */ + struct request *next_rq; + /* number of sync and async requests queued */ + int queued[2]; + /* number of requests currently allocated */ + int allocated; + /* number of pending metadata requests */ + int meta_pending; + /* fifo list of requests in sort_list */ + struct list_head fifo; + + /* entity representing this queue in the scheduler */ + struct bfq_entity entity; + + /* maximum budget allowed from the feedback mechanism */ + int max_budget; + /* budget expiration (in jiffies) */ + unsigned long budget_timeout; + + /* number of requests on the dispatch list or inside driver */ + int dispatched; + + unsigned int flags; /* status flags.*/ + + /* node for active/idle bfqq list inside parent bfqd */ + struct list_head bfqq_list; + + /* associated @bfq_ttime struct */ + struct bfq_ttime ttime; + + /* bit vector: a 1 for each seeky requests in history */ + u32 seek_history; + + /* node for the device's burst list */ + struct hlist_node burst_list_node; + + /* position of the last request enqueued */ + sector_t last_request_pos; + + /* Number of consecutive pairs of request completion and + * arrival, such that the queue becomes idle after the + * completion, but the next request arrives within an idle + * time slice; used only if the queue's IO_bound flag has been + * cleared. + */ + unsigned int requests_within_timer; + + /* pid of the process owning the queue, used for logging purposes */ + pid_t pid; + + /* + * Pointer to the bfq_io_cq owning the bfq_queue, set to %NULL + * if the queue is shared. + */ + struct bfq_io_cq *bic; + + /* current maximum weight-raising time for this queue */ + unsigned long wr_cur_max_time; + /* + * Minimum time instant such that, only if a new request is + * enqueued after this time instant in an idle @bfq_queue with + * no outstanding requests, then the task associated with the + * queue it is deemed as soft real-time (see the comments on + * the function bfq_bfqq_softrt_next_start()) + */ + unsigned long soft_rt_next_start; + /* + * Start time of the current weight-raising period if + * the @bfq-queue is being weight-raised, otherwise + * finish time of the last weight-raising period. + */ + unsigned long last_wr_start_finish; + /* factor by which the weight of this queue is multiplied */ + unsigned int wr_coeff; + /* + * Time of the last transition of the @bfq_queue from idle to + * backlogged. + */ + unsigned long last_idle_bklogged; + /* + * Cumulative service received from the @bfq_queue since the + * last transition from idle to backlogged. + */ + unsigned long service_from_backlogged; + /* + * Value of wr start time when switching to soft rt + */ + unsigned long wr_start_at_switch_to_srt; + + unsigned long split_time; /* time of last split */ +}; + +/** + * struct bfq_io_cq - per (request_queue, io_context) structure. 
+ */ +struct bfq_io_cq { + /* associated io_cq structure */ + struct io_cq icq; /* must be the first member */ + /* array of two process queues, the sync and the async */ + struct bfq_queue *bfqq[2]; + /* per (request_queue, blkcg) ioprio */ + int ioprio; +#ifdef BFQ_GROUP_IOSCHED_ENABLED + uint64_t blkcg_serial_nr; /* the current blkcg serial */ +#endif + + /* + * Snapshot of the has_short_time flag before merging; taken + * to remember its value while the queue is merged, so as to + * be able to restore it in case of split. + */ + bool saved_has_short_ttime; + /* + * Same purpose as the previous two fields for the I/O bound + * classification of a queue. + */ + bool saved_IO_bound; + + /* + * Same purpose as the previous fields for the value of the + * field keeping the queue's belonging to a large burst + */ + bool saved_in_large_burst; + /* + * True if the queue belonged to a burst list before its merge + * with another cooperating queue. + */ + bool was_in_burst_list; + + /* + * Similar to previous fields: save wr information. + */ + unsigned long saved_wr_coeff; + unsigned long saved_last_wr_start_finish; + unsigned long saved_wr_start_at_switch_to_srt; + unsigned int saved_wr_cur_max_time; + struct bfq_ttime saved_ttime; +}; + +enum bfq_device_speed { + BFQ_BFQD_FAST, + BFQ_BFQD_SLOW, +}; + +/** + * struct bfq_data - per-device data structure. + * + * All the fields are protected by @lock. + */ +struct bfq_data { + /* device request queue */ + struct request_queue *queue; + /* dispatch queue */ + struct list_head dispatch; + + /* root bfq_group for the device */ + struct bfq_group *root_group; + + /* + * rbtree of weight counters of @bfq_queues, sorted by + * weight. Used to keep track of whether all @bfq_queues have + * the same weight. The tree contains one counter for each + * distinct weight associated to some active and not + * weight-raised @bfq_queue (see the comments to the functions + * bfq_weights_tree_[add|remove] for further details). + */ + struct rb_root queue_weights_tree; + /* + * rbtree of non-queue @bfq_entity weight counters, sorted by + * weight. Used to keep track of whether all @bfq_groups have + * the same weight. The tree contains one counter for each + * distinct weight associated to some active @bfq_group (see + * the comments to the functions bfq_weights_tree_[add|remove] + * for further details). + */ + struct rb_root group_weights_tree; + + /* + * Number of bfq_queues containing requests (including the + * queue in service, even if it is idling). + */ + int busy_queues; + /* number of weight-raised busy @bfq_queues */ + int wr_busy_queues; + /* number of queued requests */ + int queued; + /* number of requests dispatched and waiting for completion */ + int rq_in_driver; + + /* + * Maximum number of requests in driver in the last + * @hw_tag_samples completed requests. + */ + int max_rq_in_driver; + /* number of samples used to calculate hw_tag */ + int hw_tag_samples; + /* flag set to one if the driver is showing a queueing behavior */ + int hw_tag; + + /* number of budgets assigned */ + int budgets_assigned; + + /* + * Timer set when idling (waiting) for the next request from + * the queue in service. 
+ */ + struct hrtimer idle_slice_timer; + + /* bfq_queue in service */ + struct bfq_queue *in_service_queue; + + /* on-disk position of the last served request */ + sector_t last_position; + + /* time of last request completion (ns) */ + u64 last_completion; + + /* time of first rq dispatch in current observation interval (ns) */ + u64 first_dispatch; + /* time of last rq dispatch in current observation interval (ns) */ + u64 last_dispatch; + + /* beginning of the last budget */ + ktime_t last_budget_start; + /* beginning of the last idle slice */ + ktime_t last_idling_start; + + /* number of samples in current observation interval */ + int peak_rate_samples; + /* num of samples of seq dispatches in current observation interval */ + u32 sequential_samples; + /* total num of sectors transferred in current observation interval */ + u64 tot_sectors_dispatched; + /* max rq size seen during current observation interval (sectors) */ + u32 last_rq_max_size; + /* time elapsed from first dispatch in current observ. interval (us) */ + u64 delta_from_first; + /* current estimate of device peak rate */ + u32 peak_rate; + + /* maximum budget allotted to a bfq_queue before rescheduling */ + int bfq_max_budget; + + /* list of all the bfq_queues active on the device */ + struct list_head active_list; + /* list of all the bfq_queues idle on the device */ + struct list_head idle_list; + + /* + * Timeout for async/sync requests; when it fires, requests + * are served in fifo order. + */ + u64 bfq_fifo_expire[2]; + /* weight of backward seeks wrt forward ones */ + unsigned int bfq_back_penalty; + /* maximum allowed backward seek */ + unsigned int bfq_back_max; + /* maximum idling time */ + u32 bfq_slice_idle; + + /* user-configured max budget value (0 for auto-tuning) */ + int bfq_user_max_budget; + /* + * Timeout for bfq_queues to consume their budget; used to + * prevent seeky queues from imposing long latencies to + * sequential or quasi-sequential ones (this also implies that + * seeky queues cannot receive guarantees in the service + * domain; after a timeout they are charged for the time they + * have been in service, to preserve fairness among them, but + * without service-domain guarantees). + */ + unsigned int bfq_timeout; + + /* + * Number of consecutive requests that must be issued within + * the idle time slice to set again idling to a queue which + * was marked as non-I/O-bound (see the definition of the + * IO_bound flag for further details). + */ + unsigned int bfq_requests_within_timer; + + /* + * Force device idling whenever needed to provide accurate + * service guarantees, without caring about throughput + * issues. CAVEAT: this may even increase latencies, in case + * of useless idling for processes that did stop doing I/O. + */ + bool strict_guarantees; + + /* + * Last time at which a queue entered the current burst of + * queues being activated shortly after each other; for more + * details about this and the following parameters related to + * a burst of activations, see the comments on the function + * bfq_handle_burst. + */ + unsigned long last_ins_in_burst; + /* + * Reference time interval used to decide whether a queue has + * been activated shortly after @last_ins_in_burst. 
+ */ + unsigned long bfq_burst_interval; + /* number of queues in the current burst of queue activations */ + int burst_size; + + /* common parent entity for the queues in the burst */ + struct bfq_entity *burst_parent_entity; + /* Maximum burst size above which the current queue-activation + * burst is deemed as 'large'. + */ + unsigned long bfq_large_burst_thresh; + /* true if a large queue-activation burst is in progress */ + bool large_burst; + /* + * Head of the burst list (as for the above fields, more + * details in the comments on the function bfq_handle_burst). + */ + struct hlist_head burst_list; + + /* if set to true, low-latency heuristics are enabled */ + bool low_latency; + /* + * Maximum factor by which the weight of a weight-raised queue + * is multiplied. + */ + unsigned int bfq_wr_coeff; + /* maximum duration of a weight-raising period (jiffies) */ + unsigned int bfq_wr_max_time; + + /* Maximum weight-raising duration for soft real-time processes */ + unsigned int bfq_wr_rt_max_time; + /* + * Minimum idle period after which weight-raising may be + * reactivated for a queue (in jiffies). + */ + unsigned int bfq_wr_min_idle_time; + /* + * Minimum period between request arrivals after which + * weight-raising may be reactivated for an already busy async + * queue (in jiffies). + */ + unsigned long bfq_wr_min_inter_arr_async; + + /* Max service-rate for a soft real-time queue, in sectors/sec */ + unsigned int bfq_wr_max_softrt_rate; + /* + * Cached value of the product R*T, used for computing the + * maximum duration of weight raising automatically. + */ + u64 RT_prod; + /* device-speed class for the low-latency heuristic */ + enum bfq_device_speed device_speed; + + /* fallback dummy bfqq for extreme OOM conditions */ + struct bfq_queue oom_bfqq; + + spinlock_t lock; + + /* + * bic associated with the task issuing current bio for + * merging. This and the next field are used as a support to + * be able to perform the bic lookup, needed by bio-merge + * functions, before the scheduler lock is taken, and thus + * avoid taking the request-queue lock while the scheduler + * lock is being held. + */ + struct bfq_io_cq *bio_bic; + /* bfqq associated with the task issuing current bio for merging */ + struct bfq_queue *bio_bfqq; + /* Extra flag used only for TESTING */ + bool bio_bfqq_set; +}; + +enum bfqq_state_flags { + BFQ_BFQQ_FLAG_just_created = 0, /* queue just allocated */ + BFQ_BFQQ_FLAG_busy, /* has requests or is in service */ + BFQ_BFQQ_FLAG_wait_request, /* waiting for a request */ + BFQ_BFQQ_FLAG_non_blocking_wait_rq, /* + * waiting for a request + * without idling the device + */ + BFQ_BFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */ + BFQ_BFQQ_FLAG_has_short_ttime, /* queue has a short think time */ + BFQ_BFQQ_FLAG_sync, /* synchronous queue */ + BFQ_BFQQ_FLAG_IO_bound, /* + * bfqq has timed-out at least once + * having consumed at most 2/10 of + * its budget + */ + BFQ_BFQQ_FLAG_in_large_burst, /* + * bfqq activated in a large burst, + * see comments to bfq_handle_burst. 
+ */ + BFQ_BFQQ_FLAG_softrt_update, /* + * may need softrt-next-start + * update + */ + BFQ_BFQQ_FLAG_coop, /* bfqq is shared */ + BFQ_BFQQ_FLAG_split_coop /* shared bfqq will be split */ +}; + +#define BFQ_BFQQ_FNS(name) \ +static void bfq_mark_bfqq_##name(struct bfq_queue *bfqq) \ +{ \ + (bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name); \ +} \ +static void bfq_clear_bfqq_##name(struct bfq_queue *bfqq) \ +{ \ + (bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name); \ +} \ +static int bfq_bfqq_##name(const struct bfq_queue *bfqq) \ +{ \ + return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0; \ +} + +BFQ_BFQQ_FNS(just_created); +BFQ_BFQQ_FNS(busy); +BFQ_BFQQ_FNS(wait_request); +BFQ_BFQQ_FNS(non_blocking_wait_rq); +BFQ_BFQQ_FNS(fifo_expire); +BFQ_BFQQ_FNS(has_short_ttime); +BFQ_BFQQ_FNS(sync); +BFQ_BFQQ_FNS(IO_bound); +BFQ_BFQQ_FNS(in_large_burst); +BFQ_BFQQ_FNS(coop); +BFQ_BFQQ_FNS(split_coop); +BFQ_BFQQ_FNS(softrt_update); +#undef BFQ_BFQQ_FNS + +/* Logging facilities. */ +#ifdef CONFIG_BFQ_REDIRECT_TO_CONSOLE + +static const char *checked_dev_name(const struct device *dev) +{ + static const char nodev[] = "nodev"; + + if (dev) + return dev_name(dev); + + return nodev; +} + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct bfq_group *bfqq_group(struct bfq_queue *bfqq); +static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg); + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \ + pr_crit("%s bfq%d%c %s " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + (bfqq)->pid, \ + bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + bfqq_group(bfqq)->blkg_path, ##args); \ +} while (0) + +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \ + pr_crit("%s %s " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + bfqg->blkg_path, ##args); \ +} while (0) + +#else /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \ + pr_crit("%s bfq%d%c " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + (bfqq)->pid, bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + ##args) +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do {} while (0) + +#endif /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log(bfqd, fmt, args...) \ + pr_crit("%s bfq " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + ##args) + +#else /* CONFIG_BFQ_REDIRECT_TO_CONSOLE */ + +#if !defined(CONFIG_BLK_DEV_IO_TRACE) + +/* Avoid possible "unused-variable" warning. See commit message. */ + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) ((void) (bfqq)) + +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) ((void) (bfqg)) + +#define bfq_log(bfqd, fmt, args...) do {} while (0) + +#else /* CONFIG_BLK_DEV_IO_TRACE */ + +#include + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct bfq_group *bfqq_group(struct bfq_queue *bfqq); +static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg); + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \ + blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, \ + (bfqq)->pid, \ + bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + bfqq_group(bfqq)->blkg_path, ##args); \ +} while (0) + +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \ + blk_add_trace_msg((bfqd)->queue, "%s " fmt, bfqg->blkg_path, ##args);\ +} while (0) + +#else /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \ + blk_add_trace_msg((bfqd)->queue, "bfq%d%c " fmt, (bfqq)->pid, \ + bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + ##args) +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) 
do {} while (0) + +#endif /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log(bfqd, fmt, args...) \ + blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args) + +#endif /* CONFIG_BLK_DEV_IO_TRACE */ +#endif /* CONFIG_BFQ_REDIRECT_TO_CONSOLE */ + +/* Expiration reasons. */ +enum bfqq_expiration { + BFQ_BFQQ_TOO_IDLE = 0, /* + * queue has been idling for + * too long + */ + BFQ_BFQQ_BUDGET_TIMEOUT, /* budget took too long to be used */ + BFQ_BFQQ_BUDGET_EXHAUSTED, /* budget consumed */ + BFQ_BFQQ_NO_MORE_REQUESTS, /* the queue has no more requests */ + BFQ_BFQQ_PREEMPTED /* preemption in progress */ +}; + + +struct bfqg_stats { +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + /* number of ios merged */ + struct blkg_rwstat merged; + /* total time spent on device in ns, may not be accurate w/ queueing */ + struct blkg_rwstat service_time; + /* total time spent waiting in scheduler queue in ns */ + struct blkg_rwstat wait_time; + /* number of IOs queued up */ + struct blkg_rwstat queued; + /* total disk time and nr sectors dispatched by this group */ + struct blkg_stat time; + /* sum of number of ios queued across all samples */ + struct blkg_stat avg_queue_size_sum; + /* count of samples taken for average */ + struct blkg_stat avg_queue_size_samples; + /* how many times this group has been removed from service tree */ + struct blkg_stat dequeue; + /* total time spent waiting for it to be assigned a timeslice. */ + struct blkg_stat group_wait_time; + /* time spent idling for this blkcg_gq */ + struct blkg_stat idle_time; + /* total time with empty current active q with other requests queued */ + struct blkg_stat empty_time; + /* fields after this shouldn't be cleared on stat reset */ + uint64_t start_group_wait_time; + uint64_t start_idle_time; + uint64_t start_empty_time; + uint16_t flags; +#endif /* BFQ_GROUP_IOSCHED_ENABLED && CONFIG_DEBUG_BLK_CGROUP */ +}; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +/* + * struct bfq_group_data - per-blkcg storage for the blkio subsystem. + * + * @ps: @blkcg_policy_storage that this structure inherits + * @weight: weight of the bfq_group + */ +struct bfq_group_data { + /* must be the first member */ + struct blkcg_policy_data pd; + + unsigned int weight; +}; + +/** + * struct bfq_group - per (device, cgroup) data structure. + * @entity: schedulable entity to insert into the parent group sched_data. + * @sched_data: own sched_data, to contain child entities (they may be + * both bfq_queues and bfq_groups). + * @bfqd: the bfq_data for the device this group acts upon. + * @async_bfqq: array of async queues for all the tasks belonging to + * the group, one queue per ioprio value per ioprio_class, + * except for the idle class that has only one queue. + * @async_idle_bfqq: async queue for the idle class (ioprio is ignored). + * @my_entity: pointer to @entity, %NULL for the toplevel group; used + * to avoid too many special cases during group creation/ + * migration. + * @active_entities: number of active entities belonging to the group; + * unused for the root group. Used to know whether there + * are groups with more than one active @bfq_entity + * (see the comments to the function + * bfq_bfqq_may_idle()). + * @rq_pos_tree: rbtree sorted by next_request position, used when + * determining if two or more queues have interleaving + * requests (see bfq_find_close_cooperator()). 
+ * + * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup + * there is a set of bfq_groups, each one collecting the lower-level + * entities belonging to the group that are acting on the same device. + * + * Locking works as follows: + * o @bfqd is protected by the queue lock, RCU is used to access it + * from the readers. + * o All the other fields are protected by the @bfqd queue lock. + */ +struct bfq_group { + /* must be the first member */ + struct blkg_policy_data pd; + + /* cached path for this blkg (see comments in bfq_bic_update_cgroup) */ + char blkg_path[128]; + + /* reference counter (see comments in bfq_bic_update_cgroup) */ + int ref; + + struct bfq_entity entity; + struct bfq_sched_data sched_data; + + void *bfqd; + + struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR]; + struct bfq_queue *async_idle_bfqq; + + struct bfq_entity *my_entity; + + int active_entities; + + struct rb_root rq_pos_tree; + + struct bfqg_stats stats; +}; + +#else +struct bfq_group { + struct bfq_sched_data sched_data; + + struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR]; + struct bfq_queue *async_idle_bfqq; + + struct rb_root rq_pos_tree; +}; +#endif + +static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity); + +static unsigned int bfq_class_idx(struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + return bfqq ? bfqq->ioprio_class - 1 : + BFQ_DEFAULT_GRP_CLASS - 1; +} + +static struct bfq_service_tree * +bfq_entity_service_tree(struct bfq_entity *entity) +{ + struct bfq_sched_data *sched_data = entity->sched_data; + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + unsigned int idx = bfq_class_idx(entity); + + BUG_ON(idx >= BFQ_IOPRIO_CLASSES); + BUG_ON(sched_data == NULL); + + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, + "entity_service_tree %p %d", + sched_data->service_tree + idx, idx); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "entity_service_tree %p %d", + sched_data->service_tree + idx, idx); + } +#endif + return sched_data->service_tree + idx; +} + +static struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync) +{ + return bic->bfqq[is_sync]; +} + +static void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, + bool is_sync) +{ + bic->bfqq[is_sync] = bfqq; +} + +static struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic) +{ + return bic->icq.q->elevator->elevator_data; +} + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + +static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq) +{ + struct bfq_entity *group_entity = bfqq->entity.parent; + + if (!group_entity) + group_entity = &bfqq->bfqd->root_group->entity; + + return container_of(group_entity, struct bfq_group, entity); +} + +#else + +static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq) +{ + return bfqq->bfqd->root_group; +} + +#endif + +static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio); +static void bfq_put_queue(struct bfq_queue *bfqq); +static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, + struct bio *bio, bool is_sync, + struct bfq_io_cq *bic); +static void bfq_end_wr_async_queues(struct bfq_data *bfqd, + struct bfq_group *bfqg); +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg); +#endif +static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq); + +#endif /* _BFQ_H */ diff --git 
b/block/bfq-sched.c b/block/bfq-sched.c
new file mode 100644
index 0000000..616c069
--- /dev/null
+++ b/block/bfq-sched.c
@@ -0,0 +1,2059 @@
+/*
+ * BFQ: Hierarchical B-WF2Q+ scheduler.
+ *
+ * Based on ideas and code from CFQ:
+ * Copyright (C) 2003 Jens Axboe
+ *
+ * Copyright (C) 2008 Fabio Checconi
+ *		      Paolo Valente
+ *
+ * Copyright (C) 2015 Paolo Valente
+ *
+ * Copyright (C) 2016 Paolo Valente
+ */
+
+static struct bfq_group *bfqq_group(struct bfq_queue *bfqq);
+
+/**
+ * bfq_gt - compare two timestamps.
+ * @a: first ts.
+ * @b: second ts.
+ *
+ * Return @a > @b, dealing with wrapping correctly.
+ */
+static int bfq_gt(u64 a, u64 b)
+{
+	return (s64)(a - b) > 0;
+}
+
+static struct bfq_entity *bfq_root_active_entity(struct rb_root *tree)
+{
+	struct rb_node *node = tree->rb_node;
+
+	return rb_entry(node, struct bfq_entity, rb_node);
+}
+
+static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd,
+						 bool expiration);
+
+static bool bfq_update_parent_budget(struct bfq_entity *next_in_service);
+
+/**
+ * bfq_update_next_in_service - update sd->next_in_service
+ * @sd: sched_data for which to perform the update.
+ * @new_entity: if not NULL, pointer to the entity whose activation,
+ *		requeueing or repositioning triggered the invocation of
+ *		this function.
+ * @expiration: if true, this function is being invoked after the
+ *		expiration of the in-service entity
+ *
+ * This function is called to update sd->next_in_service, which, in
+ * its turn, may change as a consequence of the insertion or
+ * extraction of an entity into/from one of the active trees of
+ * sd. These insertions/extractions occur as a consequence of
+ * activations/deactivations of entities, with some activations being
+ * 'true' activations, and other activations being requeueings (i.e.,
+ * implementing the second, requeueing phase of the mechanism used to
+ * reposition an entity in its active tree; see comments on
+ * __bfq_activate_entity and __bfq_requeue_entity for details). In
+ * both the last two activation sub-cases, new_entity points to the
+ * just activated or requeued entity.
+ *
+ * Returns true if sd->next_in_service changes in such a way that
+ * entity->parent may become the next_in_service for its parent
+ * entity.
+ */
+static bool bfq_update_next_in_service(struct bfq_sched_data *sd,
+				       struct bfq_entity *new_entity,
+				       bool expiration)
+{
+	struct bfq_entity *next_in_service = sd->next_in_service;
+	struct bfq_queue *bfqq;
+	bool parent_sched_may_change = false;
+	bool change_without_lookup = false;
+
+	/*
+	 * If this update is triggered by the activation, requeueing
+	 * or repositioning of an entity that does not coincide with
+	 * sd->next_in_service, then a full lookup in the active tree
+	 * can be avoided. In fact, it is enough to check whether the
+	 * just-modified entity has the same priority as
+	 * sd->next_in_service, is eligible and has a lower virtual
+	 * finish time than sd->next_in_service. If this compound
+	 * condition holds, then the new entity becomes the new
+	 * next_in_service. Otherwise no change is needed.
+	 */
+	if (new_entity && new_entity != sd->next_in_service) {
+		/*
+		 * Flag used to decide whether to replace
+		 * sd->next_in_service with new_entity. Tentatively
+		 * set to true, and left as true if
+		 * sd->next_in_service is NULL.
+		 */
+		change_without_lookup = true;
+
+		/*
+		 * If there is already a next_in_service candidate
+		 * entity, then compare timestamps to decide whether
+		 * to replace sd->service_tree with new_entity.
+		 */
+		if (next_in_service) {
+			unsigned int new_entity_class_idx =
+				bfq_class_idx(new_entity);
+			struct bfq_service_tree *st =
+				sd->service_tree + new_entity_class_idx;
+
+			change_without_lookup =
+				(new_entity_class_idx ==
+				 bfq_class_idx(next_in_service)
+				 &&
+				 !bfq_gt(new_entity->start, st->vtime)
+				 &&
+				 bfq_gt(next_in_service->finish,
+					new_entity->finish));
+		}
+
+		if (change_without_lookup) {
+			next_in_service = new_entity;
+			bfqq = bfq_entity_to_bfqq(next_in_service);
+
+			if (bfqq)
+				bfq_log_bfqq(bfqq->bfqd, bfqq,
+					"update_next_in_service: chose without lookup");
+#ifdef BFQ_GROUP_IOSCHED_ENABLED
+			else {
+				struct bfq_group *bfqg =
+					container_of(next_in_service,
+						     struct bfq_group, entity);
+
+				bfq_log_bfqg((struct bfq_data*)bfqg->bfqd, bfqg,
+					"update_next_in_service: chose without lookup");
+			}
+#endif
+		}
+	}
+
+	if (!change_without_lookup) /* lookup needed */
+		next_in_service = bfq_lookup_next_entity(sd, expiration);
+
+	if (next_in_service)
+		parent_sched_may_change = !sd->next_in_service ||
+			bfq_update_parent_budget(next_in_service);
+
+	sd->next_in_service = next_in_service;
+
+	if (!next_in_service)
+		return parent_sched_may_change;
+
+	bfqq = bfq_entity_to_bfqq(next_in_service);
+	if (bfqq)
+		bfq_log_bfqq(bfqq->bfqd, bfqq,
+			     "update_next_in_service: chosen this queue");
+#ifdef BFQ_GROUP_IOSCHED_ENABLED
+	else {
+		struct bfq_group *bfqg =
+			container_of(next_in_service,
+				     struct bfq_group, entity);
+
+		bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg,
+			     "update_next_in_service: chosen this entity");
+	}
+#endif
+	return parent_sched_may_change;
+}
+
+#ifdef BFQ_GROUP_IOSCHED_ENABLED
+/* both next loops stop at one of the child entities of the root group */
+#define for_each_entity(entity) \
+	for (; entity ; entity = entity->parent)
+
+/*
+ * For each iteration, compute parent in advance, so as to be safe if
+ * entity is deallocated during the iteration. Such a deallocation may
+ * happen as a consequence of a bfq_put_queue that frees the bfq_queue
+ * containing entity.
+ */
+#define for_each_entity_safe(entity, parent) \
+	for (; entity && ({ parent = entity->parent; 1; }); entity = parent)
+
+/*
+ * Returns true if this budget changes may let next_in_service->parent
+ * become the next_in_service entity for its parent entity.
+ */
+static bool bfq_update_parent_budget(struct bfq_entity *next_in_service)
+{
+	struct bfq_entity *bfqg_entity;
+	struct bfq_group *bfqg;
+	struct bfq_sched_data *group_sd;
+	bool ret = false;
+
+	BUG_ON(!next_in_service);
+
+	group_sd = next_in_service->sched_data;
+
+	bfqg = container_of(group_sd, struct bfq_group, sched_data);
+	/*
+	 * bfq_group's my_entity field is not NULL only if the group
+	 * is not the root group. We must not touch the root entity
+	 * as it must never become an in-service entity.
+	 */
+	bfqg_entity = bfqg->my_entity;
+	if (bfqg_entity) {
+		if (bfqg_entity->budget > next_in_service->budget)
+			ret = true;
+		bfqg_entity->budget = next_in_service->budget;
+	}
+
+	return ret;
+}
+
+/*
+ * This function tells whether entity stops being a candidate for next
+ * service, according to the restrictive definition of the field
+ * next_in_service. In particular, this function is invoked for an
+ * entity that is about to be set in service.
+ *
+ * If entity is a queue, then the entity is no longer a candidate for
+ * next service according to that definition, because entity is
+ * about to become the in-service queue. This function then returns
+ * true if entity is a queue.
+ * + * In contrast, entity could still be a candidate for next service if + * it is not a queue, and has more than one active child. In fact, + * even if one of its children is about to be set in service, other + * active children may still be the next to serve, for the parent + * entity, even according to the above definition. As a consequence, a + * non-queue entity is not a candidate for next-service only if it has + * only one active child. And only if this condition holds, then this + * function returns true for a non-queue entity. + */ +static bool bfq_no_longer_next_in_service(struct bfq_entity *entity) +{ + struct bfq_group *bfqg; + + if (bfq_entity_to_bfqq(entity)) + return true; + + bfqg = container_of(entity, struct bfq_group, entity); + + BUG_ON(bfqg == ((struct bfq_data *)(bfqg->bfqd))->root_group); + BUG_ON(bfqg->active_entities == 0); + /* + * The field active_entities does not always contain the + * actual number of active children entities: it happens to + * not account for the in-service entity in case the latter is + * removed from its active tree (which may get done after + * invoking the function bfq_no_longer_next_in_service in + * bfq_get_next_queue). Fortunately, here, i.e., while + * bfq_no_longer_next_in_service is not yet completed in + * bfq_get_next_queue, bfq_active_extract has not yet been + * invoked, and thus active_entities still coincides with the + * actual number of active entities. + */ + if (bfqg->active_entities == 1) + return true; + + return false; +} + +#else /* BFQ_GROUP_IOSCHED_ENABLED */ +#define for_each_entity(entity) \ + for (; entity ; entity = NULL) + +#define for_each_entity_safe(entity, parent) \ + for (parent = NULL; entity ; entity = parent) + +static bool bfq_update_parent_budget(struct bfq_entity *next_in_service) +{ + return false; +} + +static bool bfq_no_longer_next_in_service(struct bfq_entity *entity) +{ + return true; +} + +#endif /* BFQ_GROUP_IOSCHED_ENABLED */ + +/* + * Shift for timestamp calculations. This actually limits the maximum + * service allowed in one timestamp delta (small shift values increase it), + * the maximum total weight that can be used for the queues in the system + * (big shift values increase it), and the period of virtual time + * wraparounds. + */ +#define WFQ_SERVICE_SHIFT 22 + +static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = NULL; + + BUG_ON(!entity); + + if (!entity->my_sched_data) + bfqq = container_of(entity, struct bfq_queue, entity); + + return bfqq; +} + + +/** + * bfq_delta - map service into the virtual time domain. + * @service: amount of service. + * @weight: scale factor (weight of an entity or weight sum). + */ +static u64 bfq_delta(unsigned long service, unsigned long weight) +{ + u64 d = (u64)service << WFQ_SERVICE_SHIFT; + + do_div(d, weight); + return d; +} + +/** + * bfq_calc_finish - assign the finish time to an entity. + * @entity: the entity to act upon. + * @service: the service to be charged to the entity. 
+ */ +static void bfq_calc_finish(struct bfq_entity *entity, unsigned long service) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + unsigned long long start, finish, delta; + + BUG_ON(entity->weight == 0); + + entity->finish = entity->start + + bfq_delta(service, entity->weight); + + start = ((entity->start>>10)*1000)>>12; + finish = ((entity->finish>>10)*1000)>>12; + delta = ((bfq_delta(service, entity->weight)>>10)*1000)>>12; + + if (bfqq) { + bfq_log_bfqq(bfqq->bfqd, bfqq, + "calc_finish: serv %lu, w %d", + service, entity->weight); + bfq_log_bfqq(bfqq->bfqd, bfqq, + "calc_finish: start %llu, finish %llu, delta %llu", + start, finish, delta); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + } else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "calc_finish group: serv %lu, w %d", + service, entity->weight); + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "calc_finish group: start %llu, finish %llu, delta %llu", + start, finish, delta); +#endif + } +} + +/** + * bfq_entity_of - get an entity from a node. + * @node: the node field of the entity. + * + * Convert a node pointer to the relative entity. This is used only + * to simplify the logic of some functions and not as the generic + * conversion mechanism because, e.g., in the tree walking functions, + * the check for a %NULL value would be redundant. + */ +static struct bfq_entity *bfq_entity_of(struct rb_node *node) +{ + struct bfq_entity *entity = NULL; + + if (node) + entity = rb_entry(node, struct bfq_entity, rb_node); + + return entity; +} + +/** + * bfq_extract - remove an entity from a tree. + * @root: the tree root. + * @entity: the entity to remove. + */ +static void bfq_extract(struct rb_root *root, struct bfq_entity *entity) +{ + BUG_ON(entity->tree != root); + + entity->tree = NULL; + rb_erase(&entity->rb_node, root); +} + +/** + * bfq_idle_extract - extract an entity from the idle tree. + * @st: the service tree of the owning @entity. + * @entity: the entity being removed. + */ +static void bfq_idle_extract(struct bfq_service_tree *st, + struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + struct rb_node *next; + + BUG_ON(entity->tree != &st->idle); + + if (entity == st->first_idle) { + next = rb_next(&entity->rb_node); + st->first_idle = bfq_entity_of(next); + } + + if (entity == st->last_idle) { + next = rb_prev(&entity->rb_node); + st->last_idle = bfq_entity_of(next); + } + + bfq_extract(&st->idle, entity); + + if (bfqq) + list_del(&bfqq->bfqq_list); +} + +/** + * bfq_insert - generic tree insertion. + * @root: tree root. + * @entity: entity to insert. + * + * This is used for the idle and the active tree, since they are both + * ordered by finish time. + */ +static void bfq_insert(struct rb_root *root, struct bfq_entity *entity) +{ + struct bfq_entity *entry; + struct rb_node **node = &root->rb_node; + struct rb_node *parent = NULL; + + BUG_ON(entity->tree); + + while (*node) { + parent = *node; + entry = rb_entry(parent, struct bfq_entity, rb_node); + + if (bfq_gt(entry->finish, entity->finish)) + node = &parent->rb_left; + else + node = &parent->rb_right; + } + + rb_link_node(&entity->rb_node, parent, node); + rb_insert_color(&entity->rb_node, root); + + entity->tree = root; +} + +/** + * bfq_update_min - update the min_start field of a entity. + * @entity: the entity to update. + * @node: one of its children. 
+ * + * This function is called when @entity may store an invalid value for + * min_start due to updates to the active tree. The function assumes + * that the subtree rooted at @node (which may be its left or its right + * child) has a valid min_start value. + */ +static void bfq_update_min(struct bfq_entity *entity, struct rb_node *node) +{ + struct bfq_entity *child; + + if (node) { + child = rb_entry(node, struct bfq_entity, rb_node); + if (bfq_gt(entity->min_start, child->min_start)) + entity->min_start = child->min_start; + } +} + +/** + * bfq_update_active_node - recalculate min_start. + * @node: the node to update. + * + * @node may have changed position or one of its children may have moved, + * this function updates its min_start value. The left and right subtrees + * are assumed to hold a correct min_start value. + */ +static void bfq_update_active_node(struct rb_node *node) +{ + struct bfq_entity *entity = rb_entry(node, struct bfq_entity, rb_node); + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + entity->min_start = entity->start; + bfq_update_min(entity, node->rb_right); + bfq_update_min(entity, node->rb_left); + + if (bfqq) { + bfq_log_bfqq(bfqq->bfqd, bfqq, + "update_active_node: new min_start %llu", + ((entity->min_start>>10)*1000)>>12); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + } else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "update_active_node: new min_start %llu", + ((entity->min_start>>10)*1000)>>12); +#endif + } +} + +/** + * bfq_update_active_tree - update min_start for the whole active tree. + * @node: the starting node. + * + * @node must be the deepest modified node after an update. This function + * updates its min_start using the values held by its children, assuming + * that they did not change, and then updates all the nodes that may have + * changed in the path to the root. The only nodes that may have changed + * are the ones in the path or their siblings. + */ +static void bfq_update_active_tree(struct rb_node *node) +{ + struct rb_node *parent; + +up: + bfq_update_active_node(node); + + parent = rb_parent(node); + if (!parent) + return; + + if (node == parent->rb_left && parent->rb_right) + bfq_update_active_node(parent->rb_right); + else if (parent->rb_left) + bfq_update_active_node(parent->rb_left); + + node = parent; + goto up; +} + +static void bfq_weights_tree_add(struct bfq_data *bfqd, + struct bfq_entity *entity, + struct rb_root *root); + +static void bfq_weights_tree_remove(struct bfq_data *bfqd, + struct bfq_entity *entity, + struct rb_root *root); + + +/** + * bfq_active_insert - insert an entity in the active tree of its + * group/device. + * @st: the service tree of the entity. + * @entity: the entity being inserted. + * + * The active tree is ordered by finish time, but an extra key is kept + * per each node, containing the minimum value for the start times of + * its children (and the node itself), so it's possible to search for + * the eligible node with the lowest finish time in logarithmic time. 
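+ *
+ * As a sketch of how this extra key is then used: the lookup in
+ * bfq_first_active_entity() descends into a subtree only if that
+ * subtree's min_start does not exceed the current virtual time,
+ * i.e., only if the subtree is guaranteed to contain at least one
+ * eligible entity. The min_start key thus allows whole subtrees to
+ * be pruned in O(1), which keeps the overall lookup logarithmic.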
+ */ +static void bfq_active_insert(struct bfq_service_tree *st, + struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + struct rb_node *node = &entity->rb_node; +#ifdef BFQ_GROUP_IOSCHED_ENABLED + struct bfq_sched_data *sd = NULL; + struct bfq_group *bfqg = NULL; + struct bfq_data *bfqd = NULL; +#endif + + bfq_insert(&st->active, entity); + + if (node->rb_left) + node = node->rb_left; + else if (node->rb_right) + node = node->rb_right; + + bfq_update_active_tree(node); + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + sd = entity->sched_data; + bfqg = container_of(sd, struct bfq_group, sched_data); + BUG_ON(!bfqg); + bfqd = (struct bfq_data *)bfqg->bfqd; +#endif + if (bfqq) + list_add(&bfqq->bfqq_list, &bfqq->bfqd->active_list); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { /* bfq_group */ + BUG_ON(!bfqd); + bfq_weights_tree_add(bfqd, entity, &bfqd->group_weights_tree); + } + if (bfqg != bfqd->root_group) { + BUG_ON(!bfqg); + BUG_ON(!bfqd); + bfqg->active_entities++; + } +#endif +} + +/** + * bfq_ioprio_to_weight - calc a weight from an ioprio. + * @ioprio: the ioprio value to convert. + */ +static unsigned short bfq_ioprio_to_weight(int ioprio) +{ + BUG_ON(ioprio < 0 || ioprio >= IOPRIO_BE_NR); + return (IOPRIO_BE_NR - ioprio) * BFQ_WEIGHT_CONVERSION_COEFF; +} + +/** + * bfq_weight_to_ioprio - calc an ioprio from a weight. + * @weight: the weight value to convert. + * + * To preserve as much as possible the old only-ioprio user interface, + * 0 is used as an escape ioprio value for weights (numerically) equal or + * larger than IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF. + */ +static unsigned short bfq_weight_to_ioprio(int weight) +{ + BUG_ON(weight < BFQ_MIN_WEIGHT || weight > BFQ_MAX_WEIGHT); + return IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight < 0 ? + 0 : IOPRIO_BE_NR * BFQ_WEIGHT_CONVERSION_COEFF - weight; +} + +static void bfq_get_entity(struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + if (bfqq) { + bfqq->ref++; + bfq_log_bfqq(bfqq->bfqd, bfqq, "get_entity: %p %d", + bfqq, bfqq->ref); + } +} + +/** + * bfq_find_deepest - find the deepest node that an extraction can modify. + * @node: the node being removed. + * + * Do the first step of an extraction in an rb tree, looking for the + * node that will replace @node, and returning the deepest node that + * the following modifications to the tree can touch. If @node is the + * last node in the tree return %NULL. + */ +static struct rb_node *bfq_find_deepest(struct rb_node *node) +{ + struct rb_node *deepest; + + if (!node->rb_right && !node->rb_left) + deepest = rb_parent(node); + else if (!node->rb_right) + deepest = node->rb_left; + else if (!node->rb_left) + deepest = node->rb_right; + else { + deepest = rb_next(node); + if (deepest->rb_right) + deepest = deepest->rb_right; + else if (rb_parent(deepest) != node) + deepest = rb_parent(deepest); + } + + return deepest; +} + +/** + * bfq_active_extract - remove an entity from the active tree. + * @st: the service_tree containing the tree. + * @entity: the entity being removed. 
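+ *
+ * The entity is removed from the active tree of @st, and then the
+ * min_start augmentation is restored starting from the deepest node
+ * that the extraction may have modified (see bfq_find_deepest()).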
+ */ +static void bfq_active_extract(struct bfq_service_tree *st, + struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + struct rb_node *node; +#ifdef BFQ_GROUP_IOSCHED_ENABLED + struct bfq_sched_data *sd = NULL; + struct bfq_group *bfqg = NULL; + struct bfq_data *bfqd = NULL; +#endif + + node = bfq_find_deepest(&entity->rb_node); + bfq_extract(&st->active, entity); + + if (node) + bfq_update_active_tree(node); + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + sd = entity->sched_data; + bfqg = container_of(sd, struct bfq_group, sched_data); + BUG_ON(!bfqg); + bfqd = (struct bfq_data *)bfqg->bfqd; +#endif + if (bfqq) + list_del(&bfqq->bfqq_list); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { /* bfq_group */ + BUG_ON(!bfqd); + bfq_weights_tree_remove(bfqd, entity, + &bfqd->group_weights_tree); + } + if (bfqg != bfqd->root_group) { + BUG_ON(!bfqg); + BUG_ON(!bfqd); + BUG_ON(!bfqg->active_entities); + bfqg->active_entities--; + } +#endif +} + +/** + * bfq_idle_insert - insert an entity into the idle tree. + * @st: the service tree containing the tree. + * @entity: the entity to insert. + */ +static void bfq_idle_insert(struct bfq_service_tree *st, + struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + struct bfq_entity *first_idle = st->first_idle; + struct bfq_entity *last_idle = st->last_idle; + + if (!first_idle || bfq_gt(first_idle->finish, entity->finish)) + st->first_idle = entity; + if (!last_idle || bfq_gt(entity->finish, last_idle->finish)) + st->last_idle = entity; + + bfq_insert(&st->idle, entity); + + if (bfqq) + list_add(&bfqq->bfqq_list, &bfqq->bfqd->idle_list); +} + +/** + * bfq_forget_entity - do not consider entity any longer for scheduling + * @st: the service tree. + * @entity: the entity being removed. + * @is_in_service: true if entity is currently the in-service entity. + * + * Forget everything about @entity. In addition, if entity represents + * a queue, and the latter is not in service, then release the service + * reference to the queue (the one taken through bfq_get_entity). In + * fact, in this case, there is really no more service reference to + * the queue, as the latter is also outside any service tree. If, + * instead, the queue is in service, then __bfq_bfqd_reset_in_service + * will take care of putting the reference when the queue finally + * stops being served. + */ +static void bfq_forget_entity(struct bfq_service_tree *st, + struct bfq_entity *entity, + bool is_in_service) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + BUG_ON(!entity->on_st); + + entity->on_st = false; + st->wsum -= entity->weight; + if (bfqq && !is_in_service) { + bfq_log_bfqq(bfqq->bfqd, bfqq, "forget_entity (before): %p %d", + bfqq, bfqq->ref); + bfq_put_queue(bfqq); + } +} + +/** + * bfq_put_idle_entity - release the idle tree ref of an entity. + * @st: service tree for the entity. + * @entity: the entity being released. + */ +static void bfq_put_idle_entity(struct bfq_service_tree *st, + struct bfq_entity *entity) +{ + bfq_idle_extract(st, entity); + bfq_forget_entity(st, entity, + entity == entity->sched_data->in_service_entity); +} + +/** + * bfq_forget_idle - update the idle tree if necessary. + * @st: the service tree to act upon. + * + * To preserve the global O(log N) complexity we only remove one entry here; + * as the idle tree will not grow indefinitely this can be done safely. 
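+ *
+ * Concretely, at most one entity is released per invocation: the one
+ * with the smallest finish time in the idle tree (st->first_idle),
+ * and only if its finish time does not exceed the current virtual
+ * time, i.e., only if keeping it on the idle tree can no longer
+ * influence future timestamp computations.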
+ */ +static void bfq_forget_idle(struct bfq_service_tree *st) +{ + struct bfq_entity *first_idle = st->first_idle; + struct bfq_entity *last_idle = st->last_idle; + + if (RB_EMPTY_ROOT(&st->active) && last_idle && + !bfq_gt(last_idle->finish, st->vtime)) { + /* + * Forget the whole idle tree, increasing the vtime past + * the last finish time of idle entities. + */ + st->vtime = last_idle->finish; + } + + if (first_idle && !bfq_gt(first_idle->finish, st->vtime)) + bfq_put_idle_entity(st, first_idle); +} + +/* + * Update weight and priority of entity. If update_class_too is true, + * then update the ioprio_class of entity too. + * + * The reason why the update of ioprio_class is controlled through the + * last parameter is as follows. Changing the ioprio class of an + * entity implies changing the destination service trees for that + * entity. If such a change occurred when the entity is already on one + * of the service trees for its previous class, then the state of the + * entity would become more complex: none of the new possible service + * trees for the entity, according to bfq_entity_service_tree(), would + * match any of the possible service trees on which the entity + * is. Complex operations involving these trees, such as entity + * activations and deactivations, should take into account this + * additional complexity. To avoid this issue, this function is + * invoked with update_class_too unset in the points in the code where + * entity may happen to be on some tree. + */ +static struct bfq_service_tree * +__bfq_entity_update_weight_prio(struct bfq_service_tree *old_st, + struct bfq_entity *entity, + bool update_class_too) +{ + struct bfq_service_tree *new_st = old_st; + + if (entity->prio_changed) { + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + unsigned int prev_weight, new_weight; + struct bfq_data *bfqd = NULL; + struct rb_root *root; +#ifdef BFQ_GROUP_IOSCHED_ENABLED + struct bfq_sched_data *sd; + struct bfq_group *bfqg; +#endif + + if (bfqq) + bfqd = bfqq->bfqd; +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + sd = entity->my_sched_data; + bfqg = container_of(sd, struct bfq_group, sched_data); + BUG_ON(!bfqg); + bfqd = (struct bfq_data *)bfqg->bfqd; + BUG_ON(!bfqd); + } +#endif + + BUG_ON(entity->tree && update_class_too); + BUG_ON(old_st->wsum < entity->weight); + old_st->wsum -= entity->weight; + + if (entity->new_weight != entity->orig_weight) { + if (entity->new_weight < BFQ_MIN_WEIGHT || + entity->new_weight > BFQ_MAX_WEIGHT) { + pr_crit("update_weight_prio: new_weight %d\n", + entity->new_weight); + if (entity->new_weight < BFQ_MIN_WEIGHT) + entity->new_weight = BFQ_MIN_WEIGHT; + else + entity->new_weight = BFQ_MAX_WEIGHT; + } + entity->orig_weight = entity->new_weight; + if (bfqq) + bfqq->ioprio = + bfq_weight_to_ioprio(entity->orig_weight); + } + + if (bfqq && update_class_too) + bfqq->ioprio_class = bfqq->new_ioprio_class; + + /* + * Reset prio_changed only if the ioprio_class change + * is not pending any longer. + */ + if (!bfqq || bfqq->ioprio_class == bfqq->new_ioprio_class) + entity->prio_changed = 0; + + /* + * NOTE: here we may be changing the weight too early, + * this will cause unfairness. The correct approach + * would have required additional complexity to defer + * weight changes to the proper time instants (i.e., + * when entity->finish <= old_st->vtime). + */ + new_st = bfq_entity_service_tree(entity); + + prev_weight = entity->weight; + new_weight = entity->orig_weight * + (bfqq ? 
bfqq->wr_coeff : 1); + /* + * If the weight of the entity changes, remove the entity + * from its old weight counter (if there is a counter + * associated with the entity), and add it to the counter + * associated with its new weight. + */ + if (prev_weight != new_weight) { + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, + "weight changed %d %d(%d %d)", + prev_weight, new_weight, + entity->orig_weight, + bfqq->wr_coeff); + + root = bfqq ? &bfqd->queue_weights_tree : + &bfqd->group_weights_tree; + bfq_weights_tree_remove(bfqd, entity, root); + } + entity->weight = new_weight; + /* + * Add the entity to its weights tree only if it is + * not associated with a weight-raised queue. + */ + if (prev_weight != new_weight && + (bfqq ? bfqq->wr_coeff == 1 : 1)) + /* If we get here, root has been initialized. */ + bfq_weights_tree_add(bfqd, entity, root); + + new_st->wsum += entity->weight; + + if (new_st != old_st) { + BUG_ON(!update_class_too); + entity->start = new_st->vtime; + } + } + + return new_st; +} + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static void bfqg_stats_set_start_empty_time(struct bfq_group *bfqg); +#endif + +/** + * bfq_bfqq_served - update the scheduler status after selection for + * service. + * @bfqq: the queue being served. + * @served: bytes to transfer. + * + * NOTE: this can be optimized, as the timestamps of upper level entities + * are synchronized every time a new bfqq is selected for service. By now, + * we keep it to better check consistency. + */ +static void bfq_bfqq_served(struct bfq_queue *bfqq, int served) +{ + struct bfq_entity *entity = &bfqq->entity; + struct bfq_service_tree *st; + + for_each_entity(entity) { + st = bfq_entity_service_tree(entity); + + entity->service += served; + + BUG_ON(st->wsum == 0); + + st->vtime += bfq_delta(served, st->wsum); + bfq_forget_idle(st); + } +#ifndef BFQ_MQ +#ifdef BFQ_GROUP_IOSCHED_ENABLED + bfqg_stats_set_start_empty_time(bfqq_group(bfqq)); +#endif +#endif + st = bfq_entity_service_tree(&bfqq->entity); + bfq_log_bfqq(bfqq->bfqd, bfqq, "bfqq_served %d secs, vtime %llu on %p", + served, ((st->vtime>>10)*1000)>>12, st); +} + +/** + * bfq_bfqq_charge_time - charge an amount of service equivalent to the length + * of the time interval during which bfqq has been in + * service. + * @bfqd: the device + * @bfqq: the queue that needs a service update. + * @time_ms: the amount of time during which the queue has received service + * + * If a queue does not consume its budget fast enough, then providing + * the queue with service fairness may impair throughput, more or less + * severely. For this reason, queues that consume their budget slowly + * are provided with time fairness instead of service fairness. This + * goal is achieved through the BFQ scheduling engine, even if such an + * engine works in the service, and not in the time domain. The trick + * is charging these queues with an inflated amount of service, equal + * to the amount of service that they would have received during their + * service slot if they had been fast, i.e., if their requests had + * been dispatched at a rate equal to the estimated peak rate. + * + * It is worth noting that time fairness can cause important + * distortions in terms of bandwidth distribution, on devices with + * internal queueing. The reason is that I/O requests dispatched + * during the service slot of a queue may be served after that service + * slot is finished, and may have a total processing time loosely + * correlated with the duration of the service slot. 
This is + * especially true for short service slots. + */ +static void bfq_bfqq_charge_time(struct bfq_data *bfqd, struct bfq_queue *bfqq, + unsigned long time_ms) +{ + struct bfq_entity *entity = &bfqq->entity; + int tot_serv_to_charge = entity->service; + unsigned int timeout_ms = jiffies_to_msecs(bfq_timeout); + + if (time_ms > 0 && time_ms < timeout_ms) + tot_serv_to_charge = + (bfqd->bfq_max_budget * time_ms) / timeout_ms; + + if (tot_serv_to_charge < entity->service) + tot_serv_to_charge = entity->service; + + bfq_log_bfqq(bfqq->bfqd, bfqq, + "charge_time: %lu/%u ms, %d/%d/%d sectors", + time_ms, timeout_ms, entity->service, + tot_serv_to_charge, entity->budget); + + /* Increase budget to avoid inconsistencies */ + if (tot_serv_to_charge > entity->budget) + entity->budget = tot_serv_to_charge; + + bfq_bfqq_served(bfqq, + max_t(int, 0, tot_serv_to_charge - entity->service)); +} + +static void bfq_update_fin_time_enqueue(struct bfq_entity *entity, + struct bfq_service_tree *st, + bool backshifted) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + struct bfq_sched_data *sd = entity->sched_data; + + /* + * When this function is invoked, entity is not in any service + * tree, then it is safe to invoke next function with the last + * parameter set (see the comments on the function). + */ + BUG_ON(entity->tree); + st = __bfq_entity_update_weight_prio(st, entity, true); + bfq_calc_finish(entity, entity->budget); + + /* + * If some queues enjoy backshifting for a while, then their + * (virtual) finish timestamps may happen to become lower and + * lower than the system virtual time. In particular, if + * these queues often happen to be idle for short time + * periods, and during such time periods other queues with + * higher timestamps happen to be busy, then the backshifted + * timestamps of the former queues can become much lower than + * the system virtual time. In fact, to serve the queues with + * higher timestamps while the ones with lower timestamps are + * idle, the system virtual time may be pushed-up to much + * higher values than the finish timestamps of the idle + * queues. As a consequence, the finish timestamps of all new + * or newly activated queues may end up being much larger than + * those of lucky queues with backshifted timestamps. The + * latter queues may then monopolize the device for a lot of + * time. This would simply break service guarantees. + * + * To reduce this problem, push up a little bit the + * backshifted timestamps of the queue associated with this + * entity (only a queue can happen to have the backshifted + * flag set): just enough to let the finish timestamp of the + * queue be equal to the current value of the system virtual + * time. This may introduce a little unfairness among queues + * with backshifted timestamps, but it does not break + * worst-case fairness guarantees. + * + * As a special case, if bfqq is weight-raised, push up + * timestamps much less, to keep very low the probability that + * this push up causes the backshifted finish timestamps of + * weight-raised queues to become higher than the backshifted + * finish timestamps of non weight-raised queues. 
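+ *
+ * In terms of the code below, the push up amounts to adding
+ * delta = st->vtime - entity->finish to both timestamps, with
+ * delta further divided by wr_coeff for a weight-raised queue.
+ * Purely as an illustration, a queue with wr_coeff equal to 10
+ * has its timestamps pushed up by only one tenth of the gap
+ * between its backshifted finish time and the system virtual
+ * time.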
+ */ + if (backshifted && bfq_gt(st->vtime, entity->finish)) { + unsigned long delta = st->vtime - entity->finish; + + if (bfqq) + delta /= bfqq->wr_coeff; + + entity->start += delta; + entity->finish += delta; + + if (bfqq) { + bfq_log_bfqq(bfqq->bfqd, bfqq, + "update_fin_time_enqueue: new queue finish %llu", + ((entity->finish>>10)*1000)>>12); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + } else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "update_fin_time_enqueue: new group finish %llu", + ((entity->finish>>10)*1000)>>12); +#endif + } + } + + bfq_active_insert(st, entity); + + if (bfqq) { + bfq_log_bfqq(bfqq->bfqd, bfqq, + "update_fin_time_enqueue: queue %seligible in st %p", + entity->start <= st->vtime ? "" : "non ", st); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + } else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "update_fin_time_enqueue: group %seligible in st %p", + entity->start <= st->vtime ? "" : "non ", st); +#endif + } + BUG_ON(RB_EMPTY_ROOT(&st->active)); + BUG_ON(&st->active != &sd->service_tree->active && + &st->active != &(sd->service_tree+1)->active && + &st->active != &(sd->service_tree+2)->active); +} + +/** + * __bfq_activate_entity - handle activation of entity. + * @entity: the entity being activated. + * @non_blocking_wait_rq: true if entity was waiting for a request + * + * Called for a 'true' activation, i.e., if entity is not active and + * one of its children receives a new request. + * + * Basically, this function updates the timestamps of entity and + * inserts entity into its active tree, ater possibly extracting it + * from its idle tree. + */ +static void __bfq_activate_entity(struct bfq_entity *entity, + bool non_blocking_wait_rq) +{ + struct bfq_sched_data *sd = entity->sched_data; + struct bfq_service_tree *st = bfq_entity_service_tree(entity); + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + bool backshifted = false; + unsigned long long min_vstart; + + BUG_ON(!sd); + BUG_ON(!st); + + /* See comments on bfq_fqq_update_budg_for_activation */ + if (non_blocking_wait_rq && bfq_gt(st->vtime, entity->finish)) { + backshifted = true; + min_vstart = entity->finish; + } else + min_vstart = st->vtime; + + if (entity->tree == &st->idle) { + /* + * Must be on the idle tree, bfq_idle_extract() will + * check for that. + */ + bfq_idle_extract(st, entity); + BUG_ON(entity->tree); + entity->start = bfq_gt(min_vstart, entity->finish) ? + min_vstart : entity->finish; + } else { + BUG_ON(entity->tree); + /* + * The finish time of the entity may be invalid, and + * it is in the past for sure, otherwise the queue + * would have been on the idle tree. + */ + entity->start = min_vstart; + st->wsum += entity->weight; + /* + * entity is about to be inserted into a service tree, + * and then set in service: get a reference to make + * sure entity does not disappear until it is no + * longer in service or scheduled for service. 
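+ *
+ * The reference is taken through bfq_get_entity() right below,
+ * and is released either by bfq_forget_entity(), once the
+ * entity is outside any service tree and not in service, or by
+ * __bfq_bfqd_reset_in_service(), when the queue the entity
+ * represents finally stops being served.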
+ */ + bfq_get_entity(entity); + + BUG_ON(entity->on_st && bfqq); + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + if (entity->on_st && !bfqq) { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, + entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, + bfqg, + "activate bug, class %d in_service %p", + bfq_class_idx(entity), sd->in_service_entity); + } +#endif + BUG_ON(entity->on_st && !bfqq); + entity->on_st = true; + } + + bfq_update_fin_time_enqueue(entity, st, backshifted); +} + +/** + * __bfq_requeue_entity - handle requeueing or repositioning of an entity. + * @entity: the entity being requeued or repositioned. + * + * Requeueing is needed if this entity stops being served, which + * happens if a leaf descendant entity has expired. On the other hand, + * repositioning is needed if the next_inservice_entity for the child + * entity has changed. See the comments inside the function for + * details. + * + * Basically, this function: 1) removes entity from its active tree if + * present there, 2) updates the timestamps of entity and 3) inserts + * entity back into its active tree (in the new, right position for + * the new values of the timestamps). + */ +static void __bfq_requeue_entity(struct bfq_entity *entity) +{ + struct bfq_sched_data *sd = entity->sched_data; + struct bfq_service_tree *st = bfq_entity_service_tree(entity); + + BUG_ON(!sd); + BUG_ON(!st); + + BUG_ON(entity != sd->in_service_entity && + entity->tree != &st->active); + + if (entity == sd->in_service_entity) { + /* + * We are requeueing the current in-service entity, + * which may have to be done for one of the following + * reasons: + * - entity represents the in-service queue, and the + * in-service queue is being requeued after an + * expiration; + * - entity represents a group, and its budget has + * changed because one of its child entities has + * just been either activated or requeued for some + * reason; the timestamps of the entity need then to + * be updated, and the entity needs to be enqueued + * or repositioned accordingly. + * + * In particular, before requeueing, the start time of + * the entity must be moved forward to account for the + * service that the entity has received while in + * service. This is done by the next instructions. The + * finish time will then be updated according to this + * new value of the start time, and to the budget of + * the entity. + */ + bfq_calc_finish(entity, entity->service); + entity->start = entity->finish; + BUG_ON(entity->tree && entity->tree == &st->idle); + BUG_ON(entity->tree && entity->tree != &st->active); + /* + * In addition, if the entity had more than one child + * when set in service, then it was not extracted from + * the active tree. This implies that the position of + * the entity in the active tree may need to be + * changed now, because we have just updated the start + * time of the entity, and we will update its finish + * time in a moment (the requeueing is then, more + * precisely, a repositioning in this case). To + * implement this repositioning, we: 1) dequeue the + * entity here, 2) update the finish time and requeue + * the entity according to the new timestamps below. 
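+ *
+ * The check on entity->tree below reflects exactly this
+ * distinction: the in-service entity may or may not have been
+ * extracted from its active tree when it was set in service
+ * (see bfq_no_longer_next_in_service()), so the extraction is
+ * performed here only if it is actually still needed.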
+ */ + if (entity->tree) + bfq_active_extract(st, entity); + } else { /* The entity is already active, and not in service */ + /* + * In this case, this function gets called only if the + * next_in_service entity below this entity has + * changed, and this change has caused the budget of + * this entity to change, which, finally implies that + * the finish time of this entity must be + * updated. Such an update may cause the scheduling, + * i.e., the position in the active tree, of this + * entity to change. We handle this change by: 1) + * dequeueing the entity here, 2) updating the finish + * time and requeueing the entity according to the new + * timestamps below. This is the same approach as the + * non-extracted-entity sub-case above. + */ + bfq_active_extract(st, entity); + } + + bfq_update_fin_time_enqueue(entity, st, false); +} + +static void __bfq_activate_requeue_entity(struct bfq_entity *entity, + struct bfq_sched_data *sd, + bool non_blocking_wait_rq) +{ + struct bfq_service_tree *st = bfq_entity_service_tree(entity); + + if (sd->in_service_entity == entity || entity->tree == &st->active) + /* + * in service or already queued on the active tree, + * requeue or reposition + */ + __bfq_requeue_entity(entity); + else + /* + * Not in service and not queued on its active tree: + * the activity is idle and this is a true activation. + */ + __bfq_activate_entity(entity, non_blocking_wait_rq); +} + + +/** + * bfq_activate_requeue_entity - activate or requeue an entity representing a bfq_queue, + * and activate, requeue or reposition all ancestors + * for which such an update becomes necessary. + * @entity: the entity to activate. + * @non_blocking_wait_rq: true if this entity was waiting for a request + * @requeue: true if this is a requeue, which implies that bfqq is + * being expired; thus ALL its ancestors stop being served and must + * therefore be requeued + * @expiration: true if this function is being invoked in the expiration path + * of the in-service queue + */ +static void bfq_activate_requeue_entity(struct bfq_entity *entity, + bool non_blocking_wait_rq, + bool requeue, bool expiration) +{ + struct bfq_sched_data *sd; + + for_each_entity(entity) { + BUG_ON(!entity); + sd = entity->sched_data; + __bfq_activate_requeue_entity(entity, sd, non_blocking_wait_rq); + + BUG_ON(RB_EMPTY_ROOT(&sd->service_tree->active) && + RB_EMPTY_ROOT(&(sd->service_tree+1)->active) && + RB_EMPTY_ROOT(&(sd->service_tree+2)->active)); + + if (!bfq_update_next_in_service(sd, entity, expiration) && + !requeue) { + BUG_ON(!sd->next_in_service); + break; + } + BUG_ON(!sd->next_in_service); + } +} + +/** + * __bfq_deactivate_entity - deactivate an entity from its service tree. + * @entity: the entity to deactivate. + * @ins_into_idle_tree: if false, the entity will not be put into the + * idle tree. + * + * Deactivates an entity, independently of its previous state. Must + * be invoked only if entity is on a service tree. Extracts the entity + * from that tree, and if necessary and allowed, puts it into the idle + * tree. 
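+ *
+ * Returns true if the entity has actually been deactivated, and
+ * false if the entity was not on any service tree, in which case
+ * the deactivation is a no-op and the caller has nothing to
+ * propagate to upper-level entities.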
+ */ +static bool __bfq_deactivate_entity(struct bfq_entity *entity, + bool ins_into_idle_tree) +{ + struct bfq_sched_data *sd = entity->sched_data; + struct bfq_service_tree *st; + bool is_in_service; + + if (!entity->on_st) { /* entity never activated, or already inactive */ + BUG_ON(sd && entity == sd->in_service_entity); + return false; + } + + /* + * If we get here, then entity is active, which implies that + * bfq_group_set_parent has already been invoked for the group + * represented by entity. Therefore, the field + * entity->sched_data has been set, and we can safely use it. + */ + st = bfq_entity_service_tree(entity); + is_in_service = entity == sd->in_service_entity; + + BUG_ON(is_in_service && entity->tree && entity->tree != &st->active); + + if (is_in_service) { + bfq_calc_finish(entity, entity->service); + sd->in_service_entity = NULL; + } + + if (entity->tree == &st->active) + bfq_active_extract(st, entity); + else if (!is_in_service && entity->tree == &st->idle) + bfq_idle_extract(st, entity); + else if (entity->tree) + BUG(); + + if (!ins_into_idle_tree || !bfq_gt(entity->finish, st->vtime)) + bfq_forget_entity(st, entity, is_in_service); + else + bfq_idle_insert(st, entity); + + return true; +} + +/** + * bfq_deactivate_entity - deactivate an entity representing a bfq_queue. + * @entity: the entity to deactivate. + * @ins_into_idle_tree: true if the entity can be put into the idle tree + * @expiration: true if this function is being invoked in the expiration path + * of the in-service queue + */ +static void bfq_deactivate_entity(struct bfq_entity *entity, + bool ins_into_idle_tree, + bool expiration) +{ + struct bfq_sched_data *sd; + struct bfq_entity *parent = NULL; + + for_each_entity_safe(entity, parent) { + sd = entity->sched_data; + + BUG_ON(sd == NULL); /* + * It would mean that this is the + * root group. + */ + + BUG_ON(expiration && entity != sd->in_service_entity); + + BUG_ON(entity != sd->in_service_entity && + entity->tree == + &bfq_entity_service_tree(entity)->active && + !sd->next_in_service); + + if (!__bfq_deactivate_entity(entity, ins_into_idle_tree)) { + /* + * entity is not in any tree any more, so + * this deactivation is a no-op, and there is + * nothing to change for upper-level entities + * (in case of expiration, this can never + * happen). + */ + BUG_ON(expiration); /* + * entity cannot be already out of + * any tree + */ + return; + } + + if (sd->next_in_service == entity) + /* + * entity was the next_in_service entity, + * then, since entity has just been + * deactivated, a new one must be found. + */ + bfq_update_next_in_service(sd, NULL, expiration); + + if (sd->next_in_service || sd->in_service_entity) { + /* + * The parent entity is still active, because + * either next_in_service or in_service_entity + * is not NULL. So, no further upwards + * deactivation must be performed. Yet, + * next_in_service has changed. Then the + * schedule does need to be updated upwards. + * + * NOTE If in_service_entity is not NULL, then + * next_in_service may happen to be NULL, + * although the parent entity is evidently + * active. This happens if 1) the entity + * pointed by in_service_entity is the only + * active entity in the parent entity, and 2) + * according to the definition of + * next_in_service, the in_service_entity + * cannot be considered as + * next_in_service. See the comments on the + * definition of next_in_service for details. 
+ */ + BUG_ON(sd->next_in_service == entity); + BUG_ON(sd->in_service_entity == entity); + break; + } + + /* + * If we get here, then the parent is no more + * backlogged and we need to propagate the + * deactivation upwards. Thus let the loop go on. + */ + + /* + * Also let parent be queued into the idle tree on + * deactivation, to preserve service guarantees, and + * assuming that who invoked this function does not + * need parent entities too to be removed completely. + */ + ins_into_idle_tree = true; + } + + /* + * If the deactivation loop is fully executed, then there are + * no more entities to touch and next loop is not executed at + * all. Otherwise, requeue remaining entities if they are + * about to stop receiving service, or reposition them if this + * is not the case. + */ + entity = parent; + for_each_entity(entity) { + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + /* + * Invoke __bfq_requeue_entity on entity, even if + * already active, to requeue/reposition it in the + * active tree (because sd->next_in_service has + * changed) + */ + __bfq_requeue_entity(entity); + + sd = entity->sched_data; + BUG_ON(expiration && sd->in_service_entity != entity); + + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, + "invoking udpdate_next for this queue"); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(entity, + struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "invoking udpdate_next for this entity"); + } +#endif + if (!bfq_update_next_in_service(sd, entity, expiration) && + !expiration) + /* + * next_in_service unchanged or not causing + * any change in entity->parent->sd, and no + * requeueing needed for expiration: stop + * here. + */ + break; + } +} + +/** + * bfq_calc_vtime_jump - compute the value to which the vtime should jump, + * if needed, to have at least one entity eligible. + * @st: the service tree to act upon. + * + * Assumes that st is not empty. + */ +static u64 bfq_calc_vtime_jump(struct bfq_service_tree *st) +{ + struct bfq_entity *root_entity = bfq_root_active_entity(&st->active); + + if (bfq_gt(root_entity->min_start, st->vtime)) { + struct bfq_queue *bfqq = bfq_entity_to_bfqq(root_entity); + + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, + "calc_vtime_jump: new value %llu", + ((root_entity->min_start>>10)*1000)>>12); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(root_entity, struct bfq_group, + entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "calc_vtime_jump: new value %llu", + ((root_entity->min_start>>10)*1000)>>12); + } +#endif + return root_entity->min_start; + } + return st->vtime; +} + +static void bfq_update_vtime(struct bfq_service_tree *st, u64 new_value) +{ + if (new_value > st->vtime) { + st->vtime = new_value; + bfq_forget_idle(st); + } +} + +/** + * bfq_first_active_entity - find the eligible entity with + * the smallest finish time + * @st: the service tree to select from. + * @vtime: the system virtual to use as a reference for eligibility + * + * This function searches the first schedulable entity, starting from the + * root of the tree and going on the left every time on this side there is + * a subtree with at least one eligible (start >= vtime) entity. The path on + * the right is followed only if a) the left subtree contains no eligible + * entities and b) no eligible entity has been found yet. 
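+ *
+ * Since the active tree is ordered by finish time, descending to
+ * the left whenever the left subtree contains at least one
+ * eligible entity always moves towards smaller finish times;
+ * the entity returned is therefore the eligible entity with the
+ * smallest finish time, as prescribed by B-WF2Q+.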
+ */ +static struct bfq_entity *bfq_first_active_entity(struct bfq_service_tree *st, + u64 vtime) +{ + struct bfq_entity *entry, *first = NULL; + struct rb_node *node = st->active.rb_node; + + while (node) { + entry = rb_entry(node, struct bfq_entity, rb_node); +left: + if (!bfq_gt(entry->start, vtime)) + first = entry; + + BUG_ON(bfq_gt(entry->min_start, vtime)); + + if (node->rb_left) { + entry = rb_entry(node->rb_left, + struct bfq_entity, rb_node); + if (!bfq_gt(entry->min_start, vtime)) { + node = node->rb_left; + goto left; + } + } + if (first) + break; + node = node->rb_right; + } + + BUG_ON(!first && !RB_EMPTY_ROOT(&st->active)); + return first; +} + +/** + * __bfq_lookup_next_entity - return the first eligible entity in @st. + * @st: the service tree. + * + * If there is no in-service entity for the sched_data st belongs to, + * then return the entity that will be set in service if: + * 1) the parent entity this st belongs to is set in service; + * 2) no entity belonging to such parent entity undergoes a state change + * that would influence the timestamps of the entity (e.g., becomes idle, + * becomes backlogged, changes its budget, ...). + * + * In this first case, update the virtual time in @st too (see the + * comments on this update inside the function). + * + * In constrast, if there is an in-service entity, then return the + * entity that would be set in service if not only the above + * conditions, but also the next one held true: the currently + * in-service entity, on expiration, + * 1) gets a finish time equal to the current one, or + * 2) is not eligible any more, or + * 3) is idle. + */ +static struct bfq_entity * +__bfq_lookup_next_entity(struct bfq_service_tree *st, bool in_service) +{ + struct bfq_entity *entity; + u64 new_vtime; + struct bfq_queue *bfqq; + + if (RB_EMPTY_ROOT(&st->active)) + return NULL; + + /* + * Get the value of the system virtual time for which at + * least one entity is eligible. + */ + new_vtime = bfq_calc_vtime_jump(st); + + /* + * If there is no in-service entity for the sched_data this + * active tree belongs to, then push the system virtual time + * up to the value that guarantees that at least one entity is + * eligible. If, instead, there is an in-service entity, then + * do not make any such update, because there is already an + * eligible entity, namely the in-service one (even if the + * entity is not on st, because it was extracted when set in + * service). + */ + if (!in_service) + bfq_update_vtime(st, new_vtime); + + entity = bfq_first_active_entity(st, new_vtime); + BUG_ON(bfq_gt(entity->start, new_vtime)); + + /* Log some information */ + bfqq = bfq_entity_to_bfqq(entity); + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, + "__lookup_next: start %llu vtime %llu st %p", + ((entity->start>>10)*1000)>>12, + ((new_vtime>>10)*1000)>>12, st); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "__lookup_next: start %llu vtime %llu (%llu) st %p", + ((entity->start>>10)*1000)>>12, + ((st->vtime>>10)*1000)>>12, + ((new_vtime>>10)*1000)>>12, st); + } +#endif + + BUG_ON(!entity); + + return entity; +} + +/** + * bfq_lookup_next_entity - return the first eligible entity in @sd. + * @sd: the sched_data. 
+ * @expiration: true if we are on the expiration path of the in-service queue + * + * This function is invoked when there has been a change in the trees + * for sd, and we need to know what is the new next entity to serve + * after this change. + */ +static struct bfq_entity *bfq_lookup_next_entity(struct bfq_sched_data *sd, + bool expiration) +{ + struct bfq_service_tree *st = sd->service_tree; + struct bfq_service_tree *idle_class_st = st + (BFQ_IOPRIO_CLASSES - 1); + struct bfq_entity *entity = NULL; + struct bfq_queue *bfqq; + int class_idx = 0; + + BUG_ON(!sd); + BUG_ON(!st); + /* + * Choose from idle class, if needed to guarantee a minimum + * bandwidth to this class (and if there is some active entity + * in idle class). This should also mitigate + * priority-inversion problems in case a low priority task is + * holding file system resources. + */ + if (time_is_before_jiffies(sd->bfq_class_idle_last_service + + BFQ_CL_IDLE_TIMEOUT)) { + if (!RB_EMPTY_ROOT(&idle_class_st->active)) + class_idx = BFQ_IOPRIO_CLASSES - 1; + /* About to be served if backlogged, or not yet backlogged */ + sd->bfq_class_idle_last_service = jiffies; + } + + /* + * Find the next entity to serve for the highest-priority + * class, unless the idle class needs to be served. + */ + for (; class_idx < BFQ_IOPRIO_CLASSES; class_idx++) { + /* + * If expiration is true, then bfq_lookup_next_entity + * is being invoked as a part of the expiration path + * of the in-service queue. In this case, even if + * sd->in_service_entity is not NULL, + * sd->in_service_entiy at this point is actually not + * in service any more, and, if needed, has already + * been properly queued or requeued into the right + * tree. The reason why sd->in_service_entity is still + * not NULL here, even if expiration is true, is that + * sd->in_service_entiy is reset as a last step in the + * expiration path. So, if expiration is true, tell + * __bfq_lookup_next_entity that there is no + * sd->in_service_entity. + */ + entity = __bfq_lookup_next_entity(st + class_idx, + sd->in_service_entity && + !expiration); + + if (entity) + break; + } + + BUG_ON(!entity && + (!RB_EMPTY_ROOT(&st->active) || !RB_EMPTY_ROOT(&(st+1)->active) || + !RB_EMPTY_ROOT(&(st+2)->active))); + + if (!entity) + return NULL; + + /* Log some information */ + bfqq = bfq_entity_to_bfqq(entity); + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, "chosen from st %p %d", + st + class_idx, class_idx); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "chosen from st %p %d", + st + class_idx, class_idx); + } +#endif + + return entity; +} + +static bool next_queue_may_preempt(struct bfq_data *bfqd) +{ + struct bfq_sched_data *sd = &bfqd->root_group->sched_data; + + return sd->next_in_service != sd->in_service_entity; +} + +/* + * Get next queue for service. + */ +static struct bfq_queue *bfq_get_next_queue(struct bfq_data *bfqd) +{ + struct bfq_entity *entity = NULL; + struct bfq_sched_data *sd; + struct bfq_queue *bfqq; + + BUG_ON(bfqd->in_service_queue); + + if (bfqd->busy_queues == 0) + return NULL; + + /* + * Traverse the path from the root to the leaf entity to + * serve. Set in service all the entities visited along the + * way. 
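+ *
+ * At each level, the cached sd->next_in_service entity is
+ * promoted to sd->in_service_entity, and the walk then descends
+ * into that entity's own sched_data (entity->my_sched_data),
+ * until the entity reached is a leaf, i.e., a bfq_queue.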
+ */ + sd = &bfqd->root_group->sched_data; + for (; sd ; sd = entity->my_sched_data) { +#ifdef BFQ_GROUP_IOSCHED_ENABLED + if (entity) { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg(bfqd, bfqg, + "get_next_queue: lookup in this group"); + if (!sd->next_in_service) + pr_crit("get_next_queue: lookup in this group"); + } else { + bfq_log_bfqg(bfqd, bfqd->root_group, + "get_next_queue: lookup in root group"); + if (!sd->next_in_service) + pr_crit("get_next_queue: lookup in root group"); + } +#endif + + BUG_ON(!sd->next_in_service); + + /* + * WARNING. We are about to set the in-service entity + * to sd->next_in_service, i.e., to the (cached) value + * returned by bfq_lookup_next_entity(sd) the last + * time it was invoked, i.e., the last time when the + * service order in sd changed as a consequence of the + * activation or deactivation of an entity. In this + * respect, if we execute bfq_lookup_next_entity(sd) + * in this very moment, it may, although with low + * probability, yield a different entity than that + * pointed to by sd->next_in_service. This rare event + * happens in case there was no CLASS_IDLE entity to + * serve for sd when bfq_lookup_next_entity(sd) was + * invoked for the last time, while there is now one + * such entity. + * + * If the above event happens, then the scheduling of + * such entity in CLASS_IDLE is postponed until the + * service of the sd->next_in_service entity + * finishes. In fact, when the latter is expired, + * bfq_lookup_next_entity(sd) gets called again, + * exactly to update sd->next_in_service. + */ + + /* Make next_in_service entity become in_service_entity */ + entity = sd->next_in_service; + sd->in_service_entity = entity; + + /* + * Reset the accumulator of the amount of service that + * the entity is about to receive. + */ + entity->service = 0; + + /* + * If entity is no longer a candidate for next + * service, then it must be extracted from its active + * tree, so as to make sure that it won't be + * considered when computing next_in_service. See the + * comments on the function + * bfq_no_longer_next_in_service() for details. + */ + if (bfq_no_longer_next_in_service(entity)) + bfq_active_extract(bfq_entity_service_tree(entity), + entity); + + /* + * Even if entity is not to be extracted according to + * the above check, a descendant entity may get + * extracted in one of the next iterations of this + * loop. Such an event could cause a change in + * next_in_service for the level of the descendant + * entity, and thus possibly back to this level. + * + * However, we cannot perform the resulting needed + * update of next_in_service for this level before the + * end of the whole loop, because, to know which is + * the correct next-to-serve candidate entity for each + * level, we need first to find the leaf entity to set + * in service. In fact, only after we know which is + * the next-to-serve leaf entity, we can discover + * whether the parent entity of the leaf entity + * becomes the next-to-serve, and so on. 
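+ *
+ * That deferred update is exactly what the final
+ * for_each_entity() loop at the end of this function performs,
+ * walking back from the leaf entity just set in service up to
+ * the root.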
+ */ + + /* Log some information */ + bfqq = bfq_entity_to_bfqq(entity); + if (bfqq) + bfq_log_bfqq(bfqd, bfqq, + "get_next_queue: this queue, finish %llu", + (((entity->finish>>10)*1000)>>10)>>2); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg(bfqd, bfqg, + "get_next_queue: this entity, finish %llu", + (((entity->finish>>10)*1000)>>10)>>2); + } +#endif + + } + + BUG_ON(!entity); + bfqq = bfq_entity_to_bfqq(entity); + BUG_ON(!bfqq); + + /* + * We can finally update all next-to-serve entities along the + * path from the leaf entity just set in service to the root. + */ + for_each_entity(entity) { + struct bfq_sched_data *sd = entity->sched_data; + + if (!bfq_update_next_in_service(sd, NULL, false)) + break; + } + + return bfqq; +} + +static void __bfq_bfqd_reset_in_service(struct bfq_data *bfqd) +{ + struct bfq_queue *in_serv_bfqq = bfqd->in_service_queue; + struct bfq_entity *in_serv_entity = &in_serv_bfqq->entity; + struct bfq_entity *entity = in_serv_entity; + +#ifndef BFQ_MQ + if (bfqd->in_service_bic) { + put_io_context(bfqd->in_service_bic->icq.ioc); + bfqd->in_service_bic = NULL; + } +#endif + + bfq_clear_bfqq_wait_request(in_serv_bfqq); + hrtimer_try_to_cancel(&bfqd->idle_slice_timer); + bfqd->in_service_queue = NULL; + + /* + * When this function is called, all in-service entities have + * been properly deactivated or requeued, so we can safely + * execute the final step: reset in_service_entity along the + * path from entity to the root. + */ + for_each_entity(entity) + entity->sched_data->in_service_entity = NULL; + + /* + * in_serv_entity is no longer in service, so, if it is in no + * service tree either, then release the service reference to + * the queue it represents (taken with bfq_get_entity). + */ + if (!in_serv_entity->on_st) + bfq_put_queue(in_serv_bfqq); +} + +static void bfq_deactivate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, + bool ins_into_idle_tree, bool expiration) +{ + struct bfq_entity *entity = &bfqq->entity; + + bfq_deactivate_entity(entity, ins_into_idle_tree, expiration); +} + +static void bfq_activate_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + struct bfq_service_tree *st = bfq_entity_service_tree(entity); + + BUG_ON(bfqq == bfqd->in_service_queue); + BUG_ON(entity->tree != &st->active && entity->tree != &st->idle && + entity->on_st); + + bfq_activate_requeue_entity(entity, bfq_bfqq_non_blocking_wait_rq(bfqq), + false, false); + bfq_clear_bfqq_non_blocking_wait_rq(bfqq); +} + +static void bfq_requeue_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, + bool expiration) +{ + struct bfq_entity *entity = &bfqq->entity; + + bfq_activate_requeue_entity(entity, false, + bfqq == bfqd->in_service_queue, expiration); +} + +static void bfqg_stats_update_dequeue(struct bfq_group *bfqg); + +/* + * Called when the bfqq no longer has requests pending, remove it from + * the service tree. As a special case, it can be invoked during an + * expiration. 
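+ *
+ * Besides deactivating the corresponding entity, this function also
+ * updates the bookkeeping tied to the busy state of the queue: the
+ * busy_queues and wr_busy_queues counters and, if the queue has no
+ * dispatched request, its node in the queue_weights_tree.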
+ */ +static void bfq_del_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq, + bool expiration) +{ + BUG_ON(!bfq_bfqq_busy(bfqq)); + BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list)); + + bfq_log_bfqq(bfqd, bfqq, "del from busy"); + + bfq_clear_bfqq_busy(bfqq); + + BUG_ON(bfqd->busy_queues == 0); + bfqd->busy_queues--; + + if (!bfqq->dispatched) + bfq_weights_tree_remove(bfqd, &bfqq->entity, + &bfqd->queue_weights_tree); + + if (bfqq->wr_coeff > 1) { + bfqd->wr_busy_queues--; + BUG_ON(bfqd->wr_busy_queues < 0); + } + + bfqg_stats_update_dequeue(bfqq_group(bfqq)); + + BUG_ON(bfqq->entity.budget < 0); + + bfq_deactivate_bfqq(bfqd, bfqq, true, expiration); +} + +/* + * Called when an inactive queue receives a new request. + */ +static void bfq_add_bfqq_busy(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + BUG_ON(bfq_bfqq_busy(bfqq)); + BUG_ON(bfqq == bfqd->in_service_queue); + + bfq_log_bfqq(bfqd, bfqq, "add to busy"); + + bfq_activate_bfqq(bfqd, bfqq); + + bfq_mark_bfqq_busy(bfqq); + bfqd->busy_queues++; + + if (!bfqq->dispatched) + if (bfqq->wr_coeff == 1) + bfq_weights_tree_add(bfqd, &bfqq->entity, + &bfqd->queue_weights_tree); + + if (bfqq->wr_coeff > 1) { + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + } + +} diff --git b/block/bfq-sq-iosched.c b/block/bfq-sq-iosched.c new file mode 100644 index 0000000..4bbd7f4 --- /dev/null +++ b/block/bfq-sq-iosched.c @@ -0,0 +1,5498 @@ +/* + * Budget Fair Queueing (BFQ) I/O scheduler. + * + * Based on ideas and code from CFQ: + * Copyright (C) 2003 Jens Axboe + * + * Copyright (C) 2008 Fabio Checconi + * Paolo Valente + * + * Copyright (C) 2015 Paolo Valente + * + * Copyright (C) 2017 Paolo Valente + * + * Licensed under the GPL-2 as detailed in the accompanying COPYING.BFQ + * file. + * + * BFQ is a proportional-share I/O scheduler, with some extra + * low-latency capabilities. BFQ also supports full hierarchical + * scheduling through cgroups. Next paragraphs provide an introduction + * on BFQ inner workings. Details on BFQ benefits and usage can be + * found in Documentation/block/bfq-iosched.txt. + * + * BFQ is a proportional-share storage-I/O scheduling algorithm based + * on the slice-by-slice service scheme of CFQ. But BFQ assigns + * budgets, measured in number of sectors, to processes instead of + * time slices. The device is not granted to the in-service process + * for a given time slice, but until it has exhausted its assigned + * budget. This change from the time to the service domain enables BFQ + * to distribute the device throughput among processes as desired, + * without any distortion due to throughput fluctuations, or to device + * internal queueing. BFQ uses an ad hoc internal scheduler, called + * B-WF2Q+, to schedule processes according to their budgets. More + * precisely, BFQ schedules queues associated with processes. Thanks to + * the accurate policy of B-WF2Q+, BFQ can afford to assign high + * budgets to I/O-bound processes issuing sequential requests (to + * boost the throughput), and yet guarantee a low latency to + * interactive and soft real-time applications. + * + * NOTE: if the main or only goal, with a given device, is to achieve + * the maximum-possible throughput at all times, then do switch off + * all low-latency heuristics for that device, by setting low_latency + * to 0. + * + * BFQ is described in [1], where also a reference to the initial, more + * theoretical paper on BFQ can be found. 
The interested reader can find + * in the latter paper full details on the main algorithm, as well as + * formulas of the guarantees and formal proofs of all the properties. + * With respect to the version of BFQ presented in these papers, this + * implementation adds a few more heuristics, such as the one that + * guarantees a low latency to soft real-time applications, and a + * hierarchical extension based on H-WF2Q+. + * + * B-WF2Q+ is based on WF2Q+, that is described in [2], together with + * H-WF2Q+, while the augmented tree used to implement B-WF2Q+ with O(log N) + * complexity derives from the one introduced with EEVDF in [3]. + * + * [1] P. Valente, A. Avanzini, "Evolution of the BFQ Storage I/O + * Scheduler", Proceedings of the First Workshop on Mobile System + * Technologies (MST-2015), May 2015. + * http://algogroup.unimore.it/people/paolo/disk_sched/mst-2015.pdf + * + * http://algogroup.unimo.it/people/paolo/disk_sched/bf1-v1-suite-results.pdf + * + * [2] Jon C.R. Bennett and H. Zhang, ``Hierarchical Packet Fair Queueing + * Algorithms,'' IEEE/ACM Transactions on Networking, 5(5):675-689, + * Oct 1997. + * + * http://www.cs.cmu.edu/~hzhang/papers/TON-97-Oct.ps.gz + * + * [3] I. Stoica and H. Abdel-Wahab, ``Earliest Eligible Virtual Deadline + * First: A Flexible and Accurate Mechanism for Proportional Share + * Resource Allocation,'' technical report. + * + * http://www.cs.berkeley.edu/~istoica/papers/eevdf-tr-95.pdf + */ +#include +#include +#include +#include +#include +#include +#include +#include +#include "blk.h" +#include "bfq.h" +#include "blk-wbt.h" + +/* Expiration time of sync (0) and async (1) requests, in ns. */ +static const u64 bfq_fifo_expire[2] = { NSEC_PER_SEC / 4, NSEC_PER_SEC / 8 }; + +/* Maximum backwards seek, in KiB. */ +static const int bfq_back_max = (16 * 1024); + +/* Penalty of a backwards seek, in number of sectors. */ +static const int bfq_back_penalty = 2; + +/* Idling period duration, in ns. */ +static u32 bfq_slice_idle = (NSEC_PER_SEC / 125); + +/* Minimum number of assigned budgets for which stats are safe to compute. */ +static const int bfq_stats_min_budgets = 194; + +/* Default maximum budget values, in sectors and number of requests. */ +static const int bfq_default_max_budget = (16 * 1024); + +/* + * Async to sync throughput distribution is controlled as follows: + * when an async request is served, the entity is charged the number + * of sectors of the request, multiplied by the factor below + */ +static const int bfq_async_charge_factor = 10; + +/* Default timeout values, in jiffies, approximating CFQ defaults. */ +static const int bfq_timeout = (HZ / 8); + +static struct kmem_cache *bfq_pool; + +/* Below this threshold (in ns), we consider thinktime immediate. */ +#define BFQ_MIN_TT (2 * NSEC_PER_MSEC) + +/* hw_tag detection: parallel requests threshold and min samples needed. 
*/ +#define BFQ_HW_QUEUE_THRESHOLD 4 +#define BFQ_HW_QUEUE_SAMPLES 32 + +#define BFQQ_SEEK_THR (sector_t)(8 * 100) +#define BFQQ_SECT_THR_NONROT (sector_t)(2 * 32) +#define BFQQ_CLOSE_THR (sector_t)(8 * 1024) +#define BFQQ_SEEKY(bfqq) (hweight32(bfqq->seek_history) > 32/8) + +/* Min number of samples required to perform peak-rate update */ +#define BFQ_RATE_MIN_SAMPLES 32 +/* Min observation time interval required to perform a peak-rate update (ns) */ +#define BFQ_RATE_MIN_INTERVAL (300*NSEC_PER_MSEC) +/* Target observation time interval for a peak-rate update (ns) */ +#define BFQ_RATE_REF_INTERVAL NSEC_PER_SEC + +/* Shift used for peak rate fixed precision calculations. */ +#define BFQ_RATE_SHIFT 16 + +/* + * By default, BFQ computes the duration of the weight raising for + * interactive applications automatically, using the following formula: + * duration = (R / r) * T, where r is the peak rate of the device, and + * R and T are two reference parameters. + * In particular, R is the peak rate of the reference device (see below), + * and T is a reference time: given the systems that are likely to be + * installed on the reference device according to its speed class, T is + * about the maximum time needed, under BFQ and while reading two files in + * parallel, to load typical large applications on these systems. + * In practice, the slower/faster the device at hand is, the more/less it + * takes to load applications with respect to the reference device. + * Accordingly, the longer/shorter BFQ grants weight raising to interactive + * applications. + * + * BFQ uses four different reference pairs (R, T), depending on: + * . whether the device is rotational or non-rotational; + * . whether the device is slow, such as old or portable HDDs, as well as + * SD cards, or fast, such as newer HDDs and SSDs. + * + * The device's speed class is dynamically (re)detected in + * bfq_update_peak_rate() every time the estimated peak rate is updated. + * + * In the following definitions, R_slow[0]/R_fast[0] and + * T_slow[0]/T_fast[0] are the reference values for a slow/fast + * rotational device, whereas R_slow[1]/R_fast[1] and + * T_slow[1]/T_fast[1] are the reference values for a slow/fast + * non-rotational device. Finally, device_speed_thresh are the + * thresholds used to switch between speed classes. The reference + * rates are not the actual peak rates of the devices used as a + * reference, but slightly lower values. The reason for using these + * slightly lower values is that the peak-rate estimator tends to + * yield slightly lower values than the actual peak rate (it can yield + * the actual peak rate only if there is only one process doing I/O, + * and the process does sequential I/O). + * + * Both the reference peak rates and the thresholds are measured in + * sectors/usec, left-shifted by BFQ_RATE_SHIFT. + */ +static int R_slow[2] = {1000, 10700}; +static int R_fast[2] = {14000, 33000}; +/* + * To improve readability, a conversion function is used to initialize the + * following arrays, which entails that they can be initialized only in a + * function. 
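+ *
+ * Once these values are in place, they feed the duration formula
+ * above: purely as an illustration, a device whose estimated peak
+ * rate is half the reference rate R is granted an interactive
+ * weight-raising duration of twice the reference time T, while a
+ * device twice as fast as the reference gets half of T.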
+ */ +static int T_slow[2]; +static int T_fast[2]; +static int device_speed_thresh[2]; + +#define BFQ_SERVICE_TREE_INIT ((struct bfq_service_tree) \ + { RB_ROOT, RB_ROOT, NULL, NULL, 0, 0 }) + +#define RQ_BIC(rq) ((struct bfq_io_cq *) (rq)->elv.priv[0]) +#define RQ_BFQQ(rq) ((rq)->elv.priv[1]) + +static void bfq_schedule_dispatch(struct bfq_data *bfqd); + +#include "bfq-ioc.c" +#include "bfq-sched.c" +#include "bfq-cgroup-included.c" + +#define bfq_class_idle(bfqq) ((bfqq)->ioprio_class == IOPRIO_CLASS_IDLE) +#define bfq_class_rt(bfqq) ((bfqq)->ioprio_class == IOPRIO_CLASS_RT) + +#define bfq_sample_valid(samples) ((samples) > 80) + +/* + * Scheduler run of queue, if there are requests pending and no one in the + * driver that will restart queueing. + */ +static void bfq_schedule_dispatch(struct bfq_data *bfqd) +{ + if (bfqd->queued != 0) { + bfq_log(bfqd, "schedule dispatch"); + kblockd_schedule_work(&bfqd->unplug_work); + } +} + +/* + * Lifted from AS - choose which of rq1 and rq2 that is best served now. + * We choose the request that is closesr to the head right now. Distance + * behind the head is penalized and only allowed to a certain extent. + */ +static struct request *bfq_choose_req(struct bfq_data *bfqd, + struct request *rq1, + struct request *rq2, + sector_t last) +{ + sector_t s1, s2, d1 = 0, d2 = 0; + unsigned long back_max; +#define BFQ_RQ1_WRAP 0x01 /* request 1 wraps */ +#define BFQ_RQ2_WRAP 0x02 /* request 2 wraps */ + unsigned int wrap = 0; /* bit mask: requests behind the disk head? */ + + if (!rq1 || rq1 == rq2) + return rq2; + if (!rq2) + return rq1; + + if (rq_is_sync(rq1) && !rq_is_sync(rq2)) + return rq1; + else if (rq_is_sync(rq2) && !rq_is_sync(rq1)) + return rq2; + if ((rq1->cmd_flags & REQ_META) && !(rq2->cmd_flags & REQ_META)) + return rq1; + else if ((rq2->cmd_flags & REQ_META) && !(rq1->cmd_flags & REQ_META)) + return rq2; + + s1 = blk_rq_pos(rq1); + s2 = blk_rq_pos(rq2); + + /* + * By definition, 1KiB is 2 sectors. + */ + back_max = bfqd->bfq_back_max * 2; + + /* + * Strict one way elevator _except_ in the case where we allow + * short backward seeks which are biased as twice the cost of a + * similar forward seek. + */ + if (s1 >= last) + d1 = s1 - last; + else if (s1 + back_max >= last) + d1 = (last - s1) * bfqd->bfq_back_penalty; + else + wrap |= BFQ_RQ1_WRAP; + + if (s2 >= last) + d2 = s2 - last; + else if (s2 + back_max >= last) + d2 = (last - s2) * bfqd->bfq_back_penalty; + else + wrap |= BFQ_RQ2_WRAP; + + /* Found required data */ + + /* + * By doing switch() on the bit mask "wrap" we avoid having to + * check two variables for all permutations: --> faster! + */ + switch (wrap) { + case 0: /* common case for CFQ: rq1 and rq2 not wrapped */ + if (d1 < d2) + return rq1; + else if (d2 < d1) + return rq2; + + if (s1 >= s2) + return rq1; + else + return rq2; + + case BFQ_RQ2_WRAP: + return rq1; + case BFQ_RQ1_WRAP: + return rq2; + case (BFQ_RQ1_WRAP|BFQ_RQ2_WRAP): /* both rqs wrapped */ + default: + /* + * Since both rqs are wrapped, + * start with the one that's further behind head + * (--> only *one* back seek required), + * since back seek takes more time than forward. 
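+ *
+ * As a concrete illustration of the distances computed above (with
+ * purely illustrative numbers): if last = 1000 and bfq_back_penalty = 2,
+ * a request at sector 1600 gets d = 600, while a request at sector 800,
+ * behind the head but within back_max, gets d = (1000 - 800) * 2 = 400
+ * and is therefore preferred in the non-wrapped case.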
+ */ + if (s1 <= s2) + return rq1; + else + return rq2; + } +} + +static struct bfq_queue * +bfq_rq_pos_tree_lookup(struct bfq_data *bfqd, struct rb_root *root, + sector_t sector, struct rb_node **ret_parent, + struct rb_node ***rb_link) +{ + struct rb_node **p, *parent; + struct bfq_queue *bfqq = NULL; + + parent = NULL; + p = &root->rb_node; + while (*p) { + struct rb_node **n; + + parent = *p; + bfqq = rb_entry(parent, struct bfq_queue, pos_node); + + /* + * Sort strictly based on sector. Smallest to the left, + * largest to the right. + */ + if (sector > blk_rq_pos(bfqq->next_rq)) + n = &(*p)->rb_right; + else if (sector < blk_rq_pos(bfqq->next_rq)) + n = &(*p)->rb_left; + else + break; + p = n; + bfqq = NULL; + } + + *ret_parent = parent; + if (rb_link) + *rb_link = p; + + bfq_log(bfqd, "rq_pos_tree_lookup %llu: returning %d", + (unsigned long long) sector, + bfqq ? bfqq->pid : 0); + + return bfqq; +} + +static void bfq_pos_tree_add_move(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct rb_node **p, *parent; + struct bfq_queue *__bfqq; + + if (bfqq->pos_root) { + rb_erase(&bfqq->pos_node, bfqq->pos_root); + bfqq->pos_root = NULL; + } + + if (bfq_class_idle(bfqq)) + return; + if (!bfqq->next_rq) + return; + + bfqq->pos_root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree; + __bfqq = bfq_rq_pos_tree_lookup(bfqd, bfqq->pos_root, + blk_rq_pos(bfqq->next_rq), &parent, &p); + if (!__bfqq) { + rb_link_node(&bfqq->pos_node, parent, p); + rb_insert_color(&bfqq->pos_node, bfqq->pos_root); + } else + bfqq->pos_root = NULL; +} + +/* + * Tell whether there are active queues or groups with differentiated weights. + */ +static bool bfq_differentiated_weights(struct bfq_data *bfqd) +{ + /* + * For weights to differ, at least one of the trees must contain + * at least two nodes. + */ + return (!RB_EMPTY_ROOT(&bfqd->queue_weights_tree) && + (bfqd->queue_weights_tree.rb_node->rb_left || + bfqd->queue_weights_tree.rb_node->rb_right) +#ifdef BFQ_GROUP_IOSCHED_ENABLED + ) || + (!RB_EMPTY_ROOT(&bfqd->group_weights_tree) && + (bfqd->group_weights_tree.rb_node->rb_left || + bfqd->group_weights_tree.rb_node->rb_right) +#endif + ); +} + +/* + * The following function returns true if every queue must receive the + * same share of the throughput (this condition is used when deciding + * whether idling may be disabled, see the comments in the function + * bfq_bfqq_may_idle()). + * + * Such a scenario occurs when: + * 1) all active queues have the same weight, + * 2) all active groups at the same level in the groups tree have the same + * weight, + * 3) all active groups at the same level in the groups tree have the same + * number of children. + * + * Unfortunately, keeping the necessary state for evaluating exactly the + * above symmetry conditions would be quite complex and time-consuming. + * Therefore this function evaluates, instead, the following stronger + * sub-conditions, for which it is much easier to maintain the needed + * state: + * 1) all active queues have the same weight, + * 2) all active groups have the same weight, + * 3) all active groups have at most one active child each. + * In particular, the last two conditions are always true if hierarchical + * support and the cgroups interface are not enabled, thus no state needs + * to be maintained in this case. 
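+ *
+ * For instance, two active queues with weights 100 and 200 are enough
+ * to make the scenario asymmetric: bfq_differentiated_weights() then
+ * returns true, bfq_symmetric_scenario() returns false, and idling
+ * cannot be safely disabled, because the 1:2 service ratio between the
+ * two queues could otherwise not be preserved.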
+ */ +static bool bfq_symmetric_scenario(struct bfq_data *bfqd) +{ + return !bfq_differentiated_weights(bfqd); +} + +/* + * If the weight-counter tree passed as input contains no counter for + * the weight of the input entity, then add that counter; otherwise just + * increment the existing counter. + * + * Note that weight-counter trees contain few nodes in mostly symmetric + * scenarios. For example, if all queues have the same weight, then the + * weight-counter tree for the queues may contain at most one node. + * This holds even if low_latency is on, because weight-raised queues + * are not inserted in the tree. + * In most scenarios, the rate at which nodes are created/destroyed + * should be low too. + */ +static void bfq_weights_tree_add(struct bfq_data *bfqd, + struct bfq_entity *entity, + struct rb_root *root) +{ + struct rb_node **new = &(root->rb_node), *parent = NULL; + + /* + * Do not insert if the entity is already associated with a + * counter, which happens if: + * 1) the entity is associated with a queue, + * 2) a request arrival has caused the queue to become both + * non-weight-raised, and hence change its weight, and + * backlogged; in this respect, each of the two events + * causes an invocation of this function, + * 3) this is the invocation of this function caused by the + * second event. This second invocation is actually useless, + * and we handle this fact by exiting immediately. More + * efficient or clearer solutions might possibly be adopted. + */ + if (entity->weight_counter) + return; + + while (*new) { + struct bfq_weight_counter *__counter = container_of(*new, + struct bfq_weight_counter, + weights_node); + parent = *new; + + if (entity->weight == __counter->weight) { + entity->weight_counter = __counter; + goto inc_counter; + } + if (entity->weight < __counter->weight) + new = &((*new)->rb_left); + else + new = &((*new)->rb_right); + } + + entity->weight_counter = kzalloc(sizeof(struct bfq_weight_counter), + GFP_ATOMIC); + + /* + * In the unlucky event of an allocation failure, we just + * exit. This will cause the weight of entity to not be + * considered in bfq_differentiated_weights, which, in its + * turn, causes the scenario to be deemed wrongly symmetric in + * case entity's weight would have been the only weight making + * the scenario asymmetric. On the bright side, no unbalance + * will however occur when entity becomes inactive again (the + * invocation of this function is triggered by an activation + * of entity). In fact, bfq_weights_tree_remove does nothing + * if !entity->weight_counter. + */ + if (unlikely(!entity->weight_counter)) + return; + + entity->weight_counter->weight = entity->weight; + rb_link_node(&entity->weight_counter->weights_node, parent, new); + rb_insert_color(&entity->weight_counter->weights_node, root); + +inc_counter: + entity->weight_counter->num_active++; +} + +/* + * Decrement the weight counter associated with the entity, and, if the + * counter reaches 0, remove the counter from the tree. + * See the comments to the function bfq_weights_tree_add() for considerations + * about overhead. 
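+ *
+ * For example, if all the active queues have weight 100, the tree
+ * contains a single counter node whose num_active equals the number of
+ * those queues; the node is erased and freed only when the last of
+ * them is deactivated.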
+ */ +static void bfq_weights_tree_remove(struct bfq_data *bfqd, + struct bfq_entity *entity, + struct rb_root *root) +{ + if (!entity->weight_counter) + return; + + BUG_ON(RB_EMPTY_ROOT(root)); + BUG_ON(entity->weight_counter->weight != entity->weight); + + BUG_ON(!entity->weight_counter->num_active); + entity->weight_counter->num_active--; + if (entity->weight_counter->num_active > 0) + goto reset_entity_pointer; + + rb_erase(&entity->weight_counter->weights_node, root); + kfree(entity->weight_counter); + +reset_entity_pointer: + entity->weight_counter = NULL; +} + +/* + * Return expired entry, or NULL to just start from scratch in rbtree. + */ +static struct request *bfq_check_fifo(struct bfq_queue *bfqq, + struct request *last) +{ + struct request *rq; + + if (bfq_bfqq_fifo_expire(bfqq)) + return NULL; + + bfq_mark_bfqq_fifo_expire(bfqq); + + rq = rq_entry_fifo(bfqq->fifo.next); + + if (rq == last || ktime_get_ns() < rq->fifo_time) + return NULL; + + bfq_log_bfqq(bfqq->bfqd, bfqq, "check_fifo: returned %p", rq); + BUG_ON(RB_EMPTY_NODE(&rq->rb_node)); + return rq; +} + +static struct request *bfq_find_next_rq(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + struct request *last) +{ + struct rb_node *rbnext = rb_next(&last->rb_node); + struct rb_node *rbprev = rb_prev(&last->rb_node); + struct request *next, *prev = NULL; + + BUG_ON(list_empty(&bfqq->fifo)); + + /* Follow expired path, else get first next available. */ + next = bfq_check_fifo(bfqq, last); + if (next) { + BUG_ON(next == last); + return next; + } + + BUG_ON(RB_EMPTY_NODE(&last->rb_node)); + + if (rbprev) + prev = rb_entry_rq(rbprev); + + if (rbnext) + next = rb_entry_rq(rbnext); + else { + rbnext = rb_first(&bfqq->sort_list); + if (rbnext && rbnext != &last->rb_node) + next = rb_entry_rq(rbnext); + } + + return bfq_choose_req(bfqd, next, prev, blk_rq_pos(last)); +} + +/* see the definition of bfq_async_charge_factor for details */ +static unsigned long bfq_serv_to_charge(struct request *rq, + struct bfq_queue *bfqq) +{ + if (bfq_bfqq_sync(bfqq) || bfqq->wr_coeff > 1) + return blk_rq_sectors(rq); + + /* + * If there are no weight-raised queues, then amplify service + * by just the async charge factor; otherwise amplify service + * by twice the async charge factor, to further reduce latency + * for weight-raised queues. + */ + if (bfqq->bfqd->wr_busy_queues == 0) + return blk_rq_sectors(rq) * bfq_async_charge_factor; + + return blk_rq_sectors(rq) * 2 * bfq_async_charge_factor; +} + +/** + * bfq_updated_next_req - update the queue after a new next_rq selection. + * @bfqd: the device data the queue belongs to. + * @bfqq: the queue to update. + * + * If the first request of a queue changes we make sure that the queue + * has enough budget to serve at least its first request (if the + * request has grown). We do this because if the queue has not enough + * budget for its first request, it has to go through two dispatch + * rounds to actually get it dispatched. + */ +static void bfq_updated_next_req(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + struct bfq_service_tree *st = bfq_entity_service_tree(entity); + struct request *next_rq = bfqq->next_rq; + unsigned long new_budget; + + if (!next_rq) + return; + + if (bfqq == bfqd->in_service_queue) + /* + * In order not to break guarantees, budgets cannot be + * changed after an entity has been selected. 
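+		 * (bfqq is the in-service queue here, so its budget is
+		 * left untouched and the function just returns.)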
+ */ + return; + + BUG_ON(entity->tree != &st->active); + BUG_ON(entity == entity->sched_data->in_service_entity); + + new_budget = max_t(unsigned long, bfqq->max_budget, + bfq_serv_to_charge(next_rq, bfqq)); + if (entity->budget != new_budget) { + entity->budget = new_budget; + bfq_log_bfqq(bfqd, bfqq, "updated next rq: new budget %lu", + new_budget); + bfq_requeue_bfqq(bfqd, bfqq, false); + } +} + +static unsigned int bfq_wr_duration(struct bfq_data *bfqd) +{ + u64 dur; + + if (bfqd->bfq_wr_max_time > 0) + return bfqd->bfq_wr_max_time; + + dur = bfqd->RT_prod; + do_div(dur, bfqd->peak_rate); + + /* + * Limit duration between 3 and 13 seconds. Tests show that + * higher values than 13 seconds often yield the opposite of + * the desired result, i.e., worsen responsiveness by letting + * non-interactive and non-soft-real-time applications + * preserve weight raising for a too long time interval. + * + * On the other end, lower values than 3 seconds make it + * difficult for most interactive tasks to complete their jobs + * before weight-raising finishes. + */ + if (dur > msecs_to_jiffies(13000)) + dur = msecs_to_jiffies(13000); + else if (dur < msecs_to_jiffies(3000)) + dur = msecs_to_jiffies(3000); + + return dur; +} + +/* switch back from soft real-time to interactive weight raising */ +static void switch_back_to_interactive_wr(struct bfq_queue *bfqq, + struct bfq_data *bfqd) +{ + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + bfqq->last_wr_start_finish = bfqq->wr_start_at_switch_to_srt; +} + +static void +bfq_bfqq_resume_state(struct bfq_queue *bfqq, struct bfq_data *bfqd, + struct bfq_io_cq *bic, bool bfq_already_existing) +{ + unsigned int old_wr_coeff; + bool busy = bfq_already_existing && bfq_bfqq_busy(bfqq); + + if (bic->saved_has_short_ttime) + bfq_mark_bfqq_has_short_ttime(bfqq); + else + bfq_clear_bfqq_has_short_ttime(bfqq); + + if (bic->saved_IO_bound) + bfq_mark_bfqq_IO_bound(bfqq); + else + bfq_clear_bfqq_IO_bound(bfqq); + + if (unlikely(busy)) + old_wr_coeff = bfqq->wr_coeff; + + bfqq->wr_coeff = bic->saved_wr_coeff; + bfqq->wr_start_at_switch_to_srt = bic->saved_wr_start_at_switch_to_srt; + BUG_ON(time_is_after_jiffies(bfqq->wr_start_at_switch_to_srt)); + bfqq->last_wr_start_finish = bic->saved_last_wr_start_finish; + bfqq->wr_cur_max_time = bic->saved_wr_cur_max_time; + BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish)); + + bfq_log_bfqq(bfqq->bfqd, bfqq, + "[%s] bic %p wr_coeff %d start_finish %lu max_time %lu", + __func__, + bic, bfqq->wr_coeff, bfqq->last_wr_start_finish, + bfqq->wr_cur_max_time); + + if (bfqq->wr_coeff > 1 && (bfq_bfqq_in_large_burst(bfqq) || + time_is_before_jiffies(bfqq->last_wr_start_finish + + bfqq->wr_cur_max_time))) { + if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && + !bfq_bfqq_in_large_burst(bfqq) && + time_is_after_eq_jiffies(bfqq->wr_start_at_switch_to_srt + + bfq_wr_duration(bfqd))) { + switch_back_to_interactive_wr(bfqq, bfqd); + bfq_log_bfqq(bfqq->bfqd, bfqq, + "resume state: switching back to interactive"); + } else { + bfqq->wr_coeff = 1; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "resume state: switching off wr (%lu + %lu < %lu)", + bfqq->last_wr_start_finish, bfqq->wr_cur_max_time, + jiffies); + } + } + + /* make sure weight will be updated, however we got here */ + bfqq->entity.prio_changed = 1; + + if (likely(!busy)) + return; + + if (old_wr_coeff == 1 && bfqq->wr_coeff > 1) { + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + } else if (old_wr_coeff > 1 && 
bfqq->wr_coeff == 1) { + bfqd->wr_busy_queues--; + BUG_ON(bfqd->wr_busy_queues < 0); + } +} + +static int bfqq_process_refs(struct bfq_queue *bfqq) +{ + int process_refs, io_refs; + + lockdep_assert_held(bfqq->bfqd->queue->queue_lock); + + io_refs = bfqq->allocated[READ] + bfqq->allocated[WRITE]; + process_refs = bfqq->ref - io_refs - bfqq->entity.on_st; + BUG_ON(process_refs < 0); + return process_refs; +} + +/* Empty burst list and add just bfqq (see comments to bfq_handle_burst) */ +static void bfq_reset_burst_list(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct bfq_queue *item; + struct hlist_node *n; + + hlist_for_each_entry_safe(item, n, &bfqd->burst_list, burst_list_node) + hlist_del_init(&item->burst_list_node); + hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list); + bfqd->burst_size = 1; + bfqd->burst_parent_entity = bfqq->entity.parent; +} + +/* Add bfqq to the list of queues in current burst (see bfq_handle_burst) */ +static void bfq_add_to_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + /* Increment burst size to take into account also bfqq */ + bfqd->burst_size++; + + bfq_log_bfqq(bfqd, bfqq, "add_to_burst %d", bfqd->burst_size); + + BUG_ON(bfqd->burst_size > bfqd->bfq_large_burst_thresh); + + if (bfqd->burst_size == bfqd->bfq_large_burst_thresh) { + struct bfq_queue *pos, *bfqq_item; + struct hlist_node *n; + + /* + * Enough queues have been activated shortly after each + * other to consider this burst as large. + */ + bfqd->large_burst = true; + bfq_log_bfqq(bfqd, bfqq, "add_to_burst: large burst started"); + + /* + * We can now mark all queues in the burst list as + * belonging to a large burst. + */ + hlist_for_each_entry(bfqq_item, &bfqd->burst_list, + burst_list_node) { + bfq_mark_bfqq_in_large_burst(bfqq_item); + bfq_log_bfqq(bfqd, bfqq_item, "marked in large burst"); + } + bfq_mark_bfqq_in_large_burst(bfqq); + bfq_log_bfqq(bfqd, bfqq, "marked in large burst"); + + /* + * From now on, and until the current burst finishes, any + * new queue being activated shortly after the last queue + * was inserted in the burst can be immediately marked as + * belonging to a large burst. So the burst list is not + * needed any more. Remove it. + */ + hlist_for_each_entry_safe(pos, n, &bfqd->burst_list, + burst_list_node) + hlist_del_init(&pos->burst_list_node); + } else /* + * Burst not yet large: add bfqq to the burst list. Do + * not increment the ref counter for bfqq, because bfqq + * is removed from the burst list before freeing bfqq + * in put_queue. + */ + hlist_add_head(&bfqq->burst_list_node, &bfqd->burst_list); +} + +/* + * If many queues belonging to the same group happen to be created + * shortly after each other, then the processes associated with these + * queues have typically a common goal. In particular, bursts of queue + * creations are usually caused by services or applications that spawn + * many parallel threads/processes. Examples are systemd during boot, + * or git grep. To help these processes get their job done as soon as + * possible, it is usually better to not grant either weight-raising + * or device idling to their queues. + * + * In this comment we describe, firstly, the reasons why this fact + * holds, and, secondly, the next function, which implements the main + * steps needed to properly mark these queues so that they can then be + * treated in a different way. 
+ * + * The above services or applications benefit mostly from a high + * throughput: the quicker the requests of the activated queues are + * cumulatively served, the sooner the target job of these queues gets + * completed. As a consequence, weight-raising any of these queues, + * which also implies idling the device for it, is almost always + * counterproductive. In most cases it just lowers throughput. + * + * On the other hand, a burst of queue creations may be caused also by + * the start of an application that does not consist of a lot of + * parallel I/O-bound threads. In fact, with a complex application, + * several short processes may need to be executed to start-up the + * application. In this respect, to start an application as quickly as + * possible, the best thing to do is in any case to privilege the I/O + * related to the application with respect to all other + * I/O. Therefore, the best strategy to start as quickly as possible + * an application that causes a burst of queue creations is to + * weight-raise all the queues created during the burst. This is the + * exact opposite of the best strategy for the other type of bursts. + * + * In the end, to take the best action for each of the two cases, the + * two types of bursts need to be distinguished. Fortunately, this + * seems relatively easy, by looking at the sizes of the bursts. In + * particular, we found a threshold such that only bursts with a + * larger size than that threshold are apparently caused by + * services or commands such as systemd or git grep. For brevity, + * hereafter we call just 'large' these bursts. BFQ *does not* + * weight-raise queues whose creation occurs in a large burst. In + * addition, for each of these queues BFQ performs or does not perform + * idling depending on which choice boosts the throughput more. The + * exact choice depends on the device and request pattern at + * hand. + * + * Unfortunately, false positives may occur while an interactive task + * is starting (e.g., an application is being started). The + * consequence is that the queues associated with the task do not + * enjoy weight raising as expected. Fortunately these false positives + * are very rare. They typically occur if some service happens to + * start doing I/O exactly when the interactive task starts. + * + * Turning back to the next function, it implements all the steps + * needed to detect the occurrence of a large burst and to properly + * mark all the queues belonging to it (so that they can then be + * treated in a different way). This goal is achieved by maintaining a + * "burst list" that holds, temporarily, the queues that belong to the + * burst in progress. The list is then used to mark these queues as + * belonging to a large burst if the burst does become large. The main + * steps are the following. + * + * . when the very first queue is created, the queue is inserted into the + * list (as it could be the first queue in a possible burst) + * + * . if the current burst has not yet become large, and a queue Q that does + * not yet belong to the burst is activated shortly after the last time + * at which a new queue entered the burst list, then the function appends + * Q to the burst list + * + * . if, as a consequence of the previous step, the burst size reaches + * the large-burst threshold, then + * + * . all the queues in the burst list are marked as belonging to a + * large burst + * + * . 
the burst list is deleted; in fact, the burst list already served + * its purpose (keeping temporarily track of the queues in a burst, + * so as to be able to mark them as belonging to a large burst in the + * previous sub-step), and now is not needed any more + * + * . the device enters a large-burst mode + * + * . if a queue Q that does not belong to the burst is created while + * the device is in large-burst mode and shortly after the last time + * at which a queue either entered the burst list or was marked as + * belonging to the current large burst, then Q is immediately marked + * as belonging to a large burst. + * + * . if a queue Q that does not belong to the burst is created a while + * later, i.e., not shortly after, than the last time at which a queue + * either entered the burst list or was marked as belonging to the + * current large burst, then the current burst is deemed as finished and: + * + * . the large-burst mode is reset if set + * + * . the burst list is emptied + * + * . Q is inserted in the burst list, as Q may be the first queue + * in a possible new burst (then the burst list contains just Q + * after this step). + */ +static void bfq_handle_burst(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + /* + * If bfqq is already in the burst list or is part of a large + * burst, or finally has just been split, then there is + * nothing else to do. + */ + if (!hlist_unhashed(&bfqq->burst_list_node) || + bfq_bfqq_in_large_burst(bfqq) || + time_is_after_eq_jiffies(bfqq->split_time + + msecs_to_jiffies(10))) + return; + + /* + * If bfqq's creation happens late enough, or bfqq belongs to + * a different group than the burst group, then the current + * burst is finished, and related data structures must be + * reset. + * + * In this respect, consider the special case where bfqq is + * the very first queue created after BFQ is selected for this + * device. In this case, last_ins_in_burst and + * burst_parent_entity are not yet significant when we get + * here. But it is easy to verify that, whether or not the + * following condition is true, bfqq will end up being + * inserted into the burst list. In particular the list will + * happen to contain only bfqq. And this is exactly what has + * to happen, as bfqq may be the first queue of the first + * burst. + */ + if (time_is_before_jiffies(bfqd->last_ins_in_burst + + bfqd->bfq_burst_interval) || + bfqq->entity.parent != bfqd->burst_parent_entity) { + bfqd->large_burst = false; + bfq_reset_burst_list(bfqd, bfqq); + bfq_log_bfqq(bfqd, bfqq, + "handle_burst: late activation or different group"); + goto end; + } + + /* + * If we get here, then bfqq is being activated shortly after the + * last queue. So, if the current burst is also large, we can mark + * bfqq as belonging to this large burst immediately. + */ + if (bfqd->large_burst) { + bfq_log_bfqq(bfqd, bfqq, "handle_burst: marked in burst"); + bfq_mark_bfqq_in_large_burst(bfqq); + goto end; + } + + /* + * If we get here, then a large-burst state has not yet been + * reached, but bfqq is being activated shortly after the last + * queue. Then we add bfqq to the burst. + */ + bfq_add_to_burst(bfqd, bfqq); +end: + /* + * At this point, bfqq either has been added to the current + * burst or has caused the current burst to terminate and a + * possible new burst to start. In particular, in the second + * case, bfqq has become the first queue in the possible new + * burst. In both cases last_ins_in_burst needs to be moved + * forward. 
+ */ + bfqd->last_ins_in_burst = jiffies; + +} + +static int bfq_bfqq_budget_left(struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + + return entity->budget - entity->service; +} + +/* + * If enough samples have been computed, return the current max budget + * stored in bfqd, which is dynamically updated according to the + * estimated disk peak rate; otherwise return the default max budget + */ +static int bfq_max_budget(struct bfq_data *bfqd) +{ + if (bfqd->budgets_assigned < bfq_stats_min_budgets) + return bfq_default_max_budget; + else + return bfqd->bfq_max_budget; +} + +/* + * Return min budget, which is a fraction of the current or default + * max budget (trying with 1/32) + */ +static int bfq_min_budget(struct bfq_data *bfqd) +{ + if (bfqd->budgets_assigned < bfq_stats_min_budgets) + return bfq_default_max_budget / 32; + else + return bfqd->bfq_max_budget / 32; +} + +static void bfq_bfqq_expire(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool compensate, + enum bfqq_expiration reason); + +/* + * The next function, invoked after the input queue bfqq switches from + * idle to busy, updates the budget of bfqq. The function also tells + * whether the in-service queue should be expired, by returning + * true. The purpose of expiring the in-service queue is to give bfqq + * the chance to possibly preempt the in-service queue, and the reason + * for preempting the in-service queue is to achieve one of the two + * goals below. + * + * 1. Guarantee to bfqq its reserved bandwidth even if bfqq has + * expired because it has remained idle. In particular, bfqq may have + * expired for one of the following two reasons: + * + * - BFQ_BFQQ_NO_MORE_REQUEST bfqq did not enjoy any device idling and + * did not make it to issue a new request before its last request + * was served; + * + * - BFQ_BFQQ_TOO_IDLE bfqq did enjoy device idling, but did not issue + * a new request before the expiration of the idling-time. + * + * Even if bfqq has expired for one of the above reasons, the process + * associated with the queue may be however issuing requests greedily, + * and thus be sensitive to the bandwidth it receives (bfqq may have + * remained idle for other reasons: CPU high load, bfqq not enjoying + * idling, I/O throttling somewhere in the path from the process to + * the I/O scheduler, ...). But if, after every expiration for one of + * the above two reasons, bfqq has to wait for the service of at least + * one full budget of another queue before being served again, then + * bfqq is likely to get a much lower bandwidth or resource time than + * its reserved ones. To address this issue, two countermeasures need + * to be taken. + * + * First, the budget and the timestamps of bfqq need to be updated in + * a special way on bfqq reactivation: they need to be updated as if + * bfqq did not remain idle and did not expire. In fact, if they are + * computed as if bfqq expired and remained idle until reactivation, + * then the process associated with bfqq is treated as if, instead of + * being greedy, it stopped issuing requests when bfqq remained idle, + * and restarts issuing requests only on this reactivation. In other + * words, the scheduler does not help the process recover the "service + * hole" between bfqq expiration and reactivation. As a consequence, + * the process receives a lower bandwidth than its reserved one. 
In + * contrast, to recover this hole, the budget must be updated as if + * bfqq was not expired at all before this reactivation, i.e., it must + * be set to the value of the remaining budget when bfqq was + * expired. Along the same line, timestamps need to be assigned the + * value they had the last time bfqq was selected for service, i.e., + * before last expiration. Thus timestamps need to be back-shifted + * with respect to their normal computation (see [1] for more details + * on this tricky aspect). + * + * Secondly, to allow the process to recover the hole, the in-service + * queue must be expired too, to give bfqq the chance to preempt it + * immediately. In fact, if bfqq has to wait for a full budget of the + * in-service queue to be completed, then it may become impossible to + * let the process recover the hole, even if the back-shifted + * timestamps of bfqq are lower than those of the in-service queue. If + * this happens for most or all of the holes, then the process may not + * receive its reserved bandwidth. In this respect, it is worth noting + * that, being the service of outstanding requests unpreemptible, a + * little fraction of the holes may however be unrecoverable, thereby + * causing a little loss of bandwidth. + * + * The last important point is detecting whether bfqq does need this + * bandwidth recovery. In this respect, the next function deems the + * process associated with bfqq greedy, and thus allows it to recover + * the hole, if: 1) the process is waiting for the arrival of a new + * request (which implies that bfqq expired for one of the above two + * reasons), and 2) such a request has arrived soon. The first + * condition is controlled through the flag non_blocking_wait_rq, + * while the second through the flag arrived_in_time. If both + * conditions hold, then the function computes the budget in the + * above-described special way, and signals that the in-service queue + * should be expired. Timestamp back-shifting is done later in + * __bfq_activate_entity. + * + * 2. Reduce latency. Even if timestamps are not backshifted to let + * the process associated with bfqq recover a service hole, bfqq may + * however happen to have, after being (re)activated, a lower finish + * timestamp than the in-service queue. That is, the next budget of + * bfqq may have to be completed before the one of the in-service + * queue. If this is the case, then preempting the in-service queue + * allows this goal to be achieved, apart from the unpreemptible, + * outstanding requests mentioned above. + * + * Unfortunately, regardless of which of the above two goals one wants + * to achieve, service trees need first to be updated to know whether + * the in-service queue must be preempted. To have service trees + * correctly updated, the in-service queue must be expired and + * rescheduled, and bfqq must be scheduled too. This is one of the + * most costly operations (in future versions, the scheduling + * mechanism may be re-designed in such a way to make it possible to + * know whether preemption is needed without needing to update service + * trees). In addition, queue preemptions almost always cause random + * I/O, and thus loss of throughput. Because of these facts, the next + * function adopts the following simple scheme to avoid both costly + * operations and too frequent preemptions: it requests the expiration + * of the in-service queue (unconditionally) only for queues that need + * to recover a hole, or that either are weight-raised or deserve to + * be weight-raised. 
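+ *
+ * As a simple illustration of the first case: if bfqq expired for lack
+ * of requests with, say, 3/4 of its budget still unused, and its next
+ * request arrives in time (both non_blocking_wait_rq and
+ * arrived_in_time hold), then the function below sets the next budget
+ * of bfqq to that remaining 3/4 instead of a full fresh budget, and
+ * reports that the in-service queue should be expired, so that bfqq
+ * gets a chance to preempt it.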
+ */ +static bool bfq_bfqq_update_budg_for_activation(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool arrived_in_time, + bool wr_or_deserves_wr) +{ + struct bfq_entity *entity = &bfqq->entity; + + if (bfq_bfqq_non_blocking_wait_rq(bfqq) && arrived_in_time) { + /* + * We do not clear the flag non_blocking_wait_rq here, as + * the latter is used in bfq_activate_bfqq to signal + * that timestamps need to be back-shifted (and is + * cleared right after). + */ + + /* + * In next assignment we rely on that either + * entity->service or entity->budget are not updated + * on expiration if bfqq is empty (see + * __bfq_bfqq_recalc_budget). Thus both quantities + * remain unchanged after such an expiration, and the + * following statement therefore assigns to + * entity->budget the remaining budget on such an + * expiration. For clarity, entity->service is not + * updated on expiration in any case, and, in normal + * operation, is reset only when bfqq is selected for + * service (see bfq_get_next_queue). + */ + BUG_ON(bfqq->max_budget < 0); + entity->budget = min_t(unsigned long, + bfq_bfqq_budget_left(bfqq), + bfqq->max_budget); + + BUG_ON(entity->budget < 0); + return true; + } + + BUG_ON(bfqq->max_budget < 0); + entity->budget = max_t(unsigned long, bfqq->max_budget, + bfq_serv_to_charge(bfqq->next_rq, bfqq)); + BUG_ON(entity->budget < 0); + + bfq_clear_bfqq_non_blocking_wait_rq(bfqq); + return wr_or_deserves_wr; +} + +/* + * Return the farthest future time instant according to jiffies + * macros. + */ +static unsigned long bfq_greatest_from_now(void) +{ + return jiffies + MAX_JIFFY_OFFSET; +} + +/* + * Return the farthest past time instant according to jiffies + * macros. + */ +static unsigned long bfq_smallest_from_now(void) +{ + return jiffies - MAX_JIFFY_OFFSET; +} + +static void bfq_update_bfqq_wr_on_rq_arrival(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + unsigned int old_wr_coeff, + bool wr_or_deserves_wr, + bool interactive, + bool in_burst, + bool soft_rt) +{ + if (old_wr_coeff == 1 && wr_or_deserves_wr) { + /* start a weight-raising period */ + if (interactive) { + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + } else { + /* + * No interactive weight raising in progress + * here: assign minus infinity to + * wr_start_at_switch_to_srt, to make sure + * that, at the end of the soft-real-time + * weight raising periods that is starting + * now, no interactive weight-raising period + * may be wrongly considered as still in + * progress (and thus actually started by + * mistake). + */ + bfqq->wr_start_at_switch_to_srt = + bfq_smallest_from_now(); + bfqq->wr_coeff = bfqd->bfq_wr_coeff * + BFQ_SOFTRT_WEIGHT_FACTOR; + bfqq->wr_cur_max_time = + bfqd->bfq_wr_rt_max_time; + } + /* + * If needed, further reduce budget to make sure it is + * close to bfqq's backlog, so as to reduce the + * scheduling-error component due to a too large + * budget. Do not care about throughput consequences, + * but only about latency. Finally, do not assign a + * too small budget either, to avoid increasing + * latency by causing too frequent expirations. 
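+		 * (With the definition of bfq_min_budget() above, the
+		 * cap applied below is 2 * bfq_min_budget(bfqd), i.e.,
+		 * about 1/16 of the current maximum budget.)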
+ */ + bfqq->entity.budget = min_t(unsigned long, + bfqq->entity.budget, + 2 * bfq_min_budget(bfqd)); + + bfq_log_bfqq(bfqd, bfqq, + "wrais starting at %lu, rais_max_time %u", + jiffies, + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } else if (old_wr_coeff > 1) { + if (interactive) { /* update wr coeff and duration */ + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + } else if (in_burst) { + bfqq->wr_coeff = 1; + bfq_log_bfqq(bfqd, bfqq, + "wrais ending at %lu, rais_max_time %u", + jiffies, + jiffies_to_msecs(bfqq-> + wr_cur_max_time)); + } else if (soft_rt) { + /* + * The application is now or still meeting the + * requirements for being deemed soft rt. We + * can then correctly and safely (re)charge + * the weight-raising duration for the + * application with the weight-raising + * duration for soft rt applications. + * + * In particular, doing this recharge now, i.e., + * before the weight-raising period for the + * application finishes, reduces the probability + * of the following negative scenario: + * 1) the weight of a soft rt application is + * raised at startup (as for any newly + * created application), + * 2) since the application is not interactive, + * at a certain time weight-raising is + * stopped for the application, + * 3) at that time the application happens to + * still have pending requests, and hence + * is destined to not have a chance to be + * deemed soft rt before these requests are + * completed (see the comments to the + * function bfq_bfqq_softrt_next_start() + * for details on soft rt detection), + * 4) these pending requests experience a high + * latency because the application is not + * weight-raised while they are pending. + */ + if (bfqq->wr_cur_max_time != + bfqd->bfq_wr_rt_max_time) { + bfqq->wr_start_at_switch_to_srt = + bfqq->last_wr_start_finish; + BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish)); + + bfqq->wr_cur_max_time = + bfqd->bfq_wr_rt_max_time; + bfqq->wr_coeff = bfqd->bfq_wr_coeff * + BFQ_SOFTRT_WEIGHT_FACTOR; + bfq_log_bfqq(bfqd, bfqq, + "switching to soft_rt wr"); + } else + bfq_log_bfqq(bfqd, bfqq, + "moving forward soft_rt wr duration"); + bfqq->last_wr_start_finish = jiffies; + } + } +} + +static bool bfq_bfqq_idle_for_long_time(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + return bfqq->dispatched == 0 && + time_is_before_jiffies( + bfqq->budget_timeout + + bfqd->bfq_wr_min_idle_time); +} + +static void bfq_bfqq_handle_idle_busy_switch(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + int old_wr_coeff, + struct request *rq, + bool *interactive) +{ + bool soft_rt, in_burst, wr_or_deserves_wr, + bfqq_wants_to_preempt, + idle_for_long_time = bfq_bfqq_idle_for_long_time(bfqd, bfqq), + /* + * See the comments on + * bfq_bfqq_update_budg_for_activation for + * details on the usage of the next variable. 
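+		 * In short, the request is considered to have arrived
+		 * in time if it arrives within three times
+		 * bfq_slice_idle from the completion of the last
+		 * request of bfqq.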
+ */ + arrived_in_time = ktime_get_ns() <= + RQ_BIC(rq)->ttime.last_end_request + + bfqd->bfq_slice_idle * 3; + + bfq_log_bfqq(bfqd, bfqq, + "bfq_add_request non-busy: " + "jiffies %lu, in_time %d, idle_long %d busyw %d " + "wr_coeff %u", + jiffies, arrived_in_time, + idle_for_long_time, + bfq_bfqq_non_blocking_wait_rq(bfqq), + old_wr_coeff); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + BUG_ON(bfqq == bfqd->in_service_queue); + bfqg_stats_update_io_add(bfqq_group(RQ_BFQQ(rq)), bfqq, rq->cmd_flags); + + /* + * bfqq deserves to be weight-raised if: + * - it is sync, + * - it does not belong to a large burst, + * - it has been idle for enough time or is soft real-time, + * - is linked to a bfq_io_cq (it is not shared in any sense) + */ + in_burst = bfq_bfqq_in_large_burst(bfqq); + soft_rt = bfqd->bfq_wr_max_softrt_rate > 0 && + !in_burst && + time_is_before_jiffies(bfqq->soft_rt_next_start); + *interactive = + !in_burst && + idle_for_long_time; + wr_or_deserves_wr = bfqd->low_latency && + (bfqq->wr_coeff > 1 || + (bfq_bfqq_sync(bfqq) && + bfqq->bic && (*interactive || soft_rt))); + + bfq_log_bfqq(bfqd, bfqq, + "bfq_add_request: " + "in_burst %d, " + "soft_rt %d (next %lu), inter %d, bic %p", + bfq_bfqq_in_large_burst(bfqq), soft_rt, + bfqq->soft_rt_next_start, + *interactive, + bfqq->bic); + + /* + * Using the last flag, update budget and check whether bfqq + * may want to preempt the in-service queue. + */ + bfqq_wants_to_preempt = + bfq_bfqq_update_budg_for_activation(bfqd, bfqq, + arrived_in_time, + wr_or_deserves_wr); + + /* + * If bfqq happened to be activated in a burst, but has been + * idle for much more than an interactive queue, then we + * assume that, in the overall I/O initiated in the burst, the + * I/O associated with bfqq is finished. So bfqq does not need + * to be treated as a queue belonging to a burst + * anymore. Accordingly, we reset bfqq's in_large_burst flag + * if set, and remove bfqq from the burst list if it's + * there. We do not decrement burst_size, because the fact + * that bfqq does not need to belong to the burst list any + * more does not invalidate the fact that bfqq was created in + * a burst. + */ + if (likely(!bfq_bfqq_just_created(bfqq)) && + idle_for_long_time && + time_is_before_jiffies( + bfqq->budget_timeout + + msecs_to_jiffies(10000))) { + hlist_del_init(&bfqq->burst_list_node); + bfq_clear_bfqq_in_large_burst(bfqq); + } + + bfq_clear_bfqq_just_created(bfqq); + + if (!bfq_bfqq_IO_bound(bfqq)) { + if (arrived_in_time) { + bfqq->requests_within_timer++; + if (bfqq->requests_within_timer >= + bfqd->bfq_requests_within_timer) + bfq_mark_bfqq_IO_bound(bfqq); + } else + bfqq->requests_within_timer = 0; + bfq_log_bfqq(bfqd, bfqq, "requests in time %d", + bfqq->requests_within_timer); + } + + if (bfqd->low_latency) { + if (unlikely(time_is_after_jiffies(bfqq->split_time))) + /* wraparound */ + bfqq->split_time = + jiffies - bfqd->bfq_wr_min_idle_time - 1; + + if (time_is_before_jiffies(bfqq->split_time + + bfqd->bfq_wr_min_idle_time)) { + bfq_update_bfqq_wr_on_rq_arrival(bfqd, bfqq, + old_wr_coeff, + wr_or_deserves_wr, + *interactive, + in_burst, + soft_rt); + + if (old_wr_coeff != bfqq->wr_coeff) + bfqq->entity.prio_changed = 1; + } + } + + bfqq->last_idle_bklogged = jiffies; + bfqq->service_from_backlogged = 0; + bfq_clear_bfqq_softrt_update(bfqq); + + bfq_add_bfqq_busy(bfqd, bfqq); + + /* + * Expire in-service queue only if preemption may be needed + * for guarantees. 
In this respect, the function + * next_queue_may_preempt just checks a simple, necessary + * condition, and not a sufficient condition based on + * timestamps. In fact, for the latter condition to be + * evaluated, timestamps would need first to be updated, and + * this operation is quite costly (see the comments on the + * function bfq_bfqq_update_budg_for_activation). + */ + if (bfqd->in_service_queue && bfqq_wants_to_preempt && + bfqd->in_service_queue->wr_coeff < bfqq->wr_coeff && + next_queue_may_preempt(bfqd)) { + struct bfq_queue *in_serv = + bfqd->in_service_queue; + BUG_ON(in_serv == bfqq); + + bfq_bfqq_expire(bfqd, bfqd->in_service_queue, + false, BFQ_BFQQ_PREEMPTED); + } +} + +static void bfq_add_request(struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + struct bfq_data *bfqd = bfqq->bfqd; + struct request *next_rq, *prev; + unsigned int old_wr_coeff = bfqq->wr_coeff; + bool interactive = false; + + bfq_log_bfqq(bfqd, bfqq, "add_request: size %u %s", + blk_rq_sectors(rq), rq_is_sync(rq) ? "S" : "A"); + + if (bfqq->wr_coeff > 1) /* queue is being weight-raised */ + bfq_log_bfqq(bfqd, bfqq, + "raising period dur %u/%u msec, old coeff %u, w %d(%d)", + jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time), + bfqq->wr_coeff, + bfqq->entity.weight, bfqq->entity.orig_weight); + + bfqq->queued[rq_is_sync(rq)]++; + bfqd->queued++; + + elv_rb_add(&bfqq->sort_list, rq); + + /* + * Check if this request is a better next-to-serve candidate. + */ + prev = bfqq->next_rq; + next_rq = bfq_choose_req(bfqd, bfqq->next_rq, rq, bfqd->last_position); + BUG_ON(!next_rq); + bfqq->next_rq = next_rq; + + /* + * Adjust priority tree position, if next_rq changes. + */ + if (prev != bfqq->next_rq) + bfq_pos_tree_add_move(bfqd, bfqq); + + if (!bfq_bfqq_busy(bfqq)) /* switching to busy ... */ + bfq_bfqq_handle_idle_busy_switch(bfqd, bfqq, old_wr_coeff, + rq, &interactive); + else { + if (bfqd->low_latency && old_wr_coeff == 1 && !rq_is_sync(rq) && + time_is_before_jiffies( + bfqq->last_wr_start_finish + + bfqd->bfq_wr_min_inter_arr_async)) { + bfqq->wr_coeff = bfqd->bfq_wr_coeff; + bfqq->wr_cur_max_time = bfq_wr_duration(bfqd); + + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqd, bfqq, + "non-idle wrais starting, " + "wr_max_time %u wr_busy %d", + jiffies_to_msecs(bfqq->wr_cur_max_time), + bfqd->wr_busy_queues); + } + if (prev != bfqq->next_rq) + bfq_updated_next_req(bfqd, bfqq); + } + + /* + * Assign jiffies to last_wr_start_finish in the following + * cases: + * + * . if bfqq is not going to be weight-raised, because, for + * non weight-raised queues, last_wr_start_finish stores the + * arrival time of the last request; as of now, this piece + * of information is used only for deciding whether to + * weight-raise async queues + * + * . if bfqq is not weight-raised, because, if bfqq is now + * switching to weight-raised, then last_wr_start_finish + * stores the time when weight-raising starts + * + * . 
if bfqq is interactive, because, regardless of whether + * bfqq is currently weight-raised, the weight-raising + * period must start or restart (this case is considered + * separately because it is not detected by the above + * conditions, if bfqq is already weight-raised) + * + * last_wr_start_finish has to be updated also if bfqq is soft + * real-time, because the weight-raising period is constantly + * restarted on idle-to-busy transitions for these queues, but + * this is already done in bfq_bfqq_handle_idle_busy_switch if + * needed. + */ + if (bfqd->low_latency && + (old_wr_coeff == 1 || bfqq->wr_coeff == 1 || interactive)) + bfqq->last_wr_start_finish = jiffies; +} + +static struct request *bfq_find_rq_fmerge(struct bfq_data *bfqd, + struct bio *bio) +{ + struct task_struct *tsk = current; + struct bfq_io_cq *bic; + struct bfq_queue *bfqq; + + bic = bfq_bic_lookup(bfqd, tsk->io_context); + if (!bic) + return NULL; + + bfqq = bic_to_bfqq(bic, op_is_sync(bio->bi_opf)); + if (bfqq) + return elv_rb_find(&bfqq->sort_list, bio_end_sector(bio)); + + return NULL; +} + +static sector_t get_sdist(sector_t last_pos, struct request *rq) +{ + sector_t sdist = 0; + + if (last_pos) { + if (last_pos < blk_rq_pos(rq)) + sdist = blk_rq_pos(rq) - last_pos; + else + sdist = last_pos - blk_rq_pos(rq); + } + + return sdist; +} + +static void bfq_activate_request(struct request_queue *q, struct request *rq) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + bfqd->rq_in_driver++; +} + +static void bfq_deactivate_request(struct request_queue *q, struct request *rq) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + + BUG_ON(bfqd->rq_in_driver == 0); + bfqd->rq_in_driver--; +} + +static void bfq_remove_request(struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + struct bfq_data *bfqd = bfqq->bfqd; + const int sync = rq_is_sync(rq); + + BUG_ON(bfqq->entity.service > bfqq->entity.budget && + bfqq == bfqd->in_service_queue); + + if (bfqq->next_rq == rq) { + bfqq->next_rq = bfq_find_next_rq(bfqd, bfqq, rq); + bfq_updated_next_req(bfqd, bfqq); + } + + if (rq->queuelist.prev != &rq->queuelist) + list_del_init(&rq->queuelist); + BUG_ON(bfqq->queued[sync] == 0); + bfqq->queued[sync]--; + bfqd->queued--; + elv_rb_del(&bfqq->sort_list, rq); + + if (RB_EMPTY_ROOT(&bfqq->sort_list)) { + bfqq->next_rq = NULL; + + BUG_ON(bfqq->entity.budget < 0); + + if (bfq_bfqq_busy(bfqq) && bfqq != bfqd->in_service_queue) { + BUG_ON(bfqq->ref < 2); /* referred by rq and on tree */ + bfq_del_bfqq_busy(bfqd, bfqq, false); + /* + * bfqq emptied. In normal operation, when + * bfqq is empty, bfqq->entity.service and + * bfqq->entity.budget must contain, + * respectively, the service received and the + * budget used last time bfqq emptied. These + * facts do not hold in this case, as at least + * this last removal occurred while bfqq is + * not in service. To avoid inconsistencies, + * reset both bfqq->entity.service and + * bfqq->entity.budget, if bfqq has still a + * process that may issue I/O requests to it. + */ + bfqq->entity.budget = bfqq->entity.service = 0; + } + + /* + * Remove queue from request-position tree as it is empty. 
+ */ + if (bfqq->pos_root) { + rb_erase(&bfqq->pos_node, bfqq->pos_root); + bfqq->pos_root = NULL; + } + } + + if (rq->cmd_flags & REQ_META) { + BUG_ON(bfqq->meta_pending == 0); + bfqq->meta_pending--; + } + bfqg_stats_update_io_remove(bfqq_group(bfqq), rq->cmd_flags); +} + +static enum elv_merge bfq_merge(struct request_queue *q, struct request **req, + struct bio *bio) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + struct request *__rq; + + __rq = bfq_find_rq_fmerge(bfqd, bio); + if (__rq && elv_bio_merge_ok(__rq, bio)) { + *req = __rq; + return ELEVATOR_FRONT_MERGE; + } + + return ELEVATOR_NO_MERGE; +} + +static void bfq_merged_request(struct request_queue *q, struct request *req, + enum elv_merge type) +{ + if (type == ELEVATOR_FRONT_MERGE && + rb_prev(&req->rb_node) && + blk_rq_pos(req) < + blk_rq_pos(container_of(rb_prev(&req->rb_node), + struct request, rb_node))) { + struct bfq_queue *bfqq = RQ_BFQQ(req); + struct bfq_data *bfqd = bfqq->bfqd; + struct request *prev, *next_rq; + + /* Reposition request in its sort_list */ + elv_rb_del(&bfqq->sort_list, req); + elv_rb_add(&bfqq->sort_list, req); + /* Choose next request to be served for bfqq */ + prev = bfqq->next_rq; + next_rq = bfq_choose_req(bfqd, bfqq->next_rq, req, + bfqd->last_position); + BUG_ON(!next_rq); + bfqq->next_rq = next_rq; + /* + * If next_rq changes, update both the queue's budget to + * fit the new request and the queue's position in its + * rq_pos_tree. + */ + if (prev != bfqq->next_rq) { + bfq_updated_next_req(bfqd, bfqq); + bfq_pos_tree_add_move(bfqd, bfqq); + } + } +} + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static void bfq_bio_merged(struct request_queue *q, struct request *req, + struct bio *bio) +{ + bfqg_stats_update_io_merged(bfqq_group(RQ_BFQQ(req)), bio->bi_opf); +} +#endif + +static void bfq_merged_requests(struct request_queue *q, struct request *rq, + struct request *next) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq), *next_bfqq = RQ_BFQQ(next); + + /* + * If next and rq belong to the same bfq_queue and next is older + * than rq, then reposition rq in the fifo (by substituting next + * with rq). Otherwise, if next and rq belong to different + * bfq_queues, never reposition rq: in fact, we would have to + * reposition it with respect to next's position in its own fifo, + * which would most certainly be too expensive with respect to + * the benefits. + */ + if (bfqq == next_bfqq && + !list_empty(&rq->queuelist) && !list_empty(&next->queuelist) && + next->fifo_time < rq->fifo_time) { + list_del_init(&rq->queuelist); + list_replace_init(&next->queuelist, &rq->queuelist); + rq->fifo_time = next->fifo_time; + } + + if (bfqq->next_rq == next) + bfqq->next_rq = rq; + + bfq_remove_request(next); + bfqg_stats_update_io_merged(bfqq_group(bfqq), next->cmd_flags); +} + +/* Must be called with bfqq != NULL */ +static void bfq_bfqq_end_wr(struct bfq_queue *bfqq) +{ + BUG_ON(!bfqq); + + if (bfq_bfqq_busy(bfqq)) { + bfqq->bfqd->wr_busy_queues--; + BUG_ON(bfqq->bfqd->wr_busy_queues < 0); + } + bfqq->wr_coeff = 1; + bfqq->wr_cur_max_time = 0; + bfqq->last_wr_start_finish = jiffies; + /* + * Trigger a weight change on the next invocation of + * __bfq_entity_update_weight_prio. 
+ */ + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "end_wr: wrais ending at %lu, rais_max_time %u", + bfqq->last_wr_start_finish, + jiffies_to_msecs(bfqq->wr_cur_max_time)); + bfq_log_bfqq(bfqq->bfqd, bfqq, "end_wr: wr_busy %d", + bfqq->bfqd->wr_busy_queues); +} + +static void bfq_end_wr_async_queues(struct bfq_data *bfqd, + struct bfq_group *bfqg) +{ + int i, j; + + for (i = 0; i < 2; i++) + for (j = 0; j < IOPRIO_BE_NR; j++) + if (bfqg->async_bfqq[i][j]) + bfq_bfqq_end_wr(bfqg->async_bfqq[i][j]); + if (bfqg->async_idle_bfqq) + bfq_bfqq_end_wr(bfqg->async_idle_bfqq); +} + +static void bfq_end_wr(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq; + + spin_lock_irq(bfqd->queue->queue_lock); + + list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) + bfq_bfqq_end_wr(bfqq); + list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) + bfq_bfqq_end_wr(bfqq); + bfq_end_wr_async(bfqd); + + spin_unlock_irq(bfqd->queue->queue_lock); +} + +static sector_t bfq_io_struct_pos(void *io_struct, bool request) +{ + if (request) + return blk_rq_pos(io_struct); + else + return ((struct bio *)io_struct)->bi_iter.bi_sector; +} + +static int bfq_rq_close_to_sector(void *io_struct, bool request, + sector_t sector) +{ + return abs(bfq_io_struct_pos(io_struct, request) - sector) <= + BFQQ_CLOSE_THR; +} + +static struct bfq_queue *bfqq_find_close(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + sector_t sector) +{ + struct rb_root *root = &bfq_bfqq_to_bfqg(bfqq)->rq_pos_tree; + struct rb_node *parent, *node; + struct bfq_queue *__bfqq; + + if (RB_EMPTY_ROOT(root)) + return NULL; + + /* + * First, if we find a request starting at the end of the last + * request, choose it. + */ + __bfqq = bfq_rq_pos_tree_lookup(bfqd, root, sector, &parent, NULL); + if (__bfqq) + return __bfqq; + + /* + * If the exact sector wasn't found, the parent of the NULL leaf + * will contain the closest sector (rq_pos_tree sorted by + * next_request position). + */ + __bfqq = rb_entry(parent, struct bfq_queue, pos_node); + if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector)) + return __bfqq; + + if (blk_rq_pos(__bfqq->next_rq) < sector) + node = rb_next(&__bfqq->pos_node); + else + node = rb_prev(&__bfqq->pos_node); + if (!node) + return NULL; + + __bfqq = rb_entry(node, struct bfq_queue, pos_node); + if (bfq_rq_close_to_sector(__bfqq->next_rq, true, sector)) + return __bfqq; + + return NULL; +} + +static struct bfq_queue *bfq_find_close_cooperator(struct bfq_data *bfqd, + struct bfq_queue *cur_bfqq, + sector_t sector) +{ + struct bfq_queue *bfqq; + + /* + * We shall notice if some of the queues are cooperating, + * e.g., working closely on the same area of the device. In + * that case, we can group them together and: 1) don't waste + * time idling, and 2) serve the union of their requests in + * the best possible order for throughput. + */ + bfqq = bfqq_find_close(bfqd, cur_bfqq, sector); + if (!bfqq || bfqq == cur_bfqq) + return NULL; + + return bfqq; +} + +static struct bfq_queue * +bfq_setup_merge(struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) +{ + int process_refs, new_process_refs; + struct bfq_queue *__bfqq; + + /* + * If there are no process references on the new_bfqq, then it is + * unsafe to follow the ->new_bfqq chain as other bfqq's in the chain + * may have dropped their last reference (not just their last process + * reference). + */ + if (!bfqq_process_refs(new_bfqq)) + return NULL; + + /* Avoid a circular list and skip interim queue merges. 
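+	 * (i.e., walk the new_bfqq chain down to its last element and
+	 * merge with that queue, giving up if the chain loops back to
+	 * bfqq itself).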
*/ + while ((__bfqq = new_bfqq->new_bfqq)) { + if (__bfqq == bfqq) + return NULL; + new_bfqq = __bfqq; + } + + process_refs = bfqq_process_refs(bfqq); + new_process_refs = bfqq_process_refs(new_bfqq); + /* + * If the process for the bfqq has gone away, there is no + * sense in merging the queues. + */ + if (process_refs == 0 || new_process_refs == 0) + return NULL; + + bfq_log_bfqq(bfqq->bfqd, bfqq, "scheduling merge with queue %d", + new_bfqq->pid); + + /* + * Merging is just a redirection: the requests of the process + * owning one of the two queues are redirected to the other queue. + * The latter queue, in its turn, is set as shared if this is the + * first time that the requests of some process are redirected to + * it. + * + * We redirect bfqq to new_bfqq and not the opposite, because we + * are in the context of the process owning bfqq, hence we have + * the io_cq of this process. So we can immediately configure this + * io_cq to redirect the requests of the process to new_bfqq. + * + * NOTE, even if new_bfqq coincides with the in-service queue, the + * io_cq of new_bfqq is not available, because, if the in-service + * queue is shared, bfqd->in_service_bic may not point to the + * io_cq of the in-service queue. + * Redirecting the requests of the process owning bfqq to the + * currently in-service queue is in any case the best option, as + * we feed the in-service queue with new requests close to the + * last request served and, by doing so, hopefully increase the + * throughput. + */ + bfqq->new_bfqq = new_bfqq; + new_bfqq->ref += process_refs; + return new_bfqq; +} + +static bool bfq_may_be_close_cooperator(struct bfq_queue *bfqq, + struct bfq_queue *new_bfqq) +{ + if (bfq_class_idle(bfqq) || bfq_class_idle(new_bfqq) || + (bfqq->ioprio_class != new_bfqq->ioprio_class)) + return false; + + /* + * If either of the queues has already been detected as seeky, + * then merging it with the other queue is unlikely to lead to + * sequential I/O. + */ + if (BFQQ_SEEKY(bfqq) || BFQQ_SEEKY(new_bfqq)) + return false; + + /* + * Interleaved I/O is known to be done by (some) applications + * only for reads, so it does not make sense to merge async + * queues. + */ + if (!bfq_bfqq_sync(bfqq) || !bfq_bfqq_sync(new_bfqq)) + return false; + + return true; +} + +/* + * If this function returns true, then bfqq cannot be merged. The idea + * is that true cooperation happens very early after processes start + * to do I/O. Usually, late cooperations are just accidental false + * positives. In case bfqq is weight-raised, such false positives + * would evidently degrade latency guarantees for bfqq. + */ +static bool wr_from_too_long(struct bfq_queue *bfqq) +{ + return bfqq->wr_coeff > 1 && + time_is_before_jiffies(bfqq->last_wr_start_finish + + msecs_to_jiffies(100)); +} + +/* + * Attempt to schedule a merge of bfqq with the currently in-service + * queue or with a close queue among the scheduled queues. Return + * NULL if no merge was scheduled, a pointer to the shared bfq_queue + * structure otherwise. + * + * The OOM queue is not allowed to participate to cooperation: in fact, since + * the requests temporarily redirected to the OOM queue could be redirected + * again to dedicated queues at any time, the state needed to correctly + * handle merging with the OOM queue would be quite complex and expensive + * to maintain. Besides, in such a critical condition as an out of memory, + * the benefits of queue merging may be little relevant, or even negligible. 
+ * + * Weight-raised queues can be merged only if their weight-raising + * period has just started. In fact cooperating processes are usually + * started together. Thus, with this filter we avoid false positives + * that would jeopardize low-latency guarantees. + * + * WARNING: queue merging may impair fairness among non-weight raised + * queues, for at least two reasons: 1) the original weight of a + * merged queue may change during the merged state, 2) even being the + * weight the same, a merged queue may be bloated with many more + * requests than the ones produced by its originally-associated + * process. + */ +static struct bfq_queue * +bfq_setup_cooperator(struct bfq_data *bfqd, struct bfq_queue *bfqq, + void *io_struct, bool request) +{ + struct bfq_queue *in_service_bfqq, *new_bfqq; + + if (bfqq->new_bfqq) + return bfqq->new_bfqq; + + if (io_struct && wr_from_too_long(bfqq) && + likely(bfqq != &bfqd->oom_bfqq)) + bfq_log_bfqq(bfqd, bfqq, + "would have looked for coop, but bfq%d wr", + bfqq->pid); + + if (!io_struct || + wr_from_too_long(bfqq) || + unlikely(bfqq == &bfqd->oom_bfqq)) + return NULL; + + /* If there is only one backlogged queue, don't search. */ + if (bfqd->busy_queues == 1) + return NULL; + + in_service_bfqq = bfqd->in_service_queue; + + if (in_service_bfqq && in_service_bfqq != bfqq && + bfqd->in_service_bic && wr_from_too_long(in_service_bfqq) + && likely(in_service_bfqq == &bfqd->oom_bfqq)) + bfq_log_bfqq(bfqd, bfqq, + "would have tried merge with in-service-queue, but wr"); + + if (!in_service_bfqq || in_service_bfqq == bfqq || + !bfqd->in_service_bic || wr_from_too_long(in_service_bfqq) || + unlikely(in_service_bfqq == &bfqd->oom_bfqq)) + goto check_scheduled; + + if (bfq_rq_close_to_sector(io_struct, request, bfqd->last_position) && + bfqq->entity.parent == in_service_bfqq->entity.parent && + bfq_may_be_close_cooperator(bfqq, in_service_bfqq)) { + new_bfqq = bfq_setup_merge(bfqq, in_service_bfqq); + if (new_bfqq) + return new_bfqq; + } + /* + * Check whether there is a cooperator among currently scheduled + * queues. The only thing we need is that the bio/request is not + * NULL, as we need it to establish whether a cooperator exists. + */ +check_scheduled: + new_bfqq = bfq_find_close_cooperator(bfqd, bfqq, + bfq_io_struct_pos(io_struct, request)); + + BUG_ON(new_bfqq && bfqq->entity.parent != new_bfqq->entity.parent); + + if (new_bfqq && wr_from_too_long(new_bfqq) && + likely(new_bfqq != &bfqd->oom_bfqq) && + bfq_may_be_close_cooperator(bfqq, new_bfqq)) + bfq_log_bfqq(bfqd, bfqq, + "would have merged with bfq%d, but wr", + new_bfqq->pid); + + if (new_bfqq && !wr_from_too_long(new_bfqq) && + likely(new_bfqq != &bfqd->oom_bfqq) && + bfq_may_be_close_cooperator(bfqq, new_bfqq)) + return bfq_setup_merge(bfqq, new_bfqq); + + return NULL; +} + +static void bfq_bfqq_save_state(struct bfq_queue *bfqq) +{ + struct bfq_io_cq *bic = bfqq->bic; + + /* + * If !bfqq->bic, the queue is already shared or its requests + * have already been redirected to a shared queue; both idle window + * and weight raising state have already been saved. Do nothing. 
+ */ + if (!bic) + return; + + bic->saved_has_short_ttime = bfq_bfqq_has_short_ttime(bfqq); + bic->saved_IO_bound = bfq_bfqq_IO_bound(bfqq); + bic->saved_in_large_burst = bfq_bfqq_in_large_burst(bfqq); + bic->was_in_burst_list = !hlist_unhashed(&bfqq->burst_list_node); + if (unlikely(bfq_bfqq_just_created(bfqq) && + !bfq_bfqq_in_large_burst(bfqq))) { + /* + * bfqq being merged ritgh after being created: bfqq + * would have deserved interactive weight raising, but + * did not make it to be set in a weight-raised state, + * because of this early merge. Store directly the + * weight-raising state that would have been assigned + * to bfqq, so that to avoid that bfqq unjustly fails + * to enjoy weight raising if split soon. + */ + bic->saved_wr_coeff = bfqq->bfqd->bfq_wr_coeff; + bic->saved_wr_cur_max_time = bfq_wr_duration(bfqq->bfqd); + bic->saved_last_wr_start_finish = jiffies; + } else { + bic->saved_wr_coeff = bfqq->wr_coeff; + bic->saved_wr_start_at_switch_to_srt = + bfqq->wr_start_at_switch_to_srt; + bic->saved_last_wr_start_finish = bfqq->last_wr_start_finish; + bic->saved_wr_cur_max_time = bfqq->wr_cur_max_time; + } + BUG_ON(time_is_after_jiffies(bfqq->last_wr_start_finish)); +} + +static void bfq_get_bic_reference(struct bfq_queue *bfqq) +{ + /* + * If bfqq->bic has a non-NULL value, the bic to which it belongs + * is about to begin using a shared bfq_queue. + */ + if (bfqq->bic) + atomic_long_inc(&bfqq->bic->icq.ioc->refcount); +} + +static void +bfq_merge_bfqqs(struct bfq_data *bfqd, struct bfq_io_cq *bic, + struct bfq_queue *bfqq, struct bfq_queue *new_bfqq) +{ + bfq_log_bfqq(bfqd, bfqq, "merging with queue %lu", + (unsigned long) new_bfqq->pid); + /* Save weight raising and idle window of the merged queues */ + bfq_bfqq_save_state(bfqq); + bfq_bfqq_save_state(new_bfqq); + if (bfq_bfqq_IO_bound(bfqq)) + bfq_mark_bfqq_IO_bound(new_bfqq); + bfq_clear_bfqq_IO_bound(bfqq); + + /* + * If bfqq is weight-raised, then let new_bfqq inherit + * weight-raising. To reduce false positives, neglect the case + * where bfqq has just been created, but has not yet made it + * to be weight-raised (which may happen because EQM may merge + * bfqq even before bfq_add_request is executed for the first + * time for bfqq). Handling this case would however be very + * easy, thanks to the flag just_created. + */ + if (new_bfqq->wr_coeff == 1 && bfqq->wr_coeff > 1) { + new_bfqq->wr_coeff = bfqq->wr_coeff; + new_bfqq->wr_cur_max_time = bfqq->wr_cur_max_time; + new_bfqq->last_wr_start_finish = bfqq->last_wr_start_finish; + new_bfqq->wr_start_at_switch_to_srt = + bfqq->wr_start_at_switch_to_srt; + if (bfq_bfqq_busy(new_bfqq)) { + bfqd->wr_busy_queues++; + BUG_ON(bfqd->wr_busy_queues > bfqd->busy_queues); + } + + new_bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqd, new_bfqq, + "wr start after merge with %d, rais_max_time %u", + bfqq->pid, + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } + + if (bfqq->wr_coeff > 1) { /* bfqq has given its wr to new_bfqq */ + bfqq->wr_coeff = 1; + bfqq->entity.prio_changed = 1; + if (bfq_bfqq_busy(bfqq)) { + bfqd->wr_busy_queues--; + BUG_ON(bfqd->wr_busy_queues < 0); + } + + } + + bfq_log_bfqq(bfqd, new_bfqq, "merge_bfqqs: wr_busy %d", + bfqd->wr_busy_queues); + + /* + * Grab a reference to the bic, to prevent it from being destroyed + * before being possibly touched by a bfq_split_bfqq(). 
+ */ + bfq_get_bic_reference(bfqq); + bfq_get_bic_reference(new_bfqq); + /* + * Merge queues (that is, let bic redirect its requests to new_bfqq) + */ + bic_set_bfqq(bic, new_bfqq, 1); + bfq_mark_bfqq_coop(new_bfqq); + /* + * new_bfqq now belongs to at least two bics (it is a shared queue): + * set new_bfqq->bic to NULL. bfqq either: + * - does not belong to any bic any more, and hence bfqq->bic must + * be set to NULL, or + * - is a queue whose owning bics have already been redirected to a + * different queue, hence the queue is destined to not belong to + * any bic soon and bfqq->bic is already NULL (therefore the next + * assignment causes no harm). + */ + new_bfqq->bic = NULL; + bfqq->bic = NULL; + /* release process reference to bfqq */ + bfq_put_queue(bfqq); +} + +static int bfq_allow_bio_merge(struct request_queue *q, struct request *rq, + struct bio *bio) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + bool is_sync = op_is_sync(bio->bi_opf); + struct bfq_io_cq *bic; + struct bfq_queue *bfqq, *new_bfqq; + + /* + * Disallow merge of a sync bio into an async request. + */ + if (is_sync && !rq_is_sync(rq)) + return false; + + /* + * Lookup the bfqq that this bio will be queued with. Allow + * merge only if rq is queued there. + * Queue lock is held here. + */ + bic = bfq_bic_lookup(bfqd, current->io_context); + if (!bic) + return false; + + bfqq = bic_to_bfqq(bic, is_sync); + /* + * We take advantage of this function to perform an early merge + * of the queues of possible cooperating processes. + */ + if (bfqq) { + new_bfqq = bfq_setup_cooperator(bfqd, bfqq, bio, false); + if (new_bfqq) { + bfq_merge_bfqqs(bfqd, bic, bfqq, new_bfqq); + /* + * If we get here, the bio will be queued in the + * shared queue, i.e., new_bfqq, so use new_bfqq + * to decide whether bio and rq can be merged. + */ + bfqq = new_bfqq; + } + } + + return bfqq == RQ_BFQQ(rq); +} + +static int bfq_allow_rq_merge(struct request_queue *q, struct request *rq, + struct request *next) +{ + return RQ_BFQQ(rq) == RQ_BFQQ(next); +} + +/* + * Set the maximum time for the in-service queue to consume its + * budget. This prevents seeky processes from lowering the throughput. + * In practice, a time-slice service scheme is used with seeky + * processes. + */ +static void bfq_set_budget_timeout(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + unsigned int timeout_coeff; + + if (bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time) + timeout_coeff = 1; + else + timeout_coeff = bfqq->entity.weight / bfqq->entity.orig_weight; + + bfqd->last_budget_start = ktime_get(); + + bfqq->budget_timeout = jiffies + + bfqd->bfq_timeout * timeout_coeff; + + bfq_log_bfqq(bfqd, bfqq, "set budget_timeout %u", + jiffies_to_msecs(bfqd->bfq_timeout * timeout_coeff)); +} + +static void __bfq_set_in_service_queue(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + if (bfqq) { + bfqg_stats_update_avg_queue_size(bfqq_group(bfqq)); + bfq_mark_bfqq_must_alloc(bfqq); + bfq_clear_bfqq_fifo_expire(bfqq); + + bfqd->budgets_assigned = (bfqd->budgets_assigned*7 + 256) / 8; + + BUG_ON(bfqq == bfqd->in_service_queue); + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + + if (time_is_before_jiffies(bfqq->last_wr_start_finish) && + bfqq->wr_coeff > 1 && + bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && + time_is_before_jiffies(bfqq->budget_timeout)) { + /* + * For soft real-time queues, move the start + * of the weight-raising period forward by the + * time the queue has not received any + * service. 
Otherwise, a relatively long + * service delay is likely to cause the + * weight-raising period of the queue to end, + * because of the short duration of the + * weight-raising period of a soft real-time + * queue. It is worth noting that this move + * is not so dangerous for the other queues, + * because soft real-time queues are not + * greedy. + * + * To not add a further variable, we use the + * overloaded field budget_timeout to + * determine for how long the queue has not + * received service, i.e., how much time has + * elapsed since the queue expired. However, + * this is a little imprecise, because + * budget_timeout is set to jiffies if bfqq + * not only expires, but also remains with no + * request. + */ + if (time_after(bfqq->budget_timeout, + bfqq->last_wr_start_finish)) + bfqq->last_wr_start_finish += + jiffies - bfqq->budget_timeout; + else + bfqq->last_wr_start_finish = jiffies; + + if (time_is_after_jiffies(bfqq->last_wr_start_finish)) { + pr_crit( + "BFQ WARNING:last %lu budget %lu jiffies %lu", + bfqq->last_wr_start_finish, + bfqq->budget_timeout, + jiffies); + pr_crit("diff %lu", jiffies - + max_t(unsigned long, + bfqq->last_wr_start_finish, + bfqq->budget_timeout)); + bfqq->last_wr_start_finish = jiffies; + } + } + + bfq_set_budget_timeout(bfqd, bfqq); + bfq_log_bfqq(bfqd, bfqq, + "set_in_service_queue, cur-budget = %d", + bfqq->entity.budget); + } else + bfq_log(bfqd, "set_in_service_queue: NULL"); + + bfqd->in_service_queue = bfqq; +} + +/* + * Get and set a new queue for service. + */ +static struct bfq_queue *bfq_set_in_service_queue(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq = bfq_get_next_queue(bfqd); + + __bfq_set_in_service_queue(bfqd, bfqq); + return bfqq; +} + +static void bfq_arm_slice_timer(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq = bfqd->in_service_queue; + struct bfq_io_cq *bic; + u32 sl; + + BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list)); + + /* Processes have exited, don't wait. */ + bic = bfqd->in_service_bic; + if (!bic || atomic_read(&bic->icq.ioc->active_ref) == 0) + return; + + bfq_mark_bfqq_wait_request(bfqq); + + /* + * We don't want to idle for seeks, but we do want to allow + * fair distribution of slice time for a process doing back-to-back + * seeks. So allow a little bit of time for him to submit a new rq. + * + * To prevent processes with (partly) seeky workloads from + * being too ill-treated, grant them a small fraction of the + * assigned budget before reducing the waiting time to + * BFQ_MIN_TT. This happened to help reduce latency. + */ + sl = bfqd->bfq_slice_idle; + /* + * Unless the queue is being weight-raised or the scenario is + * asymmetric, grant only minimum idle time if the queue + * is seeky. A long idling is preserved for a weight-raised + * queue, or, more in general, in an asymemtric scenario, + * because a long idling is needed for guaranteeing to a queue + * its reserved share of the throughput (in particular, it is + * needed if the queue has a higher weight than some other + * queue). 
+ */ + if (BFQQ_SEEKY(bfqq) && bfqq->wr_coeff == 1 && + bfq_symmetric_scenario(bfqd)) + sl = min_t(u32, sl, BFQ_MIN_TT); + + bfqd->last_idling_start = ktime_get(); + hrtimer_start(&bfqd->idle_slice_timer, ns_to_ktime(sl), + HRTIMER_MODE_REL); + bfqg_stats_set_start_idle_time(bfqq_group(bfqq)); + bfq_log(bfqd, "arm idle: %ld/%ld ms", + sl / NSEC_PER_MSEC, bfqd->bfq_slice_idle / NSEC_PER_MSEC); +} + +/* + * In autotuning mode, max_budget is dynamically recomputed as the + * amount of sectors transferred in timeout at the estimated peak + * rate. This enables BFQ to utilize a full timeslice with a full + * budget, even if the in-service queue is served at peak rate. And + * this maximises throughput with sequential workloads. + */ +static unsigned long bfq_calc_max_budget(struct bfq_data *bfqd) +{ + return (u64)bfqd->peak_rate * USEC_PER_MSEC * + jiffies_to_msecs(bfqd->bfq_timeout)>>BFQ_RATE_SHIFT; +} + +/* + * Update parameters related to throughput and responsiveness, as a + * function of the estimated peak rate. See comments on + * bfq_calc_max_budget(), and on T_slow and T_fast arrays. + */ +static void update_thr_responsiveness_params(struct bfq_data *bfqd) +{ + int dev_type = blk_queue_nonrot(bfqd->queue); + + if (bfqd->bfq_user_max_budget == 0) { + bfqd->bfq_max_budget = + bfq_calc_max_budget(bfqd); + BUG_ON(bfqd->bfq_max_budget < 0); + bfq_log(bfqd, "new max_budget = %d", + bfqd->bfq_max_budget); + } + + if (bfqd->device_speed == BFQ_BFQD_FAST && + bfqd->peak_rate < device_speed_thresh[dev_type]) { + bfqd->device_speed = BFQ_BFQD_SLOW; + bfqd->RT_prod = R_slow[dev_type] * + T_slow[dev_type]; + } else if (bfqd->device_speed == BFQ_BFQD_SLOW && + bfqd->peak_rate > device_speed_thresh[dev_type]) { + bfqd->device_speed = BFQ_BFQD_FAST; + bfqd->RT_prod = R_fast[dev_type] * + T_fast[dev_type]; + } + + bfq_log(bfqd, +"dev_type %s dev_speed_class = %s (%llu sects/sec), thresh %llu setcs/sec", + dev_type == 0 ? "ROT" : "NONROT", + bfqd->device_speed == BFQ_BFQD_FAST ? "FAST" : "SLOW", + bfqd->device_speed == BFQ_BFQD_FAST ? + (USEC_PER_SEC*(u64)R_fast[dev_type])>>BFQ_RATE_SHIFT : + (USEC_PER_SEC*(u64)R_slow[dev_type])>>BFQ_RATE_SHIFT, + (USEC_PER_SEC*(u64)device_speed_thresh[dev_type])>> + BFQ_RATE_SHIFT); +} + +static void bfq_reset_rate_computation(struct bfq_data *bfqd, struct request *rq) +{ + if (rq != NULL) { /* new rq dispatch now, reset accordingly */ + bfqd->last_dispatch = bfqd->first_dispatch = ktime_get_ns() ; + bfqd->peak_rate_samples = 1; + bfqd->sequential_samples = 0; + bfqd->tot_sectors_dispatched = bfqd->last_rq_max_size = + blk_rq_sectors(rq); + } else /* no new rq dispatched, just reset the number of samples */ + bfqd->peak_rate_samples = 0; /* full re-init on next disp. */ + + bfq_log(bfqd, + "reset_rate_computation at end, sample %u/%u tot_sects %llu", + bfqd->peak_rate_samples, bfqd->sequential_samples, + bfqd->tot_sectors_dispatched); +} + +static void bfq_update_rate_reset(struct bfq_data *bfqd, struct request *rq) +{ + u32 rate, weight, divisor; + + /* + * For the convergence property to hold (see comments on + * bfq_update_peak_rate()) and for the assessment to be + * reliable, a minimum number of samples must be present, and + * a minimum amount of time must have elapsed. If not so, do + * not compute new rate. Just reset parameters, to get ready + * for a new evaluation attempt. 
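bfq_calc_max_budget() above simply integrates the estimated peak rate over one budget timeout and drops the fixed-point shift at the end. The following standalone sketch reproduces that arithmetic; the shift value of 16 and the 125 ms timeout are illustrative assumptions, and the timeout is taken directly in milliseconds instead of being converted from jiffies with jiffies_to_msecs():

#include <stdio.h>
#include <stdint.h>

#define USEC_PER_MSEC 1000ULL
#define BFQ_RATE_SHIFT 16  /* assumed fixed-point shift used for rates */

/*
 * peak_rate is in (sectors/usec) << BFQ_RATE_SHIFT; the result is the
 * number of sectors the device can transfer in one budget timeout.
 */
static uint64_t calc_max_budget(uint64_t peak_rate, unsigned int timeout_msec)
{
	return (peak_rate * USEC_PER_MSEC * timeout_msec) >> BFQ_RATE_SHIFT;
}

int main(void)
{
	/* Roughly 500 MB/s: 1 sector/usec, expressed in fixed point. */
	uint64_t peak_rate = 1ULL << BFQ_RATE_SHIFT;

	printf("max budget over 125 ms: %llu sectors\n",
	       (unsigned long long)calc_max_budget(peak_rate, 125));
	return 0;
}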
+ */
+ if (bfqd->peak_rate_samples < BFQ_RATE_MIN_SAMPLES ||
+ bfqd->delta_from_first < BFQ_RATE_MIN_INTERVAL) {
+ bfq_log(bfqd,
+ "update_rate_reset: only resetting, delta_first %lluus samples %d",
+ bfqd->delta_from_first>>10, bfqd->peak_rate_samples);
+ goto reset_computation;
+ }
+
+ /*
+ * If a new request completion has occurred after last
+ * dispatch, then, to approximate the rate at which requests
+ * have been served by the device, it is more precise to
+ * extend the observation interval to the last completion.
+ */
+ bfqd->delta_from_first =
+ max_t(u64, bfqd->delta_from_first,
+ bfqd->last_completion - bfqd->first_dispatch);
+
+ BUG_ON(bfqd->delta_from_first == 0);
+ /*
+ * Rate computed in sects/usec, and not sects/nsec, for
+ * precision issues.
+ */
+ rate = div64_ul(bfqd->tot_sectors_dispatched<<BFQ_RATE_SHIFT,
+ div_u64(bfqd->delta_from_first, NSEC_PER_USEC));
+
+ bfq_log(bfqd,
+"update_rate_reset: tot_sects %llu delta_first %lluus rate %llu sects/s (%d)",
+ bfqd->tot_sectors_dispatched, bfqd->delta_from_first>>10,
+ ((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
+ rate > 20<<BFQ_RATE_SHIFT);
+
+ /*
+ * Peak rate not updated if:
+ * - the percentage of sequential dispatches is below 3/4 of the
+ * total, and rate is below the current estimated peak rate, or
+ * - rate is unreasonably high (> 20M sectors/sec)
+ */
+ if ((bfqd->sequential_samples < (3 * bfqd->peak_rate_samples)>>2 &&
+ rate <= bfqd->peak_rate) ||
+ rate > 20<<BFQ_RATE_SHIFT) {
+ bfq_log(bfqd,
+ "update_rate_reset: goto reset, samples %u/%u rate/peak %llu/%llu",
+ bfqd->peak_rate_samples, bfqd->sequential_samples,
+ ((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
+ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
+ goto reset_computation;
+ } else {
+ bfq_log(bfqd,
+ "update_rate_reset: do update, samples %u/%u rate/peak %llu/%llu",
+ bfqd->peak_rate_samples, bfqd->sequential_samples,
+ ((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT),
+ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
+ }
+
+ /*
+ * We have to update the peak rate, at last! To this purpose,
+ * we use a low-pass filter. We compute the smoothing constant
+ * of the filter as a function of the 'weight' of the new
+ * measured rate.
+ *
+ * As can be seen in next formulas, we define this weight as a
+ * quantity proportional to how sequential the workload is,
+ * and to how long the observation time interval is.
+ *
+ * The weight runs from 0 to 8. The maximum value of the
+ * weight, 8, yields the minimum value for the smoothing
+ * constant. At this minimum value for the smoothing constant,
+ * the measured rate contributes for half of the next value of
+ * the estimated peak rate.
+ *
+ * So, the first step is to compute the weight as a function
+ * of how sequential the workload is. Note that the weight
+ * cannot reach 9, because bfqd->sequential_samples cannot
+ * become equal to bfqd->peak_rate_samples, which, in its
+ * turn, holds true because bfqd->sequential_samples is not
+ * incremented for the first sample.
+ */
+ weight = (9 * bfqd->sequential_samples) / bfqd->peak_rate_samples;
+
+ /*
+ * Second step: further refine the weight as a function of the
+ * duration of the observation interval.
+ */
+ weight = min_t(u32, 8,
+ div_u64(weight * bfqd->delta_from_first,
+ BFQ_RATE_REF_INTERVAL));
+
+ /*
+ * Divisor ranging from 10, for minimum weight, to 2, for
+ * maximum weight.
+ */
+ divisor = 10 - weight;
+ BUG_ON(divisor == 0);
+
+ /*
+ * Finally, update peak rate:
+ *
+ * peak_rate = peak_rate * (divisor-1) / divisor + rate / divisor
+ */
+ bfqd->peak_rate *= divisor-1;
+ bfqd->peak_rate /= divisor;
+ rate /= divisor; /* smoothing constant alpha = 1/divisor */
+
+ bfq_log(bfqd,
+ "update_rate_reset: divisor %d tmp_peak_rate %llu tmp_rate %u",
+ divisor,
+ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT),
+ (u32)((USEC_PER_SEC*(u64)rate)>>BFQ_RATE_SHIFT));
+
+ BUG_ON(bfqd->peak_rate == 0);
+ BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
+
+ bfqd->peak_rate += rate;
+ update_thr_responsiveness_params(bfqd);
+ BUG_ON(bfqd->peak_rate > 20<<BFQ_RATE_SHIFT);
+
+reset_computation:
+ bfq_reset_rate_computation(bfqd, rq);
+}
+
+/*
+ * Update the estimated peak read/write rate of the device, the main
+ * quantity used for the autotuning performed by
+ * update_thr_responsiveness_params() and bfq_calc_max_budget().
+ */
+static void bfq_update_peak_rate(struct bfq_data *bfqd, struct request *rq)
+{
+ u64 now_ns = ktime_get_ns();
+
+ if (bfqd->peak_rate_samples == 0) { /* first dispatch */
+ bfq_log(bfqd,
+ "update_peak_rate: goto reset, samples %d",
+ bfqd->peak_rate_samples) ;
+ bfq_reset_rate_computation(bfqd, rq);
+ goto update_last_values; /* will add one sample */
+ }
+
+ /*
+ * Device idle for very long: the observation interval lasting
+ * up to this dispatch cannot be a valid observation interval
+ * for computing a new peak rate (similarly to the late-
+ * completion event in bfq_completed_request()). Go to
+ * update_rate_and_reset to have the following three steps
+ * taken:
+ * - close the observation interval at the last (previous)
+ * request dispatch or completion
+ * - compute rate, if possible, for that observation interval
+ * - start a new observation interval with this dispatch
+ */
+ if (now_ns - bfqd->last_dispatch > 100*NSEC_PER_MSEC &&
+ bfqd->rq_in_driver == 0) {
+ bfq_log(bfqd,
+"update_peak_rate: jumping to updating&resetting delta_last %lluus samples %d",
+ (now_ns - bfqd->last_dispatch)>>10,
+ bfqd->peak_rate_samples) ;
+ goto update_rate_and_reset;
+ }
+
+ /* Update sampling information */
+ bfqd->peak_rate_samples++;
+
+ if ((bfqd->rq_in_driver > 0 ||
+ now_ns - bfqd->last_completion < BFQ_MIN_TT)
+ && get_sdist(bfqd->last_position, rq) < BFQQ_SEEK_THR)
+ bfqd->sequential_samples++;
+
+ bfqd->tot_sectors_dispatched += blk_rq_sectors(rq);
+
+ /* Reset max observed rq size every 32 dispatches */
+ if (likely(bfqd->peak_rate_samples % 32))
+ bfqd->last_rq_max_size =
+ max_t(u32, blk_rq_sectors(rq), bfqd->last_rq_max_size);
+ else
+ bfqd->last_rq_max_size = blk_rq_sectors(rq);
+
+ bfqd->delta_from_first = now_ns - bfqd->first_dispatch;
+
+ bfq_log(bfqd,
+ "update_peak_rate: added samples %u/%u tot_sects %llu delta_first %lluus",
+ bfqd->peak_rate_samples, bfqd->sequential_samples,
+ bfqd->tot_sectors_dispatched,
+ bfqd->delta_from_first>>10);
+
+ /* Target observation interval not yet reached, go on sampling */
+ if (bfqd->delta_from_first < BFQ_RATE_REF_INTERVAL)
+ goto update_last_values;
+
+update_rate_and_reset:
+ bfq_update_rate_reset(bfqd, rq);
+update_last_values:
+ bfqd->last_position = blk_rq_pos(rq) + blk_rq_sectors(rq);
+ bfqd->last_dispatch = now_ns;
+
+ bfq_log(bfqd,
+ "update_peak_rate: delta_first %lluus last_pos %llu peak_rate %llu",
+ (now_ns - bfqd->first_dispatch)>>10,
+ (unsigned long long) bfqd->last_position,
+ ((USEC_PER_SEC*(u64)bfqd->peak_rate)>>BFQ_RATE_SHIFT));
+ bfq_log(bfqd,
+ "update_peak_rate: samples at end %d", bfqd->peak_rate_samples);
+}
+
+/*
+ * Move request from internal lists to the dispatch list of the request queue
+ */
+static void bfq_dispatch_insert(struct request_queue *q, struct request *rq)
+{
+ struct bfq_queue *bfqq = RQ_BFQQ(rq);
+
+ /*
+ * For consistency, the next instruction should have been executed
+ * after removing the request from the queue and dispatching it.
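The peak-rate update performed by bfq_update_rate_reset() above is an exponentially weighted moving average whose smoothing constant depends on how sequential the sampled dispatches were and on how long the observation interval lasted. A self-contained sketch of that arithmetic, with plain integers standing in for the fixed-point rates and the reference interval passed in explicitly:

#include <stdio.h>
#include <stdint.h>

/*
 * One step of the peak-rate low-pass filter:
 *   peak_rate = peak_rate * (divisor - 1) / divisor + rate / divisor
 * where the divisor runs from 10 (weight 0) down to 2 (weight 8).
 */
static uint64_t filter_step(uint64_t peak_rate, uint64_t rate,
			    unsigned int sequential_samples,
			    unsigned int total_samples,
			    uint64_t delta_us, uint64_t ref_interval_us)
{
	unsigned int weight, divisor;

	/* Weight grows with the fraction of sequential dispatches... */
	weight = (9 * sequential_samples) / total_samples;
	/* ...and is further scaled by how long the observation interval was. */
	weight = weight * delta_us / ref_interval_us;
	if (weight > 8)
		weight = 8;

	divisor = 10 - weight;
	return peak_rate * (divisor - 1) / divisor + rate / divisor;
}

int main(void)
{
	/* Mostly sequential samples over a full reference interval:
	 * the new measurement contributes half of the new estimate. */
	printf("%llu\n", (unsigned long long)
	       filter_step(1000, 2000, 30, 32, 100000, 100000));
	/* Mostly random samples over a short interval:
	 * the old estimate dominates. */
	printf("%llu\n", (unsigned long long)
	       filter_step(1000, 2000, 4, 32, 10000, 100000));
	return 0;
}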
+ * We execute instead this instruction before bfq_remove_request() + * (and hence introduce a temporary inconsistency), for efficiency. + * In fact, in a forced_dispatch, this prevents two counters related + * to bfqq->dispatched to risk to be uselessly decremented if bfqq + * is not in service, and then to be incremented again after + * incrementing bfqq->dispatched. + */ + bfqq->dispatched++; + bfq_update_peak_rate(q->elevator->elevator_data, rq); + + bfq_remove_request(rq); + elv_dispatch_sort(q, rq); +} + +static void __bfq_bfqq_expire(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + BUG_ON(bfqq != bfqd->in_service_queue); + + /* + * If this bfqq is shared between multiple processes, check + * to make sure that those processes are still issuing I/Os + * within the mean seek distance. If not, it may be time to + * break the queues apart again. + */ + if (bfq_bfqq_coop(bfqq) && BFQQ_SEEKY(bfqq)) + bfq_mark_bfqq_split_coop(bfqq); + + if (RB_EMPTY_ROOT(&bfqq->sort_list)) { + if (bfqq->dispatched == 0) + /* + * Overloading budget_timeout field to store + * the time at which the queue remains with no + * backlog and no outstanding request; used by + * the weight-raising mechanism. + */ + bfqq->budget_timeout = jiffies; + + bfq_del_bfqq_busy(bfqd, bfqq, true); + } else { + bfq_requeue_bfqq(bfqd, bfqq, true); + /* + * Resort priority tree of potential close cooperators. + */ + bfq_pos_tree_add_move(bfqd, bfqq); + } + + /* + * All in-service entities must have been properly deactivated + * or requeued before executing the next function, which + * resets all in-service entites as no more in service. + */ + __bfq_bfqd_reset_in_service(bfqd); +} + +/** + * __bfq_bfqq_recalc_budget - try to adapt the budget to the @bfqq behavior. + * @bfqd: device data. + * @bfqq: queue to update. + * @reason: reason for expiration. + * + * Handle the feedback on @bfqq budget at queue expiration. + * See the body for detailed comments. + */ +static void __bfq_bfqq_recalc_budget(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + enum bfqq_expiration reason) +{ + struct request *next_rq; + int budget, min_budget; + + BUG_ON(bfqq != bfqd->in_service_queue); + + min_budget = bfq_min_budget(bfqd); + + if (bfqq->wr_coeff == 1) + budget = bfqq->max_budget; + else /* + * Use a constant, low budget for weight-raised queues, + * to help achieve a low latency. Keep it slightly higher + * than the minimum possible budget, to cause a little + * bit fewer expirations. + */ + budget = 2 * min_budget; + + bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last budg %d, budg left %d", + bfqq->entity.budget, bfq_bfqq_budget_left(bfqq)); + bfq_log_bfqq(bfqd, bfqq, "recalc_budg: last max_budg %d, min budg %d", + budget, bfq_min_budget(bfqd)); + bfq_log_bfqq(bfqd, bfqq, "recalc_budg: sync %d, seeky %d", + bfq_bfqq_sync(bfqq), BFQQ_SEEKY(bfqd->in_service_queue)); + + if (bfq_bfqq_sync(bfqq) && bfqq->wr_coeff == 1) { + switch (reason) { + /* + * Caveat: in all the following cases we trade latency + * for throughput. + */ + case BFQ_BFQQ_TOO_IDLE: + /* + * This is the only case where we may reduce + * the budget: if there is no request of the + * process still waiting for completion, then + * we assume (tentatively) that the timer has + * expired because the batch of requests of + * the process could have been served with a + * smaller budget. Hence, betting that + * process will behave in the same way when it + * becomes backlogged again, we reduce its + * next budget. 
As long as we guess right, + * this budget cut reduces the latency + * experienced by the process. + * + * However, if there are still outstanding + * requests, then the process may have not yet + * issued its next request just because it is + * still waiting for the completion of some of + * the still outstanding ones. So in this + * subcase we do not reduce its budget, on the + * contrary we increase it to possibly boost + * the throughput, as discussed in the + * comments to the BUDGET_TIMEOUT case. + */ + if (bfqq->dispatched > 0) /* still outstanding reqs */ + budget = min(budget * 2, bfqd->bfq_max_budget); + else { + if (budget > 5 * min_budget) + budget -= 4 * min_budget; + else + budget = min_budget; + } + break; + case BFQ_BFQQ_BUDGET_TIMEOUT: + /* + * We double the budget here because it gives + * the chance to boost the throughput if this + * is not a seeky process (and has bumped into + * this timeout because of, e.g., ZBR). + */ + budget = min(budget * 2, bfqd->bfq_max_budget); + break; + case BFQ_BFQQ_BUDGET_EXHAUSTED: + /* + * The process still has backlog, and did not + * let either the budget timeout or the disk + * idling timeout expire. Hence it is not + * seeky, has a short thinktime and may be + * happy with a higher budget too. So + * definitely increase the budget of this good + * candidate to boost the disk throughput. + */ + budget = min(budget * 4, bfqd->bfq_max_budget); + break; + case BFQ_BFQQ_NO_MORE_REQUESTS: + /* + * For queues that expire for this reason, it + * is particularly important to keep the + * budget close to the actual service they + * need. Doing so reduces the timestamp + * misalignment problem described in the + * comments in the body of + * __bfq_activate_entity. In fact, suppose + * that a queue systematically expires for + * BFQ_BFQQ_NO_MORE_REQUESTS and presents a + * new request in time to enjoy timestamp + * back-shifting. The larger the budget of the + * queue is with respect to the service the + * queue actually requests in each service + * slot, the more times the queue can be + * reactivated with the same virtual finish + * time. It follows that, even if this finish + * time is pushed to the system virtual time + * to reduce the consequent timestamp + * misalignment, the queue unjustly enjoys for + * many re-activations a lower finish time + * than all newly activated queues. + * + * The service needed by bfqq is measured + * quite precisely by bfqq->entity.service. + * Since bfqq does not enjoy device idling, + * bfqq->entity.service is equal to the number + * of sectors that the process associated with + * bfqq requested to read/write before waiting + * for request completions, or blocking for + * other reasons. + */ + budget = max_t(int, bfqq->entity.service, min_budget); + break; + default: + return; + } + } else if (!bfq_bfqq_sync(bfqq)) + /* + * Async queues get always the maximum possible + * budget, as for them we do not care about latency + * (in addition, their ability to dispatch is limited + * by the charging factor). + */ + budget = bfqd->bfq_max_budget; + + bfqq->max_budget = budget; + + if (bfqd->budgets_assigned >= bfq_stats_min_budgets && + !bfqd->bfq_user_max_budget) + bfqq->max_budget = min(bfqq->max_budget, bfqd->bfq_max_budget); + + /* + * If there is still backlog, then assign a new budget, making + * sure that it is large enough for the next request. Since + * the finish time of bfqq must be kept in sync with the + * budget, be sure to call __bfq_bfqq_expire() *after* this + * update. 
+ * + * If there is no backlog, then no need to update the budget; + * it will be updated on the arrival of a new request. + */ + next_rq = bfqq->next_rq; + if (next_rq) { + BUG_ON(reason == BFQ_BFQQ_TOO_IDLE || + reason == BFQ_BFQQ_NO_MORE_REQUESTS); + bfqq->entity.budget = max_t(unsigned long, bfqq->max_budget, + bfq_serv_to_charge(next_rq, bfqq)); + BUG_ON(!bfq_bfqq_busy(bfqq)); + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + } + + bfq_log_bfqq(bfqd, bfqq, "head sect: %u, new budget %d", + next_rq ? blk_rq_sectors(next_rq) : 0, + bfqq->entity.budget); +} + +/* + * Return true if the process associated with bfqq is "slow". The slow + * flag is used, in addition to the budget timeout, to reduce the + * amount of service provided to seeky processes, and thus reduce + * their chances to lower the throughput. More details in the comments + * on the function bfq_bfqq_expire(). + * + * An important observation is in order: as discussed in the comments + * on the function bfq_update_peak_rate(), with devices with internal + * queues, it is hard if ever possible to know when and for how long + * an I/O request is processed by the device (apart from the trivial + * I/O pattern where a new request is dispatched only after the + * previous one has been completed). This makes it hard to evaluate + * the real rate at which the I/O requests of each bfq_queue are + * served. In fact, for an I/O scheduler like BFQ, serving a + * bfq_queue means just dispatching its requests during its service + * slot (i.e., until the budget of the queue is exhausted, or the + * queue remains idle, or, finally, a timeout fires). But, during the + * service slot of a bfq_queue, around 100 ms at most, the device may + * be even still processing requests of bfq_queues served in previous + * service slots. On the opposite end, the requests of the in-service + * bfq_queue may be completed after the service slot of the queue + * finishes. + * + * Anyway, unless more sophisticated solutions are used + * (where possible), the sum of the sizes of the requests dispatched + * during the service slot of a bfq_queue is probably the only + * approximation available for the service received by the bfq_queue + * during its service slot. And this sum is the quantity used in this + * function to evaluate the I/O speed of a process. + */ +static bool bfq_bfqq_is_slow(struct bfq_data *bfqd, struct bfq_queue *bfqq, + bool compensate, enum bfqq_expiration reason, + unsigned long *delta_ms) +{ + ktime_t delta_ktime; + u32 delta_usecs; + bool slow = BFQQ_SEEKY(bfqq); /* if delta too short, use seekyness */ + + if (!bfq_bfqq_sync(bfqq)) + return false; + + if (compensate) + delta_ktime = bfqd->last_idling_start; + else + delta_ktime = ktime_get(); + delta_ktime = ktime_sub(delta_ktime, bfqd->last_budget_start); + delta_usecs = ktime_to_us(delta_ktime); + + /* don't use too short time intervals */ + if (delta_usecs < 1000) { + if (blk_queue_nonrot(bfqd->queue)) + /* + * give same worst-case guarantees as idling + * for seeky + */ + *delta_ms = BFQ_MIN_TT / NSEC_PER_MSEC; + else /* charge at least one seek */ + *delta_ms = bfq_slice_idle / NSEC_PER_MSEC; + + bfq_log(bfqd, "bfq_bfqq_is_slow: too short %u", delta_usecs); + + return slow; + } + + *delta_ms = delta_usecs / USEC_PER_MSEC; + + /* + * Use only long (> 20ms) intervals to filter out excessive + * spikes in service rate estimation. 
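The budget feedback applied by __bfq_bfqq_recalc_budget() above, for a sync non-weight-raised queue, reduces to a few multiplicative rules keyed on the expiration reason. A condensed userspace sketch of those rules; the enum labels and helper names here are illustrative, not the patch's identifiers:

#include <stdio.h>

enum expiration_reason {
	TOO_IDLE,          /* queue idled too long waiting for a new request */
	BUDGET_TIMEOUT,    /* budget timeout fired */
	BUDGET_EXHAUSTED,  /* whole budget consumed before timing out */
	NO_MORE_REQUESTS,  /* queue ran out of queued requests */
};

static int min_int(int a, int b) { return a < b ? a : b; }
static int max_int(int a, int b) { return a > b ? a : b; }

/* Next budget for a sync, non-weight-raised queue, mirroring the rules above. */
static int next_budget(enum expiration_reason reason, int budget, int service,
		       int dispatched, int min_budget, int max_budget)
{
	switch (reason) {
	case TOO_IDLE:
		if (dispatched > 0)             /* still outstanding requests */
			return min_int(budget * 2, max_budget);
		if (budget > 5 * min_budget)    /* shrink towards actual needs */
			return budget - 4 * min_budget;
		return min_budget;
	case BUDGET_TIMEOUT:
		return min_int(budget * 2, max_budget);
	case BUDGET_EXHAUSTED:
		return min_int(budget * 4, max_budget);
	case NO_MORE_REQUESTS:
		return max_int(service, min_budget);
	}
	return budget;
}

int main(void)
{
	printf("%d\n", next_budget(BUDGET_EXHAUSTED, 2000, 2000, 0, 512, 16384));
	printf("%d\n", next_budget(TOO_IDLE, 8000, 100, 0, 512, 16384));
	return 0;
}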
+ */ + if (delta_usecs > 20000) { + /* + * Caveat for rotational devices: processes doing I/O + * in the slower disk zones tend to be slow(er) even + * if not seeky. In this respect, the estimated peak + * rate is likely to be an average over the disk + * surface. Accordingly, to not be too harsh with + * unlucky processes, a process is deemed slow only if + * its rate has been lower than half of the estimated + * peak rate. + */ + slow = bfqq->entity.service < bfqd->bfq_max_budget / 2; + bfq_log(bfqd, "bfq_bfqq_is_slow: relative rate %d/%d", + bfqq->entity.service, bfqd->bfq_max_budget); + } + + bfq_log_bfqq(bfqd, bfqq, "bfq_bfqq_is_slow: slow %d", slow); + + return slow; +} + +/* + * To be deemed as soft real-time, an application must meet two + * requirements. First, the application must not require an average + * bandwidth higher than the approximate bandwidth required to playback or + * record a compressed high-definition video. + * The next function is invoked on the completion of the last request of a + * batch, to compute the next-start time instant, soft_rt_next_start, such + * that, if the next request of the application does not arrive before + * soft_rt_next_start, then the above requirement on the bandwidth is met. + * + * The second requirement is that the request pattern of the application is + * isochronous, i.e., that, after issuing a request or a batch of requests, + * the application stops issuing new requests until all its pending requests + * have been completed. After that, the application may issue a new batch, + * and so on. + * For this reason the next function is invoked to compute + * soft_rt_next_start only for applications that meet this requirement, + * whereas soft_rt_next_start is set to infinity for applications that do + * not. + * + * Unfortunately, even a greedy application may happen to behave in an + * isochronous way if the CPU load is high. In fact, the application may + * stop issuing requests while the CPUs are busy serving other processes, + * then restart, then stop again for a while, and so on. In addition, if + * the disk achieves a low enough throughput with the request pattern + * issued by the application (e.g., because the request pattern is random + * and/or the device is slow), then the application may meet the above + * bandwidth requirement too. To prevent such a greedy application to be + * deemed as soft real-time, a further rule is used in the computation of + * soft_rt_next_start: soft_rt_next_start must be higher than the current + * time plus the maximum time for which the arrival of a request is waited + * for when a sync queue becomes idle, namely bfqd->bfq_slice_idle. + * This filters out greedy applications, as the latter issue instead their + * next request as soon as possible after the last one has been completed + * (in contrast, when a batch of requests is completed, a soft real-time + * application spends some time processing data). + * + * Unfortunately, the last filter may easily generate false positives if + * only bfqd->bfq_slice_idle is used as a reference time interval and one + * or both the following cases occur: + * 1) HZ is so low that the duration of a jiffy is comparable to or higher + * than bfqd->bfq_slice_idle. This happens, e.g., on slow devices with + * HZ=100. + * 2) jiffies, instead of increasing at a constant rate, may stop increasing + * for a while, then suddenly 'jump' by several units to recover the lost + * increments. This seems to happen, e.g., inside virtual machines. 
+ * To address this issue, we do not use as a reference time interval just + * bfqd->bfq_slice_idle, but bfqd->bfq_slice_idle plus a few jiffies. In + * particular we add the minimum number of jiffies for which the filter + * seems to be quite precise also in embedded systems and KVM/QEMU virtual + * machines. + */ +static unsigned long bfq_bfqq_softrt_next_start(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + bfq_log_bfqq(bfqd, bfqq, +"softrt_next_start: service_blkg %lu soft_rate %u sects/sec interval %u", + bfqq->service_from_backlogged, + bfqd->bfq_wr_max_softrt_rate, + jiffies_to_msecs(HZ * bfqq->service_from_backlogged / + bfqd->bfq_wr_max_softrt_rate)); + + return max(bfqq->last_idle_bklogged + + HZ * bfqq->service_from_backlogged / + bfqd->bfq_wr_max_softrt_rate, + jiffies + nsecs_to_jiffies(bfqq->bfqd->bfq_slice_idle) + 4); +} + +/** + * bfq_bfqq_expire - expire a queue. + * @bfqd: device owning the queue. + * @bfqq: the queue to expire. + * @compensate: if true, compensate for the time spent idling. + * @reason: the reason causing the expiration. + * + * If the process associated with bfqq does slow I/O (e.g., because it + * issues random requests), we charge bfqq with the time it has been + * in service instead of the service it has received (see + * bfq_bfqq_charge_time for details on how this goal is achieved). As + * a consequence, bfqq will typically get higher timestamps upon + * reactivation, and hence it will be rescheduled as if it had + * received more service than what it has actually received. In the + * end, bfqq receives less service in proportion to how slowly its + * associated process consumes its budgets (and hence how seriously it + * tends to lower the throughput). In addition, this time-charging + * strategy guarantees time fairness among slow processes. In + * contrast, if the process associated with bfqq is not slow, we + * charge bfqq exactly with the service it has received. + * + * Charging time to the first type of queues and the exact service to + * the other has the effect of using the WF2Q+ policy to schedule the + * former on a timeslice basis, without violating service domain + * guarantees among the latter. + */ +static void bfq_bfqq_expire(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + bool compensate, + enum bfqq_expiration reason) +{ + bool slow; + unsigned long delta = 0; + struct bfq_entity *entity = &bfqq->entity; + int ref; + + BUG_ON(bfqq != bfqd->in_service_queue); + + /* + * Check whether the process is slow (see bfq_bfqq_is_slow). + */ + slow = bfq_bfqq_is_slow(bfqd, bfqq, compensate, reason, &delta); + + /* + * Increase service_from_backlogged before next statement, + * because the possible next invocation of + * bfq_bfqq_charge_time would likely inflate + * entity->service. In contrast, service_from_backlogged must + * contain real service, to enable the soft real-time + * heuristic to correctly compute the bandwidth consumed by + * bfqq. + */ + bfqq->service_from_backlogged += entity->service; + + /* + * As above explained, charge slow (typically seeky) and + * timed-out queues with the time and not the service + * received, to favor sequential workloads. + * + * Processes doing I/O in the slower disk zones will tend to + * be slow(er) even if not seeky. Therefore, since the + * estimated peak rate is actually an average over the disk + * surface, these processes may timeout just for bad luck. To + * avoid punishing them, do not charge time to processes that + * succeeded in consuming at least 2/3 of their budget. 
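The computation in bfq_bfqq_softrt_next_start() above has a simple closed form: the later of (a) the instant at which the queue's backlog would have been served at exactly the soft real-time rate threshold, and (b) now plus the idle window plus a few jiffies of slack. A sketch of that formula with illustrative numbers; HZ and the rate threshold are example values, the slice-idle window is passed directly in jiffies, and the 4-jiffy margin mirrors the code above:

#include <stdio.h>

/*
 * soft_rt_next_start = max(last_idle_bklogged +
 *                          HZ * service_from_backlogged / max_softrt_rate,
 *                          jiffies + slice_idle_jiffies + 4)
 * All times in jiffies; service in sectors; rate in sectors/second.
 */
static unsigned long softrt_next_start(unsigned long jiffies_now,
				       unsigned long last_idle_bklogged,
				       unsigned long service_sectors,
				       unsigned long max_softrt_rate,
				       unsigned long slice_idle_jiffies,
				       unsigned long hz)
{
	unsigned long bw_bound = last_idle_bklogged +
		hz * service_sectors / max_softrt_rate;
	unsigned long greed_bound = jiffies_now + slice_idle_jiffies + 4;

	return bw_bound > greed_bound ? bw_bound : greed_bound;
}

int main(void)
{
	/* A queue that read 7000 sectors against a 14000 sectors/s threshold
	 * must stay quiet for about half a second to look soft real-time. */
	printf("%lu\n", softrt_next_start(1000, 1000, 7000, 14000, 2, 250));
	/* A tiny batch: the anti-greedy bound (now + idle + 4) dominates. */
	printf("%lu\n", softrt_next_start(1000, 1000, 10, 14000, 2, 250));
	return 0;
}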
This + * allows BFQ to preserve enough elasticity to still perform + * bandwidth, and not time, distribution with little unlucky + * or quasi-sequential processes. + */ + if (bfqq->wr_coeff == 1 && + (slow || + (reason == BFQ_BFQQ_BUDGET_TIMEOUT && + bfq_bfqq_budget_left(bfqq) >= entity->budget / 3))) + bfq_bfqq_charge_time(bfqd, bfqq, delta); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + if (reason == BFQ_BFQQ_TOO_IDLE && + entity->service <= 2 * entity->budget / 10) + bfq_clear_bfqq_IO_bound(bfqq); + + if (bfqd->low_latency && bfqq->wr_coeff == 1) + bfqq->last_wr_start_finish = jiffies; + + if (bfqd->low_latency && bfqd->bfq_wr_max_softrt_rate > 0 && + RB_EMPTY_ROOT(&bfqq->sort_list)) { + /* + * If we get here, and there are no outstanding + * requests, then the request pattern is isochronous + * (see the comments on the function + * bfq_bfqq_softrt_next_start()). Thus we can compute + * soft_rt_next_start. If, instead, the queue still + * has outstanding requests, then we have to wait for + * the completion of all the outstanding requests to + * discover whether the request pattern is actually + * isochronous. + */ + BUG_ON(bfqd->busy_queues < 1); + if (bfqq->dispatched == 0) { + bfqq->soft_rt_next_start = + bfq_bfqq_softrt_next_start(bfqd, bfqq); + bfq_log_bfqq(bfqd, bfqq, "new soft_rt_next %lu", + bfqq->soft_rt_next_start); + } else { + /* + * The application is still waiting for the + * completion of one or more requests: + * prevent it from possibly being incorrectly + * deemed as soft real-time by setting its + * soft_rt_next_start to infinity. In fact, + * without this assignment, the application + * would be incorrectly deemed as soft + * real-time if: + * 1) it issued a new request before the + * completion of all its in-flight + * requests, and + * 2) at that time, its soft_rt_next_start + * happened to be in the past. + */ + bfqq->soft_rt_next_start = + bfq_greatest_from_now(); + /* + * Schedule an update of soft_rt_next_start to when + * the task may be discovered to be isochronous. + */ + bfq_mark_bfqq_softrt_update(bfqq); + } + } + + bfq_log_bfqq(bfqd, bfqq, + "expire (%d, slow %d, num_disp %d, short_ttime %d, weight %d)", + reason, slow, bfqq->dispatched, + bfq_bfqq_has_short_ttime(bfqq), entity->weight); + + /* + * Increase, decrease or leave budget unchanged according to + * reason. + */ + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + __bfq_bfqq_recalc_budget(bfqd, bfqq, reason); + BUG_ON(bfqq->next_rq == NULL && + bfqq->entity.budget < bfqq->entity.service); + ref = bfqq->ref; + __bfq_bfqq_expire(bfqd, bfqq); + + BUG_ON(ref > 1 && + !bfq_bfqq_busy(bfqq) && reason == BFQ_BFQQ_BUDGET_EXHAUSTED && + !bfq_class_idle(bfqq)); + + /* mark bfqq as waiting a request only if a bic still points to it */ + if (ref > 1 && !bfq_bfqq_busy(bfqq) && + reason != BFQ_BFQQ_BUDGET_TIMEOUT && + reason != BFQ_BFQQ_BUDGET_EXHAUSTED) + bfq_mark_bfqq_non_blocking_wait_rq(bfqq); +} + +/* + * Budget timeout is not implemented through a dedicated timer, but + * just checked on request arrivals and completions, as well as on + * idle timer expirations. + */ +static bool bfq_bfqq_budget_timeout(struct bfq_queue *bfqq) +{ + return time_is_before_eq_jiffies(bfqq->budget_timeout); +} + +/* + * If we expire a queue that is actively waiting (i.e., with the + * device idled) for the arrival of a new request, then we may incur + * the timestamp misalignment problem described in the body of the + * function __bfq_activate_entity. 
Hence we return true only if this + * condition does not hold, or if the queue is slow enough to deserve + * only to be kicked off for preserving a high throughput. + */ +static bool bfq_may_expire_for_budg_timeout(struct bfq_queue *bfqq) +{ + bfq_log_bfqq(bfqq->bfqd, bfqq, + "may_budget_timeout: wait_request %d left %d timeout %d", + bfq_bfqq_wait_request(bfqq), + bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3, + bfq_bfqq_budget_timeout(bfqq)); + + return (!bfq_bfqq_wait_request(bfqq) || + bfq_bfqq_budget_left(bfqq) >= bfqq->entity.budget / 3) + && + bfq_bfqq_budget_timeout(bfqq); +} + +/* + * For a queue that becomes empty, device idling is allowed only if + * this function returns true for that queue. As a consequence, since + * device idling plays a critical role for both throughput boosting + * and service guarantees, the return value of this function plays a + * critical role as well. + * + * In a nutshell, this function returns true only if idling is + * beneficial for throughput or, even if detrimental for throughput, + * idling is however necessary to preserve service guarantees (low + * latency, desired throughput distribution, ...). In particular, on + * NCQ-capable devices, this function tries to return false, so as to + * help keep the drives' internal queues full, whenever this helps the + * device boost the throughput without causing any service-guarantee + * issue. + * + * In more detail, the return value of this function is obtained by, + * first, computing a number of boolean variables that take into + * account throughput and service-guarantee issues, and, then, + * combining these variables in a logical expression. Most of the + * issues taken into account are not trivial. We discuss these issues + * while introducing the variables. + */ +static bool bfq_bfqq_may_idle(struct bfq_queue *bfqq) +{ + struct bfq_data *bfqd = bfqq->bfqd; + bool rot_without_queueing = + !blk_queue_nonrot(bfqd->queue) && !bfqd->hw_tag, + bfqq_sequential_and_IO_bound, + idling_boosts_thr, idling_boosts_thr_without_issues, + idling_needed_for_service_guarantees, + asymmetric_scenario; + + if (bfqd->strict_guarantees) + return true; + + /* + * Idling is performed only if slice_idle > 0. In addition, we + * do not idle if + * (a) bfqq is async + * (b) bfqq is in the idle io prio class: in this case we do + * not idle because we want to minimize the bandwidth that + * queues in this class can steal to higher-priority queues + */ + if (bfqd->bfq_slice_idle == 0 || !bfq_bfqq_sync(bfqq) || + bfq_class_idle(bfqq)) + return false; + + bfqq_sequential_and_IO_bound = !BFQQ_SEEKY(bfqq) && + bfq_bfqq_IO_bound(bfqq) && bfq_bfqq_has_short_ttime(bfqq); + /* + * The next variable takes into account the cases where idling + * boosts the throughput. + * + * The value of the variable is computed considering, first, that + * idling is virtually always beneficial for the throughput if: + * (a) the device is not NCQ-capable and rotational, or + * (b) regardless of the presence of NCQ, the device is rotational and + * the request pattern for bfqq is I/O-bound and sequential, or + * (c) regardless of whether it is rotational, the device is + * not NCQ-capable and the request pattern for bfqq is + * I/O-bound and sequential. + * + * Secondly, and in contrast to the above item (b), idling an + * NCQ-capable flash-based device would not boost the + * throughput even with sequential I/O; rather it would lower + * the throughput in proportion to how fast the device + * is. 
Accordingly, the next variable is true if any of the + * above conditions (a), (b) or (c) is true, and, in + * particular, happens to be false if bfqd is an NCQ-capable + * flash-based device. + */ + idling_boosts_thr = rot_without_queueing || + ((!blk_queue_nonrot(bfqd->queue) || !bfqd->hw_tag) && + bfqq_sequential_and_IO_bound); + + /* + * The value of the next variable, + * idling_boosts_thr_without_issues, is equal to that of + * idling_boosts_thr, unless a special case holds. In this + * special case, described below, idling may cause problems to + * weight-raised queues. + * + * When the request pool is saturated (e.g., in the presence + * of write hogs), if the processes associated with + * non-weight-raised queues ask for requests at a lower rate, + * then processes associated with weight-raised queues have a + * higher probability to get a request from the pool + * immediately (or at least soon) when they need one. Thus + * they have a higher probability to actually get a fraction + * of the device throughput proportional to their high + * weight. This is especially true with NCQ-capable drives, + * which enqueue several requests in advance, and further + * reorder internally-queued requests. + * + * For this reason, we force to false the value of + * idling_boosts_thr_without_issues if there are weight-raised + * busy queues. In this case, and if bfqq is not weight-raised, + * this guarantees that the device is not idled for bfqq (if, + * instead, bfqq is weight-raised, then idling will be + * guaranteed by another variable, see below). Combined with + * the timestamping rules of BFQ (see [1] for details), this + * behavior causes bfqq, and hence any sync non-weight-raised + * queue, to get a lower number of requests served, and thus + * to ask for a lower number of requests from the request + * pool, before the busy weight-raised queues get served + * again. This often mitigates starvation problems in the + * presence of heavy write workloads and NCQ, thereby + * guaranteeing a higher application and system responsiveness + * in these hostile scenarios. + */ + idling_boosts_thr_without_issues = idling_boosts_thr && + bfqd->wr_busy_queues == 0; + + /* + * There is then a case where idling must be performed not + * for throughput concerns, but to preserve service + * guarantees. + * + * To introduce this case, we can note that allowing the drive + * to enqueue more than one request at a time, and hence + * delegating de facto final scheduling decisions to the + * drive's internal scheduler, entails loss of control on the + * actual request service order. In particular, the critical + * situation is when requests from different processes happen + * to be present, at the same time, in the internal queue(s) + * of the drive. In such a situation, the drive, by deciding + * the service order of the internally-queued requests, does + * determine also the actual throughput distribution among + * these processes. But the drive typically has no notion or + * concern about per-process throughput distribution, and + * makes its decisions only on a per-request basis. Therefore, + * the service distribution enforced by the drive's internal + * scheduler is likely to coincide with the desired + * device-throughput distribution only in a completely + * symmetric scenario where: + * (i) each of these processes must get the same throughput as + * the others; + * (ii) all these processes have the same I/O pattern + * (either sequential or random). 
+ * In fact, in such a scenario, the drive will tend to treat + * the requests of each of these processes in about the same + * way as the requests of the others, and thus to provide + * each of these processes with about the same throughput + * (which is exactly the desired throughput distribution). In + * contrast, in any asymmetric scenario, device idling is + * certainly needed to guarantee that bfqq receives its + * assigned fraction of the device throughput (see [1] for + * details). + * + * We address this issue by controlling, actually, only the + * symmetry sub-condition (i), i.e., provided that + * sub-condition (i) holds, idling is not performed, + * regardless of whether sub-condition (ii) holds. In other + * words, only if sub-condition (i) holds, then idling is + * allowed, and the device tends to be prevented from queueing + * many requests, possibly of several processes. The reason + * for not controlling also sub-condition (ii) is that we + * exploit preemption to preserve guarantees in case of + * symmetric scenarios, even if (ii) does not hold, as + * explained in the next two paragraphs. + * + * Even if a queue, say Q, is expired when it remains idle, Q + * can still preempt the new in-service queue if the next + * request of Q arrives soon (see the comments on + * bfq_bfqq_update_budg_for_activation). If all queues and + * groups have the same weight, this form of preemption, + * combined with the hole-recovery heuristic described in the + * comments on function bfq_bfqq_update_budg_for_activation, + * are enough to preserve a correct bandwidth distribution in + * the mid term, even without idling. In fact, even if not + * idling allows the internal queues of the device to contain + * many requests, and thus to reorder requests, we can rather + * safely assume that the internal scheduler still preserves a + * minimum of mid-term fairness. The motivation for using + * preemption instead of idling is that, by not idling, + * service guarantees are preserved without minimally + * sacrificing throughput. In other words, both a high + * throughput and its desired distribution are obtained. + * + * More precisely, this preemption-based, idleless approach + * provides fairness in terms of IOPS, and not sectors per + * second. This can be seen with a simple example. Suppose + * that there are two queues with the same weight, but that + * the first queue receives requests of 8 sectors, while the + * second queue receives requests of 1024 sectors. In + * addition, suppose that each of the two queues contains at + * most one request at a time, which implies that each queue + * always remains idle after it is served. Finally, after + * remaining idle, each queue receives very quickly a new + * request. It follows that the two queues are served + * alternatively, preempting each other if needed. This + * implies that, although both queues have the same weight, + * the queue with large requests receives a service that is + * 1024/8 times as high as the service received by the other + * queue. + * + * On the other hand, device idling is performed, and thus + * pure sector-domain guarantees are provided, for the + * following queues, which are likely to need stronger + * throughput guarantees: weight-raised queues, and queues + * with a higher weight than other queues. When such queues + * are active, sub-condition (i) is false, which triggers + * device idling. + * + * According to the above considerations, the next variable is + * true (only) if sub-condition (i) holds. 
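The 8-sector versus 1024-sector example above is easy to check numerically: if the two queues complete requests at the same rate, the sector throughput they obtain differs exactly by the ratio of their request sizes. A short check of that arithmetic:

#include <stdio.h>

int main(void)
{
	unsigned int small_rq = 8;     /* sectors per request, queue A */
	unsigned int large_rq = 1024;  /* sectors per request, queue B */
	unsigned int iops = 100;       /* both queues get the same request rate */

	unsigned int thr_a = small_rq * iops;   /* sectors/s for queue A */
	unsigned int thr_b = large_rq * iops;   /* sectors/s for queue B */

	printf("A: %u sect/s, B: %u sect/s, ratio %u\n",
	       thr_a, thr_b, thr_b / thr_a);    /* ratio is 1024/8 = 128 */
	return 0;
}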
To compute the + * value of this variable, we not only use the return value of + * the function bfq_symmetric_scenario(), but also check + * whether bfqq is being weight-raised, because + * bfq_symmetric_scenario() does not take into account also + * weight-raised queues (see comments on + * bfq_weights_tree_add()). + * + * As a side note, it is worth considering that the above + * device-idling countermeasures may however fail in the + * following unlucky scenario: if idling is (correctly) + * disabled in a time period during which all symmetry + * sub-conditions hold, and hence the device is allowed to + * enqueue many requests, but at some later point in time some + * sub-condition stops to hold, then it may become impossible + * to let requests be served in the desired order until all + * the requests already queued in the device have been served. + */ + asymmetric_scenario = bfqq->wr_coeff > 1 || + !bfq_symmetric_scenario(bfqd); + + /* + * Finally, there is a case where maximizing throughput is the + * best choice even if it may cause unfairness toward + * bfqq. Such a case is when bfqq became active in a burst of + * queue activations. Queues that became active during a large + * burst benefit only from throughput, as discussed in the + * comments on bfq_handle_burst. Thus, if bfqq became active + * in a burst and not idling the device maximizes throughput, + * then the device must no be idled, because not idling the + * device provides bfqq and all other queues in the burst with + * maximum benefit. Combining this and the above case, we can + * now establish when idling is actually needed to preserve + * service guarantees. + */ + idling_needed_for_service_guarantees = + asymmetric_scenario && !bfq_bfqq_in_large_burst(bfqq); + + /* + * We have now all the components we need to compute the + * return value of the function, which is true only if idling + * either boosts the throughput (without issues), or is + * necessary to preserve service guarantees. + */ + bfq_log_bfqq(bfqd, bfqq, "may_idle: sync %d idling_boosts_thr %d", + bfq_bfqq_sync(bfqq), idling_boosts_thr); + + bfq_log_bfqq(bfqd, bfqq, + "may_idle: wr_busy %d boosts %d IO-bound %d guar %d", + bfqd->wr_busy_queues, + idling_boosts_thr_without_issues, + bfq_bfqq_IO_bound(bfqq), + idling_needed_for_service_guarantees); + + return idling_boosts_thr_without_issues || + idling_needed_for_service_guarantees; +} + +/* + * If the in-service queue is empty but the function bfq_bfqq_may_idle + * returns true, then: + * 1) the queue must remain in service and cannot be expired, and + * 2) the device must be idled to wait for the possible arrival of a new + * request for the queue. + * See the comments on the function bfq_bfqq_may_idle for the reasons + * why performing device idling is the best choice to boost the throughput + * and preserve service guarantees when bfq_bfqq_may_idle itself + * returns true. + */ +static bool bfq_bfqq_must_idle(struct bfq_queue *bfqq) +{ + return RB_EMPTY_ROOT(&bfqq->sort_list) && bfq_bfqq_may_idle(bfqq); +} + +/* + * Select a queue for service. If we have a current queue in service, + * check whether to continue servicing it, or retrieve and set a new one. 
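Stripped of the heuristics that compute its inputs, the decision taken by bfq_bfqq_may_idle() above is a combination of a handful of booleans. The sketch below condenses that combination; the parameters are illustrative stand-ins for state that the real function reads from bfq_data and bfq_queue:

#include <stdbool.h>
#include <stdio.h>

/* Condensed form of the idling decision for a sync queue that just emptied. */
static bool may_idle(bool strict_guarantees, bool slice_idle_enabled,
		     bool sync, bool class_idle,
		     bool rot_without_queueing, bool nonrot, bool hw_tag,
		     bool sequential_and_io_bound,
		     bool weight_raised, bool wr_busy_queues_present,
		     bool symmetric_scenario, bool in_large_burst)
{
	bool idling_boosts_thr, idling_boosts_thr_without_issues;
	bool asymmetric, idling_needed_for_service_guarantees;

	if (strict_guarantees)
		return true;
	if (!slice_idle_enabled || !sync || class_idle)
		return false;

	/* Throughput side: idling pays off on rotational devices without
	 * queueing, and for sequential I/O-bound queues on devices that
	 * are rotational or lack internal queueing. */
	idling_boosts_thr = rot_without_queueing ||
		((!nonrot || !hw_tag) && sequential_and_io_bound);
	/* ...but not while weight-raised queues are busy elsewhere. */
	idling_boosts_thr_without_issues = idling_boosts_thr &&
		!wr_busy_queues_present;

	/* Service-guarantee side: idle whenever the scenario is asymmetric,
	 * unless the queue belongs to a large burst of activations. */
	asymmetric = weight_raised || !symmetric_scenario;
	idling_needed_for_service_guarantees = asymmetric && !in_large_burst;

	return idling_boosts_thr_without_issues ||
	       idling_needed_for_service_guarantees;
}

int main(void)
{
	/* NCQ-capable flash device, symmetric scenario: do not idle. */
	printf("%d\n", may_idle(false, true, true, false,
				false, true, true, true,
				false, false, true, false));
	/* Same device, weight-raised queue: idle to preserve guarantees. */
	printf("%d\n", may_idle(false, true, true, false,
				false, true, true, true,
				true, false, true, false));
	return 0;
}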
+ */ +static struct bfq_queue *bfq_select_queue(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq; + struct request *next_rq; + enum bfqq_expiration reason = BFQ_BFQQ_BUDGET_TIMEOUT; + + bfqq = bfqd->in_service_queue; + if (!bfqq) + goto new_queue; + + bfq_log_bfqq(bfqd, bfqq, "select_queue: already in-service queue"); + + if (bfq_may_expire_for_budg_timeout(bfqq) && + !hrtimer_active(&bfqd->idle_slice_timer) && + !bfq_bfqq_must_idle(bfqq)) + goto expire; + +check_queue: + /* + * This loop is rarely executed more than once. Even when it + * happens, it is much more convenient to re-execute this loop + * than to return NULL and trigger a new dispatch to get a + * request served. + */ + next_rq = bfqq->next_rq; + /* + * If bfqq has requests queued and it has enough budget left to + * serve them, keep the queue, otherwise expire it. + */ + if (next_rq) { + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + + if (bfq_serv_to_charge(next_rq, bfqq) > + bfq_bfqq_budget_left(bfqq)) { + /* + * Expire the queue for budget exhaustion, + * which makes sure that the next budget is + * enough to serve the next request, even if + * it comes from the fifo expired path. + */ + reason = BFQ_BFQQ_BUDGET_EXHAUSTED; + goto expire; + } else { + /* + * The idle timer may be pending because we may + * not disable disk idling even when a new request + * arrives. + */ + if (bfq_bfqq_wait_request(bfqq)) { + BUG_ON(!hrtimer_active(&bfqd->idle_slice_timer)); + /* + * If we get here: 1) at least a new request + * has arrived but we have not disabled the + * timer because the request was too small, + * 2) then the block layer has unplugged + * the device, causing the dispatch to be + * invoked. + * + * Since the device is unplugged, now the + * requests are probably large enough to + * provide a reasonable throughput. + * So we disable idling. + */ + bfq_clear_bfqq_wait_request(bfqq); + hrtimer_try_to_cancel(&bfqd->idle_slice_timer); + bfqg_stats_update_idle_time(bfqq_group(bfqq)); + } + goto keep_queue; + } + } + + /* + * No requests pending. However, if the in-service queue is idling + * for a new request, or has requests waiting for a completion and + * may idle after their completion, then keep it anyway. 
+ */ + if (hrtimer_active(&bfqd->idle_slice_timer) || + (bfqq->dispatched != 0 && bfq_bfqq_may_idle(bfqq))) { + bfqq = NULL; + goto keep_queue; + } + + reason = BFQ_BFQQ_NO_MORE_REQUESTS; +expire: + bfq_bfqq_expire(bfqd, bfqq, false, reason); +new_queue: + bfqq = bfq_set_in_service_queue(bfqd); + if (bfqq) { + bfq_log_bfqq(bfqd, bfqq, "select_queue: checking new queue"); + goto check_queue; + } +keep_queue: + if (bfqq) + bfq_log_bfqq(bfqd, bfqq, "select_queue: returned this queue"); + else + bfq_log(bfqd, "select_queue: no queue returned"); + + return bfqq; +} + +static void bfq_update_wr_data(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + struct bfq_entity *entity = &bfqq->entity; + + if (bfqq->wr_coeff > 1) { /* queue is being weight-raised */ + BUG_ON(bfqq->wr_cur_max_time == bfqd->bfq_wr_rt_max_time && + time_is_after_jiffies(bfqq->last_wr_start_finish)); + + bfq_log_bfqq(bfqd, bfqq, + "raising period dur %u/%u msec, old coeff %u, w %d(%d)", + jiffies_to_msecs(jiffies - bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time), + bfqq->wr_coeff, + bfqq->entity.weight, bfqq->entity.orig_weight); + + BUG_ON(bfqq != bfqd->in_service_queue && entity->weight != + entity->orig_weight * bfqq->wr_coeff); + if (entity->prio_changed) + bfq_log_bfqq(bfqd, bfqq, "WARN: pending prio change"); + + /* + * If the queue was activated in a burst, or too much + * time has elapsed from the beginning of this + * weight-raising period, then end weight raising. + */ + if (bfq_bfqq_in_large_burst(bfqq)) + bfq_bfqq_end_wr(bfqq); + else if (time_is_before_jiffies(bfqq->last_wr_start_finish + + bfqq->wr_cur_max_time)) { + if (bfqq->wr_cur_max_time != bfqd->bfq_wr_rt_max_time || + time_is_before_jiffies(bfqq->wr_start_at_switch_to_srt + + bfq_wr_duration(bfqd))) + bfq_bfqq_end_wr(bfqq); + else { + switch_back_to_interactive_wr(bfqq, bfqd); + BUG_ON(time_is_after_jiffies( + bfqq->last_wr_start_finish)); + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqd, bfqq, + "back to interactive wr"); + } + } + } + /* + * To improve latency (for this or other queues), immediately + * update weight both if it must be raised and if it must be + * lowered. Since, entity may be on some active tree here, and + * might have a pending change of its ioprio class, invoke + * next function with the last parameter unset (see the + * comments on the function). + */ + if ((entity->weight > entity->orig_weight) != (bfqq->wr_coeff > 1)) + __bfq_entity_update_weight_prio(bfq_entity_service_tree(entity), + entity, false); +} + +/* + * Dispatch one request from bfqq, moving it to the request queue + * dispatch list. + */ +static int bfq_dispatch_request(struct bfq_data *bfqd, + struct bfq_queue *bfqq) +{ + int dispatched = 0; + struct request *rq = bfqq->next_rq; + unsigned long service_to_charge; + + BUG_ON(RB_EMPTY_ROOT(&bfqq->sort_list)); + BUG_ON(!rq); + service_to_charge = bfq_serv_to_charge(rq, bfqq); + + BUG_ON(service_to_charge > bfq_bfqq_budget_left(bfqq)); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + bfq_bfqq_served(bfqq, service_to_charge); + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + bfq_dispatch_insert(bfqd->queue, rq); + + /* + * If weight raising has to terminate for bfqq, then next + * function causes an immediate update of bfqq's weight, + * without waiting for next activation. 
As a consequence, on + * expiration, bfqq will be timestamped as if it had never been + * weight-raised during this service slot, even if it has + * received part or even most of the service as a + * weight-raised queue. This inflates bfqq's timestamps, which + * is beneficial, as bfqq is then more willing to leave the + * device immediately to possible other weight-raised queues. + */ + bfq_update_wr_data(bfqd, bfqq); + + bfq_log_bfqq(bfqd, bfqq, + "dispatched %u sec req (%llu), budg left %d", + blk_rq_sectors(rq), + (unsigned long long) blk_rq_pos(rq), + bfq_bfqq_budget_left(bfqq)); + + dispatched++; + + if (!bfqd->in_service_bic) { + atomic_long_inc(&RQ_BIC(rq)->icq.ioc->refcount); + bfqd->in_service_bic = RQ_BIC(rq); + BUG_ON(!bfqd->in_service_bic); + } + + if (bfqd->busy_queues > 1 && bfq_class_idle(bfqq)) + goto expire; + + return dispatched; + +expire: + bfq_bfqq_expire(bfqd, bfqq, false, BFQ_BFQQ_BUDGET_EXHAUSTED); + return dispatched; +} + +static int __bfq_forced_dispatch_bfqq(struct bfq_queue *bfqq) +{ + int dispatched = 0; + + while (bfqq->next_rq) { + bfq_dispatch_insert(bfqq->bfqd->queue, bfqq->next_rq); + dispatched++; + } + + BUG_ON(!list_empty(&bfqq->fifo)); + return dispatched; +} + +/* + * Drain our current requests. + * Used for barriers and when switching io schedulers on-the-fly. + */ +static int bfq_forced_dispatch(struct bfq_data *bfqd) +{ + struct bfq_queue *bfqq, *n; + struct bfq_service_tree *st; + int dispatched = 0; + + bfqq = bfqd->in_service_queue; + if (bfqq) + __bfq_bfqq_expire(bfqd, bfqq); + + /* + * Loop through classes, and be careful to leave the scheduler + * in a consistent state, as feedback mechanisms and vtime + * updates cannot be disabled during the process. + */ + list_for_each_entry_safe(bfqq, n, &bfqd->active_list, bfqq_list) { + st = bfq_entity_service_tree(&bfqq->entity); + + dispatched += __bfq_forced_dispatch_bfqq(bfqq); + + bfqq->max_budget = bfq_max_budget(bfqd); + bfq_forget_idle(st); + } + + BUG_ON(bfqd->busy_queues != 0); + + return dispatched; +} + +static int bfq_dispatch_requests(struct request_queue *q, int force) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + struct bfq_queue *bfqq; + + bfq_log(bfqd, "dispatch requests: %d busy queues", bfqd->busy_queues); + + if (bfqd->busy_queues == 0) + return 0; + + if (unlikely(force)) + return bfq_forced_dispatch(bfqd); + + /* + * Force device to serve one request at a time if + * strict_guarantees is true. Forcing this service scheme is + * currently the ONLY way to guarantee that the request + * service order enforced by the scheduler is respected by a + * queueing device. Otherwise the device is free even to make + * some unlucky request wait for as long as the device + * wishes. + * + * Of course, serving one request at a time may cause loss of + * throughput. + */ + if (bfqd->strict_guarantees && bfqd->rq_in_driver > 0) + return 0; + + bfqq = bfq_select_queue(bfqd); + if (!bfqq) + return 0; + + BUG_ON(bfqq->entity.budget < bfqq->entity.service); + + BUG_ON(bfq_bfqq_wait_request(bfqq)); + + if (!bfq_dispatch_request(bfqd, bfqq)) + return 0; + + bfq_log_bfqq(bfqd, bfqq, "dispatched %s request", + bfq_bfqq_sync(bfqq) ? "sync" : "async"); + + BUG_ON(bfqq->next_rq == NULL && + bfqq->entity.budget < bfqq->entity.service); + return 1; +} + +/* + * Task holds one reference to the queue, dropped when task exits. Each rq + * in-flight on this queue also holds a reference, dropped when rq is freed. + * + * Queue lock must be held here.
Recall not to use bfqq after calling + * this function on it. + */ +static void bfq_put_queue(struct bfq_queue *bfqq) +{ +#ifdef BFQ_GROUP_IOSCHED_ENABLED + struct bfq_group *bfqg = bfqq_group(bfqq); +#endif + + BUG_ON(bfqq->ref <= 0); + + bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p %d", bfqq, bfqq->ref); + bfqq->ref--; + if (bfqq->ref) + return; + + BUG_ON(rb_first(&bfqq->sort_list)); + BUG_ON(bfqq->allocated[READ] + bfqq->allocated[WRITE] != 0); + BUG_ON(bfqq->entity.tree); + BUG_ON(bfq_bfqq_busy(bfqq)); + + if (!hlist_unhashed(&bfqq->burst_list_node)) { + hlist_del_init(&bfqq->burst_list_node); + /* + * Decrement also burst size after the removal, if the + * process associated with bfqq is exiting, and thus + * does not contribute to the burst any longer. This + * decrement helps filter out false positives of large + * bursts, when some short-lived process (often due to + * the execution of commands by some service) happens + * to start and exit while a complex application is + * starting, and thus spawning several processes that + * do I/O (and that *must not* be treated as a large + * burst, see comments on bfq_handle_burst). + * + * In particular, the decrement is performed only if: + * 1) bfqq is not a merged queue, because, if it is, + * then this free of bfqq is not triggered by the exit + * of the process bfqq is associated with, but exactly + * by the fact that bfqq has just been merged. + * 2) burst_size is greater than 0, to handle + * unbalanced decrements. Unbalanced decrements may + * happen in te following case: bfqq is inserted into + * the current burst list--without incrementing + * bust_size--because of a split, but the current + * burst list is not the burst list bfqq belonged to + * (see comments on the case of a split in + * bfq_set_request). + */ + if (bfqq->bic && bfqq->bfqd->burst_size > 0) + bfqq->bfqd->burst_size--; + } + + bfq_log_bfqq(bfqq->bfqd, bfqq, "put_queue: %p freed", bfqq); + + kmem_cache_free(bfq_pool, bfqq); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + bfqg_put(bfqg); +#endif +} + +static void bfq_put_cooperator(struct bfq_queue *bfqq) +{ + struct bfq_queue *__bfqq, *next; + + /* + * If this queue was scheduled to merge with another queue, be + * sure to drop the reference taken on that queue (and others in + * the merge chain). See bfq_setup_merge and bfq_merge_bfqqs. + */ + __bfqq = bfqq->new_bfqq; + while (__bfqq) { + if (__bfqq == bfqq) + break; + next = __bfqq->new_bfqq; + bfq_put_queue(__bfqq); + __bfqq = next; + } +} + +static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq) +{ + if (bfqq == bfqd->in_service_queue) { + __bfq_bfqq_expire(bfqd, bfqq); + bfq_schedule_dispatch(bfqd); + } + + bfq_log_bfqq(bfqd, bfqq, "exit_bfqq: %p, %d", bfqq, bfqq->ref); + + bfq_put_cooperator(bfqq); + + bfq_put_queue(bfqq); /* release process reference */ +} + +static void bfq_init_icq(struct io_cq *icq) +{ + icq_to_bic(icq)->ttime.last_end_request = ktime_get_ns() - (1ULL<<32); +} + +static void bfq_exit_icq(struct io_cq *icq) +{ + struct bfq_io_cq *bic = icq_to_bic(icq); + struct bfq_data *bfqd = bic_to_bfqd(bic); + + if (bic_to_bfqq(bic, false)) { + bfq_exit_bfqq(bfqd, bic_to_bfqq(bic, false)); + bic_set_bfqq(bic, NULL, false); + } + + if (bic_to_bfqq(bic, true)) { + /* + * If the bic is using a shared queue, put the reference + * taken on the io_context when the bic started using a + * shared bfq_queue. 
+ */ + if (bfq_bfqq_coop(bic_to_bfqq(bic, true))) + put_io_context(icq->ioc); + bfq_exit_bfqq(bfqd, bic_to_bfqq(bic, true)); + bic_set_bfqq(bic, NULL, true); + } +} + +/* + * Update the entity prio values; note that the new values will not + * be used until the next (re)activation. + */ +static void bfq_set_next_ioprio_data(struct bfq_queue *bfqq, + struct bfq_io_cq *bic) +{ + struct task_struct *tsk = current; + int ioprio_class; + + ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); + switch (ioprio_class) { + default: + dev_err(bfqq->bfqd->queue->backing_dev_info->dev, + "bfq: bad prio class %d\n", ioprio_class); + case IOPRIO_CLASS_NONE: + /* + * No prio set, inherit CPU scheduling settings. + */ + bfqq->new_ioprio = task_nice_ioprio(tsk); + bfqq->new_ioprio_class = task_nice_ioclass(tsk); + break; + case IOPRIO_CLASS_RT: + bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio); + bfqq->new_ioprio_class = IOPRIO_CLASS_RT; + break; + case IOPRIO_CLASS_BE: + bfqq->new_ioprio = IOPRIO_PRIO_DATA(bic->ioprio); + bfqq->new_ioprio_class = IOPRIO_CLASS_BE; + break; + case IOPRIO_CLASS_IDLE: + bfqq->new_ioprio_class = IOPRIO_CLASS_IDLE; + bfqq->new_ioprio = 7; + break; + } + + if (bfqq->new_ioprio >= IOPRIO_BE_NR) { + pr_crit("bfq_set_next_ioprio_data: new_ioprio %d\n", + bfqq->new_ioprio); + BUG(); + } + + bfqq->entity.new_weight = bfq_ioprio_to_weight(bfqq->new_ioprio); + bfqq->entity.prio_changed = 1; + bfq_log_bfqq(bfqq->bfqd, bfqq, + "set_next_ioprio_data: bic_class %d prio %d class %d", + ioprio_class, bfqq->new_ioprio, bfqq->new_ioprio_class); +} + +static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio) +{ + struct bfq_data *bfqd = bic_to_bfqd(bic); + struct bfq_queue *bfqq; + unsigned long uninitialized_var(flags); + int ioprio = bic->icq.ioc->ioprio; + + /* + * This condition may trigger on a newly created bic, be sure to + * drop the lock before returning. 
+ */ + if (unlikely(!bfqd) || likely(bic->ioprio == ioprio)) + return; + + bic->ioprio = ioprio; + + bfqq = bic_to_bfqq(bic, false); + if (bfqq) { + /* release process reference on this queue */ + bfq_put_queue(bfqq); + bfqq = bfq_get_queue(bfqd, bio, BLK_RW_ASYNC, bic); + bic_set_bfqq(bic, bfqq, false); + bfq_log_bfqq(bfqd, bfqq, + "check_ioprio_change: bfqq %p %d", + bfqq, bfqq->ref); + } + + bfqq = bic_to_bfqq(bic, true); + if (bfqq) + bfq_set_next_ioprio_data(bfqq, bic); +} + +static void bfq_init_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct bfq_io_cq *bic, pid_t pid, int is_sync) +{ + RB_CLEAR_NODE(&bfqq->entity.rb_node); + INIT_LIST_HEAD(&bfqq->fifo); + INIT_HLIST_NODE(&bfqq->burst_list_node); + BUG_ON(!hlist_unhashed(&bfqq->burst_list_node)); + + bfqq->ref = 0; + bfqq->bfqd = bfqd; + + if (bic) + bfq_set_next_ioprio_data(bfqq, bic); + + if (is_sync) { + /* + * No need to mark as has_short_ttime if in + * idle_class, because no device idling is performed + * for queues in idle class + */ + if (!bfq_class_idle(bfqq)) + /* tentatively mark as has_short_ttime */ + bfq_mark_bfqq_has_short_ttime(bfqq); + bfq_mark_bfqq_sync(bfqq); + bfq_mark_bfqq_just_created(bfqq); + } else + bfq_clear_bfqq_sync(bfqq); + bfq_mark_bfqq_IO_bound(bfqq); + + /* Tentative initial value to trade off between thr and lat */ + bfqq->max_budget = (2 * bfq_max_budget(bfqd)) / 3; + bfqq->pid = pid; + + bfqq->wr_coeff = 1; + bfqq->last_wr_start_finish = jiffies; + bfqq->wr_start_at_switch_to_srt = bfq_smallest_from_now(); + bfqq->budget_timeout = bfq_smallest_from_now(); + bfqq->split_time = bfq_smallest_from_now(); + + /* + * Set to the value for which bfqq will not be deemed as + * soft rt when it becomes backlogged. + */ + bfqq->soft_rt_next_start = bfq_greatest_from_now(); + + /* first request is almost certainly seeky */ + bfqq->seek_history = 1; +} + +static struct bfq_queue **bfq_async_queue_prio(struct bfq_data *bfqd, + struct bfq_group *bfqg, + int ioprio_class, int ioprio) +{ + switch (ioprio_class) { + case IOPRIO_CLASS_RT: + return &bfqg->async_bfqq[0][ioprio]; + case IOPRIO_CLASS_NONE: + ioprio = IOPRIO_NORM; + /* fall through */ + case IOPRIO_CLASS_BE: + return &bfqg->async_bfqq[1][ioprio]; + case IOPRIO_CLASS_IDLE: + return &bfqg->async_idle_bfqq; + default: + BUG(); + } +} + +static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, + struct bio *bio, bool is_sync, + struct bfq_io_cq *bic) +{ + const int ioprio = IOPRIO_PRIO_DATA(bic->ioprio); + const int ioprio_class = IOPRIO_PRIO_CLASS(bic->ioprio); + struct bfq_queue **async_bfqq = NULL; + struct bfq_queue *bfqq; + struct bfq_group *bfqg; + + rcu_read_lock(); + + bfqg = bfq_find_set_group(bfqd, bio_blkcg(bio)); + if (!bfqg) { + bfqq = &bfqd->oom_bfqq; + goto out; + } + + if (!is_sync) { + async_bfqq = bfq_async_queue_prio(bfqd, bfqg, ioprio_class, + ioprio); + bfqq = *async_bfqq; + if (bfqq) + goto out; + } + + bfqq = kmem_cache_alloc_node(bfq_pool, + GFP_NOWAIT | __GFP_ZERO | __GFP_NOWARN, + bfqd->queue->node); + + if (bfqq) { + bfq_init_bfqq(bfqd, bfqq, bic, current->pid, + is_sync); + bfq_init_entity(&bfqq->entity, bfqg); + bfq_log_bfqq(bfqd, bfqq, "allocated"); + } else { + bfqq = &bfqd->oom_bfqq; + bfq_log_bfqq(bfqd, bfqq, "using oom bfqq"); + goto out; + } + + /* + * Pin the queue now that it's allocated, scheduler exit will + * prune it. + */ + if (async_bfqq) { + bfqq->ref++; /* + * Extra group reference, w.r.t. sync + * queue. 
This extra reference is removed + * only if bfqq->bfqg disappears, to + * guarantee that this queue is not freed + * until its group goes away. + */ + bfq_log_bfqq(bfqd, bfqq, "get_queue, bfqq not in async: %p, %d", + bfqq, bfqq->ref); + *async_bfqq = bfqq; + } + +out: + bfqq->ref++; /* get a process reference to this queue */ + bfq_log_bfqq(bfqd, bfqq, "get_queue, at end: %p, %d", bfqq, bfqq->ref); + rcu_read_unlock(); + return bfqq; +} + +static void bfq_update_io_thinktime(struct bfq_data *bfqd, + struct bfq_io_cq *bic) +{ + struct bfq_ttime *ttime = &bic->ttime; + u64 elapsed = ktime_get_ns() - bic->ttime.last_end_request; + + elapsed = min_t(u64, elapsed, 2 * bfqd->bfq_slice_idle); + + ttime->ttime_samples = (7*bic->ttime.ttime_samples + 256) / 8; + ttime->ttime_total = div_u64(7*ttime->ttime_total + 256*elapsed, 8); + ttime->ttime_mean = div64_ul(ttime->ttime_total + 128, + ttime->ttime_samples); +} + +static void +bfq_update_io_seektime(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct request *rq) +{ + bfqq->seek_history <<= 1; + bfqq->seek_history |= + get_sdist(bfqq->last_request_pos, rq) > BFQQ_SEEK_THR && + (!blk_queue_nonrot(bfqd->queue) || + blk_rq_sectors(rq) < BFQQ_SECT_THR_NONROT); +} + +static void bfq_update_has_short_ttime(struct bfq_data *bfqd, + struct bfq_queue *bfqq, + struct bfq_io_cq *bic) +{ + bool has_short_ttime = true; + + /* + * No need to update has_short_ttime if bfqq is async or in + * idle io prio class, or if bfq_slice_idle is zero, because + * no device idling is performed for bfqq in this case. + */ + if (!bfq_bfqq_sync(bfqq) || bfq_class_idle(bfqq) || + bfqd->bfq_slice_idle == 0) + return; + + /* Idle window just restored, statistics are meaningless. */ + if (time_is_after_eq_jiffies(bfqq->split_time + + bfqd->bfq_wr_min_idle_time)) + return; + + /* Think time is infinite if no process is linked to + * bfqq. Otherwise check average think time to + * decide whether to mark as has_short_ttime + */ + if (atomic_read(&bic->icq.ioc->active_ref) == 0 || + (bfq_sample_valid(bic->ttime.ttime_samples) && + bic->ttime.ttime_mean > bfqd->bfq_slice_idle)) + has_short_ttime = false; + + bfq_log_bfqq(bfqd, bfqq, "update_has_short_ttime: has_short_ttime %d", + has_short_ttime); + + if (has_short_ttime) + bfq_mark_bfqq_has_short_ttime(bfqq); + else + bfq_clear_bfqq_has_short_ttime(bfqq); +} + +/* + * Called when a new fs request (rq) is added to bfqq. Check if there's + * something we should do about it. + */ +static void bfq_rq_enqueued(struct bfq_data *bfqd, struct bfq_queue *bfqq, + struct request *rq) +{ + struct bfq_io_cq *bic = RQ_BIC(rq); + + if (rq->cmd_flags & REQ_META) + bfqq->meta_pending++; + + bfq_update_io_thinktime(bfqd, bic); + bfq_update_has_short_ttime(bfqd, bfqq, bic); + bfq_update_io_seektime(bfqd, bfqq, rq); + + bfq_log_bfqq(bfqd, bfqq, + "rq_enqueued: has_short_ttime=%d (seeky %d)", + bfq_bfqq_has_short_ttime(bfqq), BFQQ_SEEKY(bfqq)); + + bfqq->last_request_pos = blk_rq_pos(rq) + blk_rq_sectors(rq); + + if (bfqq == bfqd->in_service_queue && bfq_bfqq_wait_request(bfqq)) { + bool small_req = bfqq->queued[rq_is_sync(rq)] == 1 && + blk_rq_sectors(rq) < 32; + bool budget_timeout = bfq_bfqq_budget_timeout(bfqq); + + /* + * There is just this request queued: if the request + * is small and the queue is not to be expired, then + * just exit. + * + * In this way, if the device is being idled to wait + * for a new request from the in-service queue, we + * avoid unplugging the device and committing the + * device to serve just a small request. 
On the + * contrary, we wait for the block layer to decide + * when to unplug the device: hopefully, new requests + * will be merged to this one quickly, then the device + * will be unplugged and larger requests will be + * dispatched. + */ + if (small_req && !budget_timeout) + return; + + /* + * A large enough request arrived, or the queue is to + * be expired: in both cases disk idling is to be + * stopped, so clear wait_request flag and reset + * timer. + */ + bfq_clear_bfqq_wait_request(bfqq); + hrtimer_try_to_cancel(&bfqd->idle_slice_timer); + bfqg_stats_update_idle_time(bfqq_group(bfqq)); + + /* + * The queue is not empty, because a new request just + * arrived. Hence we can safely expire the queue, in + * case of budget timeout, without risking that the + * timestamps of the queue are not updated correctly. + * See [1] for more details. + */ + if (budget_timeout) + bfq_bfqq_expire(bfqd, bfqq, false, + BFQ_BFQQ_BUDGET_TIMEOUT); + + /* + * Let the request rip immediately, or let a new queue be + * selected if bfqq has just been expired. + */ + __blk_run_queue(bfqd->queue); + } +} + +static void bfq_insert_request(struct request_queue *q, struct request *rq) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + struct bfq_queue *bfqq = RQ_BFQQ(rq), *new_bfqq; + + assert_spin_locked(bfqd->queue->queue_lock); + + /* + * An unplug may trigger a requeue of a request from the device + * driver: make sure we are in process context while trying to + * merge two bfq_queues. + */ + if (!in_interrupt()) { + new_bfqq = bfq_setup_cooperator(bfqd, bfqq, rq, true); + if (new_bfqq) { + if (bic_to_bfqq(RQ_BIC(rq), 1) != bfqq) + new_bfqq = bic_to_bfqq(RQ_BIC(rq), 1); + /* + * Release the request's reference to the old bfqq + * and make sure one is taken to the shared queue. + */ + new_bfqq->allocated[rq_data_dir(rq)]++; + bfqq->allocated[rq_data_dir(rq)]--; + new_bfqq->ref++; + if (bic_to_bfqq(RQ_BIC(rq), 1) == bfqq) + bfq_merge_bfqqs(bfqd, RQ_BIC(rq), + bfqq, new_bfqq); + + bfq_clear_bfqq_just_created(bfqq); + /* + * rq is about to be enqueued into new_bfqq, + * release rq reference on bfqq + */ + bfq_put_queue(bfqq); + rq->elv.priv[1] = new_bfqq; + bfqq = new_bfqq; + } + } + + bfq_add_request(rq); + + rq->fifo_time = ktime_get_ns() + bfqd->bfq_fifo_expire[rq_is_sync(rq)]; + list_add_tail(&rq->queuelist, &bfqq->fifo); + + bfq_rq_enqueued(bfqd, bfqq, rq); +} + +static void bfq_update_hw_tag(struct bfq_data *bfqd) +{ + bfqd->max_rq_in_driver = max_t(int, bfqd->max_rq_in_driver, + bfqd->rq_in_driver); + + if (bfqd->hw_tag == 1) + return; + + /* + * This sample is valid if the number of outstanding requests + * is large enough to allow a queueing behavior. Note that the + * sum is not exact, as it's not taking into account deactivated + * requests. 
+ */ + if (bfqd->rq_in_driver + bfqd->queued < BFQ_HW_QUEUE_THRESHOLD) + return; + + if (bfqd->hw_tag_samples++ < BFQ_HW_QUEUE_SAMPLES) + return; + + bfqd->hw_tag = bfqd->max_rq_in_driver > BFQ_HW_QUEUE_THRESHOLD; + bfqd->max_rq_in_driver = 0; + bfqd->hw_tag_samples = 0; +} + +static void bfq_completed_request(struct request_queue *q, struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + struct bfq_data *bfqd = bfqq->bfqd; + u64 now_ns; + u32 delta_us; + + bfq_log_bfqq(bfqd, bfqq, "completed one req with %u sects left", + blk_rq_sectors(rq)); + + assert_spin_locked(bfqd->queue->queue_lock); + bfq_update_hw_tag(bfqd); + + BUG_ON(!bfqd->rq_in_driver); + BUG_ON(!bfqq->dispatched); + bfqd->rq_in_driver--; + bfqq->dispatched--; + bfqg_stats_update_completion(bfqq_group(bfqq), + rq_start_time_ns(rq), + rq_io_start_time_ns(rq), + rq->cmd_flags); + + if (!bfqq->dispatched && !bfq_bfqq_busy(bfqq)) { + BUG_ON(!RB_EMPTY_ROOT(&bfqq->sort_list)); + /* + * Set budget_timeout (which we overload to store the + * time at which the queue remains with no backlog and + * no outstanding request; used by the weight-raising + * mechanism). + */ + bfqq->budget_timeout = jiffies; + + bfq_weights_tree_remove(bfqd, &bfqq->entity, + &bfqd->queue_weights_tree); + } + + now_ns = ktime_get_ns(); + + RQ_BIC(rq)->ttime.last_end_request = now_ns; + + /* + * Using us instead of ns, to get a reasonable precision in + * computing rate in next check. + */ + delta_us = div_u64(now_ns - bfqd->last_completion, NSEC_PER_USEC); + + bfq_log(bfqd, "rq_completed: delta %uus/%luus max_size %u rate %llu/%llu", + delta_us, BFQ_MIN_TT/NSEC_PER_USEC, bfqd->last_rq_max_size, + (USEC_PER_SEC* + (u64)((bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us))>>BFQ_RATE_SHIFT, + (USEC_PER_SEC*(u64)(1UL<<(BFQ_RATE_SHIFT-10)))>>BFQ_RATE_SHIFT); + + /* + * If the request took rather long to complete, and, according + * to the maximum request size recorded, this completion latency + * implies that the request was certainly served at a very low + * rate (less than 1M sectors/sec), then the whole observation + * interval that lasts up to this time instant cannot be a + * valid time interval for computing a new peak rate. Invoke + * bfq_update_rate_reset to have the following three steps + * taken: + * - close the observation interval at the last (previous) + * request dispatch or completion + * - compute rate, if possible, for that observation interval + * - reset to zero samples, which will trigger a proper + * re-initialization of the observation interval on next + * dispatch + */ + if (delta_us > BFQ_MIN_TT/NSEC_PER_USEC && + (bfqd->last_rq_max_size<<BFQ_RATE_SHIFT)/delta_us < + 1UL<<(BFQ_RATE_SHIFT - 10)) + bfq_update_rate_reset(bfqd, NULL); + + bfqd->last_completion = now_ns; + + /* + * If we are waiting to discover whether the request pattern + * of the task associated with the queue is actually + * isochronous, and both requisites for this condition to hold + * are now satisfied, then compute soft_rt_next_start (see the + * comments on the function bfq_bfqq_softrt_next_start()). We + * schedule this delayed check when bfqq expires, if it still + * has in-flight requests. + */ + if (bfq_bfqq_softrt_update(bfqq) && bfqq->dispatched == 0 && + RB_EMPTY_ROOT(&bfqq->sort_list)) + bfqq->soft_rt_next_start = + bfq_bfqq_softrt_next_start(bfqd, bfqq); + + /* + * If this is the in-service queue, check if it needs to be expired, + * or if we want to idle in case it has no pending requests.
+ */ + if (bfqd->in_service_queue == bfqq) { + if (bfqq->dispatched == 0 && bfq_bfqq_must_idle(bfqq)) { + bfq_arm_slice_timer(bfqd); + goto out; + } else if (bfq_may_expire_for_budg_timeout(bfqq)) + bfq_bfqq_expire(bfqd, bfqq, false, + BFQ_BFQQ_BUDGET_TIMEOUT); + else if (RB_EMPTY_ROOT(&bfqq->sort_list) && + (bfqq->dispatched == 0 || + !bfq_bfqq_may_idle(bfqq))) + bfq_bfqq_expire(bfqd, bfqq, false, + BFQ_BFQQ_NO_MORE_REQUESTS); + } + + if (!bfqd->rq_in_driver) + bfq_schedule_dispatch(bfqd); + +out: + return; +} + +static int __bfq_may_queue(struct bfq_queue *bfqq) +{ + if (bfq_bfqq_wait_request(bfqq) && bfq_bfqq_must_alloc(bfqq)) { + bfq_clear_bfqq_must_alloc(bfqq); + return ELV_MQUEUE_MUST; + } + + return ELV_MQUEUE_MAY; +} + +static int bfq_may_queue(struct request_queue *q, unsigned int op) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + struct task_struct *tsk = current; + struct bfq_io_cq *bic; + struct bfq_queue *bfqq; + + /* + * Don't force setup of a queue from here, as a call to may_queue + * does not necessarily imply that a request actually will be + * queued. So just lookup a possibly existing queue, or return + * 'may queue' if that fails. + */ + bic = bfq_bic_lookup(bfqd, tsk->io_context); + if (!bic) + return ELV_MQUEUE_MAY; + + bfqq = bic_to_bfqq(bic, op_is_sync(op)); + if (bfqq) + return __bfq_may_queue(bfqq); + + return ELV_MQUEUE_MAY; +} + +/* + * Queue lock held here. + */ +static void bfq_put_request(struct request *rq) +{ + struct bfq_queue *bfqq = RQ_BFQQ(rq); + + if (bfqq) { + const int rw = rq_data_dir(rq); + + BUG_ON(!bfqq->allocated[rw]); + bfqq->allocated[rw]--; + + rq->elv.priv[0] = NULL; + rq->elv.priv[1] = NULL; + + bfq_log_bfqq(bfqq->bfqd, bfqq, "put_request %p, %d", + bfqq, bfqq->ref); + bfq_put_queue(bfqq); + } +} + +/* + * Returns NULL if a new bfqq should be allocated, or the old bfqq if this + * was the last process referring to that bfqq. + */ +static struct bfq_queue * +bfq_split_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq) +{ + bfq_log_bfqq(bfqq->bfqd, bfqq, "splitting queue"); + + put_io_context(bic->icq.ioc); + + if (bfqq_process_refs(bfqq) == 1) { + bfqq->pid = current->pid; + bfq_clear_bfqq_coop(bfqq); + bfq_clear_bfqq_split_coop(bfqq); + return bfqq; + } + + bic_set_bfqq(bic, NULL, 1); + + bfq_put_cooperator(bfqq); + + bfq_put_queue(bfqq); + return NULL; +} + +/* + * Allocate bfq data structures associated with this request. 
+ */ +static int bfq_set_request(struct request_queue *q, struct request *rq, + struct bio *bio, gfp_t gfp_mask) +{ + struct bfq_data *bfqd = q->elevator->elevator_data; + struct bfq_io_cq *bic = icq_to_bic(rq->elv.icq); + const int rw = rq_data_dir(rq); + const int is_sync = rq_is_sync(rq); + struct bfq_queue *bfqq; + unsigned long flags; + bool bfqq_already_existing = false, split = false; + + spin_lock_irqsave(q->queue_lock, flags); + + if (!bic) + goto queue_fail; + + bfq_check_ioprio_change(bic, bio); + + bfq_bic_update_cgroup(bic, bio); + +new_queue: + bfqq = bic_to_bfqq(bic, is_sync); + if (!bfqq || bfqq == &bfqd->oom_bfqq) { + if (bfqq) + bfq_put_queue(bfqq); + bfqq = bfq_get_queue(bfqd, bio, is_sync, bic); + BUG_ON(!hlist_unhashed(&bfqq->burst_list_node)); + + bic_set_bfqq(bic, bfqq, is_sync); + if (split && is_sync) { + bfq_log_bfqq(bfqd, bfqq, + "set_request: was_in_list %d " + "was_in_large_burst %d " + "large burst in progress %d", + bic->was_in_burst_list, + bic->saved_in_large_burst, + bfqd->large_burst); + + if ((bic->was_in_burst_list && bfqd->large_burst) || + bic->saved_in_large_burst) { + bfq_log_bfqq(bfqd, bfqq, + "set_request: marking in " + "large burst"); + bfq_mark_bfqq_in_large_burst(bfqq); + } else { + bfq_log_bfqq(bfqd, bfqq, + "set_request: clearing in " + "large burst"); + bfq_clear_bfqq_in_large_burst(bfqq); + if (bic->was_in_burst_list) + /* + * If bfqq was in the current + * burst list before being + * merged, then we have to add + * it back. And we do not need + * to increase burst_size, as + * we did not decrement + * burst_size when we removed + * bfqq from the burst list as + * a consequence of a merge + * (see comments in + * bfq_put_queue). In this + * respect, it would be rather + * costly to know whether the + * current burst list is still + * the same burst list from + * which bfqq was removed on + * the merge. To avoid this + * cost, if bfqq was in a + * burst list, then we add + * bfqq to the current burst + * list without any further + * check. This can cause + * inappropriate insertions, + * but rarely enough to not + * harm the detection of large + * bursts significantly. + */ + hlist_add_head(&bfqq->burst_list_node, + &bfqd->burst_list); + } + bfqq->split_time = jiffies; + } + } else { + /* If the queue was seeky for too long, break it apart. */ + if (bfq_bfqq_coop(bfqq) && bfq_bfqq_split_coop(bfqq)) { + bfq_log_bfqq(bfqd, bfqq, "breaking apart bfqq"); + + /* Update bic before losing reference to bfqq */ + if (bfq_bfqq_in_large_burst(bfqq)) + bic->saved_in_large_burst = true; + + bfqq = bfq_split_bfqq(bic, bfqq); + split = true; + if (!bfqq) + goto new_queue; + else + bfqq_already_existing = true; + } + } + + bfqq->allocated[rw]++; + bfqq->ref++; + bfq_log_bfqq(bfqd, bfqq, "set_request: bfqq %p, %d", bfqq, bfqq->ref); + + rq->elv.priv[0] = bic; + rq->elv.priv[1] = bfqq; + + /* + * If a bfq_queue has only one process reference, it is owned + * by only one bfq_io_cq: we can set the bic field of the + * bfq_queue to the address of that structure. Also, if the + * queue has just been split, mark a flag so that the + * information is available to the other scheduler hooks. + */ + if (likely(bfqq != &bfqd->oom_bfqq) && bfqq_process_refs(bfqq) == 1) { + bfqq->bic = bic; + if (split) { + /* + * If the queue has just been split from a shared + * queue, restore the idle window and the possible + * weight raising period. 
+ */ + bfq_bfqq_resume_state(bfqq, bfqd, bic, + bfqq_already_existing); + } + } + + if (unlikely(bfq_bfqq_just_created(bfqq))) + bfq_handle_burst(bfqd, bfqq); + + spin_unlock_irqrestore(q->queue_lock, flags); + + return 0; + +queue_fail: + bfq_schedule_dispatch(bfqd); + spin_unlock_irqrestore(q->queue_lock, flags); + + return 1; +} + +static void bfq_kick_queue(struct work_struct *work) +{ + struct bfq_data *bfqd = + container_of(work, struct bfq_data, unplug_work); + struct request_queue *q = bfqd->queue; + + spin_lock_irq(q->queue_lock); + __blk_run_queue(q); + spin_unlock_irq(q->queue_lock); +} + +/* + * Handler of the expiration of the timer running if the in-service queue + * is idling inside its time slice. + */ +static enum hrtimer_restart bfq_idle_slice_timer(struct hrtimer *timer) +{ + struct bfq_data *bfqd = container_of(timer, struct bfq_data, + idle_slice_timer); + struct bfq_queue *bfqq; + unsigned long flags; + enum bfqq_expiration reason; + + spin_lock_irqsave(bfqd->queue->queue_lock, flags); + + bfqq = bfqd->in_service_queue; + /* + * Theoretical race here: the in-service queue can be NULL or + * different from the queue that was idling if the timer handler + * spins on the queue_lock and a new request arrives for the + * current queue and there is a full dispatch cycle that changes + * the in-service queue. This can hardly happen, but in the worst + * case we just expire a queue too early. + */ + if (bfqq) { + bfq_log_bfqq(bfqd, bfqq, "slice_timer expired"); + bfq_clear_bfqq_wait_request(bfqq); + + if (bfq_bfqq_budget_timeout(bfqq)) + /* + * Also here the queue can be safely expired + * for budget timeout without wasting + * guarantees + */ + reason = BFQ_BFQQ_BUDGET_TIMEOUT; + else if (bfqq->queued[0] == 0 && bfqq->queued[1] == 0) + /* + * The queue may not be empty upon timer expiration, + * because we may not disable the timer when the + * first request of the in-service queue arrives + * during disk idling. + */ + reason = BFQ_BFQQ_TOO_IDLE; + else + goto schedule_dispatch; + + bfq_bfqq_expire(bfqd, bfqq, true, reason); + } + +schedule_dispatch: + bfq_schedule_dispatch(bfqd); + + spin_unlock_irqrestore(bfqd->queue->queue_lock, flags); + return HRTIMER_NORESTART; +} + +static void bfq_shutdown_timer_wq(struct bfq_data *bfqd) +{ + hrtimer_cancel(&bfqd->idle_slice_timer); + cancel_work_sync(&bfqd->unplug_work); +} + +static void __bfq_put_async_bfqq(struct bfq_data *bfqd, + struct bfq_queue **bfqq_ptr) +{ + struct bfq_group *root_group = bfqd->root_group; + struct bfq_queue *bfqq = *bfqq_ptr; + + bfq_log(bfqd, "put_async_bfqq: %p", bfqq); + if (bfqq) { + bfq_bfqq_move(bfqd, bfqq, root_group); + bfq_log_bfqq(bfqd, bfqq, "put_async_bfqq: putting %p, %d", + bfqq, bfqq->ref); + bfq_put_queue(bfqq); + *bfqq_ptr = NULL; + } +} + +/* + * Release all the bfqg references to its async queues. If we are + * deallocating the group these queues may still contain requests, so + * we reparent them to the root cgroup (i.e., the only one that will + * exist for sure until all the requests on a device are gone). 
+ */ +static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg) +{ + int i, j; + + for (i = 0; i < 2; i++) + for (j = 0; j < IOPRIO_BE_NR; j++) + __bfq_put_async_bfqq(bfqd, &bfqg->async_bfqq[i][j]); + + __bfq_put_async_bfqq(bfqd, &bfqg->async_idle_bfqq); +} + +static void bfq_exit_queue(struct elevator_queue *e) +{ + struct bfq_data *bfqd = e->elevator_data; + struct request_queue *q = bfqd->queue; + struct bfq_queue *bfqq, *n; + + bfq_shutdown_timer_wq(bfqd); + + spin_lock_irq(q->queue_lock); + + BUG_ON(bfqd->in_service_queue); + list_for_each_entry_safe(bfqq, n, &bfqd->idle_list, bfqq_list) + bfq_deactivate_bfqq(bfqd, bfqq, false, false); + + spin_unlock_irq(q->queue_lock); + + bfq_shutdown_timer_wq(bfqd); + + BUG_ON(hrtimer_active(&bfqd->idle_slice_timer)); + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + blkcg_deactivate_policy(q, &blkcg_policy_bfq); +#else + bfq_put_async_queues(bfqd, bfqd->root_group); + kfree(bfqd->root_group); +#endif + + kfree(bfqd); +} + +static void bfq_init_root_group(struct bfq_group *root_group, + struct bfq_data *bfqd) +{ + int i; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + root_group->entity.parent = NULL; + root_group->my_entity = NULL; + root_group->bfqd = bfqd; +#endif + root_group->rq_pos_tree = RB_ROOT; + for (i = 0; i < BFQ_IOPRIO_CLASSES; i++) + root_group->sched_data.service_tree[i] = BFQ_SERVICE_TREE_INIT; + root_group->sched_data.bfq_class_idle_last_service = jiffies; +} + +static int bfq_init_queue(struct request_queue *q, struct elevator_type *e) +{ + struct bfq_data *bfqd; + struct elevator_queue *eq; + + eq = elevator_alloc(q, e); + if (!eq) + return -ENOMEM; + + bfqd = kzalloc_node(sizeof(*bfqd), GFP_KERNEL, q->node); + if (!bfqd) { + kobject_put(&eq->kobj); + return -ENOMEM; + } + eq->elevator_data = bfqd; + + /* + * Our fallback bfqq if bfq_find_alloc_queue() runs into OOM issues. + * Grab a permanent reference to it, so that the normal code flow + * will not attempt to free it. + */ + bfq_init_bfqq(bfqd, &bfqd->oom_bfqq, NULL, 1, 0); + bfqd->oom_bfqq.ref++; + bfqd->oom_bfqq.new_ioprio = BFQ_DEFAULT_QUEUE_IOPRIO; + bfqd->oom_bfqq.new_ioprio_class = IOPRIO_CLASS_BE; + bfqd->oom_bfqq.entity.new_weight = + bfq_ioprio_to_weight(bfqd->oom_bfqq.new_ioprio); + + /* oom_bfqq does not participate to bursts */ + bfq_clear_bfqq_just_created(&bfqd->oom_bfqq); + /* + * Trigger weight initialization, according to ioprio, at the + * oom_bfqq's first activation. The oom_bfqq's ioprio and ioprio + * class won't be changed any more. 
+ */ + bfqd->oom_bfqq.entity.prio_changed = 1; + + bfqd->queue = q; + + spin_lock_irq(q->queue_lock); + q->elevator = eq; + spin_unlock_irq(q->queue_lock); + + bfqd->root_group = bfq_create_group_hierarchy(bfqd, q->node); + if (!bfqd->root_group) + goto out_free; + bfq_init_root_group(bfqd->root_group, bfqd); + bfq_init_entity(&bfqd->oom_bfqq.entity, bfqd->root_group); + + hrtimer_init(&bfqd->idle_slice_timer, CLOCK_MONOTONIC, + HRTIMER_MODE_REL); + bfqd->idle_slice_timer.function = bfq_idle_slice_timer; + + bfqd->queue_weights_tree = RB_ROOT; + bfqd->group_weights_tree = RB_ROOT; + + INIT_WORK(&bfqd->unplug_work, bfq_kick_queue); + + INIT_LIST_HEAD(&bfqd->active_list); + INIT_LIST_HEAD(&bfqd->idle_list); + INIT_HLIST_HEAD(&bfqd->burst_list); + + bfqd->hw_tag = -1; + + bfqd->bfq_max_budget = bfq_default_max_budget; + + bfqd->bfq_fifo_expire[0] = bfq_fifo_expire[0]; + bfqd->bfq_fifo_expire[1] = bfq_fifo_expire[1]; + bfqd->bfq_back_max = bfq_back_max; + bfqd->bfq_back_penalty = bfq_back_penalty; + bfqd->bfq_slice_idle = bfq_slice_idle; + bfqd->bfq_timeout = bfq_timeout; + + bfqd->bfq_requests_within_timer = 120; + + bfqd->bfq_large_burst_thresh = 8; + bfqd->bfq_burst_interval = msecs_to_jiffies(180); + + bfqd->low_latency = true; + + /* + * Trade-off between responsiveness and fairness. + */ + bfqd->bfq_wr_coeff = 30; + bfqd->bfq_wr_rt_max_time = msecs_to_jiffies(300); + bfqd->bfq_wr_max_time = 0; + bfqd->bfq_wr_min_idle_time = msecs_to_jiffies(2000); + bfqd->bfq_wr_min_inter_arr_async = msecs_to_jiffies(500); + bfqd->bfq_wr_max_softrt_rate = 7000; /* + * Approximate rate required + * to playback or record a + * high-definition compressed + * video. + */ + bfqd->wr_busy_queues = 0; + + /* + * Begin by assuming, optimistically, that the device is a + * high-speed one, and that its peak rate is equal to 2/3 of + * the highest reference rate. + */ + bfqd->RT_prod = R_fast[blk_queue_nonrot(bfqd->queue)] * + T_fast[blk_queue_nonrot(bfqd->queue)]; + bfqd->peak_rate = R_fast[blk_queue_nonrot(bfqd->queue)] * 2 / 3; + bfqd->device_speed = BFQ_BFQD_FAST; + + return 0; + +out_free: + kfree(bfqd); + kobject_put(&eq->kobj); + return -ENOMEM; +} + +static void bfq_registered_queue(struct request_queue *q) +{ + wbt_disable_default(q); +} + +static void bfq_slab_kill(void) +{ + kmem_cache_destroy(bfq_pool); +} + +static int __init bfq_slab_setup(void) +{ + bfq_pool = KMEM_CACHE(bfq_queue, 0); + if (!bfq_pool) + return -ENOMEM; + return 0; +} + +static ssize_t bfq_var_show(unsigned int var, char *page) +{ + return sprintf(page, "%u\n", var); +} + +static ssize_t bfq_var_store(unsigned long *var, const char *page, + size_t count) +{ + unsigned long new_val; + int ret = kstrtoul(page, 10, &new_val); + + if (ret == 0) + *var = new_val; + + return count; +} + +static ssize_t bfq_wr_max_time_show(struct elevator_queue *e, char *page) +{ + struct bfq_data *bfqd = e->elevator_data; + + return sprintf(page, "%d\n", bfqd->bfq_wr_max_time > 0 ? 
+ jiffies_to_msecs(bfqd->bfq_wr_max_time) : + jiffies_to_msecs(bfq_wr_duration(bfqd))); +} + +static ssize_t bfq_weights_show(struct elevator_queue *e, char *page) +{ + struct bfq_queue *bfqq; + struct bfq_data *bfqd = e->elevator_data; + ssize_t num_char = 0; + + num_char += sprintf(page + num_char, "Tot reqs queued %d\n\n", + bfqd->queued); + + spin_lock_irq(bfqd->queue->queue_lock); + + num_char += sprintf(page + num_char, "Active:\n"); + list_for_each_entry(bfqq, &bfqd->active_list, bfqq_list) { + num_char += sprintf(page + num_char, + "pid%d: weight %hu, nr_queued %d %d, ", + bfqq->pid, + bfqq->entity.weight, + bfqq->queued[0], + bfqq->queued[1]); + num_char += sprintf(page + num_char, + "dur %d/%u\n", + jiffies_to_msecs( + jiffies - + bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } + + num_char += sprintf(page + num_char, "Idle:\n"); + list_for_each_entry(bfqq, &bfqd->idle_list, bfqq_list) { + num_char += sprintf(page + num_char, + "pid%d: weight %hu, dur %d/%u\n", + bfqq->pid, + bfqq->entity.weight, + jiffies_to_msecs(jiffies - + bfqq->last_wr_start_finish), + jiffies_to_msecs(bfqq->wr_cur_max_time)); + } + + spin_unlock_irq(bfqd->queue->queue_lock); + + return num_char; +} + +#define SHOW_FUNCTION(__FUNC, __VAR, __CONV) \ +static ssize_t __FUNC(struct elevator_queue *e, char *page) \ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + u64 __data = __VAR; \ + if (__CONV == 1) \ + __data = jiffies_to_msecs(__data); \ + else if (__CONV == 2) \ + __data = div_u64(__data, NSEC_PER_MSEC); \ + return bfq_var_show(__data, (page)); \ +} +SHOW_FUNCTION(bfq_fifo_expire_sync_show, bfqd->bfq_fifo_expire[1], 2); +SHOW_FUNCTION(bfq_fifo_expire_async_show, bfqd->bfq_fifo_expire[0], 2); +SHOW_FUNCTION(bfq_back_seek_max_show, bfqd->bfq_back_max, 0); +SHOW_FUNCTION(bfq_back_seek_penalty_show, bfqd->bfq_back_penalty, 0); +SHOW_FUNCTION(bfq_slice_idle_show, bfqd->bfq_slice_idle, 2); +SHOW_FUNCTION(bfq_max_budget_show, bfqd->bfq_user_max_budget, 0); +SHOW_FUNCTION(bfq_timeout_sync_show, bfqd->bfq_timeout, 1); +SHOW_FUNCTION(bfq_strict_guarantees_show, bfqd->strict_guarantees, 0); +SHOW_FUNCTION(bfq_low_latency_show, bfqd->low_latency, 0); +SHOW_FUNCTION(bfq_wr_coeff_show, bfqd->bfq_wr_coeff, 0); +SHOW_FUNCTION(bfq_wr_rt_max_time_show, bfqd->bfq_wr_rt_max_time, 1); +SHOW_FUNCTION(bfq_wr_min_idle_time_show, bfqd->bfq_wr_min_idle_time, 1); +SHOW_FUNCTION(bfq_wr_min_inter_arr_async_show, bfqd->bfq_wr_min_inter_arr_async, + 1); +SHOW_FUNCTION(bfq_wr_max_softrt_rate_show, bfqd->bfq_wr_max_softrt_rate, 0); +#undef SHOW_FUNCTION + +#define USEC_SHOW_FUNCTION(__FUNC, __VAR) \ +static ssize_t __FUNC(struct elevator_queue *e, char *page) \ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + u64 __data = __VAR; \ + __data = div_u64(__data, NSEC_PER_USEC); \ + return bfq_var_show(__data, (page)); \ +} +USEC_SHOW_FUNCTION(bfq_slice_idle_us_show, bfqd->bfq_slice_idle); +#undef USEC_SHOW_FUNCTION + +#define STORE_FUNCTION(__FUNC, __PTR, MIN, MAX, __CONV) \ +static ssize_t \ +__FUNC(struct elevator_queue *e, const char *page, size_t count) \ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + unsigned long uninitialized_var(__data); \ + int ret = bfq_var_store(&__data, (page), count); \ + if (__data < (MIN)) \ + __data = (MIN); \ + else if (__data > (MAX)) \ + __data = (MAX); \ + if (__CONV == 1) \ + *(__PTR) = msecs_to_jiffies(__data); \ + else if (__CONV == 2) \ + *(__PTR) = (u64)__data * NSEC_PER_MSEC; \ + else \ + *(__PTR) = __data; \ + return ret; \ +} 
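The STORE_FUNCTION instantiations that follow wire this macro to the individual sysfs attributes; the only logic a store performs is a bounds clamp followed by the unit conversion selected by __CONV (0: raw value, 1: milliseconds to jiffies, 2: milliseconds to nanoseconds). As a rough user-space sketch of that path, assuming a simple stand-in for bfq_var_store() and ignoring locking and the elevator plumbing, an expansion with __CONV == 2 such as bfq_slice_idle_store (instantiated just below) behaves roughly as follows; this is only an illustration, not code from the patch.

/*
 * Minimal user-space sketch of the STORE_FUNCTION conversion for
 * __CONV == 2: parse a decimal string, clamp it to [min, max], and
 * convert milliseconds to nanoseconds. For illustration it returns the
 * converted value; the real store writes it into a bfq_data field and
 * returns the byte count.
 */
#include <stdio.h>
#include <stdlib.h>
#include <limits.h>

#define NSEC_PER_MSEC 1000000ULL

static unsigned long long slice_idle_store_sketch(const char *page,
						  unsigned long min,
						  unsigned long max)
{
	unsigned long data = strtoul(page, NULL, 10);	/* bfq_var_store() */

	if (data < min)
		data = min;
	else if (data > max)
		data = max;

	return (unsigned long long)data * NSEC_PER_MSEC;	/* __CONV == 2 */
}

int main(void)
{
	/* e.g. "echo 8 > .../queue/iosched/slice_idle" with bfq-sq selected */
	printf("bfq_slice_idle = %llu ns\n",
	       slice_idle_store_sketch("8", 0, INT_MAX));
	return 0;
}

So writing 8 to slice_idle ends up storing 8000000 ns, which the matching SHOW_FUNCTION (also __CONV == 2) converts back to milliseconds when the attribute is read.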
+STORE_FUNCTION(bfq_fifo_expire_sync_store, &bfqd->bfq_fifo_expire[1], 1, + INT_MAX, 2); +STORE_FUNCTION(bfq_fifo_expire_async_store, &bfqd->bfq_fifo_expire[0], 1, + INT_MAX, 2); +STORE_FUNCTION(bfq_back_seek_max_store, &bfqd->bfq_back_max, 0, INT_MAX, 0); +STORE_FUNCTION(bfq_back_seek_penalty_store, &bfqd->bfq_back_penalty, 1, + INT_MAX, 0); +STORE_FUNCTION(bfq_slice_idle_store, &bfqd->bfq_slice_idle, 0, INT_MAX, 2); +STORE_FUNCTION(bfq_wr_coeff_store, &bfqd->bfq_wr_coeff, 1, INT_MAX, 0); +STORE_FUNCTION(bfq_wr_max_time_store, &bfqd->bfq_wr_max_time, 0, INT_MAX, 1); +STORE_FUNCTION(bfq_wr_rt_max_time_store, &bfqd->bfq_wr_rt_max_time, 0, INT_MAX, + 1); +STORE_FUNCTION(bfq_wr_min_idle_time_store, &bfqd->bfq_wr_min_idle_time, 0, + INT_MAX, 1); +STORE_FUNCTION(bfq_wr_min_inter_arr_async_store, + &bfqd->bfq_wr_min_inter_arr_async, 0, INT_MAX, 1); +STORE_FUNCTION(bfq_wr_max_softrt_rate_store, &bfqd->bfq_wr_max_softrt_rate, 0, + INT_MAX, 0); +#undef STORE_FUNCTION + +#define USEC_STORE_FUNCTION(__FUNC, __PTR, MIN, MAX) \ +static ssize_t __FUNC(struct elevator_queue *e, const char *page, size_t count)\ +{ \ + struct bfq_data *bfqd = e->elevator_data; \ + unsigned long uninitialized_var(__data); \ + int ret = bfq_var_store(&__data, (page), count); \ + if (__data < (MIN)) \ + __data = (MIN); \ + else if (__data > (MAX)) \ + __data = (MAX); \ + *(__PTR) = (u64)__data * NSEC_PER_USEC; \ + return ret; \ +} +USEC_STORE_FUNCTION(bfq_slice_idle_us_store, &bfqd->bfq_slice_idle, 0, + UINT_MAX); +#undef USEC_STORE_FUNCTION + +/* do nothing for the moment */ +static ssize_t bfq_weights_store(struct elevator_queue *e, + const char *page, size_t count) +{ + return count; +} + +static ssize_t bfq_max_budget_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data == 0) + bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd); + else { + if (__data > INT_MAX) + __data = INT_MAX; + bfqd->bfq_max_budget = __data; + } + + bfqd->bfq_user_max_budget = __data; + + return ret; +} + +/* + * Leaving this name to preserve name compatibility with cfq + * parameters, but this timeout is used for both sync and async. 
+ */ +static ssize_t bfq_timeout_sync_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data < 1) + __data = 1; + else if (__data > INT_MAX) + __data = INT_MAX; + + bfqd->bfq_timeout = msecs_to_jiffies(__data); + if (bfqd->bfq_user_max_budget == 0) + bfqd->bfq_max_budget = bfq_calc_max_budget(bfqd); + + return ret; +} + +static ssize_t bfq_strict_guarantees_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data > 1) + __data = 1; + if (!bfqd->strict_guarantees && __data == 1 + && bfqd->bfq_slice_idle < 8 * NSEC_PER_MSEC) + bfqd->bfq_slice_idle = 8 * NSEC_PER_MSEC; + + bfqd->strict_guarantees = __data; + + return ret; +} + +static ssize_t bfq_low_latency_store(struct elevator_queue *e, + const char *page, size_t count) +{ + struct bfq_data *bfqd = e->elevator_data; + unsigned long uninitialized_var(__data); + int ret = bfq_var_store(&__data, (page), count); + + if (__data > 1) + __data = 1; + if (__data == 0 && bfqd->low_latency != 0) + bfq_end_wr(bfqd); + bfqd->low_latency = __data; + + return ret; +} + +#define BFQ_ATTR(name) \ + __ATTR(name, S_IRUGO|S_IWUSR, bfq_##name##_show, bfq_##name##_store) + +static struct elv_fs_entry bfq_attrs[] = { + BFQ_ATTR(fifo_expire_sync), + BFQ_ATTR(fifo_expire_async), + BFQ_ATTR(back_seek_max), + BFQ_ATTR(back_seek_penalty), + BFQ_ATTR(slice_idle), + BFQ_ATTR(slice_idle_us), + BFQ_ATTR(max_budget), + BFQ_ATTR(timeout_sync), + BFQ_ATTR(strict_guarantees), + BFQ_ATTR(low_latency), + BFQ_ATTR(wr_coeff), + BFQ_ATTR(wr_max_time), + BFQ_ATTR(wr_rt_max_time), + BFQ_ATTR(wr_min_idle_time), + BFQ_ATTR(wr_min_inter_arr_async), + BFQ_ATTR(wr_max_softrt_rate), + BFQ_ATTR(weights), + __ATTR_NULL +}; + +static struct elevator_type iosched_bfq = { + .ops.sq = { + .elevator_merge_fn = bfq_merge, + .elevator_merged_fn = bfq_merged_request, + .elevator_merge_req_fn = bfq_merged_requests, +#ifdef BFQ_GROUP_IOSCHED_ENABLED + .elevator_bio_merged_fn = bfq_bio_merged, +#endif + .elevator_allow_bio_merge_fn = bfq_allow_bio_merge, + .elevator_allow_rq_merge_fn = bfq_allow_rq_merge, + .elevator_dispatch_fn = bfq_dispatch_requests, + .elevator_add_req_fn = bfq_insert_request, + .elevator_activate_req_fn = bfq_activate_request, + .elevator_deactivate_req_fn = bfq_deactivate_request, + .elevator_completed_req_fn = bfq_completed_request, + .elevator_former_req_fn = elv_rb_former_request, + .elevator_latter_req_fn = elv_rb_latter_request, + .elevator_init_icq_fn = bfq_init_icq, + .elevator_exit_icq_fn = bfq_exit_icq, + .elevator_set_req_fn = bfq_set_request, + .elevator_put_req_fn = bfq_put_request, + .elevator_may_queue_fn = bfq_may_queue, + .elevator_init_fn = bfq_init_queue, + .elevator_exit_fn = bfq_exit_queue, + .elevator_registered_fn = bfq_registered_queue, + }, + .icq_size = sizeof(struct bfq_io_cq), + .icq_align = __alignof__(struct bfq_io_cq), + .elevator_attrs = bfq_attrs, + .elevator_name = "bfq-sq", + .elevator_owner = THIS_MODULE, +}; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct blkcg_policy blkcg_policy_bfq = { + .dfl_cftypes = bfq_blkg_files, + .legacy_cftypes = bfq_blkcg_legacy_files, + + .cpd_alloc_fn = bfq_cpd_alloc, + .cpd_init_fn = bfq_cpd_init, + .cpd_bind_fn = bfq_cpd_init, + .cpd_free_fn = bfq_cpd_free, + + .pd_alloc_fn = 
bfq_pd_alloc, + .pd_init_fn = bfq_pd_init, + .pd_offline_fn = bfq_pd_offline, + .pd_free_fn = bfq_pd_free, + .pd_reset_stats_fn = bfq_pd_reset_stats, +}; +#endif + +static int __init bfq_init(void) +{ + int ret; + char msg[60] = "BFQ I/O-scheduler: v8r12"; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + ret = blkcg_policy_register(&blkcg_policy_bfq); + if (ret) + return ret; +#endif + + ret = -ENOMEM; + if (bfq_slab_setup()) + goto err_pol_unreg; + + /* + * Times to load large popular applications for the typical + * systems installed on the reference devices (see the + * comments before the definitions of the next two + * arrays). Actually, we use slightly slower values, as the + * estimated peak rate tends to be smaller than the actual + * peak rate. The reason for this last fact is that estimates + * are computed over much shorter time intervals than the long + * intervals typically used for benchmarking. Why? First, to + * adapt more quickly to variations. Second, because an I/O + * scheduler cannot rely on a peak-rate-evaluation workload to + * be run for a long time. + */ + T_slow[0] = msecs_to_jiffies(3500); /* actually 4 sec */ + T_slow[1] = msecs_to_jiffies(6000); /* actually 6.5 sec */ + T_fast[0] = msecs_to_jiffies(7000); /* actually 8 sec */ + T_fast[1] = msecs_to_jiffies(2500); /* actually 3 sec */ + + /* + * Thresholds that determine the switch between speed classes + * (see the comments before the definition of the array + * device_speed_thresh). These thresholds are biased towards + * transitions to the fast class. This is safer than the + * opposite bias. In fact, a wrong transition to the slow + * class results in short weight-raising periods, because the + * speed of the device then tends to be higher that the + * reference peak rate. On the opposite end, a wrong + * transition to the fast class tends to increase + * weight-raising periods, because of the opposite reason. + */ + device_speed_thresh[0] = (4 * R_slow[0]) / 3; + device_speed_thresh[1] = (4 * R_slow[1]) / 3; + + ret = elv_register(&iosched_bfq); + if (ret) + goto err_pol_unreg; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + strcat(msg, " (with cgroups support)"); +#endif + pr_info("%s", msg); + + return 0; + +err_pol_unreg: +#ifdef BFQ_GROUP_IOSCHED_ENABLED + blkcg_policy_unregister(&blkcg_policy_bfq); +#endif + return ret; +} + +static void __exit bfq_exit(void) +{ + elv_unregister(&iosched_bfq); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + blkcg_policy_unregister(&blkcg_policy_bfq); +#endif + bfq_slab_kill(); +} + +module_init(bfq_init); +module_exit(bfq_exit); + +MODULE_AUTHOR("Arianna Avanzini, Fabio Checconi, Paolo Valente"); +MODULE_LICENSE("GPL"); diff --git b/block/bfq.h b/block/bfq.h new file mode 100644 index 0000000..47cd4d5 --- /dev/null +++ b/block/bfq.h @@ -0,0 +1,989 @@ +/* + * BFQ v8r12 for 4.11.0: data structures and common functions prototypes. + * + * Based on ideas and code from CFQ: + * Copyright (C) 2003 Jens Axboe + * + * Copyright (C) 2008 Fabio Checconi + * Paolo Valente + * + * Copyright (C) 2015 Paolo Valente + * + * Copyright (C) 2017 Paolo Valente + */ + +#ifndef _BFQ_H +#define _BFQ_H + +#include +#include + +/* + * Define an alternative macro to compile cgroups support. This is one + * of the steps needed to let bfq-mq share the files bfq-sched.c and + * bfq-cgroup.c with bfq-sq. For bfq-mq, the macro + * BFQ_GROUP_IOSCHED_ENABLED will be defined as a function of whether + * the configuration option CONFIG_BFQ_MQ_GROUP_IOSCHED, and not + * CONFIG_BFQ_GROUP_IOSCHED, is defined. 
+ */ +#ifdef CONFIG_BFQ_SQ_GROUP_IOSCHED +#define BFQ_GROUP_IOSCHED_ENABLED +#endif + +#define BFQ_IOPRIO_CLASSES 3 +#define BFQ_CL_IDLE_TIMEOUT (HZ/5) + +#define BFQ_MIN_WEIGHT 1 +#define BFQ_MAX_WEIGHT 1000 +#define BFQ_WEIGHT_CONVERSION_COEFF 10 + +#define BFQ_DEFAULT_QUEUE_IOPRIO 4 + +#define BFQ_WEIGHT_LEGACY_DFL 100 +#define BFQ_DEFAULT_GRP_IOPRIO 0 +#define BFQ_DEFAULT_GRP_CLASS IOPRIO_CLASS_BE + +/* + * Soft real-time applications are extremely more latency sensitive + * than interactive ones. Over-raise the weight of the former to + * privilege them against the latter. + */ +#define BFQ_SOFTRT_WEIGHT_FACTOR 100 + +struct bfq_entity; + +/** + * struct bfq_service_tree - per ioprio_class service tree. + * + * Each service tree represents a B-WF2Q+ scheduler on its own. Each + * ioprio_class has its own independent scheduler, and so its own + * bfq_service_tree. All the fields are protected by the queue lock + * of the containing bfqd. + */ +struct bfq_service_tree { + /* tree for active entities (i.e., those backlogged) */ + struct rb_root active; + /* tree for idle entities (i.e., not backlogged, with V <= F_i)*/ + struct rb_root idle; + + struct bfq_entity *first_idle; /* idle entity with minimum F_i */ + struct bfq_entity *last_idle; /* idle entity with maximum F_i */ + + u64 vtime; /* scheduler virtual time */ + /* scheduler weight sum; active and idle entities contribute to it */ + unsigned long wsum; +}; + +/** + * struct bfq_sched_data - multi-class scheduler. + * + * bfq_sched_data is the basic scheduler queue. It supports three + * ioprio_classes, and can be used either as a toplevel queue or as an + * intermediate queue in a hierarchical setup. + * + * The supported ioprio_classes are the same as in CFQ, in descending + * priority order, IOPRIO_CLASS_RT, IOPRIO_CLASS_BE, IOPRIO_CLASS_IDLE. + * Requests from higher priority queues are served before all the + * requests from lower priority queues; among requests of the same + * queue requests are served according to B-WF2Q+. + * + * The schedule is implemented by the service trees, plus the field + * @next_in_service, which points to the entity on the active trees + * that will be served next, if 1) no changes in the schedule occurs + * before the current in-service entity is expired, 2) the in-service + * queue becomes idle when it expires, and 3) if the entity pointed by + * in_service_entity is not a queue, then the in-service child entity + * of the entity pointed by in_service_entity becomes idle on + * expiration. This peculiar definition allows for the following + * optimization, not yet exploited: while a given entity is still in + * service, we already know which is the best candidate for next + * service among the other active entitities in the same parent + * entity. We can then quickly compare the timestamps of the + * in-service entity with those of such best candidate. + * + * All the fields are protected by the queue lock of the containing + * bfqd. + */ +struct bfq_sched_data { + struct bfq_entity *in_service_entity; /* entity in service */ + /* head-of-the-line entity in the scheduler (see comments above) */ + struct bfq_entity *next_in_service; + /* array of service trees, one per ioprio_class */ + struct bfq_service_tree service_tree[BFQ_IOPRIO_CLASSES]; + /* last time CLASS_IDLE was served */ + unsigned long bfq_class_idle_last_service; + +}; + +/** + * struct bfq_weight_counter - counter of the number of all active entities + * with a given weight. 
+ */ +struct bfq_weight_counter { + unsigned int weight; /* weight of the entities this counter refers to */ + unsigned int num_active; /* nr of active entities with this weight */ + /* + * Weights tree member (see bfq_data's @queue_weights_tree and + * @group_weights_tree) + */ + struct rb_node weights_node; +}; + +/** + * struct bfq_entity - schedulable entity. + * + * A bfq_entity is used to represent either a bfq_queue (leaf node in the + * cgroup hierarchy) or a bfq_group into the upper level scheduler. Each + * entity belongs to the sched_data of the parent group in the cgroup + * hierarchy. Non-leaf entities have also their own sched_data, stored + * in @my_sched_data. + * + * Each entity stores independently its priority values; this would + * allow different weights on different devices, but this + * functionality is not exported to userspace by now. Priorities and + * weights are updated lazily, first storing the new values into the + * new_* fields, then setting the @prio_changed flag. As soon as + * there is a transition in the entity state that allows the priority + * update to take place the effective and the requested priority + * values are synchronized. + * + * Unless cgroups are used, the weight value is calculated from the + * ioprio to export the same interface as CFQ. When dealing with + * ``well-behaved'' queues (i.e., queues that do not spend too much + * time to consume their budget and have true sequential behavior, and + * when there are no external factors breaking anticipation) the + * relative weights at each level of the cgroups hierarchy should be + * guaranteed. All the fields are protected by the queue lock of the + * containing bfqd. + */ +struct bfq_entity { + struct rb_node rb_node; /* service_tree member */ + /* pointer to the weight counter associated with this entity */ + struct bfq_weight_counter *weight_counter; + + /* + * Flag, true if the entity is on a tree (either the active or + * the idle one of its service_tree) or is in service. + */ + bool on_st; + + u64 finish; /* B-WF2Q+ finish timestamp (aka F_i) */ + u64 start; /* B-WF2Q+ start timestamp (aka S_i) */ + + /* tree the entity is enqueued into; %NULL if not on a tree */ + struct rb_root *tree; + + /* + * minimum start time of the (active) subtree rooted at this + * entity; used for O(log N) lookups into active trees + */ + u64 min_start; + + /* amount of service received during the last service slot */ + int service; + + /* budget, used also to calculate F_i: F_i = S_i + @budget / @weight */ + int budget; + + unsigned int weight; /* weight of the queue */ + unsigned int new_weight; /* next weight if a change is in progress */ + + /* original weight, used to implement weight boosting */ + unsigned int orig_weight; + + /* parent entity, for hierarchical scheduling */ + struct bfq_entity *parent; + + /* + * For non-leaf nodes in the hierarchy, the associated + * scheduler queue, %NULL on leaf nodes. + */ + struct bfq_sched_data *my_sched_data; + /* the scheduler queue this entity belongs to */ + struct bfq_sched_data *sched_data; + + /* flag, set to request a weight, ioprio or ioprio_class change */ + int prio_changed; +}; + +struct bfq_group; + +/** + * struct bfq_queue - leaf schedulable entity. + * + * A bfq_queue is a leaf request queue; it can be associated with an + * io_context or more, if it is async or shared between cooperating + * processes. 
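
The finish-timestamp rule noted next to @budget above, F_i = S_i + budget/weight, is what makes weights matter in B-WF2Q+: for equal start times and budgets, a heavier entity gets the earlier virtual finish time and is therefore selected sooner. A small worked example with invented numbers:

#include <stdio.h>

struct entity {
        unsigned long long start;  /* S_i, virtual start time */
        int budget;                /* service it may receive  */
        unsigned int weight;       /* relative share          */
};

static unsigned long long finish(const struct entity *e)
{
        /* F_i = S_i + budget / weight (integer math, as in the scheduler) */
        return e->start + e->budget / e->weight;
}

int main(void)
{
        struct entity heavy = { .start = 100, .budget = 8192, .weight = 500 };
        struct entity light = { .start = 100, .budget = 8192, .weight = 100 };

        printf("heavy finishes at %llu, light at %llu\n",
               finish(&heavy), finish(&light));
        /* heavy: 100 + 16 = 116, light: 100 + 81 = 181 -> heavy served first */
        return 0;
}
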
@cgroup holds a reference to the cgroup, to be sure that it + * does not disappear while a bfqq still references it (mostly to avoid + * races between request issuing and task migration followed by cgroup + * destruction). + * All the fields are protected by the queue lock of the containing bfqd. + */ +struct bfq_queue { + /* reference counter */ + int ref; + /* parent bfq_data */ + struct bfq_data *bfqd; + + /* current ioprio and ioprio class */ + unsigned short ioprio, ioprio_class; + /* next ioprio and ioprio class if a change is in progress */ + unsigned short new_ioprio, new_ioprio_class; + + /* + * Shared bfq_queue if queue is cooperating with one or more + * other queues. + */ + struct bfq_queue *new_bfqq; + /* request-position tree member (see bfq_group's @rq_pos_tree) */ + struct rb_node pos_node; + /* request-position tree root (see bfq_group's @rq_pos_tree) */ + struct rb_root *pos_root; + + /* sorted list of pending requests */ + struct rb_root sort_list; + /* if fifo isn't expired, next request to serve */ + struct request *next_rq; + /* number of sync and async requests queued */ + int queued[2]; + /* number of sync and async requests currently allocated */ + int allocated[2]; + /* number of pending metadata requests */ + int meta_pending; + /* fifo list of requests in sort_list */ + struct list_head fifo; + + /* entity representing this queue in the scheduler */ + struct bfq_entity entity; + + /* maximum budget allowed from the feedback mechanism */ + int max_budget; + /* budget expiration (in jiffies) */ + unsigned long budget_timeout; + + /* number of requests on the dispatch list or inside driver */ + int dispatched; + + unsigned int flags; /* status flags.*/ + + /* node for active/idle bfqq list inside parent bfqd */ + struct list_head bfqq_list; + + /* bit vector: a 1 for each seeky requests in history */ + u32 seek_history; + + /* node for the device's burst list */ + struct hlist_node burst_list_node; + + /* position of the last request enqueued */ + sector_t last_request_pos; + + /* Number of consecutive pairs of request completion and + * arrival, such that the queue becomes idle after the + * completion, but the next request arrives within an idle + * time slice; used only if the queue's IO_bound flag has been + * cleared. + */ + unsigned int requests_within_timer; + + /* pid of the process owning the queue, used for logging purposes */ + pid_t pid; + + /* + * Pointer to the bfq_io_cq owning the bfq_queue, set to %NULL + * if the queue is shared. + */ + struct bfq_io_cq *bic; + + /* current maximum weight-raising time for this queue */ + unsigned long wr_cur_max_time; + /* + * Minimum time instant such that, only if a new request is + * enqueued after this time instant in an idle @bfq_queue with + * no outstanding requests, then the task associated with the + * queue it is deemed as soft real-time (see the comments on + * the function bfq_bfqq_softrt_next_start()) + */ + unsigned long soft_rt_next_start; + /* + * Start time of the current weight-raising period if + * the @bfq-queue is being weight-raised, otherwise + * finish time of the last weight-raising period. + */ + unsigned long last_wr_start_finish; + /* factor by which the weight of this queue is multiplied */ + unsigned int wr_coeff; + /* + * Time of the last transition of the @bfq_queue from idle to + * backlogged. + */ + unsigned long last_idle_bklogged; + /* + * Cumulative service received from the @bfq_queue since the + * last transition from idle to backlogged. 
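
The @seek_history word above is a sliding window of seekiness: each request shifts one bit in, and the queue's recent behaviour can then be judged by counting set bits. A hedged userspace sketch of that bookkeeping; the distance threshold and the seeky cutoff below are placeholders, not the values BFQ uses.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define SEEK_THRESHOLD_SECTORS 64   /* placeholder distance threshold */
#define SEEKY_CUTOFF           10   /* placeholder: many 1s => seeky  */

static uint32_t seek_history;

static void record_request(long long last_pos, long long new_pos)
{
        int seeky = llabs(new_pos - last_pos) > SEEK_THRESHOLD_SECTORS;

        /* shift the window and record the newest sample in bit 0 */
        seek_history = (seek_history << 1) | seeky;
}

static int count_bits(uint32_t v)
{
        int n = 0;

        for (; v; v >>= 1)
                n += v & 1;
        return n;
}

int main(void)
{
        long long pos = 0, next;
        int i;

        for (i = 0; i < 32; i++) {
                next = pos + ((i & 1) ? 8 : 100000); /* alternate seq/seek */
                record_request(pos, next);
                pos = next;
        }
        printf("seeky samples in window: %d -> %s\n",
               count_bits(seek_history),
               count_bits(seek_history) > SEEKY_CUTOFF ?
                        "seeky" : "mostly sequential");
        return 0;
}
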
+ */ + unsigned long service_from_backlogged; + /* + * Value of wr start time when switching to soft rt + */ + unsigned long wr_start_at_switch_to_srt; + + unsigned long split_time; /* time of last split */ +}; + +/** + * struct bfq_ttime - per process thinktime stats. + */ +struct bfq_ttime { + u64 last_end_request; /* completion time of last request */ + + u64 ttime_total; /* total process thinktime */ + unsigned long ttime_samples; /* number of thinktime samples */ + u64 ttime_mean; /* average process thinktime */ + +}; + +/** + * struct bfq_io_cq - per (request_queue, io_context) structure. + */ +struct bfq_io_cq { + /* associated io_cq structure */ + struct io_cq icq; /* must be the first member */ + /* array of two process queues, the sync and the async */ + struct bfq_queue *bfqq[2]; + /* associated @bfq_ttime struct */ + struct bfq_ttime ttime; + /* per (request_queue, blkcg) ioprio */ + int ioprio; +#ifdef BFQ_GROUP_IOSCHED_ENABLED + uint64_t blkcg_serial_nr; /* the current blkcg serial */ +#endif + + /* + * Snapshot of the has_short_time flag before merging; taken + * to remember its value while the queue is merged, so as to + * be able to restore it in case of split. + */ + bool saved_has_short_ttime; + /* + * Same purpose as the previous two fields for the I/O bound + * classification of a queue. + */ + bool saved_IO_bound; + + /* + * Same purpose as the previous fields for the value of the + * field keeping the queue's belonging to a large burst + */ + bool saved_in_large_burst; + /* + * True if the queue belonged to a burst list before its merge + * with another cooperating queue. + */ + bool was_in_burst_list; + + /* + * Similar to previous fields: save wr information. + */ + unsigned long saved_wr_coeff; + unsigned long saved_last_wr_start_finish; + unsigned long saved_wr_start_at_switch_to_srt; + unsigned int saved_wr_cur_max_time; +}; + +enum bfq_device_speed { + BFQ_BFQD_FAST, + BFQ_BFQD_SLOW, +}; + +/** + * struct bfq_data - per-device data structure. + * + * All the fields are protected by the @queue lock. + */ +struct bfq_data { + /* request queue for the device */ + struct request_queue *queue; + + /* root bfq_group for the device */ + struct bfq_group *root_group; + + /* + * rbtree of weight counters of @bfq_queues, sorted by + * weight. Used to keep track of whether all @bfq_queues have + * the same weight. The tree contains one counter for each + * distinct weight associated to some active and not + * weight-raised @bfq_queue (see the comments to the functions + * bfq_weights_tree_[add|remove] for further details). + */ + struct rb_root queue_weights_tree; + /* + * rbtree of non-queue @bfq_entity weight counters, sorted by + * weight. Used to keep track of whether all @bfq_groups have + * the same weight. The tree contains one counter for each + * distinct weight associated to some active @bfq_group (see + * the comments to the functions bfq_weights_tree_[add|remove] + * for further details). + */ + struct rb_root group_weights_tree; + + /* + * Number of bfq_queues containing requests (including the + * queue in service, even if it is idling). + */ + int busy_queues; + /* number of weight-raised busy @bfq_queues */ + int wr_busy_queues; + /* number of queued requests */ + int queued; + /* number of requests dispatched and waiting for completion */ + int rq_in_driver; + + /* + * Maximum number of requests in driver in the last + * @hw_tag_samples completed requests. 
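
The @queue_weights_tree and @group_weights_tree described above exist so the scheduler can cheaply answer the question "do all active queues (or groups) have the same weight?" by keeping one counter per distinct weight. A simplified model with a flat array standing in for the rbtree:

#include <stdio.h>

#define MAX_DISTINCT 16

struct weight_counter {
        unsigned int weight;
        unsigned int num_active;
};

static struct weight_counter counters[MAX_DISTINCT];
static int nr_distinct;

static void weight_add(unsigned int w)
{
        int i;

        for (i = 0; i < nr_distinct; i++)
                if (counters[i].weight == w) {
                        counters[i].num_active++;
                        return;
                }
        counters[nr_distinct].weight = w;
        counters[nr_distinct].num_active = 1;
        nr_distinct++;
}

static void weight_del(unsigned int w)
{
        int i;

        for (i = 0; i < nr_distinct; i++) {
                if (counters[i].weight != w)
                        continue;
                if (--counters[i].num_active == 0)
                        counters[i] = counters[--nr_distinct];
                return;
        }
}

int main(void)
{
        weight_add(100);
        weight_add(100);
        printf("symmetric: %s\n", nr_distinct <= 1 ? "yes" : "no");
        weight_add(500);              /* one queue with a different weight */
        printf("symmetric: %s\n", nr_distinct <= 1 ? "yes" : "no");
        weight_del(500);
        printf("symmetric: %s\n", nr_distinct <= 1 ? "yes" : "no");
        return 0;
}
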
+ */ + int max_rq_in_driver; + /* number of samples used to calculate hw_tag */ + int hw_tag_samples; + /* flag set to one if the driver is showing a queueing behavior */ + int hw_tag; + + /* number of budgets assigned */ + int budgets_assigned; + + /* + * Timer set when idling (waiting) for the next request from + * the queue in service. + */ + struct hrtimer idle_slice_timer; + /* delayed work to restart dispatching on the request queue */ + struct work_struct unplug_work; + + /* bfq_queue in service */ + struct bfq_queue *in_service_queue; + /* bfq_io_cq (bic) associated with the @in_service_queue */ + struct bfq_io_cq *in_service_bic; + + /* on-disk position of the last served request */ + sector_t last_position; + + /* time of last request completion (ns) */ + u64 last_completion; + + /* time of first rq dispatch in current observation interval (ns) */ + u64 first_dispatch; + /* time of last rq dispatch in current observation interval (ns) */ + u64 last_dispatch; + + /* beginning of the last budget */ + ktime_t last_budget_start; + /* beginning of the last idle slice */ + ktime_t last_idling_start; + + /* number of samples in current observation interval */ + int peak_rate_samples; + /* num of samples of seq dispatches in current observation interval */ + u32 sequential_samples; + /* total num of sectors transferred in current observation interval */ + u64 tot_sectors_dispatched; + /* max rq size seen during current observation interval (sectors) */ + u32 last_rq_max_size; + /* time elapsed from first dispatch in current observ. interval (us) */ + u64 delta_from_first; + /* current estimate of device peak rate */ + u32 peak_rate; + + /* maximum budget allotted to a bfq_queue before rescheduling */ + int bfq_max_budget; + + /* list of all the bfq_queues active on the device */ + struct list_head active_list; + /* list of all the bfq_queues idle on the device */ + struct list_head idle_list; + + /* + * Timeout for async/sync requests; when it fires, requests + * are served in fifo order. + */ + u64 bfq_fifo_expire[2]; + /* weight of backward seeks wrt forward ones */ + unsigned int bfq_back_penalty; + /* maximum allowed backward seek */ + unsigned int bfq_back_max; + /* maximum idling time */ + u32 bfq_slice_idle; + + /* user-configured max budget value (0 for auto-tuning) */ + int bfq_user_max_budget; + /* + * Timeout for bfq_queues to consume their budget; used to + * prevent seeky queues from imposing long latencies to + * sequential or quasi-sequential ones (this also implies that + * seeky queues cannot receive guarantees in the service + * domain; after a timeout they are charged for the time they + * have been in service, to preserve fairness among them, but + * without service-domain guarantees). + */ + unsigned int bfq_timeout; + + /* + * Number of consecutive requests that must be issued within + * the idle time slice to set again idling to a queue which + * was marked as non-I/O-bound (see the definition of the + * IO_bound flag for further details). + */ + unsigned int bfq_requests_within_timer; + + /* + * Force device idling whenever needed to provide accurate + * service guarantees, without caring about throughput + * issues. CAVEAT: this may even increase latencies, in case + * of useless idling for processes that did stop doing I/O. 
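
The observation-interval fields above (@tot_sectors_dispatched, @delta_from_first, @peak_rate and friends) feed a peak-rate estimate that is, at its core, sectors moved divided by elapsed time; the real update is more involved, so the sketch below only shows that core ratio, with made-up numbers.

#include <stdio.h>

int main(void)
{
        /* One imaginary observation interval. */
        unsigned long long tot_sectors_dispatched = 262144; /* 128 MiB */
        unsigned long long delta_from_first_us = 500000;    /* 0.5 s   */

        /* sectors per microsecond, scaled by 1000 to keep some precision */
        unsigned long long rate_x1000 =
                tot_sectors_dispatched * 1000ULL / delta_from_first_us;

        printf("estimated peak rate: %llu.%03llu sectors/us (~%llu MiB/s)\n",
               rate_x1000 / 1000, rate_x1000 % 1000,
               tot_sectors_dispatched * 512ULL / 1024 / 1024 *
               1000000ULL / delta_from_first_us);
        return 0;
}
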
+ */ + bool strict_guarantees; + + /* + * Last time at which a queue entered the current burst of + * queues being activated shortly after each other; for more + * details about this and the following parameters related to + * a burst of activations, see the comments on the function + * bfq_handle_burst. + */ + unsigned long last_ins_in_burst; + /* + * Reference time interval used to decide whether a queue has + * been activated shortly after @last_ins_in_burst. + */ + unsigned long bfq_burst_interval; + /* number of queues in the current burst of queue activations */ + int burst_size; + + /* common parent entity for the queues in the burst */ + struct bfq_entity *burst_parent_entity; + /* Maximum burst size above which the current queue-activation + * burst is deemed as 'large'. + */ + unsigned long bfq_large_burst_thresh; + /* true if a large queue-activation burst is in progress */ + bool large_burst; + /* + * Head of the burst list (as for the above fields, more + * details in the comments on the function bfq_handle_burst). + */ + struct hlist_head burst_list; + + /* if set to true, low-latency heuristics are enabled */ + bool low_latency; + /* + * Maximum factor by which the weight of a weight-raised queue + * is multiplied. + */ + unsigned int bfq_wr_coeff; + /* maximum duration of a weight-raising period (jiffies) */ + unsigned int bfq_wr_max_time; + + /* Maximum weight-raising duration for soft real-time processes */ + unsigned int bfq_wr_rt_max_time; + /* + * Minimum idle period after which weight-raising may be + * reactivated for a queue (in jiffies). + */ + unsigned int bfq_wr_min_idle_time; + /* + * Minimum period between request arrivals after which + * weight-raising may be reactivated for an already busy async + * queue (in jiffies). + */ + unsigned long bfq_wr_min_inter_arr_async; + + /* Max service-rate for a soft real-time queue, in sectors/sec */ + unsigned int bfq_wr_max_softrt_rate; + /* + * Cached value of the product R*T, used for computing the + * maximum duration of weight raising automatically. + */ + u64 RT_prod; + /* device-speed class for the low-latency heuristic */ + enum bfq_device_speed device_speed; + + /* fallback dummy bfqq for extreme OOM conditions */ + struct bfq_queue oom_bfqq; +}; + +enum bfqq_state_flags { + BFQ_BFQQ_FLAG_just_created = 0, /* queue just allocated */ + BFQ_BFQQ_FLAG_busy, /* has requests or is in service */ + BFQ_BFQQ_FLAG_wait_request, /* waiting for a request */ + BFQ_BFQQ_FLAG_non_blocking_wait_rq, /* + * waiting for a request + * without idling the device + */ + BFQ_BFQQ_FLAG_must_alloc, /* must be allowed rq alloc */ + BFQ_BFQQ_FLAG_fifo_expire, /* FIFO checked in this slice */ + BFQ_BFQQ_FLAG_has_short_ttime, /* queue has a short think time */ + BFQ_BFQQ_FLAG_sync, /* synchronous queue */ + BFQ_BFQQ_FLAG_IO_bound, /* + * bfqq has timed-out at least once + * having consumed at most 2/10 of + * its budget + */ + BFQ_BFQQ_FLAG_in_large_burst, /* + * bfqq activated in a large burst, + * see comments to bfq_handle_burst. 
+ */ + BFQ_BFQQ_FLAG_softrt_update, /* + * may need softrt-next-start + * update + */ + BFQ_BFQQ_FLAG_coop, /* bfqq is shared */ + BFQ_BFQQ_FLAG_split_coop /* shared bfqq will be split */ +}; + +#define BFQ_BFQQ_FNS(name) \ +static void bfq_mark_bfqq_##name(struct bfq_queue *bfqq) \ +{ \ + (bfqq)->flags |= (1 << BFQ_BFQQ_FLAG_##name); \ +} \ +static void bfq_clear_bfqq_##name(struct bfq_queue *bfqq) \ +{ \ + (bfqq)->flags &= ~(1 << BFQ_BFQQ_FLAG_##name); \ +} \ +static int bfq_bfqq_##name(const struct bfq_queue *bfqq) \ +{ \ + return ((bfqq)->flags & (1 << BFQ_BFQQ_FLAG_##name)) != 0; \ +} + +BFQ_BFQQ_FNS(just_created); +BFQ_BFQQ_FNS(busy); +BFQ_BFQQ_FNS(wait_request); +BFQ_BFQQ_FNS(non_blocking_wait_rq); +BFQ_BFQQ_FNS(must_alloc); +BFQ_BFQQ_FNS(fifo_expire); +BFQ_BFQQ_FNS(has_short_ttime); +BFQ_BFQQ_FNS(sync); +BFQ_BFQQ_FNS(IO_bound); +BFQ_BFQQ_FNS(in_large_burst); +BFQ_BFQQ_FNS(coop); +BFQ_BFQQ_FNS(split_coop); +BFQ_BFQQ_FNS(softrt_update); +#undef BFQ_BFQQ_FNS + +/* Logging facilities. */ +#ifdef CONFIG_BFQ_REDIRECT_TO_CONSOLE + +static const char *checked_dev_name(const struct device *dev) +{ + static const char nodev[] = "nodev"; + + if (dev) + return dev_name(dev); + + return nodev; +} + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct bfq_group *bfqq_group(struct bfq_queue *bfqq); +static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg); + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \ + char __pbuf[128]; \ + \ + assert_spin_locked((bfqd)->queue->queue_lock); \ + blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \ + pr_crit("%s bfq%d%c %s " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + (bfqq)->pid, \ + bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + __pbuf, ##args); \ +} while (0) + +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do { \ + char __pbuf[128]; \ + \ + blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf)); \ + pr_crit("%s %s " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + __pbuf, ##args); \ +} while (0) + +#else /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \ + pr_crit("%s bfq%d%c " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + (bfqq)->pid, bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + ##args) +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do {} while (0) + +#endif /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log(bfqd, fmt, args...) \ + pr_crit("%s bfq " fmt "\n", \ + checked_dev_name((bfqd)->queue->backing_dev_info->dev), \ + ##args) + +#else /* CONFIG_BFQ_REDIRECT_TO_CONSOLE */ + +#if !defined(CONFIG_BLK_DEV_IO_TRACE) + +/* Avoid possible "unused-variable" warning. See commit message. */ + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) ((void) (bfqq)) + +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) ((void) (bfqg)) + +#define bfq_log(bfqd, fmt, args...) do {} while (0) + +#else /* CONFIG_BLK_DEV_IO_TRACE */ + +#include + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +static struct bfq_group *bfqq_group(struct bfq_queue *bfqq); +static struct blkcg_gq *bfqg_to_blkg(struct bfq_group *bfqg); + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) do { \ + char __pbuf[128]; \ + \ + assert_spin_locked((bfqd)->queue->queue_lock); \ + blkg_path(bfqg_to_blkg(bfqq_group(bfqq)), __pbuf, sizeof(__pbuf)); \ + blk_add_trace_msg((bfqd)->queue, "bfq%d%c %s " fmt, \ + (bfqq)->pid, \ + bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + __pbuf, ##args); \ +} while (0) + +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) 
do { \ + char __pbuf[128]; \ + \ + blkg_path(bfqg_to_blkg(bfqg), __pbuf, sizeof(__pbuf)); \ + blk_add_trace_msg((bfqd)->queue, "%s " fmt, __pbuf, ##args); \ +} while (0) + +#else /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log_bfqq(bfqd, bfqq, fmt, args...) \ + blk_add_trace_msg((bfqd)->queue, "bfq%d%c " fmt, (bfqq)->pid, \ + bfq_bfqq_sync((bfqq)) ? 'S' : 'A', \ + ##args) +#define bfq_log_bfqg(bfqd, bfqg, fmt, args...) do {} while (0) + +#endif /* BFQ_GROUP_IOSCHED_ENABLED */ + +#define bfq_log(bfqd, fmt, args...) \ + blk_add_trace_msg((bfqd)->queue, "bfq " fmt, ##args) + +#endif /* CONFIG_BLK_DEV_IO_TRACE */ +#endif /* CONFIG_BFQ_REDIRECT_TO_CONSOLE */ + +/* Expiration reasons. */ +enum bfqq_expiration { + BFQ_BFQQ_TOO_IDLE = 0, /* + * queue has been idling for + * too long + */ + BFQ_BFQQ_BUDGET_TIMEOUT, /* budget took too long to be used */ + BFQ_BFQQ_BUDGET_EXHAUSTED, /* budget consumed */ + BFQ_BFQQ_NO_MORE_REQUESTS, /* the queue has no more requests */ + BFQ_BFQQ_PREEMPTED /* preemption in progress */ +}; + + +struct bfqg_stats { +#if defined(BFQ_GROUP_IOSCHED_ENABLED) && defined(CONFIG_DEBUG_BLK_CGROUP) + /* number of ios merged */ + struct blkg_rwstat merged; + /* total time spent on device in ns, may not be accurate w/ queueing */ + struct blkg_rwstat service_time; + /* total time spent waiting in scheduler queue in ns */ + struct blkg_rwstat wait_time; + /* number of IOs queued up */ + struct blkg_rwstat queued; + /* total disk time and nr sectors dispatched by this group */ + struct blkg_stat time; + /* sum of number of ios queued across all samples */ + struct blkg_stat avg_queue_size_sum; + /* count of samples taken for average */ + struct blkg_stat avg_queue_size_samples; + /* how many times this group has been removed from service tree */ + struct blkg_stat dequeue; + /* total time spent waiting for it to be assigned a timeslice. */ + struct blkg_stat group_wait_time; + /* time spent idling for this blkcg_gq */ + struct blkg_stat idle_time; + /* total time with empty current active q with other requests queued */ + struct blkg_stat empty_time; + /* fields after this shouldn't be cleared on stat reset */ + uint64_t start_group_wait_time; + uint64_t start_idle_time; + uint64_t start_empty_time; + uint16_t flags; +#endif /* BFQ_GROUP_IOSCHED_ENABLED && CONFIG_DEBUG_BLK_CGROUP */ +}; + +#ifdef BFQ_GROUP_IOSCHED_ENABLED +/* + * struct bfq_group_data - per-blkcg storage for the blkio subsystem. + * + * @ps: @blkcg_policy_storage that this structure inherits + * @weight: weight of the bfq_group + */ +struct bfq_group_data { + /* must be the first member */ + struct blkcg_policy_data pd; + + unsigned int weight; +}; + +/** + * struct bfq_group - per (device, cgroup) data structure. + * @entity: schedulable entity to insert into the parent group sched_data. + * @sched_data: own sched_data, to contain child entities (they may be + * both bfq_queues and bfq_groups). + * @bfqd: the bfq_data for the device this group acts upon. + * @async_bfqq: array of async queues for all the tasks belonging to + * the group, one queue per ioprio value per ioprio_class, + * except for the idle class that has only one queue. + * @async_idle_bfqq: async queue for the idle class (ioprio is ignored). + * @my_entity: pointer to @entity, %NULL for the toplevel group; used + * to avoid too many special cases during group creation/ + * migration. + * @active_entities: number of active entities belonging to the group; + * unused for the root group. 
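
The BFQ_BFQQ_FNS() macro a bit further up generates a mark/clear/test helper triple for each per-queue flag, so every bfqq_state_flags entry costs one macro invocation instead of three hand-written functions. A standalone illustration of the same code-generation pattern:

#include <stdio.h>

struct bfq_queue_model {
        unsigned int flags;
};

enum { FLAG_busy = 0, FLAG_sync, FLAG_IO_bound };

#define BFQ_BFQQ_FNS(name) \
static void mark_##name(struct bfq_queue_model *q) \
{ \
        q->flags |= 1u << FLAG_##name; \
} \
static void clear_##name(struct bfq_queue_model *q) \
{ \
        q->flags &= ~(1u << FLAG_##name); \
} \
static int is_##name(const struct bfq_queue_model *q) \
{ \
        return (q->flags & (1u << FLAG_##name)) != 0; \
}

BFQ_BFQQ_FNS(busy)
BFQ_BFQQ_FNS(sync)
BFQ_BFQQ_FNS(IO_bound)

int main(void)
{
        struct bfq_queue_model q = { 0 };

        mark_busy(&q);
        mark_sync(&q);
        mark_IO_bound(&q);
        clear_sync(&q);
        printf("busy=%d sync=%d io_bound=%d\n",
               is_busy(&q), is_sync(&q), is_IO_bound(&q));
        return 0;
}
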
Used to know whether there + * are groups with more than one active @bfq_entity + * (see the comments to the function + * bfq_bfqq_may_idle()). + * @rq_pos_tree: rbtree sorted by next_request position, used when + * determining if two or more queues have interleaving + * requests (see bfq_find_close_cooperator()). + * + * Each (device, cgroup) pair has its own bfq_group, i.e., for each cgroup + * there is a set of bfq_groups, each one collecting the lower-level + * entities belonging to the group that are acting on the same device. + * + * Locking works as follows: + * o @bfqd is protected by the queue lock, RCU is used to access it + * from the readers. + * o All the other fields are protected by the @bfqd queue lock. + */ +struct bfq_group { + /* must be the first member */ + struct blkg_policy_data pd; + + struct bfq_entity entity; + struct bfq_sched_data sched_data; + + void *bfqd; + + struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR]; + struct bfq_queue *async_idle_bfqq; + + struct bfq_entity *my_entity; + + int active_entities; + + struct rb_root rq_pos_tree; + + struct bfqg_stats stats; +}; + +#else +struct bfq_group { + struct bfq_sched_data sched_data; + + struct bfq_queue *async_bfqq[2][IOPRIO_BE_NR]; + struct bfq_queue *async_idle_bfqq; + + struct rb_root rq_pos_tree; +}; +#endif + +static struct bfq_queue *bfq_entity_to_bfqq(struct bfq_entity *entity); + +static unsigned int bfq_class_idx(struct bfq_entity *entity) +{ + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + + return bfqq ? bfqq->ioprio_class - 1 : + BFQ_DEFAULT_GRP_CLASS - 1; +} + +static struct bfq_service_tree * +bfq_entity_service_tree(struct bfq_entity *entity) +{ + struct bfq_sched_data *sched_data = entity->sched_data; + struct bfq_queue *bfqq = bfq_entity_to_bfqq(entity); + unsigned int idx = bfq_class_idx(entity); + + BUG_ON(idx >= BFQ_IOPRIO_CLASSES); + BUG_ON(sched_data == NULL); + + if (bfqq) + bfq_log_bfqq(bfqq->bfqd, bfqq, + "entity_service_tree %p %d", + sched_data->service_tree + idx, idx); +#ifdef BFQ_GROUP_IOSCHED_ENABLED + else { + struct bfq_group *bfqg = + container_of(entity, struct bfq_group, entity); + + bfq_log_bfqg((struct bfq_data *)bfqg->bfqd, bfqg, + "entity_service_tree %p %d", + sched_data->service_tree + idx, idx); + } +#endif + return sched_data->service_tree + idx; +} + +static struct bfq_queue *bic_to_bfqq(struct bfq_io_cq *bic, bool is_sync) +{ + return bic->bfqq[is_sync]; +} + +static void bic_set_bfqq(struct bfq_io_cq *bic, struct bfq_queue *bfqq, + bool is_sync) +{ + bic->bfqq[is_sync] = bfqq; +} + +static struct bfq_data *bic_to_bfqd(struct bfq_io_cq *bic) +{ + return bic->icq.q->elevator->elevator_data; +} + +#ifdef BFQ_GROUP_IOSCHED_ENABLED + +static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq) +{ + struct bfq_entity *group_entity = bfqq->entity.parent; + + if (!group_entity) + group_entity = &bfqq->bfqd->root_group->entity; + + return container_of(group_entity, struct bfq_group, entity); +} + +#else + +static struct bfq_group *bfq_bfqq_to_bfqg(struct bfq_queue *bfqq) +{ + return bfqq->bfqd->root_group; +} + +#endif + +static void bfq_check_ioprio_change(struct bfq_io_cq *bic, struct bio *bio); +static void bfq_put_queue(struct bfq_queue *bfqq); +static void bfq_dispatch_insert(struct request_queue *q, struct request *rq); +static struct bfq_queue *bfq_get_queue(struct bfq_data *bfqd, + struct bio *bio, bool is_sync, + struct bfq_io_cq *bic); +static void bfq_end_wr_async_queues(struct bfq_data *bfqd, + struct bfq_group *bfqg); +#ifdef BFQ_GROUP_IOSCHED_ENABLED 
+static void bfq_put_async_queues(struct bfq_data *bfqd, struct bfq_group *bfqg); +#endif +static void bfq_exit_bfqq(struct bfq_data *bfqd, struct bfq_queue *bfqq); + +#endif /* _BFQ_H */ diff --git a/block/blk-core.c b/block/blk-core.c index 7b30bf1..34ecc99 100644 --- a/block/blk-core.c +++ b/block/blk-core.c @@ -55,6 +55,8 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(block_unplug); DEFINE_IDA(blk_queue_ida); +int trap_non_toi_io; + /* * For the allocated request tables */ @@ -2255,6 +2257,9 @@ EXPORT_SYMBOL(generic_make_request); */ blk_qc_t submit_bio(struct bio *bio) { + if (unlikely(trap_non_toi_io)) + BUG_ON(!bio_flagged(bio, BIO_TOI)); + /* * If it's a regular read/write or a barrier with data attached, * go through the normal accounting stuff before submission. diff --git a/block/elevator.c b/block/elevator.c index 153926a..34bfaca 100644 --- a/block/elevator.c +++ b/block/elevator.c @@ -229,7 +229,11 @@ int elevator_init(struct request_queue *q, char *name) */ if (q->mq_ops) { if (q->nr_hw_queues == 1) + #if defined(CONFIG_PCK_INTERACTIVE) && defined(CONFIG_IOSCHED_BFQ) + e = elevator_get("bfq", false); + #else e = elevator_get("mq-deadline", false); + #endif if (!e) return 0; } else diff --git a/block/genhd.c b/block/genhd.c index dd305c6..e84edee 100644 --- a/block/genhd.c +++ b/block/genhd.c @@ -18,6 +18,8 @@ #include #include #include +#include +#include #include #include #include @@ -1500,6 +1502,85 @@ int invalidate_partition(struct gendisk *disk, int partno) EXPORT_SYMBOL(invalidate_partition); +dev_t blk_lookup_fs_info(struct fs_info *seek) +{ + dev_t devt = MKDEV(0, 0); + struct class_dev_iter iter; + struct device *dev; + int best_score = 0; + + class_dev_iter_init(&iter, &block_class, NULL, &disk_type); + while (best_score < 3 && (dev = class_dev_iter_next(&iter))) { + struct gendisk *disk = dev_to_disk(dev); + struct disk_part_iter piter; + struct hd_struct *part; + + disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0); + + while (best_score < 3 && (part = disk_part_iter_next(&piter))) { + int score = part_matches_fs_info(part, seek); + if (score > best_score) { + devt = part_devt(part); + best_score = score; + } + } + disk_part_iter_exit(&piter); + } + class_dev_iter_exit(&iter); + return devt; +} + +/* Caller uses NULL, key to start. For each match found, we return a bdev on + * which we have done blkdev_get, and we do the blkdev_put on block devices + * that are passed to us. When no more matches are found, we return NULL. 
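
next_bdev_of_type() above follows a cursor convention: pass NULL to start, each call returns the next matching device already pinned by blkdev_get(), the device you hand back in is released for you, and NULL ends the walk. A toy userspace model of that contract, with plain reference counts standing in for block devices:

#include <stdio.h>

struct item {
        const char *name;
        int matches;
        int refs;          /* models the blkdev_get()/blkdev_put() pin */
};

static struct item items[] = {
        { "sda1", 0, 0 }, { "sda2", 1, 0 }, { "sdb1", 1, 0 },
};
#define N_ITEMS (sizeof(items) / sizeof(items[0]))

/* Return the next matching item after 'last', pinned; drop 'last'. */
static struct item *next_matching(struct item *last)
{
        size_t start = 0, i;

        if (last) {
                start = (last - items) + 1;
                last->refs--;                 /* put the one handed back */
        }
        for (i = start; i < N_ITEMS; i++)
                if (items[i].matches) {
                        items[i].refs++;      /* get before returning */
                        return &items[i];
                }
        return NULL;
}

int main(void)
{
        struct item *it = NULL;

        while ((it = next_matching(it)))
                printf("matched %s (refs=%d)\n", it->name, it->refs);
        return 0;
}
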
+ */ +struct block_device *next_bdev_of_type(struct block_device *last, + const char *key) +{ + dev_t devt = MKDEV(0, 0); + struct class_dev_iter iter; + struct device *dev; + struct block_device *next = NULL, *bdev; + int got_last = 0; + + if (!key) + goto out; + + class_dev_iter_init(&iter, &block_class, NULL, &disk_type); + while (!devt && (dev = class_dev_iter_next(&iter))) { + struct gendisk *disk = dev_to_disk(dev); + struct disk_part_iter piter; + struct hd_struct *part; + + disk_part_iter_init(&piter, disk, DISK_PITER_INCL_PART0); + + while ((part = disk_part_iter_next(&piter))) { + bdev = bdget(part_devt(part)); + if (last && !got_last) { + if (last == bdev) + got_last = 1; + continue; + } + + if (blkdev_get(bdev, FMODE_READ, 0)) + continue; + + if (bdev_matches_key(bdev, key)) { + next = bdev; + break; + } + + blkdev_put(bdev, FMODE_READ); + } + disk_part_iter_exit(&piter); + } + class_dev_iter_exit(&iter); +out: + if (last) + blkdev_put(last, FMODE_READ); + return next; +} + /* * Disk events - monitor disk events like media change and eject request. */ diff --git b/block/uuid.c b/block/uuid.c new file mode 100644 index 0000000..26cedeb --- /dev/null +++ b/block/uuid.c @@ -0,0 +1,510 @@ +#include +#include +#include +#include +#include + +static int debug_enabled; + +#define PRINTK(fmt, args...) do { \ + if (debug_enabled) \ + printk(KERN_DEBUG fmt, ## args); \ + } while(0) + +#define PRINT_HEX_DUMP(v1, v2, v3, v4, v5, v6, v7, v8) \ + do { \ + if (debug_enabled) \ + print_hex_dump(v1, v2, v3, v4, v5, v6, v7, v8); \ + } while(0) + +/* + * Simple UUID translation + */ + +struct uuid_info { + const char *key; + const char *name; + long bkoff; + unsigned sboff; + unsigned sig_len; + const char *magic; + int uuid_offset; + int last_mount_offset; + int last_mount_size; +}; + +/* + * Based on libuuid's blkid_magic array. Note that I don't + * have uuid offsets for all of these yet - mssing ones are 0x0. + * Further information welcome. + * + * Rearranged by page of fs signature for optimisation. 
+ */ +static struct uuid_info uuid_list[] = { + { NULL, "oracleasm", 0, 32, 8, "ORCLDISK", 0x0, 0, 0 }, + { "ntfs", "ntfs", 0, 3, 8, "NTFS ", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0x52, 5, "MSWIN", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0x52, 8, "FAT32 ", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0x36, 5, "MSDOS", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0x36, 8, "FAT16 ", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0x36, 8, "FAT12 ", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0, 1, "\353", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0, 1, "\351", 0x0, 0, 0 }, + { "vfat", "vfat", 0, 0x1fe, 2, "\125\252", 0x0, 0, 0 }, + { "xfs", "xfs", 0, 0, 4, "XFSB", 0x20, 0, 0 }, + { "romfs", "romfs", 0, 0, 8, "-rom1fs-", 0x0, 0, 0 }, + { "bfs", "bfs", 0, 0, 4, "\316\372\173\033", 0, 0, 0 }, + { "cramfs", "cramfs", 0, 0, 4, "E=\315\050", 0x0, 0, 0 }, + { "qnx4", "qnx4", 0, 4, 6, "QNX4FS", 0, 0, 0 }, + { NULL, "crypt_LUKS", 0, 0, 6, "LUKS\xba\xbe", 0x0, 0, 0 }, + { "squashfs", "squashfs", 0, 0, 4, "sqsh", 0, 0, 0 }, + { "squashfs", "squashfs", 0, 0, 4, "hsqs", 0, 0, 0 }, + { "ocfs", "ocfs", 0, 8, 9, "OracleCFS", 0x0, 0, 0 }, + { "lvm2pv", "lvm2pv", 0, 0x018, 8, "LVM2 001", 0x0, 0, 0 }, + { "sysv", "sysv", 0, 0x3f8, 4, "\020~\030\375", 0, 0, 0 }, + { "ext", "ext", 1, 0x38, 2, "\123\357", 0x468, 0x42c, 4 }, + { "minix", "minix", 1, 0x10, 2, "\177\023", 0, 0, 0 }, + { "minix", "minix", 1, 0x10, 2, "\217\023", 0, 0, 0 }, + { "minix", "minix", 1, 0x10, 2, "\150\044", 0, 0, 0 }, + { "minix", "minix", 1, 0x10, 2, "\170\044", 0, 0, 0 }, + { "lvm2pv", "lvm2pv", 1, 0x018, 8, "LVM2 001", 0x0, 0, 0 }, + { "vxfs", "vxfs", 1, 0, 4, "\365\374\001\245", 0, 0, 0 }, + { "hfsplus", "hfsplus", 1, 0, 2, "BD", 0x0, 0, 0 }, + { "hfsplus", "hfsplus", 1, 0, 2, "H+", 0x0, 0, 0 }, + { "hfsplus", "hfsplus", 1, 0, 2, "HX", 0x0, 0, 0 }, + { "hfs", "hfs", 1, 0, 2, "BD", 0x0, 0, 0 }, + { "ocfs2", "ocfs2", 1, 0, 6, "OCFSV2", 0x0, 0, 0 }, + { "lvm2pv", "lvm2pv", 0, 0x218, 8, "LVM2 001", 0x0, 0, 0 }, + { "lvm2pv", "lvm2pv", 1, 0x218, 8, "LVM2 001", 0x0, 0, 0 }, + { "ocfs2", "ocfs2", 2, 0, 6, "OCFSV2", 0x0, 0, 0 }, + { "swap", "swap", 0, 0xff6, 10, "SWAP-SPACE", 0x40c, 0, 0 }, + { "swap", "swap", 0, 0xff6, 10, "SWAPSPACE2", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0xff6, 9, "S1SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0xff6, 9, "S2SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0xff6, 9, "ULSUSPEND", 0x40c, 0, 0 }, + { "ocfs2", "ocfs2", 4, 0, 6, "OCFSV2", 0x0, 0, 0 }, + { "ocfs2", "ocfs2", 8, 0, 6, "OCFSV2", 0x0, 0, 0 }, + { "hpfs", "hpfs", 8, 0, 4, "I\350\225\371", 0, 0, 0 }, + { "reiserfs", "reiserfs", 8, 0x34, 8, "ReIsErFs", 0x10054, 0, 0 }, + { "reiserfs", "reiserfs", 8, 20, 8, "ReIsErFs", 0x10054, 0, 0 }, + { "zfs", "zfs", 8, 0, 8, "\0\0\x02\xf5\xb0\x07\xb1\x0c", 0x0, 0, 0 }, + { "zfs", "zfs", 8, 0, 8, "\x0c\xb1\x07\xb0\xf5\x02\0\0", 0x0, 0, 0 }, + { "ufs", "ufs", 8, 0x55c, 4, "T\031\001\000", 0, 0, 0 }, + { "swap", "swap", 0, 0x1ff6, 10, "SWAP-SPACE", 0x40c, 0, 0 }, + { "swap", "swap", 0, 0x1ff6, 10, "SWAPSPACE2", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x1ff6, 9, "S1SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x1ff6, 9, "S2SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x1ff6, 9, "ULSUSPEND", 0x40c, 0, 0 }, + { "reiserfs", "reiserfs", 64, 0x34, 9, "ReIsEr2Fs", 0x10054, 0, 0 }, + { "reiserfs", "reiserfs", 64, 0x34, 9, "ReIsEr3Fs", 0x10054, 0, 0 }, + { "reiserfs", "reiserfs", 64, 0x34, 8, "ReIsErFs", 0x10054, 0, 0 }, + { "reiser4", "reiser4", 64, 0, 7, "ReIsEr4", 0x100544, 0, 0 }, + { "gfs2", "gfs2", 64, 0, 4, "\x01\x16\x19\x70", 0x0, 0, 
0 }, + { "gfs", "gfs", 64, 0, 4, "\x01\x16\x19\x70", 0x0, 0, 0 }, + { "btrfs", "btrfs", 64, 0x40, 8, "_BHRfS_M", 0x0, 0, 0 }, + { "swap", "swap", 0, 0x3ff6, 10, "SWAP-SPACE", 0x40c, 0, 0 }, + { "swap", "swap", 0, 0x3ff6, 10, "SWAPSPACE2", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x3ff6, 9, "S1SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x3ff6, 9, "S2SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x3ff6, 9, "ULSUSPEND", 0x40c, 0, 0 }, + { "udf", "udf", 32, 1, 5, "BEA01", 0x0, 0, 0 }, + { "udf", "udf", 32, 1, 5, "BOOT2", 0x0, 0, 0 }, + { "udf", "udf", 32, 1, 5, "CD001", 0x0, 0, 0 }, + { "udf", "udf", 32, 1, 5, "CDW02", 0x0, 0, 0 }, + { "udf", "udf", 32, 1, 5, "NSR02", 0x0, 0, 0 }, + { "udf", "udf", 32, 1, 5, "NSR03", 0x0, 0, 0 }, + { "udf", "udf", 32, 1, 5, "TEA01", 0x0, 0, 0 }, + { "iso9660", "iso9660", 32, 1, 5, "CD001", 0x0, 0, 0 }, + { "iso9660", "iso9660", 32, 9, 5, "CDROM", 0x0, 0, 0 }, + { "jfs", "jfs", 32, 0, 4, "JFS1", 0x88, 0, 0 }, + { "swap", "swap", 0, 0x7ff6, 10, "SWAP-SPACE", 0x40c, 0, 0 }, + { "swap", "swap", 0, 0x7ff6, 10, "SWAPSPACE2", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x7ff6, 9, "S1SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x7ff6, 9, "S2SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0x7ff6, 9, "ULSUSPEND", 0x40c, 0, 0 }, + { "swap", "swap", 0, 0xfff6, 10, "SWAP-SPACE", 0x40c, 0, 0 }, + { "swap", "swap", 0, 0xfff6, 10, "SWAPSPACE2", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0xfff6, 9, "S1SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0xfff6, 9, "S2SUSPEND", 0x40c, 0, 0 }, + { "swap", "swsuspend", 0, 0xfff6, 9, "ULSUSPEND", 0x40c, 0, 0 }, + { "zfs", "zfs", 264, 0, 8, "\0\0\x02\xf5\xb0\x07\xb1\x0c", 0x0, 0, 0 }, + { "zfs", "zfs", 264, 0, 8, "\x0c\xb1\x07\xb0\xf5\x02\0\0", 0x0, 0, 0 }, + { NULL, NULL, 0, 0, 0, NULL, 0x0, 0, 0 } +}; + +static int null_uuid(const char *uuid) +{ + int i; + + for (i = 0; i < 16 && !uuid[i]; i++); + + return (i == 16); +} + + +static void uuid_end_bio(struct bio *bio) +{ + struct page *page = bio->bi_io_vec[0].bv_page; + + if (bio->bi_status == BLK_STS_IOERR) + SetPageError(page); + + unlock_page(page); + bio_put(bio); +} + + +/** + * read_bdev_page - Read a page from a device. + * @dev: The block device we're using. + * @page_num: The page we're reading. + * + * Based on Patrick Mochell's pmdisk code from long ago: "Straight from the + * textbook - allocate and initialize the bio. If we're writing, make sure + * the page is marked as dirty. Then submit it and carry on." 
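
One pattern worth spelling out in the uuid_list table above: the swap entries use sboff values 0xff6, 0x1ff6, 0x3ff6, 0x7ff6 and 0xfff6 because the 10-byte swap signature (SWAP-SPACE/SWAPSPACE2) sits in the last 10 bytes of the first page, and the table has to cover the common page sizes. The arithmetic:

#include <stdio.h>

int main(void)
{
        /* candidate page sizes covered by the table */
        unsigned int page_size[] = { 4096, 8192, 16384, 32768, 65536 };
        unsigned int i, sig_len = 10;   /* strlen("SWAPSPACE2") */

        for (i = 0; i < sizeof(page_size) / sizeof(page_size[0]); i++)
                printf("page size %5u -> signature offset 0x%x\n",
                       page_size[i], page_size[i] - sig_len);
        return 0;
}
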
+ **/ +static struct page *read_bdev_page(struct block_device *dev, int page_num) +{ + struct bio *bio = NULL; + struct page *page = alloc_page(GFP_NOFS | __GFP_HIGHMEM); + + if (!page) { + printk(KERN_ERR "Failed to allocate a page for reading data " + "in UUID checks."); + return NULL; + } + + bio = bio_alloc(GFP_NOFS, 1); + bio_set_dev(bio, dev); + bio->bi_iter.bi_sector = page_num << 3; + bio->bi_end_io = uuid_end_bio; + bio->bi_flags |= (1 << BIO_TOI); + + PRINTK("Submitting bio on device %lx, page %d using bio %p and page %p.\n", + (unsigned long) dev->bd_dev, page_num, bio, page); + + if (bio_add_page(bio, page, PAGE_SIZE, 0) < PAGE_SIZE) { + printk(KERN_DEBUG "ERROR: adding page to bio at %d\n", + page_num); + bio_put(bio); + __free_page(page); + printk(KERN_DEBUG "read_bdev_page freed page %p (in error " + "path).\n", page); + return NULL; + } + + lock_page(page); + bio_set_op_attrs(bio, REQ_OP_READ, REQ_SYNC); + submit_bio(bio); + + wait_on_page_locked(page); + if (PageError(page)) { + __free_page(page); + page = NULL; + } + return page; +} + +int bdev_matches_key(struct block_device *bdev, const char *key) +{ + unsigned char *data = NULL; + struct page *data_page = NULL; + + int dev_offset, pg_num, pg_off, i; + int last_pg_num = -1; + int result = 0; + char buf[50]; + + if (null_uuid(key)) { + PRINTK("Refusing to find a NULL key.\n"); + return 0; + } + + if (!bdev->bd_disk) { + bdevname(bdev, buf); + PRINTK("bdev %s has no bd_disk.\n", buf); + return 0; + } + + if (!bdev->bd_disk->queue) { + bdevname(bdev, buf); + PRINTK("bdev %s has no queue.\n", buf); + return 0; + } + + for (i = 0; uuid_list[i].name; i++) { + struct uuid_info *dat = &uuid_list[i]; + + if (!dat->key || strcmp(dat->key, key)) + continue; + + dev_offset = (dat->bkoff << 10) + dat->sboff; + pg_num = dev_offset >> 12; + pg_off = dev_offset & 0xfff; + + if ((((pg_num + 1) << 3) - 1) > bdev->bd_part->nr_sects >> 1) + continue; + + if (pg_num != last_pg_num) { + if (data_page) { + kunmap(data_page); + __free_page(data_page); + } + data_page = read_bdev_page(bdev, pg_num); + if (!data_page) + continue; + data = kmap(data_page); + } + + last_pg_num = pg_num; + + if (strncmp(&data[pg_off], dat->magic, dat->sig_len)) + continue; + + result = 1; + break; + } + + if (data_page) { + kunmap(data_page); + __free_page(data_page); + } + + return result; +} + +/* + * part_matches_fs_info - Does the given partition match the details given? + * + * Returns a score saying how good the match is. + * 0 = no UUID match. + * 1 = UUID but last mount time differs. + * 2 = UUID, last mount time but not dev_t + * 3 = perfect match + * + * This lets us cope elegantly with probing resulting in dev_ts changing + * from boot to boot, and with the case where a user copies a partition + * (UUID is non unique), and we need to check the last mount time of the + * correct partition. 
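
The 0-3 score that part_matches_fs_info() returns, described just above, amounts to three nested checks. A compact restatement of the same ranking (the last-mount comparison is simplified to a single value here):

#include <stdio.h>
#include <string.h>

/*
 * 0 = no UUID match, 1 = UUID only, 2 = UUID + last-mount time,
 * 3 = UUID + last-mount time + dev_t  (same ranking as the comment above).
 */
static int match_score(const unsigned char *uuid_a, const unsigned char *uuid_b,
                       long mount_a, long mount_b, int devt_a, int devt_b)
{
        if (memcmp(uuid_a, uuid_b, 16))
                return 0;
        if (mount_a != mount_b)
                return 1;
        if (devt_a != devt_b)
                return 2;
        return 3;
}

int main(void)
{
        unsigned char u[16] = "0123456789abcde";  /* 15 chars + NUL */

        printf("score: %d\n", match_score(u, u, 1700000000L, 1700000000L,
                                          0x801, 0x802));
        return 0;
}
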
+ */ +int part_matches_fs_info(struct hd_struct *part, struct fs_info *seek) +{ + struct block_device *bdev; + struct fs_info *got; + int result = 0; + char buf[50]; + + if (null_uuid((char *) &seek->uuid)) { + PRINTK("Refusing to find a NULL uuid.\n"); + return 0; + } + + bdev = bdget(part_devt(part)); + + PRINTK("part_matches fs info considering %x.\n", part_devt(part)); + + if (blkdev_get(bdev, FMODE_READ, 0)) { + PRINTK("blkdev_get failed.\n"); + return 0; + } + + if (!bdev->bd_disk) { + bdevname(bdev, buf); + PRINTK("bdev %s has no bd_disk.\n", buf); + goto out; + } + + if (!bdev->bd_disk->queue) { + bdevname(bdev, buf); + PRINTK("bdev %s has no queue.\n", buf); + goto out; + } + + got = fs_info_from_block_dev(bdev); + + if (got && !memcmp(got->uuid, seek->uuid, 16)) { + PRINTK(" Have matching UUID.\n"); + PRINTK(" Got: LMS %d, LM %p.\n", got->last_mount_size, got->last_mount); + PRINTK(" Seek: LMS %d, LM %p.\n", seek->last_mount_size, seek->last_mount); + result = 1; + + if (got->last_mount_size == seek->last_mount_size && + got->last_mount && seek->last_mount && + !memcmp(got->last_mount, seek->last_mount, + got->last_mount_size)) { + result = 2; + + PRINTK(" Matching last mount time.\n"); + + if (part_devt(part) == seek->dev_t) { + result = 3; + PRINTK(" Matching dev_t.\n"); + } else + PRINTK("Dev_ts differ (%x vs %x).\n", part_devt(part), seek->dev_t); + } + } + + PRINTK(" Score for %x is %d.\n", part_devt(part), result); + free_fs_info(got); +out: + blkdev_put(bdev, FMODE_READ); + return result; +} + +void free_fs_info(struct fs_info *fs_info) +{ + if (!fs_info || IS_ERR(fs_info)) + return; + + if (fs_info->last_mount) + kfree(fs_info->last_mount); + + kfree(fs_info); +} + +struct fs_info *fs_info_from_block_dev(struct block_device *bdev) +{ + unsigned char *data = NULL; + struct page *data_page = NULL; + + int dev_offset, pg_num, pg_off; + int uuid_pg_num, uuid_pg_off, i; + unsigned char *uuid_data = NULL; + struct page *uuid_data_page = NULL; + + int last_pg_num = -1, last_uuid_pg_num = 0; + char buf[50]; + struct fs_info *fs_info = NULL; + + bdevname(bdev, buf); + + PRINTK("uuid_from_block_dev looking for partition type of %s.\n", buf); + + for (i = 0; uuid_list[i].name; i++) { + struct uuid_info *dat = &uuid_list[i]; + dev_offset = (dat->bkoff << 10) + dat->sboff; + pg_num = dev_offset >> 12; + pg_off = dev_offset & 0xfff; + uuid_pg_num = dat->uuid_offset >> 12; + uuid_pg_off = dat->uuid_offset & 0xfff; + + if ((((pg_num + 1) << 3) - 1) > bdev->bd_part->nr_sects >> 1) + continue; + + /* Ignore partition types with no UUID offset */ + if (!dat->uuid_offset) + continue; + + if (pg_num != last_pg_num) { + if (data_page) { + kunmap(data_page); + __free_page(data_page); + } + data_page = read_bdev_page(bdev, pg_num); + if (!data_page) + continue; + data = kmap(data_page); + } + + last_pg_num = pg_num; + + if (strncmp(&data[pg_off], dat->magic, dat->sig_len)) + continue; + + PRINTK("This partition looks like %s.\n", dat->name); + + fs_info = kzalloc(sizeof(struct fs_info), GFP_KERNEL); + + if (!fs_info) { + PRINTK("Failed to allocate fs_info struct."); + fs_info = ERR_PTR(-ENOMEM); + break; + } + + /* UUID can't be off the end of the disk */ + if ((uuid_pg_num > bdev->bd_part->nr_sects >> 3) || + !dat->uuid_offset) + goto no_uuid; + + if (!uuid_data || uuid_pg_num != last_uuid_pg_num) { + /* No need to reread the page from above */ + if (uuid_pg_num == pg_num && uuid_data) + memcpy(uuid_data, data, PAGE_SIZE); + else { + if (uuid_data_page) { + kunmap(uuid_data_page); + 
__free_page(uuid_data_page); + } + uuid_data_page = read_bdev_page(bdev, uuid_pg_num); + if (!uuid_data_page) + continue; + uuid_data = kmap(uuid_data_page); + } + } + + last_uuid_pg_num = uuid_pg_num; + memcpy(&fs_info->uuid, &uuid_data[uuid_pg_off], 16); + fs_info->dev_t = bdev->bd_dev; + +no_uuid: + PRINT_HEX_DUMP(KERN_EMERG, "fs_info_from_block_dev " + "returning uuid ", DUMP_PREFIX_NONE, 16, 1, + fs_info->uuid, 16, 0); + + if (dat->last_mount_size) { + int pg = dat->last_mount_offset >> 12, sz; + int off = dat->last_mount_offset & 0xfff; + struct page *last_mount = read_bdev_page(bdev, pg); + unsigned char *last_mount_data; + char *ptr; + + if (!last_mount) { + fs_info = ERR_PTR(-ENOMEM); + break; + } + last_mount_data = kmap(last_mount); + sz = dat->last_mount_size; + ptr = kmalloc(sz, GFP_KERNEL); + + if (!ptr) { + printk(KERN_EMERG "fs_info_from_block_dev " + "failed to get memory for last mount " + "timestamp."); + free_fs_info(fs_info); + fs_info = ERR_PTR(-ENOMEM); + } else { + fs_info->last_mount = ptr; + fs_info->last_mount_size = sz; + memcpy(ptr, &last_mount_data[off], sz); + } + + kunmap(last_mount); + __free_page(last_mount); + } + break; + } + + if (data_page) { + kunmap(data_page); + __free_page(data_page); + } + + if (uuid_data_page) { + kunmap(uuid_data_page); + __free_page(uuid_data_page); + } + + return fs_info; +} + +static int __init uuid_debug_setup(char *str) +{ + int value; + + if (sscanf(str, "=%d", &value)) + debug_enabled = value; + + return 1; +} + +__setup("uuid_debug", uuid_debug_setup); diff --git a/drivers/block/loop.c b/drivers/block/loop.c index 85de673..d44de9d 100644 --- a/drivers/block/loop.c +++ b/drivers/block/loop.c @@ -686,6 +686,24 @@ static inline int is_loop_device(struct file *file) return i && S_ISBLK(i->i_mode) && MAJOR(i->i_rdev) == LOOP_MAJOR; } +/* + * for AUFS + * no get/put for file. 
+ */ +struct file *loop_backing_file(struct super_block *sb) +{ + struct file *ret; + struct loop_device *l; + + ret = NULL; + if (MAJOR(sb->s_dev) == LOOP_MAJOR) { + l = sb->s_bdev->bd_disk->private_data; + ret = l->lo_backing_file; + } + return ret; +} +EXPORT_SYMBOL_GPL(loop_backing_file); + /* loop sysfs attributes */ static ssize_t loop_attr_show(struct device *dev, char *page, diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c index 6b423ee..4b4932e 100644 --- a/drivers/cpufreq/cpufreq_ondemand.c +++ b/drivers/cpufreq/cpufreq_ondemand.c @@ -21,10 +21,18 @@ #include "cpufreq_ondemand.h" /* On-demand governor macros */ -#define DEF_FREQUENCY_UP_THRESHOLD (80) +#if defined(CONFIG_PCK_INTERACTIVE) && defined(CONFIG_SCHED_MUQSS) + #define DEF_FREQUENCY_UP_THRESHOLD (45) + #define MICRO_FREQUENCY_UP_THRESHOLD (45) +#elif defined(CONFIG_PCK_INTERACTIVE) + #define DEF_FREQUENCY_UP_THRESHOLD (80) + #define MICRO_FREQUENCY_UP_THRESHOLD (85) +#else + #define DEF_FREQUENCY_UP_THRESHOLD (80) + #define MICRO_FREQUENCY_UP_THRESHOLD (95) +#endif #define DEF_SAMPLING_DOWN_FACTOR (1) #define MAX_SAMPLING_DOWN_FACTOR (100000) -#define MICRO_FREQUENCY_UP_THRESHOLD (95) #define MICRO_FREQUENCY_MIN_SAMPLE_RATE (10000) #define MIN_FREQUENCY_UP_THRESHOLD (1) #define MAX_FREQUENCY_UP_THRESHOLD (100) diff --git a/drivers/input/mouse/synaptics.c b/drivers/input/mouse/synaptics.c index ee5466a..4fce7aa 100644 --- a/drivers/input/mouse/synaptics.c +++ b/drivers/input/mouse/synaptics.c @@ -1315,7 +1315,9 @@ static void set_input_params(struct psmouse *psmouse, /* Clickpads report only left button */ __clear_bit(BTN_RIGHT, dev->keybit); __clear_bit(BTN_MIDDLE, dev->keybit); - } + } else if (SYN_CAP_CLICKPAD2BTN(info->ext_cap_0c) || + SYN_CAP_CLICKPAD2BTN2(info->ext_cap_0c)) + __set_bit(INPUT_PROP_BUTTONPAD, dev->propbit); } static ssize_t synaptics_show_disable_gesture(struct psmouse *psmouse, diff --git a/drivers/input/mouse/synaptics.h b/drivers/input/mouse/synaptics.h index fc00e00..4cfbeec 100644 --- a/drivers/input/mouse/synaptics.h +++ b/drivers/input/mouse/synaptics.h @@ -86,6 +86,7 @@ */ #define SYN_CAP_CLICKPAD(ex0c) ((ex0c) & BIT(20)) /* 1-button ClickPad */ #define SYN_CAP_CLICKPAD2BTN(ex0c) ((ex0c) & BIT(8)) /* 2-button ClickPad */ +#define SYN_CAP_CLICKPAD2BTN2(ex0c) ((ex0c) & BIT(21)) /* 2-button ClickPad */ #define SYN_CAP_MAX_DIMENSIONS(ex0c) ((ex0c) & BIT(17)) #define SYN_CAP_MIN_DIMENSIONS(ex0c) ((ex0c) & BIT(13)) #define SYN_CAP_ADV_GESTURE(ex0c) ((ex0c) & BIT(19)) diff --git a/drivers/macintosh/Kconfig b/drivers/macintosh/Kconfig index 97a420c..c8621e9 100644 --- a/drivers/macintosh/Kconfig +++ b/drivers/macintosh/Kconfig @@ -159,6 +159,13 @@ config INPUT_ADBHID If unsure, say Y. +config ADB_TRACKPAD_ABSOLUTE + bool "Enable absolute mode for adb trackpads" + depends on INPUT_ADBHID + help + Enable absolute mode in adb-base trackpads. This feature adds + compatibility with synaptics Xorg / Xfree drivers. 
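
For context on the ondemand change earlier in this hunk: up_threshold is the busy percentage above which the governor requests a higher frequency, so dropping it to 45 under CONFIG_PCK_INTERACTIVE together with CONFIG_SCHED_MUQSS makes ramp-up considerably more eager. A stripped-down model of that single decision; the real governor also scales frequency proportionally below the threshold.

#include <stdio.h>

static const char *decide(unsigned int load_pct, unsigned int up_threshold)
{
        return load_pct > up_threshold ? "jump to max frequency"
                                       : "stay / scale proportionally";
}

int main(void)
{
        unsigned int load = 60;   /* sampled CPU busy percentage */

        printf("load %u%%, threshold 80 -> %s\n", load, decide(load, 80));
        printf("load %u%%, threshold 45 -> %s\n", load, decide(load, 45));
        return 0;
}
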
+ config MAC_EMUMOUSEBTN tristate "Support for mouse button 2+3 emulation" depends on SYSCTL && INPUT diff --git a/drivers/macintosh/adbhid.c b/drivers/macintosh/adbhid.c index e091193..19b9a36 100644 --- a/drivers/macintosh/adbhid.c +++ b/drivers/macintosh/adbhid.c @@ -262,6 +262,15 @@ static struct adb_ids buttons_ids; #define ADBMOUSE_MS_A3 8 /* Mouse systems A3 trackball (handler 3) */ #define ADBMOUSE_MACALLY2 9 /* MacAlly 2-button mouse */ +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE +#define ABS_XMIN 310 +#define ABS_XMAX 1700 +#define ABS_YMIN 200 +#define ABS_YMAX 1000 +#define ABS_ZMIN 0 +#define ABS_ZMAX 55 +#endif + static void adbhid_keyboard_input(unsigned char *data, int nb, int apoll) { @@ -406,6 +415,9 @@ static void adbhid_mouse_input(unsigned char *data, int nb, int autopoll) { int id = (data[0] >> 4) & 0x0f; +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE + int btn = 0; int x_axis = 0; int y_axis = 0; int z_axis = 0; +#endif if (!adbhid[id]) { printk(KERN_ERR "ADB HID on ID %d not yet registered\n", id); @@ -437,6 +449,17 @@ adbhid_mouse_input(unsigned char *data, int nb, int autopoll) high bits of y-axis motion. XY is additional high bits of x-axis motion. + For ADB Absolute motion protocol the data array will contain the + following values: + + BITS COMMENTS + data[0] = dddd 1100 ADB command: Talk, register 0, for device dddd. + data[1] = byyy yyyy Left button and y-axis motion. + data[2] = bxxx xxxx Second button and x-axis motion. + data[3] = 1yyy 1xxx Half bits of y-axis and x-axis motion. + data[4] = 1yyy 1xxx Higher bits of y-axis and x-axis motion. + data[5] = 1zzz 1zzz Higher and lower bits of z-pressure. + MacAlly 2-button mouse protocol. For MacAlly 2-button mouse protocol the data array will contain the @@ -459,8 +482,17 @@ adbhid_mouse_input(unsigned char *data, int nb, int autopoll) switch (adbhid[id]->mouse_kind) { case ADBMOUSE_TRACKPAD: +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE + x_axis = (data[2] & 0x7f) | ((data[3] & 0x07) << 7) | + ((data[4] & 0x07) << 10); + y_axis = (data[1] & 0x7f) | ((data[3] & 0x70) << 3) | + ((data[4] & 0x70) << 6); + z_axis = (data[5] & 0x07) | ((data[5] & 0x70) >> 1); + btn = (!(data[1] >> 7)) & 1; +#else data[1] = (data[1] & 0x7f) | ((data[1] & data[2]) & 0x80); data[2] = data[2] | 0x80; +#endif break; case ADBMOUSE_MICROSPEED: data[1] = (data[1] & 0x7f) | ((data[3] & 0x01) << 7); @@ -486,17 +518,39 @@ adbhid_mouse_input(unsigned char *data, int nb, int autopoll) break; } - input_report_key(adbhid[id]->input, BTN_LEFT, !((data[1] >> 7) & 1)); - input_report_key(adbhid[id]->input, BTN_MIDDLE, !((data[2] >> 7) & 1)); +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE + if ( adbhid[id]->mouse_kind == ADBMOUSE_TRACKPAD ) { - if (nb >= 4 && adbhid[id]->mouse_kind != ADBMOUSE_TRACKPAD) - input_report_key(adbhid[id]->input, BTN_RIGHT, !((data[3] >> 7) & 1)); + if(z_axis > 30) input_report_key(adbhid[id]->input, BTN_TOUCH, 1); + if(z_axis < 25) input_report_key(adbhid[id]->input, BTN_TOUCH, 0); - input_report_rel(adbhid[id]->input, REL_X, - ((data[2]&0x7f) < 64 ? (data[2]&0x7f) : (data[2]&0x7f)-128 )); - input_report_rel(adbhid[id]->input, REL_Y, - ((data[1]&0x7f) < 64 ? 
(data[1]&0x7f) : (data[1]&0x7f)-128 )); + if(z_axis > 0){ + input_report_abs(adbhid[id]->input, ABS_X, x_axis); + input_report_abs(adbhid[id]->input, ABS_Y, y_axis); + input_report_key(adbhid[id]->input, BTN_TOOL_FINGER, 1); + input_report_key(adbhid[id]->input, ABS_TOOL_WIDTH, 5); + } else { + input_report_key(adbhid[id]->input, BTN_TOOL_FINGER, 0); + input_report_key(adbhid[id]->input, ABS_TOOL_WIDTH, 0); + } + + input_report_abs(adbhid[id]->input, ABS_PRESSURE, z_axis); + input_report_key(adbhid[id]->input, BTN_LEFT, btn); + } else { +#endif + input_report_key(adbhid[id]->input, BTN_LEFT, !((data[1] >> 7) & 1)); + input_report_key(adbhid[id]->input, BTN_MIDDLE, !((data[2] >> 7) & 1)); + + if (nb >= 4 && adbhid[id]->mouse_kind != ADBMOUSE_TRACKPAD) + input_report_key(adbhid[id]->input, BTN_RIGHT, !((data[3] >> 7) & 1)); + input_report_rel(adbhid[id]->input, REL_X, + ((data[2]&0x7f) < 64 ? (data[2]&0x7f) : (data[2]&0x7f)-128 )); + input_report_rel(adbhid[id]->input, REL_Y, + ((data[1]&0x7f) < 64 ? (data[1]&0x7f) : (data[1]&0x7f)-128 )); +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE + } +#endif input_sync(adbhid[id]->input); } @@ -850,6 +904,15 @@ adbhid_input_register(int id, int default_id, int original_handler_id, input_dev->keybit[BIT_WORD(BTN_MOUSE)] = BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_MIDDLE) | BIT_MASK(BTN_RIGHT); input_dev->relbit[0] = BIT_MASK(REL_X) | BIT_MASK(REL_Y); +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE + set_bit(EV_ABS, input_dev->evbit); + input_set_abs_params(input_dev, ABS_X, ABS_XMIN, ABS_XMAX, 0, 0); + input_set_abs_params(input_dev, ABS_Y, ABS_YMIN, ABS_YMAX, 0, 0); + input_set_abs_params(input_dev, ABS_PRESSURE, ABS_ZMIN, ABS_ZMAX, 0, 0); + set_bit(BTN_TOUCH, input_dev->keybit); + set_bit(BTN_TOOL_FINGER, input_dev->keybit); + set_bit(ABS_TOOL_WIDTH, input_dev->absbit); +#endif break; case ADB_MISC: @@ -1133,7 +1196,11 @@ init_trackpad(int id) r1_buffer[3], r1_buffer[4], r1_buffer[5], +#ifdef CONFIG_ADB_TRACKPAD_ABSOLUTE + 0x00, /* Enable absolute mode */ +#else 0x03, /*r1_buffer[6],*/ +#endif r1_buffer[7]); /* Without this flush, the trackpad may be locked up */ diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c index d6d4ed7..31277d3 100644 --- a/drivers/net/ethernet/intel/e1000e/ich8lan.c +++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c @@ -1367,6 +1367,9 @@ out: * Checks to see of the link status of the hardware has changed. If a * change in link status has been detected, then we read the PHY registers * to get the current speed/duplex if link exists. + * + * Returns a negative error code (-E1000_ERR_*) or 0 (link down) or 1 (link + * up). **/ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) { @@ -1382,7 +1385,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) * Change or Rx Sequence Error interrupt. */ if (!mac->get_link_status) - return 0; + return 1; /* First we want to see if the MII Status Register reports * link. If so, then we want to get the current speed/duplex @@ -1613,10 +1616,12 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw) * different link partner. 
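
The kernel-doc added above turns e1000_check_for_copper_link_ich8lan() into a tri-state interface: a negative error code, 0 for link down, or 1 for link up. A minimal caller sketch written against that contract, with a stub in place of the real hardware query:

#include <stdio.h>

/* Stand-in for the real hardware query; returns <0, 0 (down) or 1 (up). */
static int check_for_copper_link(void)
{
        return 1;
}

int main(void)
{
        int ret = check_for_copper_link();

        if (ret < 0)
                printf("error %d while checking link\n", ret);
        else if (ret == 0)
                printf("link is down\n");
        else
                printf("link is up\n");
        return 0;
}
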
*/ ret_val = e1000e_config_fc_after_link_up(hw); - if (ret_val) + if (ret_val) { e_dbg("Error configuring flow control\n"); + return ret_val; + } - return ret_val; + return 1; } static s32 e1000_get_variants_ich8lan(struct e1000_adapter *adapter) diff --git a/drivers/platform/x86/Kconfig b/drivers/platform/x86/Kconfig index 80b8795..6685d38 100644 --- a/drivers/platform/x86/Kconfig +++ b/drivers/platform/x86/Kconfig @@ -511,9 +511,28 @@ config THINKPAD_ACPI_HOTKEY_POLL If you are not sure, say Y here. The driver enables polling only if it is strictly necessary to do so. +config THINKPAD_EC + tristate + ---help--- + This is a low-level driver for accessing the ThinkPad H8S embedded + controller over the LPC bus (not to be confused with the ACPI Embedded + Controller interface). + +config TP_SMAPI + tristate "ThinkPad SMAPI Support" + select THINKPAD_EC + default n + help + This adds SMAPI support on Lenovo/IBM ThinkPads, for features such + as battery charging control. For more information about this driver + see . + + If you have a Lenovo/IBM ThinkPad laptop, say Y or M here. + config SENSORS_HDAPS tristate "Thinkpad Hard Drive Active Protection System (hdaps)" depends on INPUT + select THINKPAD_EC select INPUT_POLLDEV default n help diff --git a/drivers/platform/x86/Makefile b/drivers/platform/x86/Makefile index f9e3ae6..8a95be6 100644 --- a/drivers/platform/x86/Makefile +++ b/drivers/platform/x86/Makefile @@ -29,6 +29,8 @@ obj-$(CONFIG_TC1100_WMI) += tc1100-wmi.o obj-$(CONFIG_SONY_LAPTOP) += sony-laptop.o obj-$(CONFIG_IDEAPAD_LAPTOP) += ideapad-laptop.o obj-$(CONFIG_THINKPAD_ACPI) += thinkpad_acpi.o +obj-$(CONFIG_THINKPAD_EC) += thinkpad_ec.o +obj-$(CONFIG_TP_SMAPI) += tp_smapi.o obj-$(CONFIG_SENSORS_HDAPS) += hdaps.o obj-$(CONFIG_FUJITSU_LAPTOP) += fujitsu-laptop.o obj-$(CONFIG_FUJITSU_TABLET) += fujitsu-tablet.o diff --git a/drivers/platform/x86/hdaps.c b/drivers/platform/x86/hdaps.c index c26baf7..0c1974b 100644 --- a/drivers/platform/x86/hdaps.c +++ b/drivers/platform/x86/hdaps.c @@ -2,7 +2,7 @@ * hdaps.c - driver for IBM's Hard Drive Active Protection System * * Copyright (C) 2005 Robert Love - * Copyright (C) 2005 Jesper Juhl + * Copyright (C) 2005 Jesper Juhl * * The HardDisk Active Protection System (hdaps) is present in IBM ThinkPads * starting with the R40, T41, and X40. It provides a basic two-axis @@ -30,266 +30,384 @@ #include #include -#include +#include #include -#include #include #include #include #include -#include - -#define HDAPS_LOW_PORT 0x1600 /* first port used by hdaps */ -#define HDAPS_NR_PORTS 0x30 /* number of ports: 0x1600 - 0x162f */ - -#define HDAPS_PORT_STATE 0x1611 /* device state */ -#define HDAPS_PORT_YPOS 0x1612 /* y-axis position */ -#define HDAPS_PORT_XPOS 0x1614 /* x-axis position */ -#define HDAPS_PORT_TEMP1 0x1616 /* device temperature, in Celsius */ -#define HDAPS_PORT_YVAR 0x1617 /* y-axis variance (what is this?) */ -#define HDAPS_PORT_XVAR 0x1619 /* x-axis variance (what is this?) */ -#define HDAPS_PORT_TEMP2 0x161b /* device temperature (again?) */ -#define HDAPS_PORT_UNKNOWN 0x161c /* what is this? 
*/ -#define HDAPS_PORT_KMACT 0x161d /* keyboard or mouse activity */ - -#define STATE_FRESH 0x50 /* accelerometer data is fresh */ +#include +#include +#include + +/* Embedded controller accelerometer read command and its result: */ +static const struct thinkpad_ec_row ec_accel_args = + { .mask = 0x0001, .val = {0x11} }; +#define EC_ACCEL_IDX_READOUTS 0x1 /* readouts included in this read */ + /* First readout, if READOUTS>=1: */ +#define EC_ACCEL_IDX_YPOS1 0x2 /* y-axis position word */ +#define EC_ACCEL_IDX_XPOS1 0x4 /* x-axis position word */ +#define EC_ACCEL_IDX_TEMP1 0x6 /* device temperature in Celsius */ + /* Second readout, if READOUTS>=2: */ +#define EC_ACCEL_IDX_XPOS2 0x7 /* y-axis position word */ +#define EC_ACCEL_IDX_YPOS2 0x9 /* x-axis position word */ +#define EC_ACCEL_IDX_TEMP2 0xb /* device temperature in Celsius */ +#define EC_ACCEL_IDX_QUEUED 0xc /* Number of queued readouts left */ +#define EC_ACCEL_IDX_KMACT 0xd /* keyboard or mouse activity */ +#define EC_ACCEL_IDX_RETVAL 0xf /* command return value, good=0x00 */ #define KEYBD_MASK 0x20 /* set if keyboard activity */ #define MOUSE_MASK 0x40 /* set if mouse activity */ -#define KEYBD_ISSET(n) (!! (n & KEYBD_MASK)) /* keyboard used? */ -#define MOUSE_ISSET(n) (!! (n & MOUSE_MASK)) /* mouse used? */ -#define INIT_TIMEOUT_MSECS 4000 /* wait up to 4s for device init ... */ -#define INIT_WAIT_MSECS 200 /* ... in 200ms increments */ +#define READ_TIMEOUT_MSECS 100 /* wait this long for device read */ +#define RETRY_MSECS 3 /* retry delay */ -#define HDAPS_POLL_INTERVAL 50 /* poll for input every 1/20s (50 ms)*/ #define HDAPS_INPUT_FUZZ 4 /* input event threshold */ #define HDAPS_INPUT_FLAT 4 - -#define HDAPS_X_AXIS (1 << 0) -#define HDAPS_Y_AXIS (1 << 1) -#define HDAPS_BOTH_AXES (HDAPS_X_AXIS | HDAPS_Y_AXIS) - +#define KMACT_REMEMBER_PERIOD (HZ/10) /* keyboard/mouse persistence */ + +/* Input IDs */ +#define HDAPS_INPUT_VENDOR PCI_VENDOR_ID_IBM +#define HDAPS_INPUT_PRODUCT 0x5054 /* "TP", shared with thinkpad_acpi */ +#define HDAPS_INPUT_JS_VERSION 0x6801 /* Joystick emulation input device */ +#define HDAPS_INPUT_RAW_VERSION 0x4801 /* Raw accelerometer input device */ + +/* Axis orientation. */ +/* The unnatural bit-representation of inversions is for backward + * compatibility with the"invert=1" module parameter. */ +#define HDAPS_ORIENT_INVERT_XY 0x01 /* Invert both X and Y axes. */ +#define HDAPS_ORIENT_INVERT_X 0x02 /* Invert the X axis (uninvert if + * already inverted by INVERT_XY). */ +#define HDAPS_ORIENT_SWAP 0x04 /* Swap the axes. The swap occurs + * before inverting X or Y. */ +#define HDAPS_ORIENT_MAX 0x07 +#define HDAPS_ORIENT_UNDEFINED 0xFF /* Placeholder during initialization */ +#define HDAPS_ORIENT_INVERT_Y (HDAPS_ORIENT_INVERT_XY | HDAPS_ORIENT_INVERT_X) + +static struct timer_list hdaps_timer; static struct platform_device *pdev; -static struct input_polled_dev *hdaps_idev; -static unsigned int hdaps_invert; -static u8 km_activity; -static int rest_x; -static int rest_y; - -static DEFINE_MUTEX(hdaps_mtx); - -/* - * __get_latch - Get the value from a given port. Callers must hold hdaps_mtx. 
- */ -static inline u8 __get_latch(u16 port) +static struct input_dev *hdaps_idev; /* joystick-like device with fuzz */ +static struct input_dev *hdaps_idev_raw; /* raw hdaps sensor readouts */ +static unsigned int hdaps_invert = HDAPS_ORIENT_UNDEFINED; +static int needs_calibration; + +/* Configuration: */ +static int sampling_rate = 50; /* Sampling rate */ +static int oversampling_ratio = 5; /* Ratio between our sampling rate and + * EC accelerometer sampling rate */ +static int running_avg_filter_order = 2; /* EC running average filter order */ + +/* Latest state readout: */ +static int pos_x, pos_y; /* position */ +static int temperature; /* temperature */ +static int stale_readout = 1; /* last read invalid */ +static int rest_x, rest_y; /* calibrated rest position */ + +/* Last time we saw keyboard and mouse activity: */ +static u64 last_keyboard_jiffies = INITIAL_JIFFIES; +static u64 last_mouse_jiffies = INITIAL_JIFFIES; +static u64 last_update_jiffies = INITIAL_JIFFIES; + +/* input device use count */ +static int hdaps_users; +static DEFINE_MUTEX(hdaps_users_mtx); + +/* Some models require an axis transformation to the standard representation */ +static void transform_axes(int *x, int *y) { - return inb(port) & 0xff; + if (hdaps_invert & HDAPS_ORIENT_SWAP) { + int z; + z = *x; + *x = *y; + *y = z; + } + if (hdaps_invert & HDAPS_ORIENT_INVERT_XY) { + *x = -*x; + *y = -*y; + } + if (hdaps_invert & HDAPS_ORIENT_INVERT_X) + *x = -*x; } -/* - * __check_latch - Check a port latch for a given value. Returns zero if the - * port contains the given value. Callers must hold hdaps_mtx. +/** + * __hdaps_update - query current state, with locks already acquired + * @fast: if nonzero, do one quick attempt without retries. + * + * Query current accelerometer state and update global state variables. + * Also prefetches the next query. Caller must hold controller lock. */ -static inline int __check_latch(u16 port, u8 val) +static int __hdaps_update(int fast) { - if (__get_latch(port) == val) - return 0; - return -EINVAL; -} + /* Read data: */ + struct thinkpad_ec_row data; + int ret; -/* - * __wait_latch - Wait up to 100us for a port latch to get a certain value, - * returning zero if the value is obtained. Callers must hold hdaps_mtx. - */ -static int __wait_latch(u16 port, u8 val) -{ - unsigned int i; + data.mask = (1 << EC_ACCEL_IDX_READOUTS) | (1 << EC_ACCEL_IDX_KMACT) | + (3 << EC_ACCEL_IDX_YPOS1) | (3 << EC_ACCEL_IDX_XPOS1) | + (1 << EC_ACCEL_IDX_TEMP1) | (1 << EC_ACCEL_IDX_RETVAL); + if (fast) + ret = thinkpad_ec_try_read_row(&ec_accel_args, &data); + else + ret = thinkpad_ec_read_row(&ec_accel_args, &data); + thinkpad_ec_prefetch_row(&ec_accel_args); /* Prefetch even if error */ + if (ret) + return ret; - for (i = 0; i < 20; i++) { - if (!__check_latch(port, val)) - return 0; - udelay(5); + /* Check status: */ + if (data.val[EC_ACCEL_IDX_RETVAL] != 0x00) { + pr_warn("read RETVAL=0x%02x\n", + data.val[EC_ACCEL_IDX_RETVAL]); + return -EIO; + } + + if (data.val[EC_ACCEL_IDX_READOUTS] < 1) + return -EBUSY; /* no pending readout, try again later */ + + /* Parse position data: */ + pos_x = *(s16 *)(data.val+EC_ACCEL_IDX_XPOS1); + pos_y = *(s16 *)(data.val+EC_ACCEL_IDX_YPOS1); + transform_axes(&pos_x, &pos_y); + + /* Keyboard and mouse activity status is cleared as soon as it's read, + * so applications will eat each other's events. Thus we remember any + * event for KMACT_REMEMBER_PERIOD jiffies. 
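The HDAPS_ORIENT_* flags above compose in a fixed order: the swap is applied first, then the inversions. A minimal standalone sketch mirroring transform_axes() for experimentation; the sample orientation is the ThinkPad X60 whitelist entry (SWAP | INVERT_X), everything else here is illustrative:

#include <stdio.h>

#define HDAPS_ORIENT_INVERT_XY 0x01
#define HDAPS_ORIENT_INVERT_X  0x02
#define HDAPS_ORIENT_SWAP      0x04

/* Same logic as transform_axes() above. */
static void demo_transform(unsigned int orient, int *x, int *y)
{
	if (orient & HDAPS_ORIENT_SWAP) {
		int z = *x;
		*x = *y;
		*y = z;
	}
	if (orient & HDAPS_ORIENT_INVERT_XY) {
		*x = -*x;
		*y = -*y;
	}
	if (orient & HDAPS_ORIENT_INVERT_X)
		*x = -*x;
}

int main(void)
{
	int x = 10, y = 3;

	demo_transform(HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_X, &x, &y);
	printf("(%d,%d)\n", x, y);	/* prints (-3,10): swap, then invert X */
	return 0;
}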
+ */ + if (data.val[EC_ACCEL_IDX_KMACT] & KEYBD_MASK) + last_keyboard_jiffies = get_jiffies_64(); + if (data.val[EC_ACCEL_IDX_KMACT] & MOUSE_MASK) + last_mouse_jiffies = get_jiffies_64(); + + temperature = data.val[EC_ACCEL_IDX_TEMP1]; + + last_update_jiffies = get_jiffies_64(); + stale_readout = 0; + if (needs_calibration) { + rest_x = pos_x; + rest_y = pos_y; + needs_calibration = 0; } - return -EIO; + return 0; } -/* - * __device_refresh - request a refresh from the accelerometer. Does not wait - * for refresh to complete. Callers must hold hdaps_mtx. +/** + * hdaps_update - acquire locks and query current state + * + * Query current accelerometer state and update global state variables. + * Also prefetches the next query. + * Retries until timeout if the accelerometer is not in ready status (common). + * Does its own locking. */ -static void __device_refresh(void) +static int hdaps_update(void) { - udelay(200); - if (inb(0x1604) != STATE_FRESH) { - outb(0x11, 0x1610); - outb(0x01, 0x161f); + u64 age = get_jiffies_64() - last_update_jiffies; + int total, ret; + + if (!stale_readout && age < (9*HZ)/(10*sampling_rate)) + return 0; /* already updated recently */ + for (total = 0; total < READ_TIMEOUT_MSECS; total += RETRY_MSECS) { + ret = thinkpad_ec_lock(); + if (ret) + return ret; + ret = __hdaps_update(0); + thinkpad_ec_unlock(); + + if (!ret) + return 0; + if (ret != -EBUSY) + break; + msleep(RETRY_MSECS); } + return ret; } -/* - * __device_refresh_sync - request a synchronous refresh from the - * accelerometer. We wait for the refresh to complete. Returns zero if - * successful and nonzero on error. Callers must hold hdaps_mtx. +/** + * hdaps_set_power - enable or disable power to the accelerometer. + * Returns zero on success and negative error code on failure. Can sleep. */ -static int __device_refresh_sync(void) +static int hdaps_set_power(int on) { - __device_refresh(); - return __wait_latch(0x1604, STATE_FRESH); + struct thinkpad_ec_row args = + { .mask = 0x0003, .val = {0x14, on?0x01:0x00} }; + struct thinkpad_ec_row data = { .mask = 0x8000 }; + int ret = thinkpad_ec_read_row(&args, &data); + if (ret) + return ret; + if (data.val[0xF] != 0x00) + return -EIO; + return 0; } -/* - * __device_complete - indicate to the accelerometer that we are done reading - * data, and then initiate an async refresh. Callers must hold hdaps_mtx. +/** + * hdaps_set_ec_config - set accelerometer parameters. + * @ec_rate: embedded controller sampling rate + * @order: embedded controller running average filter order + * (Normally we have @ec_rate = sampling_rate * oversampling_ratio.) + * Returns zero on success and negative error code on failure. Can sleep. 
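For the mask arithmetic in __hdaps_update() above: each bit of .mask selects one of the 16 row bytes, and two adjacent bytes form a little-endian signed position word. A standalone sketch, assuming the struct thinkpad_ec_row layout implied by this patch (the real struct lives in a header not shown in this excerpt):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Assumed layout: a 16-bit mask selecting which of the 16 TWR bytes are
 * valid, plus the 16 byte values themselves. */
struct ec_row_example {
	uint16_t mask;
	uint8_t  val[16];
};

#define EC_ACCEL_IDX_XPOS1 0x4	/* x-axis position word, as defined above */

int main(void)
{
	struct ec_row_example data = { .mask = 3 << EC_ACCEL_IDX_XPOS1 };
	int16_t x;

	/* Pretend the EC returned -20 (0xffec) as the 16-bit X readout. */
	data.val[EC_ACCEL_IDX_XPOS1]     = 0xec;	/* low byte  */
	data.val[EC_ACCEL_IDX_XPOS1 + 1] = 0xff;	/* high byte */

	/* memcpy instead of the driver's pointer cast; same result on the
	 * little-endian x86 machines this hardware sits in. */
	memcpy(&x, data.val + EC_ACCEL_IDX_XPOS1, sizeof(x));
	printf("x = %d\n", x);	/* prints x = -20 */
	return 0;
}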
*/ -static inline void __device_complete(void) +static int hdaps_set_ec_config(int ec_rate, int order) { - inb(0x161f); - inb(0x1604); - __device_refresh(); + struct thinkpad_ec_row args = { .mask = 0x000F, + .val = {0x10, (u8)ec_rate, (u8)(ec_rate>>8), order} }; + struct thinkpad_ec_row data = { .mask = 0x8000 }; + int ret = thinkpad_ec_read_row(&args, &data); + pr_debug("setting ec_rate=%d, filter_order=%d\n", ec_rate, order); + if (ret) + return ret; + if (data.val[0xF] == 0x03) { + pr_warn("config param out of range\n"); + return -EINVAL; + } + if (data.val[0xF] == 0x06) { + pr_warn("config change already pending\n"); + return -EBUSY; + } + if (data.val[0xF] != 0x00) { + pr_warn("config change error, ret=%d\n", + data.val[0xF]); + return -EIO; + } + return 0; } -/* - * hdaps_readb_one - reads a byte from a single I/O port, placing the value in - * the given pointer. Returns zero on success or a negative error on failure. - * Can sleep. +/** + * hdaps_get_ec_config - get accelerometer parameters. + * @ec_rate: embedded controller sampling rate + * @order: embedded controller running average filter order + * Returns zero on success and negative error code on failure. Can sleep. */ -static int hdaps_readb_one(unsigned int port, u8 *val) +static int hdaps_get_ec_config(int *ec_rate, int *order) { - int ret; - - mutex_lock(&hdaps_mtx); - - /* do a sync refresh -- we need to be sure that we read fresh data */ - ret = __device_refresh_sync(); + const struct thinkpad_ec_row args = + { .mask = 0x0003, .val = {0x17, 0x82} }; + struct thinkpad_ec_row data = { .mask = 0x801F }; + int ret = thinkpad_ec_read_row(&args, &data); if (ret) - goto out; - - *val = inb(port); - __device_complete(); - -out: - mutex_unlock(&hdaps_mtx); - return ret; + return ret; + if (data.val[0xF] != 0x00) + return -EIO; + if (!(data.val[0x1] & 0x01)) + return -ENXIO; /* accelerometer polling not enabled */ + if (data.val[0x1] & 0x02) + return -EBUSY; /* config change in progress, retry later */ + *ec_rate = data.val[0x2] | ((int)(data.val[0x3]) << 8); + *order = data.val[0x4]; + return 0; } -/* __hdaps_read_pair - internal lockless helper for hdaps_read_pair(). */ -static int __hdaps_read_pair(unsigned int port1, unsigned int port2, - int *x, int *y) +/** + * hdaps_get_ec_mode - get EC accelerometer mode + * Returns zero on success and negative error code on failure. Can sleep. + */ +static int hdaps_get_ec_mode(u8 *mode) { - /* do a sync refresh -- we need to be sure that we read fresh data */ - if (__device_refresh_sync()) + const struct thinkpad_ec_row args = + { .mask = 0x0001, .val = {0x13} }; + struct thinkpad_ec_row data = { .mask = 0x8002 }; + int ret = thinkpad_ec_read_row(&args, &data); + if (ret) + return ret; + if (data.val[0xF] != 0x00) { + pr_warn("accelerometer not implemented (0x%02x)\n", + data.val[0xF]); return -EIO; - - *y = inw(port2); - *x = inw(port1); - km_activity = inb(HDAPS_PORT_KMACT); - __device_complete(); - - /* hdaps_invert is a bitvector to negate the axes */ - if (hdaps_invert & HDAPS_X_AXIS) - *x = -*x; - if (hdaps_invert & HDAPS_Y_AXIS) - *y = -*y; - + } + *mode = data.val[0x1]; return 0; } -/* - * hdaps_read_pair - reads the values from a pair of ports, placing the values - * in the given pointers. Returns zero on success. Can sleep. +/** + * hdaps_check_ec - checks something about the EC. + * Follows the clean-room spec for HDAPS; we don't know what it means. + * Returns zero on success and negative error code on failure. Can sleep. 
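hdaps_set_ec_config() above packs the EC sampling rate as a 16-bit little-endian value into TWR1/TWR2 of the 0x10 command row. A tiny standalone sketch of that encoding, using the driver's default parameters (50 Hz, oversampling ratio 5, filter order 2); illustrative only:

#include <stdio.h>

int main(void)
{
	int sampling_rate = 50, oversampling_ratio = 5, order = 2;
	int ec_rate = sampling_rate * oversampling_ratio;	/* 250 Hz */
	unsigned int lo = ec_rate & 0xff;
	unsigned int hi = (ec_rate >> 8) & 0xff;

	/* Matches .val = {0x10, (u8)ec_rate, (u8)(ec_rate>>8), order} above. */
	printf("cmd=0x10 rate_lo=0x%02x rate_hi=0x%02x order=%d\n", lo, hi, order);
	/* prints cmd=0x10 rate_lo=0xfa rate_hi=0x00 order=2 */
	return 0;
}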
*/ -static int hdaps_read_pair(unsigned int port1, unsigned int port2, - int *val1, int *val2) +static int hdaps_check_ec(void) { - int ret; - - mutex_lock(&hdaps_mtx); - ret = __hdaps_read_pair(port1, port2, val1, val2); - mutex_unlock(&hdaps_mtx); - - return ret; + const struct thinkpad_ec_row args = + { .mask = 0x0003, .val = {0x17, 0x81} }; + struct thinkpad_ec_row data = { .mask = 0x800E }; + int ret = thinkpad_ec_read_row(&args, &data); + if (ret) + return ret; + if (!((data.val[0x1] == 0x00 && data.val[0x2] == 0x60) || /* cleanroom spec */ + (data.val[0x1] == 0x01 && data.val[0x2] == 0x00)) || /* seen on T61 */ + data.val[0x3] != 0x00 || data.val[0xF] != 0x00) { + pr_warn("hdaps_check_ec: bad response (0x%x,0x%x,0x%x,0x%x)\n", + data.val[0x1], data.val[0x2], + data.val[0x3], data.val[0xF]); + return -EIO; + } + return 0; } -/* - * hdaps_device_init - initialize the accelerometer. Returns zero on success - * and negative error code on failure. Can sleep. +/** + * hdaps_device_init - initialize the accelerometer. + * + * Call several embedded controller functions to test and initialize the + * accelerometer. + * Returns zero on success and negative error code on failure. Can sleep. */ +#define FAILED_INIT(msg) pr_err("init failed at: %s\n", msg) static int hdaps_device_init(void) { - int total, ret = -ENXIO; + int ret; + u8 mode; - mutex_lock(&hdaps_mtx); + ret = thinkpad_ec_lock(); + if (ret) + return ret; - outb(0x13, 0x1610); - outb(0x01, 0x161f); - if (__wait_latch(0x161f, 0x00)) - goto out; + if (hdaps_get_ec_mode(&mode)) + { FAILED_INIT("hdaps_get_ec_mode failed"); goto bad; } - /* - * Most ThinkPads return 0x01. - * - * Others--namely the R50p, T41p, and T42p--return 0x03. These laptops - * have "inverted" axises. - * - * The 0x02 value occurs when the chip has been previously initialized. 
- */ - if (__check_latch(0x1611, 0x03) && - __check_latch(0x1611, 0x02) && - __check_latch(0x1611, 0x01)) - goto out; - - printk(KERN_DEBUG "hdaps: initial latch check good (0x%02x)\n", - __get_latch(0x1611)); + pr_debug("initial mode latch is 0x%02x\n", mode); + if (mode == 0x00) + { FAILED_INIT("accelerometer not available"); goto bad; } - outb(0x17, 0x1610); - outb(0x81, 0x1611); - outb(0x01, 0x161f); - if (__wait_latch(0x161f, 0x00)) - goto out; - if (__wait_latch(0x1611, 0x00)) - goto out; - if (__wait_latch(0x1612, 0x60)) - goto out; - if (__wait_latch(0x1613, 0x00)) - goto out; - outb(0x14, 0x1610); - outb(0x01, 0x1611); - outb(0x01, 0x161f); - if (__wait_latch(0x161f, 0x00)) - goto out; - outb(0x10, 0x1610); - outb(0xc8, 0x1611); - outb(0x00, 0x1612); - outb(0x02, 0x1613); - outb(0x01, 0x161f); - if (__wait_latch(0x161f, 0x00)) - goto out; - if (__device_refresh_sync()) - goto out; - if (__wait_latch(0x1611, 0x00)) - goto out; + if (hdaps_check_ec()) + { FAILED_INIT("hdaps_check_ec failed"); goto bad; } - /* we have done our dance, now let's wait for the applause */ - for (total = INIT_TIMEOUT_MSECS; total > 0; total -= INIT_WAIT_MSECS) { - int x, y; + if (hdaps_set_power(1)) + { FAILED_INIT("hdaps_set_power failed"); goto bad; } - /* a read of the device helps push it into action */ - __hdaps_read_pair(HDAPS_PORT_XPOS, HDAPS_PORT_YPOS, &x, &y); - if (!__wait_latch(0x1611, 0x02)) { - ret = 0; - break; - } + if (hdaps_set_ec_config(sampling_rate*oversampling_ratio, + running_avg_filter_order)) + { FAILED_INIT("hdaps_set_ec_config failed"); goto bad; } - msleep(INIT_WAIT_MSECS); - } + thinkpad_ec_invalidate(); + udelay(200); -out: - mutex_unlock(&hdaps_mtx); + /* Just prefetch instead of reading, to avoid ~1sec delay on load */ + ret = thinkpad_ec_prefetch_row(&ec_accel_args); + if (ret) + { FAILED_INIT("initial prefetch failed"); goto bad; } + goto good; +bad: + thinkpad_ec_invalidate(); + ret = -ENXIO; +good: + stale_readout = 1; + thinkpad_ec_unlock(); return ret; } +/** + * hdaps_device_shutdown - power off the accelerometer + * Returns nonzero on failure. Can sleep. + */ +static int hdaps_device_shutdown(void) +{ + int ret; + ret = hdaps_set_power(0); + if (ret) { + pr_warn("cannot power off\n"); + return ret; + } + ret = hdaps_set_ec_config(0, 1); + if (ret) + pr_warn("cannot stop EC sampling\n"); + return ret; +} /* Device model stuff */ @@ -306,13 +424,29 @@ static int hdaps_probe(struct platform_device *dev) } #ifdef CONFIG_PM_SLEEP +static int hdaps_suspend(struct device *dev) +{ + /* Don't do hdaps polls until resume re-initializes the sensor. */ + del_timer_sync(&hdaps_timer); + hdaps_device_shutdown(); /* ignore errors, effect is negligible */ + return 0; +} + static int hdaps_resume(struct device *dev) { - return hdaps_device_init(); + int ret = hdaps_device_init(); + if (ret) + return ret; + + mutex_lock(&hdaps_users_mtx); + if (hdaps_users) + mod_timer(&hdaps_timer, jiffies + HZ/sampling_rate); + mutex_unlock(&hdaps_users_mtx); + return 0; } #endif -static SIMPLE_DEV_PM_OPS(hdaps_pm, NULL, hdaps_resume); +static SIMPLE_DEV_PM_OPS(hdaps_pm, hdaps_suspend, hdaps_resume); static struct platform_driver hdaps_driver = { .probe = hdaps_probe, @@ -322,30 +456,47 @@ static struct platform_driver hdaps_driver = { }, }; -/* - * hdaps_calibrate - Set our "resting" values. Callers must hold hdaps_mtx. +/** + * hdaps_calibrate - set our "resting" values. + * Does its own locking. 
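Calibration is deferred: hdaps_calibrate() only sets needs_calibration, the next successful __hdaps_update() latches the rest position, and from then on the joystick-style device reports offsets from it while the raw device reports absolute readouts. A toy standalone model of that flow (the readout numbers are made up):

#include <stdio.h>

static int pos_x, pos_y, rest_x, rest_y, needs_calibration = 1;

/* Stands in for __hdaps_update(): store the readout, latch rest once. */
static void update(int x, int y)
{
	pos_x = x;
	pos_y = y;
	if (needs_calibration) {
		rest_x = pos_x;
		rest_y = pos_y;
		needs_calibration = 0;
	}
}

int main(void)
{
	update(-498, 512);	/* first good readout becomes the rest position */
	update(-478, 540);	/* laptop tilted a little */
	printf("joystick: (%d,%d)\n", pos_x - rest_x, pos_y - rest_y); /* (20,28) */
	printf("raw:      (%d,%d)\n", pos_x, pos_y);
	return 0;
}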
*/ static void hdaps_calibrate(void) { - __hdaps_read_pair(HDAPS_PORT_XPOS, HDAPS_PORT_YPOS, &rest_x, &rest_y); + needs_calibration = 1; + hdaps_update(); + /* If that fails, the mousedev poll will take care of things later. */ } -static void hdaps_mousedev_poll(struct input_polled_dev *dev) +/* Timer handler for updating the input device. Runs in softirq context, + * so avoid lenghty or blocking operations. + */ +static void hdaps_mousedev_poll(unsigned long unused) { - struct input_dev *input_dev = dev->input; - int x, y; + int ret; - mutex_lock(&hdaps_mtx); + stale_readout = 1; - if (__hdaps_read_pair(HDAPS_PORT_XPOS, HDAPS_PORT_YPOS, &x, &y)) - goto out; + /* Cannot sleep. Try nonblockingly. If we fail, try again later. */ + if (thinkpad_ec_try_lock()) + goto keep_active; - input_report_abs(input_dev, ABS_X, x - rest_x); - input_report_abs(input_dev, ABS_Y, y - rest_y); - input_sync(input_dev); + ret = __hdaps_update(1); /* fast update, we're in softirq context */ + thinkpad_ec_unlock(); + /* Any of "successful", "not yet ready" and "not prefetched"? */ + if (ret != 0 && ret != -EBUSY && ret != -ENODATA) { + pr_err("poll failed, disabling updates\n"); + return; + } -out: - mutex_unlock(&hdaps_mtx); +keep_active: + /* Even if we failed now, pos_x,y may have been updated earlier: */ + input_report_abs(hdaps_idev, ABS_X, pos_x - rest_x); + input_report_abs(hdaps_idev, ABS_Y, pos_y - rest_y); + input_sync(hdaps_idev); + input_report_abs(hdaps_idev_raw, ABS_X, pos_x); + input_report_abs(hdaps_idev_raw, ABS_Y, pos_y); + input_sync(hdaps_idev_raw); + mod_timer(&hdaps_timer, jiffies + HZ/sampling_rate); } @@ -354,65 +505,41 @@ out: static ssize_t hdaps_position_show(struct device *dev, struct device_attribute *attr, char *buf) { - int ret, x, y; - - ret = hdaps_read_pair(HDAPS_PORT_XPOS, HDAPS_PORT_YPOS, &x, &y); - if (ret) - return ret; - - return sprintf(buf, "(%d,%d)\n", x, y); -} - -static ssize_t hdaps_variance_show(struct device *dev, - struct device_attribute *attr, char *buf) -{ - int ret, x, y; - - ret = hdaps_read_pair(HDAPS_PORT_XVAR, HDAPS_PORT_YVAR, &x, &y); + int ret = hdaps_update(); if (ret) return ret; - - return sprintf(buf, "(%d,%d)\n", x, y); + return sprintf(buf, "(%d,%d)\n", pos_x, pos_y); } static ssize_t hdaps_temp1_show(struct device *dev, struct device_attribute *attr, char *buf) { - u8 uninitialized_var(temp); - int ret; - - ret = hdaps_readb_one(HDAPS_PORT_TEMP1, &temp); - if (ret) - return ret; - - return sprintf(buf, "%u\n", temp); -} - -static ssize_t hdaps_temp2_show(struct device *dev, - struct device_attribute *attr, char *buf) -{ - u8 uninitialized_var(temp); - int ret; - - ret = hdaps_readb_one(HDAPS_PORT_TEMP2, &temp); + int ret = hdaps_update(); if (ret) return ret; - - return sprintf(buf, "%u\n", temp); + return sprintf(buf, "%d\n", temperature); } static ssize_t hdaps_keyboard_activity_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "%u\n", KEYBD_ISSET(km_activity)); + int ret = hdaps_update(); + if (ret) + return ret; + return sprintf(buf, "%u\n", + get_jiffies_64() < last_keyboard_jiffies + KMACT_REMEMBER_PERIOD); } static ssize_t hdaps_mouse_activity_show(struct device *dev, struct device_attribute *attr, char *buf) { - return sprintf(buf, "%u\n", MOUSE_ISSET(km_activity)); + int ret = hdaps_update(); + if (ret) + return ret; + return sprintf(buf, "%u\n", + get_jiffies_64() < last_mouse_jiffies + KMACT_REMEMBER_PERIOD); } static ssize_t hdaps_calibrate_show(struct device *dev, @@ -425,10 +552,7 @@ static 
ssize_t hdaps_calibrate_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t count) { - mutex_lock(&hdaps_mtx); hdaps_calibrate(); - mutex_unlock(&hdaps_mtx); - return count; } @@ -445,7 +569,7 @@ static ssize_t hdaps_invert_store(struct device *dev, int invert; if (sscanf(buf, "%d", &invert) != 1 || - invert < 0 || invert > HDAPS_BOTH_AXES) + invert < 0 || invert > HDAPS_ORIENT_MAX) return -EINVAL; hdaps_invert = invert; @@ -454,24 +578,128 @@ static ssize_t hdaps_invert_store(struct device *dev, return count; } +static ssize_t hdaps_sampling_rate_show( + struct device *dev, struct device_attribute *attr, char *buf) +{ + return sprintf(buf, "%d\n", sampling_rate); +} + +static ssize_t hdaps_sampling_rate_store( + struct device *dev, struct device_attribute *attr, + const char *buf, size_t count) +{ + int rate, ret; + if (sscanf(buf, "%d", &rate) != 1 || rate > HZ || rate <= 0) { + pr_warn("must have 0ident); - return 1; -} - /* hdaps_dmi_match_invert - found an inverted match. */ static int __init hdaps_dmi_match_invert(const struct dmi_system_id *id) { - hdaps_invert = (unsigned long)id->driver_data; - pr_info("inverting axis (%u) readings\n", hdaps_invert); - return hdaps_dmi_match(id); + unsigned int orient = (kernel_ulong_t) id->driver_data; + hdaps_invert = orient; + pr_info("%s detected, setting orientation %u\n", id->ident, orient); + return 1; /* stop enumeration */ } -#define HDAPS_DMI_MATCH_INVERT(vendor, model, axes) { \ +#define HDAPS_DMI_MATCH_INVERT(vendor, model, orient) { \ .ident = vendor " " model, \ .callback = hdaps_dmi_match_invert, \ - .driver_data = (void *)axes, \ + .driver_data = (void *)(orient), \ .matches = { \ DMI_MATCH(DMI_BOARD_VENDOR, vendor), \ DMI_MATCH(DMI_PRODUCT_VERSION, model) \ } \ } -#define HDAPS_DMI_MATCH_NORMAL(vendor, model) \ - HDAPS_DMI_MATCH_INVERT(vendor, model, 0) - -/* Note that HDAPS_DMI_MATCH_NORMAL("ThinkPad T42") would match - "ThinkPad T42p", so the order of the entries matters. - If your ThinkPad is not recognized, please update to latest - BIOS. This is especially the case for some R52 ThinkPads. 
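The show routines above export the readouts through sysfs attributes on the hdaps platform device. A small user-space reader in C, assuming the conventional /sys/devices/platform/hdaps/position path and the "(x,y)" format produced by hdaps_position_show():

#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/sys/devices/platform/hdaps/position", "r");
	int x, y;

	if (!f) {
		perror("hdaps position");
		return 1;
	}
	if (fscanf(f, "(%d,%d)", &x, &y) == 2)
		printf("x=%d y=%d\n", x, y);
	fclose(f);
	return 0;
}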
*/ -static const struct dmi_system_id hdaps_whitelist[] __initconst = { - HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad R50p", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad R50"), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad R51"), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad R52"), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad R61i", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad R61", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad T41p", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad T41"), - HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad T42p", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad T42"), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad T43"), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T400", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T60", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T61p", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T61", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad X40"), - HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad X41", HDAPS_Y_AXIS), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X60", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X61s", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X61", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_NORMAL("IBM", "ThinkPad Z60m"), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad Z61m", HDAPS_BOTH_AXES), - HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad Z61p", HDAPS_BOTH_AXES), +/* List of models with abnormal axis configuration. + Note that HDAPS_DMI_MATCH_NORMAL("ThinkPad T42") would match + "ThinkPad T42p", and enumeration stops after first match, + so the order of the entries matters. */ +const struct dmi_system_id hdaps_whitelist[] __initconst = { + HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad R50p", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad R60", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad T41p", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad T42p", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad X40", HDAPS_ORIENT_INVERT_Y), + HDAPS_DMI_MATCH_INVERT("IBM", "ThinkPad X41", HDAPS_ORIENT_INVERT_Y), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad R60", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad R61", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad R400", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad R500", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T60", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T61", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X60 Tablet", HDAPS_ORIENT_INVERT_Y), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X60s", HDAPS_ORIENT_INVERT_Y), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X60", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_X), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X61", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_X), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T400s", HDAPS_ORIENT_INVERT_X), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T400", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T410s", HDAPS_ORIENT_SWAP), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T410", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T500", HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad T510", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_X | HDAPS_ORIENT_INVERT_Y), + HDAPS_DMI_MATCH_INVERT("LENOVO", 
"ThinkPad W510", HDAPS_ORIENT_MAX), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad W520", HDAPS_ORIENT_MAX), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X200s", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X200", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_X | HDAPS_ORIENT_INVERT_Y), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X201 Tablet", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X201s", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_XY), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X201", HDAPS_ORIENT_SWAP | HDAPS_ORIENT_INVERT_X), + HDAPS_DMI_MATCH_INVERT("LENOVO", "ThinkPad X220", HDAPS_ORIENT_SWAP), { .ident = NULL } }; static int __init hdaps_init(void) { - struct input_dev *idev; int ret; - if (!dmi_check_system(hdaps_whitelist)) { - pr_warn("supported laptop not found!\n"); - ret = -ENODEV; - goto out; - } - - if (!request_region(HDAPS_LOW_PORT, HDAPS_NR_PORTS, "hdaps")) { - ret = -ENXIO; - goto out; - } + /* Determine axis orientation orientation */ + if (hdaps_invert == HDAPS_ORIENT_UNDEFINED) /* set by module param? */ + if (dmi_check_system(hdaps_whitelist) < 1) /* in whitelist? */ + hdaps_invert = 0; /* default */ + /* Init timer before platform_driver_register, in case of suspend */ + init_timer(&hdaps_timer); + hdaps_timer.function = hdaps_mousedev_poll; ret = platform_driver_register(&hdaps_driver); if (ret) - goto out_region; + goto out; pdev = platform_device_register_simple("hdaps", -1, NULL, 0); if (IS_ERR(pdev)) { @@ -571,47 +793,79 @@ static int __init hdaps_init(void) if (ret) goto out_device; - hdaps_idev = input_allocate_polled_device(); + hdaps_idev = input_allocate_device(); if (!hdaps_idev) { ret = -ENOMEM; goto out_group; } - hdaps_idev->poll = hdaps_mousedev_poll; - hdaps_idev->poll_interval = HDAPS_POLL_INTERVAL; - - /* initial calibrate for the input device */ - hdaps_calibrate(); + hdaps_idev_raw = input_allocate_device(); + if (!hdaps_idev_raw) { + ret = -ENOMEM; + goto out_idev_first; + } - /* initialize the input class */ - idev = hdaps_idev->input; - idev->name = "hdaps"; - idev->phys = "isa1600/input0"; - idev->id.bustype = BUS_ISA; - idev->dev.parent = &pdev->dev; - idev->evbit[0] = BIT_MASK(EV_ABS); - input_set_abs_params(idev, ABS_X, + /* calibration for the input device (deferred to avoid delay) */ + needs_calibration = 1; + + /* initialize the joystick-like fuzzed input device */ + hdaps_idev->name = "ThinkPad HDAPS joystick emulation"; + hdaps_idev->phys = "hdaps/input0"; + hdaps_idev->id.bustype = BUS_HOST; + hdaps_idev->id.vendor = HDAPS_INPUT_VENDOR; + hdaps_idev->id.product = HDAPS_INPUT_PRODUCT; + hdaps_idev->id.version = HDAPS_INPUT_JS_VERSION; +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,25) + hdaps_idev->cdev.dev = &pdev->dev; +#endif + hdaps_idev->evbit[0] = BIT(EV_ABS); + hdaps_idev->open = hdaps_mousedev_open; + hdaps_idev->close = hdaps_mousedev_close; + input_set_abs_params(hdaps_idev, ABS_X, -256, 256, HDAPS_INPUT_FUZZ, HDAPS_INPUT_FLAT); - input_set_abs_params(idev, ABS_Y, + input_set_abs_params(hdaps_idev, ABS_Y, -256, 256, HDAPS_INPUT_FUZZ, HDAPS_INPUT_FLAT); - ret = input_register_polled_device(hdaps_idev); + ret = input_register_device(hdaps_idev); if (ret) goto out_idev; - pr_info("driver successfully loaded\n"); + /* initialize the raw data input device */ + hdaps_idev_raw->name = "ThinkPad HDAPS accelerometer data"; + hdaps_idev_raw->phys = "hdaps/input1"; + hdaps_idev_raw->id.bustype = BUS_HOST; + hdaps_idev_raw->id.vendor = 
HDAPS_INPUT_VENDOR; + hdaps_idev_raw->id.product = HDAPS_INPUT_PRODUCT; + hdaps_idev_raw->id.version = HDAPS_INPUT_RAW_VERSION; +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,25) + hdaps_idev_raw->cdev.dev = &pdev->dev; +#endif + hdaps_idev_raw->evbit[0] = BIT(EV_ABS); + hdaps_idev_raw->open = hdaps_mousedev_open; + hdaps_idev_raw->close = hdaps_mousedev_close; + input_set_abs_params(hdaps_idev_raw, ABS_X, -32768, 32767, 0, 0); + input_set_abs_params(hdaps_idev_raw, ABS_Y, -32768, 32767, 0, 0); + + ret = input_register_device(hdaps_idev_raw); + if (ret) + goto out_idev_reg_first; + + pr_info("driver successfully loaded.\n"); return 0; +out_idev_reg_first: + input_unregister_device(hdaps_idev); out_idev: - input_free_polled_device(hdaps_idev); + input_free_device(hdaps_idev_raw); +out_idev_first: + input_free_device(hdaps_idev); out_group: sysfs_remove_group(&pdev->dev.kobj, &hdaps_attribute_group); out_device: platform_device_unregister(pdev); out_driver: platform_driver_unregister(&hdaps_driver); -out_region: - release_region(HDAPS_LOW_PORT, HDAPS_NR_PORTS); + hdaps_device_shutdown(); out: pr_warn("driver init failed (ret=%d)!\n", ret); return ret; @@ -619,12 +873,12 @@ out: static void __exit hdaps_exit(void) { - input_unregister_polled_device(hdaps_idev); - input_free_polled_device(hdaps_idev); + input_unregister_device(hdaps_idev_raw); + input_unregister_device(hdaps_idev); + hdaps_device_shutdown(); /* ignore errors, effect is negligible */ sysfs_remove_group(&pdev->dev.kobj, &hdaps_attribute_group); platform_device_unregister(pdev); platform_driver_unregister(&hdaps_driver); - release_region(HDAPS_LOW_PORT, HDAPS_NR_PORTS); pr_info("driver unloaded\n"); } @@ -632,9 +886,8 @@ static void __exit hdaps_exit(void) module_init(hdaps_init); module_exit(hdaps_exit); -module_param_named(invert, hdaps_invert, int, 0); -MODULE_PARM_DESC(invert, "invert data along each axis. 1 invert x-axis, " - "2 invert y-axis, 3 invert both axes."); +module_param_named(invert, hdaps_invert, uint, 0); +MODULE_PARM_DESC(invert, "axis orientation code"); MODULE_AUTHOR("Robert Love"); MODULE_DESCRIPTION("IBM Hard Drive Active Protection System (HDAPS) driver"); diff --git b/drivers/platform/x86/thinkpad_ec.c b/drivers/platform/x86/thinkpad_ec.c new file mode 100644 index 0000000..597614b --- /dev/null +++ b/drivers/platform/x86/thinkpad_ec.c @@ -0,0 +1,513 @@ +/* + * thinkpad_ec.c - ThinkPad embedded controller LPC3 functions + * + * The embedded controller on ThinkPad laptops has a non-standard interface, + * where LPC channel 3 of the H8S EC chip is hooked up to IO ports + * 0x1600-0x161F and implements (a special case of) the H8S LPC protocol. + * The EC LPC interface provides various system management services (currently + * known: battery information and accelerometer readouts). This driver + * provides access and mutual exclusion for the EC interface. +* + * The LPC protocol and terminology are documented here: + * "H8S/2104B Group Hardware Manual", + * http://documentation.renesas.com/eng/products/mpumcu/rej09b0300_2140bhm.pdf + * + * Copyright (C) 2006-2007 Shem Multinymous + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. 
+ * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,26) + #include +#else + #include +#endif + +#define TP_VERSION "0.42" + +MODULE_AUTHOR("Shem Multinymous"); +MODULE_DESCRIPTION("ThinkPad embedded controller hardware access"); +MODULE_VERSION(TP_VERSION); +MODULE_LICENSE("GPL"); + +/* IO ports used by embedded controller LPC channel 3: */ +#define TPC_BASE_PORT 0x1600 +#define TPC_NUM_PORTS 0x20 +#define TPC_STR3_PORT 0x1604 /* Reads H8S EC register STR3 */ +#define TPC_TWR0_PORT 0x1610 /* Mapped to H8S EC register TWR0MW/SW */ +#define TPC_TWR15_PORT 0x161F /* Mapped to H8S EC register TWR15. */ + /* (and port TPC_TWR0_PORT+i is mapped to H8S reg TWRi for 00x%02x", \ + msg, args->val[0x0], args->val[0xF], code) + +/* State of request prefetching: */ +static u8 prefetch_arg0, prefetch_argF; /* Args of last prefetch */ +static u64 prefetch_jiffies; /* time of prefetch, or: */ +#define TPC_PREFETCH_NONE INITIAL_JIFFIES /* No prefetch */ +#define TPC_PREFETCH_JUNK (INITIAL_JIFFIES+1) /* Ignore prefetch */ + +/* Locking: */ +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,37) +static DECLARE_MUTEX(thinkpad_ec_mutex); +#else +static DEFINE_SEMAPHORE(thinkpad_ec_mutex); +#endif + +/* Kludge in case the ACPI DSDT reserves the ports we need. */ +static bool force_io; /* Willing to do IO to ports we couldn't reserve? */ +static int reserved_io; /* Successfully reserved the ports? */ +module_param_named(force_io, force_io, bool, 0600); +MODULE_PARM_DESC(force_io, "Force IO even if region already reserved (0=off, 1=on)"); + +/** + * thinkpad_ec_lock - get lock on the ThinkPad EC + * + * Get exclusive lock for accesing the ThinkPad embedded controller LPC3 + * interface. Returns 0 iff lock acquired. + */ +int thinkpad_ec_lock(void) +{ + int ret; + ret = down_interruptible(&thinkpad_ec_mutex); + return ret; +} +EXPORT_SYMBOL_GPL(thinkpad_ec_lock); + +/** + * thinkpad_ec_try_lock - try getting lock on the ThinkPad EC + * + * Try getting an exclusive lock for accesing the ThinkPad embedded + * controller LPC3. Returns immediately if lock is not available; neither + * blocks nor sleeps. Returns 0 iff lock acquired . + */ +int thinkpad_ec_try_lock(void) +{ + return down_trylock(&thinkpad_ec_mutex); +} +EXPORT_SYMBOL_GPL(thinkpad_ec_try_lock); + +/** + * thinkpad_ec_unlock - release lock on ThinkPad EC + * + * Release a previously acquired exclusive lock on the ThinkPad ebmedded + * controller LPC3 interface. + */ +void thinkpad_ec_unlock(void) +{ + up(&thinkpad_ec_mutex); +} +EXPORT_SYMBOL_GPL(thinkpad_ec_unlock); + +/** + * thinkpad_ec_request_row - tell embedded controller to prepare a row + * @args Input register arguments + * + * Requests a data row by writing to H8S LPC registers TRW0 through TWR15 (or + * a subset thereof) following the protocol prescribed by the "H8S/2104B Group + * Hardware Manual". Does sanity checks via status register STR3. 
+ */ +static int thinkpad_ec_request_row(const struct thinkpad_ec_row *args) +{ + u8 str3; + int i; + + /* EC protocol requires write to TWR0 (function code): */ + if (!(args->mask & 0x0001)) { + printk(KERN_ERR MSG_FMT("bad args->mask=0x%02x", args->mask)); + return -EINVAL; + } + + /* Check initial STR3 status: */ + str3 = inb(TPC_STR3_PORT) & H8S_STR3_MASK; + if (str3 & H8S_STR3_OBF3B) { /* data already pending */ + inb(TPC_TWR15_PORT); /* marks end of previous transaction */ + if (prefetch_jiffies == TPC_PREFETCH_NONE) + printk(KERN_WARNING REQ_FMT( + "EC has result from unrequested transaction", + str3)); + return -EBUSY; /* EC will be ready in a few usecs */ + } else if (str3 == H8S_STR3_SWMF) { /* busy with previous request */ + if (prefetch_jiffies == TPC_PREFETCH_NONE) + printk(KERN_WARNING REQ_FMT( + "EC is busy with unrequested transaction", + str3)); + return -EBUSY; /* data will be pending in a few usecs */ + } else if (str3 != 0x00) { /* unexpected status? */ + printk(KERN_WARNING REQ_FMT("unexpected initial STR3", str3)); + return -EIO; + } + + /* Send TWR0MW: */ + outb(args->val[0], TPC_TWR0_PORT); + str3 = inb(TPC_STR3_PORT) & H8S_STR3_MASK; + if (str3 != H8S_STR3_MWMF) { /* not accepted? */ + printk(KERN_WARNING REQ_FMT("arg0 rejected", str3)); + return -EIO; + } + + /* Send TWR1 through TWR14: */ + for (i = 1; i < TP_CONTROLLER_ROW_LEN-1; i++) + if ((args->mask>>i)&1) + outb(args->val[i], TPC_TWR0_PORT+i); + + /* Send TWR15 (default to 0x01). This marks end of command. */ + outb((args->mask & 0x8000) ? args->val[0xF] : 0x01, TPC_TWR15_PORT); + + /* Wait until EC starts writing its reply (~60ns on average). + * Releasing locks before this happens may cause an EC hang + * due to firmware bug! + */ + for (i = 0; i < TPC_REQUEST_RETRIES; i++) { + str3 = inb(TPC_STR3_PORT) & H8S_STR3_MASK; + if (str3 & H8S_STR3_SWMF) /* EC started replying */ + return 0; + else if (!(str3 & ~(H8S_STR3_IBF3B|H8S_STR3_MWMF))) + /* Normal progress (the EC hasn't seen the request + * yet, or is processing it). Wait it out. */ + ndelay(TPC_REQUEST_NDELAY); + else { /* weird EC status */ + printk(KERN_WARNING + REQ_FMT("bad end STR3", str3)); + return -EIO; + } + } + printk(KERN_WARNING REQ_FMT("EC is mysteriously silent", str3)); + return -EIO; +} + +/** + * thinkpad_ec_read_data - read pre-requested row-data from EC + * @args Input register arguments of pre-requested rows + * @data Output register values + * + * Reads current row data from the controller, assuming it's already + * requested. Follows the H8S spec for register access and status checks. + */ +static int thinkpad_ec_read_data(const struct thinkpad_ec_row *args, + struct thinkpad_ec_row *data) +{ + int i; + u8 str3 = inb(TPC_STR3_PORT) & H8S_STR3_MASK; + /* Once we make a request, STR3 assumes the sequence of values listed + * in the following 'if' as it reads the request and writes its data. + * It takes about a few dozen nanosecs total, with very high variance. + */ + if (str3 == (H8S_STR3_IBF3B|H8S_STR3_MWMF) || + str3 == 0x00 || /* the 0x00 is indistinguishable from idle EC! 
*/ + str3 == H8S_STR3_SWMF) + return -EBUSY; /* not ready yet */ + /* Finally, the EC signals output buffer full: */ + if (str3 != (H8S_STR3_OBF3B|H8S_STR3_SWMF)) { + printk(KERN_WARNING + REQ_FMT("bad initial STR3", str3)); + return -EIO; + } + + /* Read first byte (signals start of read transactions): */ + data->val[0] = inb(TPC_TWR0_PORT); + /* Optionally read 14 more bytes: */ + for (i = 1; i < TP_CONTROLLER_ROW_LEN-1; i++) + if ((data->mask >> i)&1) + data->val[i] = inb(TPC_TWR0_PORT+i); + /* Read last byte from 0x161F (signals end of read transaction): */ + data->val[0xF] = inb(TPC_TWR15_PORT); + + /* Readout still pending? */ + str3 = inb(TPC_STR3_PORT) & H8S_STR3_MASK; + if (str3 & H8S_STR3_OBF3B) + printk(KERN_WARNING + REQ_FMT("OBF3B=1 after read", str3)); + /* If port 0x161F returns 0x80 too often, the EC may lock up. Warn: */ + if (data->val[0xF] == 0x80) + printk(KERN_WARNING + REQ_FMT("0x161F reports error", data->val[0xF])); + return 0; +} + +/** + * thinkpad_ec_is_row_fetched - is the given row currently prefetched? + * + * To keep things simple we compare only the first and last args; + * this suffices for all known cases. + */ +static int thinkpad_ec_is_row_fetched(const struct thinkpad_ec_row *args) +{ + return (prefetch_jiffies != TPC_PREFETCH_NONE) && + (prefetch_jiffies != TPC_PREFETCH_JUNK) && + (prefetch_arg0 == args->val[0]) && + (prefetch_argF == args->val[0xF]) && + (get_jiffies_64() < prefetch_jiffies + TPC_PREFETCH_TIMEOUT); +} + +/** + * thinkpad_ec_read_row - request and read data from ThinkPad EC + * @args Input register arguments + * @data Output register values + * + * Read a data row from the ThinkPad embedded controller LPC3 interface. + * Does fetching and retrying if needed. The row is specified by an + * array of 16 bytes, some of which may be undefined (but the first is + * mandatory). These bytes are given in @args->val[], where @args->val[i] is + * used iff (@args->mask>>i)&1). The resulting row data is stored in + * @data->val[], but is only guaranteed to be valid for indices corresponding + * to set bit in @data->mask. That is, if @data->mask&(1<val[i] is undefined. + * + * Returns -EBUSY on transient error and -EIO on abnormal condition. + * Caller must hold controller lock. + */ +int thinkpad_ec_read_row(const struct thinkpad_ec_row *args, + struct thinkpad_ec_row *data) +{ + int retries, ret; + + if (thinkpad_ec_is_row_fetched(args)) + goto read_row; /* already requested */ + + /* Request the row */ + for (retries = 0; retries < TPC_READ_RETRIES; ++retries) { + ret = thinkpad_ec_request_row(args); + if (!ret) + goto read_row; + if (ret != -EBUSY) + break; + ndelay(TPC_READ_NDELAY); + } + printk(KERN_ERR REQ_FMT("failed requesting row", ret)); + goto out; + +read_row: + /* Read the row's data */ + for (retries = 0; retries < TPC_READ_RETRIES; ++retries) { + ret = thinkpad_ec_read_data(args, data); + if (!ret) + goto out; + if (ret != -EBUSY) + break; + ndelay(TPC_READ_NDELAY); + } + + printk(KERN_ERR REQ_FMT("failed waiting for data", ret)); + +out: + prefetch_jiffies = TPC_PREFETCH_JUNK; + return ret; +} +EXPORT_SYMBOL_GPL(thinkpad_ec_read_row); + +/** + * thinkpad_ec_try_read_row - try reading prefetched data from ThinkPad EC + * @args Input register arguments + * @data Output register values + * + * Try reading a data row from the ThinkPad embedded controller LPC3 + * interface, if this raw was recently prefetched using + * thinkpad_ec_prefetch_row(). Does not fetch, retry or block. 
+ * The parameters have the same meaning as in thinkpad_ec_read_row(). + * + * Returns -EBUSY if data is not ready and -ENODATA if row not prefetched. + * Caller must hold controller lock. + */ +int thinkpad_ec_try_read_row(const struct thinkpad_ec_row *args, + struct thinkpad_ec_row *data) +{ + int ret; + if (!thinkpad_ec_is_row_fetched(args)) { + ret = -ENODATA; + } else { + ret = thinkpad_ec_read_data(args, data); + if (!ret) + prefetch_jiffies = TPC_PREFETCH_NONE; /* eaten up */ + } + return ret; +} +EXPORT_SYMBOL_GPL(thinkpad_ec_try_read_row); + +/** + * thinkpad_ec_prefetch_row - prefetch data from ThinkPad EC + * @args Input register arguments + * + * Prefetch a data row from the ThinkPad embedded controller LPC3 + * interface. A subsequent call to thinkpad_ec_read_row() with the + * same arguments will be faster, and a subsequent call to + * thinkpad_ec_try_read_row() stands a good chance of succeeding if + * done neither too soon nor too late. See + * thinkpad_ec_read_row() for the meaning of @args. + * + * Returns -EBUSY on transient error and -EIO on abnormal condition. + * Caller must hold controller lock. + */ +int thinkpad_ec_prefetch_row(const struct thinkpad_ec_row *args) +{ + int ret; + ret = thinkpad_ec_request_row(args); + if (ret) { + prefetch_jiffies = TPC_PREFETCH_JUNK; + } else { + prefetch_jiffies = get_jiffies_64(); + prefetch_arg0 = args->val[0x0]; + prefetch_argF = args->val[0xF]; + } + return ret; +} +EXPORT_SYMBOL_GPL(thinkpad_ec_prefetch_row); + +/** + * thinkpad_ec_invalidate - invalidate prefetched ThinkPad EC data + * + * Invalidate the data prefetched via thinkpad_ec_prefetch_row() from the + * ThinkPad embedded controller LPC3 interface. + * Must be called before unlocking by any code that accesses the controller + * ports directly. + */ +void thinkpad_ec_invalidate(void) +{ + prefetch_jiffies = TPC_PREFETCH_JUNK; +} +EXPORT_SYMBOL_GPL(thinkpad_ec_invalidate); + + +/*** Checking for EC hardware ***/ + +/** + * thinkpad_ec_test - verify the EC is present and follows protocol + * + * Ensure the EC LPC3 channel really works on this machine by making + * an EC request and seeing if the EC follows the documented H8S protocol. + * The requested row just reads battery status, so it should be harmless to + * access it (on a correct EC). + * This test writes to IO ports, so execute only after checking DMI.
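Each bit i of a row mask corresponds to TWR register i, which the request/read routines above access at IO port TPC_TWR0_PORT + i. A standalone sketch that prints the ports touched by the 0x8001 "battery 0 basic status" row used by thinkpad_ec_test() just below; illustrative only:

#include <stdio.h>
#include <stdint.h>

#define TPC_TWR0_PORT 0x1610

int main(void)
{
	/* Only TWR0 (the function code) and TWR15 (the terminator) are set. */
	uint16_t mask = 0x8001;
	int i;

	for (i = 0; i < 16; i++)
		if (mask & (1u << i))
			printf("TWR%-2d -> IO port 0x%04x\n",
			       i, (unsigned int)(TPC_TWR0_PORT + i));
	/* prints TWR0 -> 0x1610 and TWR15 -> 0x161f */
	return 0;
}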
+ */ +static int __init thinkpad_ec_test(void) +{ + int ret; + const struct thinkpad_ec_row args = /* battery 0 basic status */ + { .mask = 0x8001, .val = {0x01,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0x00} }; + struct thinkpad_ec_row data = { .mask = 0x0000 }; + ret = thinkpad_ec_lock(); + if (ret) + return ret; + ret = thinkpad_ec_read_row(&args, &data); + thinkpad_ec_unlock(); + return ret; +} + +/* Search all DMI device names of a given type for a substring */ +static int __init dmi_find_substring(int type, const char *substr) +{ + const struct dmi_device *dev = NULL; + while ((dev = dmi_find_device(type, NULL, dev))) { + if (strstr(dev->name, substr)) + return 1; + } + return 0; +} + +#define TP_DMI_MATCH(vendor,model) { \ + .ident = vendor " " model, \ + .matches = { \ + DMI_MATCH(DMI_BOARD_VENDOR, vendor), \ + DMI_MATCH(DMI_PRODUCT_VERSION, model) \ + } \ +} + +/* Check DMI for existence of ThinkPad embedded controller */ +static int __init check_dmi_for_ec(void) +{ + /* A few old models that have a good EC but don't report it in DMI */ + struct dmi_system_id tp_whitelist[] = { + TP_DMI_MATCH("IBM", "ThinkPad A30"), + TP_DMI_MATCH("IBM", "ThinkPad T23"), + TP_DMI_MATCH("IBM", "ThinkPad X24"), + TP_DMI_MATCH("LENOVO", "ThinkPad"), + { .ident = NULL } + }; + return dmi_find_substring(DMI_DEV_TYPE_OEM_STRING, + "IBM ThinkPad Embedded Controller") || + dmi_check_system(tp_whitelist); +} + +/*** Init and cleanup ***/ + +static int __init thinkpad_ec_init(void) +{ + if (!check_dmi_for_ec()) { + printk(KERN_WARNING + "thinkpad_ec: no ThinkPad embedded controller!\n"); + return -ENODEV; + } + + if (request_region(TPC_BASE_PORT, TPC_NUM_PORTS, "thinkpad_ec")) { + reserved_io = 1; + } else { + printk(KERN_ERR "thinkpad_ec: cannot claim IO ports %#x-%#x... ", + TPC_BASE_PORT, + TPC_BASE_PORT + TPC_NUM_PORTS - 1); + if (force_io) { + printk("forcing use of unreserved IO ports.\n"); + } else { + printk("consider using force_io=1.\n"); + return -ENXIO; + } + } + prefetch_jiffies = TPC_PREFETCH_JUNK; + if (thinkpad_ec_test()) { + printk(KERN_ERR "thinkpad_ec: initial ec test failed\n"); + if (reserved_io) + release_region(TPC_BASE_PORT, TPC_NUM_PORTS); + return -ENXIO; + } + printk(KERN_INFO "thinkpad_ec: thinkpad_ec " TP_VERSION " loaded.\n"); + return 0; +} + +static void __exit thinkpad_ec_exit(void) +{ + if (reserved_io) + release_region(TPC_BASE_PORT, TPC_NUM_PORTS); + printk(KERN_INFO "thinkpad_ec: unloaded.\n"); +} + +module_init(thinkpad_ec_init); +module_exit(thinkpad_ec_exit); diff --git b/drivers/platform/x86/tp_smapi.c b/drivers/platform/x86/tp_smapi.c new file mode 100644 index 0000000..209cb64 --- /dev/null +++ b/drivers/platform/x86/tp_smapi.c @@ -0,0 +1,1493 @@ +/* + * tp_smapi.c - ThinkPad SMAPI support + * + * This driver exposes some features of the System Management Application + * Program Interface (SMAPI) BIOS found on ThinkPad laptops. It works on + * models in which the SMAPI BIOS runs in SMM and is invoked by writing + * to the APM control port 0xB2. + * It also exposes battery status information, obtained from the ThinkPad + * embedded controller (via the thinkpad_ec module). + * Ancient ThinkPad models use a different interface, supported by the + * "thinkpad" module from "tpctl". + * + * Many of the battery status values obtained from the EC simply mirror + * values provided by the battery's Smart Battery System (SBS) interface, so + * their meaning is defined by the Smart Battery Data Specification (see + * http://sbs-forum.org/specs/sbdat110.pdf). 
References to this SBS spec + * are given in the code where relevant. + * + * Copyright (C) 2006 Shem Multinymous . + * SMAPI access code based on the mwave driver by Mike Sullivan. + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * You should have received a copy of the GNU General Public License + * along with this program; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + */ + +#include +#include +#include +#include +#include +#include /* CMOS defines */ +#include +#include +#include +#include +#include +#include + +#define TP_VERSION "0.42" +#define TP_DESC "ThinkPad SMAPI Support" +#define TP_DIR "smapi" + +MODULE_AUTHOR("Shem Multinymous"); +MODULE_DESCRIPTION(TP_DESC); +MODULE_VERSION(TP_VERSION); +MODULE_LICENSE("GPL"); + +static struct platform_device *pdev; + +static int tp_debug; +module_param_named(debug, tp_debug, int, 0600); +MODULE_PARM_DESC(debug, "Debug level (0=off, 1=on)"); + +/* A few macros for printk()ing: */ +#define TPRINTK(level, fmt, args...) \ + dev_printk(level, &(pdev->dev), "%s: " fmt "\n", __func__, ## args) +#define DPRINTK(fmt, args...) \ + do { if (tp_debug) TPRINTK(KERN_DEBUG, fmt, ## args); } while (0) + +/********************************************************************* + * SMAPI interface + */ + +/* SMAPI functions (register BX when making the SMM call). */ +#define SMAPI_GET_INHIBIT_CHARGE 0x2114 +#define SMAPI_SET_INHIBIT_CHARGE 0x2115 +#define SMAPI_GET_THRESH_START 0x2116 +#define SMAPI_SET_THRESH_START 0x2117 +#define SMAPI_GET_FORCE_DISCHARGE 0x2118 +#define SMAPI_SET_FORCE_DISCHARGE 0x2119 +#define SMAPI_GET_THRESH_STOP 0x211a +#define SMAPI_SET_THRESH_STOP 0x211b + +/* SMAPI error codes (see ThinkPad 770 Technical Reference Manual p.83 at + http://www-307.ibm.com/pc/support/site.wss/document.do?lndocid=PFAN-3TUQQD */ +#define SMAPI_RETCODE_EOF 0xff +static struct { u8 rc; char *msg; int ret; } smapi_retcode[] = +{ + {0x00, "OK", 0}, + {0x53, "SMAPI function is not available", -ENXIO}, + {0x81, "Invalid parameter", -EINVAL}, + {0x86, "Function is not supported by SMAPI BIOS", -EOPNOTSUPP}, + {0x90, "System error", -EIO}, + {0x91, "System is invalid", -EIO}, + {0x92, "System is busy, -EBUSY"}, + {0xa0, "Device error (disk read error)", -EIO}, + {0xa1, "Device is busy", -EBUSY}, + {0xa2, "Device is not attached", -ENXIO}, + {0xa3, "Device is disbled", -EIO}, + {0xa4, "Request parameter is out of range", -EINVAL}, + {0xa5, "Request parameter is not accepted", -EINVAL}, + {0xa6, "Transient error", -EBUSY}, /* ? 
*/ + {SMAPI_RETCODE_EOF, "Unknown error code", -EIO} +}; + + +#define SMAPI_MAX_RETRIES 10 +#define SMAPI_PORT2 0x4F /* fixed port, meaning unclear */ +static unsigned short smapi_port; /* APM control port, normally 0xB2 */ + +#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,37) +static DECLARE_MUTEX(smapi_mutex); +#else +static DEFINE_SEMAPHORE(smapi_mutex); +#endif + +/** + * find_smapi_port - read SMAPI port from NVRAM + */ +static int __init find_smapi_port(void) +{ + u16 smapi_id = 0; + unsigned short port = 0; + unsigned long flags; + + spin_lock_irqsave(&rtc_lock, flags); + smapi_id = CMOS_READ(0x7C); + smapi_id |= (CMOS_READ(0x7D) << 8); + spin_unlock_irqrestore(&rtc_lock, flags); + + if (smapi_id != 0x5349) { + printk(KERN_ERR "SMAPI not supported (ID=0x%x)\n", smapi_id); + return -ENXIO; + } + spin_lock_irqsave(&rtc_lock, flags); + port = CMOS_READ(0x7E); + port |= (CMOS_READ(0x7F) << 8); + spin_unlock_irqrestore(&rtc_lock, flags); + if (port == 0) { + printk(KERN_ERR "unable to read SMAPI port number\n"); + return -ENXIO; + } + return port; +} + +/** + * smapi_request - make a SMAPI call + * @inEBX, @inECX, @inEDI, @inESI: input registers + * @outEBX, @outECX, @outEDX, @outEDI, @outESI: outputs registers + * @msg: textual error message + * Invokes the SMAPI SMBIOS with the given input and outpu args. + * All outputs are optional (can be %NULL). + * Returns 0 when successful, and a negative errno constant + * (see smapi_retcode above) upon failure. + */ +static int smapi_request(u32 inEBX, u32 inECX, + u32 inEDI, u32 inESI, + u32 *outEBX, u32 *outECX, u32 *outEDX, + u32 *outEDI, u32 *outESI, const char **msg) +{ + int ret = 0; + int i; + int retries; + u8 rc; + /* Must use local vars for output regs, due to reg pressure. */ + u32 tmpEAX, tmpEBX, tmpECX, tmpEDX, tmpEDI, tmpESI; + + for (retries = 0; retries < SMAPI_MAX_RETRIES; ++retries) { + DPRINTK("req_in: BX=%x CX=%x DI=%x SI=%x", + inEBX, inECX, inEDI, inESI); + + /* SMAPI's SMBIOS call and thinkpad_ec end up using use + * different interfaces to the same chip, so play it safe. */ + ret = thinkpad_ec_lock(); + if (ret) + return ret; + + __asm__ __volatile__( + "movl $0x00005380,%%eax\n\t" + "movl %6,%%ebx\n\t" + "movl %7,%%ecx\n\t" + "movl %8,%%edi\n\t" + "movl %9,%%esi\n\t" + "xorl %%edx,%%edx\n\t" + "movw %10,%%dx\n\t" + "out %%al,%%dx\n\t" /* trigger SMI to SMBIOS */ + "out %%al,$0x4F\n\t" + "movl %%eax,%0\n\t" + "movl %%ebx,%1\n\t" + "movl %%ecx,%2\n\t" + "movl %%edx,%3\n\t" + "movl %%edi,%4\n\t" + "movl %%esi,%5\n\t" + :"=m"(tmpEAX), + "=m"(tmpEBX), + "=m"(tmpECX), + "=m"(tmpEDX), + "=m"(tmpEDI), + "=m"(tmpESI) + :"m"(inEBX), "m"(inECX), "m"(inEDI), "m"(inESI), + "m"((u16)smapi_port) + :"%eax", "%ebx", "%ecx", "%edx", "%edi", + "%esi"); + + thinkpad_ec_invalidate(); + thinkpad_ec_unlock(); + + /* Don't let the next SMAPI access happen too quickly, + * may case problems. (We're hold smapi_mutex). 
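The battery-specific services below all follow the same register convention: the SMAPI function code goes in EBX and the battery selector, (bat + 1) << 8, goes in ECX. A standalone sketch of just that encoding, illustrative only:

#include <stdio.h>

#define SMAPI_GET_THRESH_START 0x2116

int main(void)
{
	int bat;

	for (bat = 0; bat < 2; bat++) {
		unsigned int ebx = SMAPI_GET_THRESH_START;	/* function code */
		unsigned int ecx = (unsigned int)(bat + 1) << 8;	/* battery selector */
		printf("bat=%d  EBX=0x%04x  ECX=0x%04x\n", bat, ebx, ecx);
	}
	/* prints ECX=0x0100 for battery 0 and ECX=0x0200 for battery 1 */
	return 0;
}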
*/ + msleep(50); + + if (outEBX) *outEBX = tmpEBX; + if (outECX) *outECX = tmpECX; + if (outEDX) *outEDX = tmpEDX; + if (outESI) *outESI = tmpESI; + if (outEDI) *outEDI = tmpEDI; + + /* Look up error code */ + rc = (tmpEAX>>8)&0xFF; + for (i = 0; smapi_retcode[i].rc != SMAPI_RETCODE_EOF && + smapi_retcode[i].rc != rc; ++i) {} + ret = smapi_retcode[i].ret; + if (msg) + *msg = smapi_retcode[i].msg; + + DPRINTK("req_out: AX=%x BX=%x CX=%x DX=%x DI=%x SI=%x r=%d", + tmpEAX, tmpEBX, tmpECX, tmpEDX, tmpEDI, tmpESI, ret); + if (ret) + TPRINTK(KERN_NOTICE, "SMAPI error: %s (func=%x)", + smapi_retcode[i].msg, inEBX); + + if (ret != -EBUSY) + return ret; + } + return ret; +} + +/* Convenience wrapper: discard output arguments */ +static int smapi_write(u32 inEBX, u32 inECX, + u32 inEDI, u32 inESI, const char **msg) +{ + return smapi_request(inEBX, inECX, inEDI, inESI, + NULL, NULL, NULL, NULL, NULL, msg); +} + + +/********************************************************************* + * Specific SMAPI services + * All of these functions return 0 upon success, and a negative errno + * constant (see smapi_retcode) on failure. + */ + +enum thresh_type { + THRESH_STOP = 0, /* the code assumes this is 0 for brevity */ + THRESH_START +}; +#define THRESH_NAME(which) ((which == THRESH_START) ? "start" : "stop") + +/** + * __get_real_thresh - read battery charge start/stop threshold from SMAPI + * @bat: battery number (0 or 1) + * @which: THRESH_START or THRESH_STOP + * @thresh: 1..99, 0=default 1..99, 0=default (pass this as-is to SMAPI) + * @outEDI: some additional state that needs to be preserved, meaning unknown + * @outESI: some additional state that needs to be preserved, meaning unknown + */ +static int __get_real_thresh(int bat, enum thresh_type which, int *thresh, + u32 *outEDI, u32 *outESI) +{ + u32 ebx = (which == THRESH_START) ? SMAPI_GET_THRESH_START + : SMAPI_GET_THRESH_STOP; + u32 ecx = (bat+1)<<8; + const char *msg; + int ret = smapi_request(ebx, ecx, 0, 0, NULL, + &ecx, NULL, outEDI, outESI, &msg); + if (ret) { + TPRINTK(KERN_NOTICE, "cannot get %s_thresh of bat=%d: %s", + THRESH_NAME(which), bat, msg); + return ret; + } + if (!(ecx&0x00000100)) { + TPRINTK(KERN_NOTICE, "cannot get %s_thresh of bat=%d: ecx=0%x", + THRESH_NAME(which), bat, ecx); + return -EIO; + } + if (thresh) + *thresh = ecx&0xFF; + return 0; +} + +/** + * get_real_thresh - read battery charge start/stop threshold from SMAPI + * @bat: battery number (0 or 1) + * @which: THRESH_START or THRESH_STOP + * @thresh: 1..99, 0=default (passes as-is to SMAPI) + */ +static int get_real_thresh(int bat, enum thresh_type which, int *thresh) +{ + return __get_real_thresh(bat, which, thresh, NULL, NULL); +} + +/** + * set_real_thresh - write battery start/top charge threshold to SMAPI + * @bat: battery number (0 or 1) + * @which: THRESH_START or THRESH_STOP + * @thresh: 1..99, 0=default (passes as-is to SMAPI) + */ +static int set_real_thresh(int bat, enum thresh_type which, int thresh) +{ + u32 ebx = (which == THRESH_START) ? 
SMAPI_SET_THRESH_START + : SMAPI_SET_THRESH_STOP; + u32 ecx = ((bat+1)<<8) + thresh; + u32 getDI, getSI; + const char *msg; + int ret; + + /* verify read before writing */ + ret = __get_real_thresh(bat, which, NULL, &getDI, &getSI); + if (ret) + return ret; + + ret = smapi_write(ebx, ecx, getDI, getSI, &msg); + if (ret) + TPRINTK(KERN_NOTICE, "set %s to %d for bat=%d failed: %s", + THRESH_NAME(which), thresh, bat, msg); + else + TPRINTK(KERN_INFO, "set %s to %d for bat=%d", + THRESH_NAME(which), thresh, bat); + return ret; +} + +/** + * __get_inhibit_charge_minutes - get inhibit charge period from SMAPI + * @bat: battery number (0 or 1) + * @minutes: period in minutes (1..65535 minutes, 0=disabled) + * @outECX: some additional state that needs to be preserved, meaning unknown + * Note that @minutes is the originally set value, it does not count down. + */ +static int __get_inhibit_charge_minutes(int bat, int *minutes, u32 *outECX) +{ + u32 ecx = (bat+1)<<8; + u32 esi; + const char *msg; + int ret = smapi_request(SMAPI_GET_INHIBIT_CHARGE, ecx, 0, 0, + NULL, &ecx, NULL, NULL, &esi, &msg); + if (ret) { + TPRINTK(KERN_NOTICE, "failed for bat=%d: %s", bat, msg); + return ret; + } + if (!(ecx&0x0100)) { + TPRINTK(KERN_NOTICE, "bad ecx=0x%x for bat=%d", ecx, bat); + return -EIO; + } + if (minutes) + *minutes = (ecx&0x0001)?esi:0; + if (outECX) + *outECX = ecx; + return 0; +} + +/** + * get_inhibit_charge_minutes - get inhibit charge period from SMAPI + * @bat: battery number (0 or 1) + * @minutes: period in minutes (1..65535 minutes, 0=disabled) + * Note that @minutes is the originally set value, it does not count down. + */ +static int get_inhibit_charge_minutes(int bat, int *minutes) +{ + return __get_inhibit_charge_minutes(bat, minutes, NULL); +} + +/** + * set_inhibit_charge_minutes - write inhibit charge period to SMAPI + * @bat: battery number (0 or 1) + * @minutes: period in minutes (1..65535 minutes, 0=disabled) + */ +static int set_inhibit_charge_minutes(int bat, int minutes) +{ + u32 ecx; + const char *msg; + int ret; + + /* verify read before writing */ + ret = __get_inhibit_charge_minutes(bat, NULL, &ecx); + if (ret) + return ret; + + ecx = ((bat+1)<<8) | (ecx&0x00FE) | (minutes > 0 ? 
0x0001 : 0x0000); + if (minutes > 0xFFFF) + minutes = 0xFFFF; + ret = smapi_write(SMAPI_SET_INHIBIT_CHARGE, ecx, 0, minutes, &msg); + if (ret) + TPRINTK(KERN_NOTICE, + "set to %d failed for bat=%d: %s", minutes, bat, msg); + else + TPRINTK(KERN_INFO, "set to %d for bat=%d\n", minutes, bat); + return ret; +} + + +/** + * get_force_discharge - get status of forced discharging from SMAPI + * @bat: battery number (0 or 1) + * @enabled: 1 if forced discharge is enabled, 0 if not + */ +static int get_force_discharge(int bat, int *enabled) +{ + u32 ecx = (bat+1)<<8; + const char *msg; + int ret = smapi_request(SMAPI_GET_FORCE_DISCHARGE, ecx, 0, 0, + NULL, &ecx, NULL, NULL, NULL, &msg); + if (ret) { + TPRINTK(KERN_NOTICE, "failed for bat=%d: %s", bat, msg); + return ret; + } + *enabled = (!(ecx&0x00000100) && (ecx&0x00000001))?1:0; + return 0; +} + +/** + * set_force_discharge - write status of forced discharging to SMAPI + * @bat: battery number (0 or 1) + * @enabled: 1 if forced discharge is enabled, 0 if not + */ +static int set_force_discharge(int bat, int enabled) +{ + u32 ecx = (bat+1)<<8; + const char *msg; + int ret = smapi_request(SMAPI_GET_FORCE_DISCHARGE, ecx, 0, 0, + NULL, &ecx, NULL, NULL, NULL, &msg); + if (ret) { + TPRINTK(KERN_NOTICE, "get failed for bat=%d: %s", bat, msg); + return ret; + } + if (ecx&0x00000100) { + TPRINTK(KERN_NOTICE, "cannot force discharge bat=%d", bat); + return -EIO; + } + + ecx = ((bat+1)<<8) | (ecx&0x000000FA) | (enabled?0x00000001:0); + ret = smapi_write(SMAPI_SET_FORCE_DISCHARGE, ecx, 0, 0, &msg); + if (ret) + TPRINTK(KERN_NOTICE, "set to %d failed for bat=%d: %s", + enabled, bat, msg); + else + TPRINTK(KERN_INFO, "set to %d for bat=%d", enabled, bat); + return ret; +} + + +/********************************************************************* + * Wrappers to threshold-related SMAPI functions, which handle default + * thresholds and related quirks. + */ + +/* Minimum, default and minimum difference for battery charging thresholds: */ +#define MIN_THRESH_DELTA 4 /* Min delta between start and stop thresh */ +#define MIN_THRESH_START 2 +#define MAX_THRESH_START (100-MIN_THRESH_DELTA) +#define MIN_THRESH_STOP (MIN_THRESH_START + MIN_THRESH_DELTA) +#define MAX_THRESH_STOP 100 +#define DEFAULT_THRESH_START MAX_THRESH_START +#define DEFAULT_THRESH_STOP MAX_THRESH_STOP + +/* The GUI of IBM's Battery Maximizer seems to show a start threshold that + * is 1 more than the value we set/get via SMAPI. Since the threshold is + * maintained across reboot, this can be confusing. So we kludge our + * interface for interoperability: */ +#define BATMAX_FIX 1 + +/* Get charge start/stop threshold (1..100), + * substituting default values if needed and applying BATMAX_FIX. */ +static int get_thresh(int bat, enum thresh_type which, int *thresh) +{ + int ret = get_real_thresh(bat, which, thresh); + if (ret) + return ret; + if (*thresh == 0) + *thresh = (which == THRESH_START) ? DEFAULT_THRESH_START + : DEFAULT_THRESH_STOP; + else if (which == THRESH_START) + *thresh += BATMAX_FIX; + return 0; +} + + +/* Set charge start/stop threshold (1..100), + * substituting default values if needed and applying BATMAX_FIX.
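For illustration, the mapping that the get_thresh()/set_thresh() wrappers in this section apply works out as follows. This is a standalone sketch built only from the constants defined above (BATMAX_FIX, DEFAULT_THRESH_START/STOP), not part of the driver itself:

    #include <stdio.h>

    /* Standalone restatement of the get_thresh()/set_thresh() arithmetic.
     * DEFAULT_THRESH_START is MAX_THRESH_START = 100 - MIN_THRESH_DELTA = 96. */
    #define BATMAX_FIX           1
    #define DEFAULT_THRESH_START 96
    #define DEFAULT_THRESH_STOP  100

    static int start_user_to_smapi(int user) { return user - BATMAX_FIX; }
    static int start_smapi_to_user(int raw)  { return raw ? raw + BATMAX_FIX : DEFAULT_THRESH_START; }
    static int stop_user_to_smapi(int user)  { return user == DEFAULT_THRESH_STOP ? 0 : user; }
    static int stop_smapi_to_user(int raw)   { return raw ? raw : DEFAULT_THRESH_STOP; }

    int main(void)
    {
        /* A sysfs start threshold of 80 is stored as 79 via SMAPI; a sysfs
         * stop threshold of 100 is stored as 0, which the firmware treats
         * as "use the default". A raw 0 reads back as 96 resp. 100. */
        printf("start 80 -> raw %d, stop 100 -> raw %d\n",
               start_user_to_smapi(80), stop_user_to_smapi(100));
        printf("raw 0 -> start %d, raw 0 -> stop %d\n",
               start_smapi_to_user(0), stop_smapi_to_user(0));
        return 0;
    }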
*/ +static int set_thresh(int bat, enum thresh_type which, int thresh) +{ + if (which == THRESH_STOP && thresh == DEFAULT_THRESH_STOP) + thresh = 0; /* 100 is out of range, but default means 100 */ + if (which == THRESH_START) + thresh -= BATMAX_FIX; + return set_real_thresh(bat, which, thresh); +} + +/********************************************************************* + * ThinkPad embedded controller readout and basic functions + */ + +/** + * read_tp_ec_row - read data row from the ThinkPad embedded controller + * @arg0: EC command code + * @bat: battery number, 0 or 1 + * @j: the byte value to be used for "junk" (unused) input/outputs + * @dataval: result vector + */ +static int read_tp_ec_row(u8 arg0, int bat, u8 j, u8 *dataval) +{ + int ret; + const struct thinkpad_ec_row args = { .mask = 0xFFFF, + .val = {arg0, j,j,j,j,j,j,j,j,j,j,j,j,j,j, (u8)bat} }; + struct thinkpad_ec_row data = { .mask = 0xFFFF }; + + ret = thinkpad_ec_lock(); + if (ret) + return ret; + ret = thinkpad_ec_read_row(&args, &data); + thinkpad_ec_unlock(); + memcpy(dataval, &data.val, TP_CONTROLLER_ROW_LEN); + return ret; +} + +/** + * power_device_present - check for presence of battery or AC power + * @bat: 0 for battery 0, 1 for battery 1, otherwise AC power + * Returns 1 if present, 0 if not present, negative if error. + */ +static int power_device_present(int bat) +{ + u8 row[TP_CONTROLLER_ROW_LEN]; + u8 test; + int ret = read_tp_ec_row(1, bat, 0, row); + if (ret) + return ret; + switch (bat) { + case 0: test = 0x40; break; /* battery 0 */ + case 1: test = 0x20; break; /* battery 1 */ + default: test = 0x80; /* AC power */ + } + return (row[0] & test) ? 1 : 0; +} + +/** + * bat_has_status - check if battery can report detailed status + * @bat: 0 for battery 0, 1 for battery 1 + * Returns 1 if yes, 0 if no, negative if error. + */ +static int bat_has_status(int bat) +{ + u8 row[TP_CONTROLLER_ROW_LEN]; + int ret = read_tp_ec_row(1, bat, 0, row); + if (ret) + return ret; + if ((row[0] & (bat?0x20:0x40)) == 0) /* no battery */ + return 0; + if ((row[1] & (0x60)) == 0) /* no status */ + return 0; + return 1; +} + +/** + * get_tp_ec_bat_16 - read a 16-bit value from EC battery status data + * @arg0: first argument to EC + * @off: offset in row returned from EC + * @bat: battery (0 or 1) + * @val: the 16-bit value obtained + * Returns nonzero on error. 
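The helper below (get_tp_ec_bat_16) pulls 16-bit fields straight out of the EC row with an unaligned pointer cast. As a standalone sketch, the equivalent byte-wise read looks like this, assuming (as that cast does on x86) that the EC row stores values least-significant byte first:

    #include <stdint.h>

    /* Byte-wise equivalent of the *(u16 *)(row + offset) read used below,
     * assuming the EC row stores 16-bit values least-significant byte first. */
    static uint16_t ec_row_u16(const uint8_t *row, int offset)
    {
        return (uint16_t)(row[offset] | ((uint16_t)row[offset + 1] << 8));
    }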
+ */ +static int get_tp_ec_bat_16(u8 arg0, int offset, int bat, u16 *val) +{ + u8 row[TP_CONTROLLER_ROW_LEN]; + int ret; + if (bat_has_status(bat) != 1) + return -ENXIO; + ret = read_tp_ec_row(arg0, bat, 0, row); + if (ret) + return ret; + *val = *(u16 *)(row+offset); + return 0; +} + +/********************************************************************* + * sysfs attributes for batteries - + * definitions and helper functions + */ + +/* A custom device attribute struct which holds a battery number */ +struct bat_device_attribute { + struct device_attribute dev_attr; + int bat; +}; + +/** + * attr_get_bat - get the battery to which the attribute belongs + */ +static int attr_get_bat(struct device_attribute *attr) +{ + return container_of(attr, struct bat_device_attribute, dev_attr)->bat; +} + +/** + * show_tp_ec_bat_u16 - show an unsigned 16-bit battery attribute + * @arg0: specified 1st argument of EC raw to read + * @offset: byte offset in EC raw data + * @mul: correction factor to multiply by + * @na_msg: string to output is value not available (0xFFFFFFFF) + * @attr: battery attribute + * @buf: output buffer + * The 16-bit value is read from the EC, treated as unsigned, + * transformed as x->mul*x, and printed to the buffer. + * If the value is 0xFFFFFFFF and na_msg!=%NULL, na_msg is printed instead. + */ +static ssize_t show_tp_ec_bat_u16(u8 arg0, int offset, int mul, + const char *na_msg, + struct device_attribute *attr, char *buf) +{ + u16 val; + int ret = get_tp_ec_bat_16(arg0, offset, attr_get_bat(attr), &val); + if (ret) + return ret; + if (na_msg && val == 0xFFFF) + return sprintf(buf, "%s\n", na_msg); + else + return sprintf(buf, "%u\n", mul*(unsigned int)val); +} + +/** + * show_tp_ec_bat_s16 - show an signed 16-bit battery attribute + * @arg0: specified 1st argument of EC raw to read + * @offset: byte offset in EC raw data + * @mul: correction factor to multiply by + * @add: correction term to add after multiplication + * @attr: battery attribute + * @buf: output buffer + * The 16-bit value is read from the EC, treated as signed, + * transformed as x->mul*x+add, and printed to the buffer. + */ +static ssize_t show_tp_ec_bat_s16(u8 arg0, int offset, int mul, int add, + struct device_attribute *attr, char *buf) +{ + u16 val; + int ret = get_tp_ec_bat_16(arg0, offset, attr_get_bat(attr), &val); + if (ret) + return ret; + return sprintf(buf, "%d\n", mul*(s16)val+add); +} + +/** + * show_tp_ec_bat_str - show a string from EC battery status data + * @arg0: specified 1st argument of EC raw to read + * @offset: byte offset in EC raw data + * @maxlen: maximum string length + * @attr: battery attribute + * @buf: output buffer + */ +static ssize_t show_tp_ec_bat_str(u8 arg0, int offset, int maxlen, + struct device_attribute *attr, char *buf) +{ + int bat = attr_get_bat(attr); + u8 row[TP_CONTROLLER_ROW_LEN]; + int ret; + if (bat_has_status(bat) != 1) + return -ENXIO; + ret = read_tp_ec_row(arg0, bat, 0, row); + if (ret) + return ret; + strncpy(buf, (char *)row+offset, maxlen); + buf[maxlen] = 0; + strcat(buf, "\n"); + return strlen(buf); +} + +/** + * show_tp_ec_bat_power - show a power readout from EC battery status data + * @arg0: specified 1st argument of EC raw to read + * @offV: byte offset of voltage in EC raw data + * @offI: byte offset of current in EC raw data + * @attr: battery attribute + * @buf: output buffer + * Computes the power as current*voltage from the two given readout offsets. 
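A quick worked instance of that computation, with invented readings, just to pin down the units:

    /* mW = mV * mA / 1000; e.g. a reading of 12480 mV at -1500 mA
     * (discharging) gives 12480 * -1500 / 1000 = -18720 mW. */
    static int battery_power_mw(int millivolt, int milliamp)
    {
        return milliamp * millivolt / 1000;
    }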
+ */ +static ssize_t show_tp_ec_bat_power(u8 arg0, int offV, int offI, + struct device_attribute *attr, char *buf) +{ + u8 row[TP_CONTROLLER_ROW_LEN]; + int milliamp, millivolt, ret; + int bat = attr_get_bat(attr); + if (bat_has_status(bat) != 1) + return -ENXIO; + ret = read_tp_ec_row(1, bat, 0, row); + if (ret) + return ret; + millivolt = *(u16 *)(row+offV); + milliamp = *(s16 *)(row+offI); + return sprintf(buf, "%d\n", milliamp*millivolt/1000); /* units: mW */ +} + +/** + * show_tp_ec_bat_date - decode and show a date from EC battery status data + * @arg0: specified 1st argument of EC raw to read + * @offset: byte offset in EC raw data + * @attr: battery attribute + * @buf: output buffer + */ +static ssize_t show_tp_ec_bat_date(u8 arg0, int offset, + struct device_attribute *attr, char *buf) +{ + u8 row[TP_CONTROLLER_ROW_LEN]; + u16 v; + int ret; + int day, month, year; + int bat = attr_get_bat(attr); + if (bat_has_status(bat) != 1) + return -ENXIO; + ret = read_tp_ec_row(arg0, bat, 0, row); + if (ret) + return ret; + + /* Decode bit-packed: v = day | (month<<5) | ((year-1980)<<9) */ + v = *(u16 *)(row+offset); + day = v & 0x1F; + month = (v >> 5) & 0xF; + year = (v >> 9) + 1980; + + return sprintf(buf, "%04d-%02d-%02d\n", year, month, day); +} + + +/********************************************************************* + * sysfs attribute I/O for batteries - + * the actual attribute show/store functions + */ + +static ssize_t show_battery_start_charge_thresh(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int thresh; + int bat = attr_get_bat(attr); + int ret = get_thresh(bat, THRESH_START, &thresh); + if (ret) + return ret; + return sprintf(buf, "%d\n", thresh); /* units: percent */ +} + +static ssize_t show_battery_stop_charge_thresh(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int thresh; + int bat = attr_get_bat(attr); + int ret = get_thresh(bat, THRESH_STOP, &thresh); + if (ret) + return ret; + return sprintf(buf, "%d\n", thresh); /* units: percent */ +} + +/** + * store_battery_start_charge_thresh - store battery_start_charge_thresh attr + * Since this is a kernel<->user interface, we ensure a valid state for + * the hardware. We do this by clamping the requested threshold to the + * valid range and, if necessary, moving the other threshold so that + * it's MIN_THRESH_DELTA away from this one. + */ +static ssize_t store_battery_start_charge_thresh(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + int thresh, other_thresh, ret; + int bat = attr_get_bat(attr); + + if (sscanf(buf, "%d", &thresh) != 1 || thresh < 1 || thresh > 100) + return -EINVAL; + + if (thresh < MIN_THRESH_START) /* clamp up to MIN_THRESH_START */ + thresh = MIN_THRESH_START; + if (thresh > MAX_THRESH_START) /* clamp down to MAX_THRESH_START */ + thresh = MAX_THRESH_START; + + down(&smapi_mutex); + ret = get_thresh(bat, THRESH_STOP, &other_thresh); + if (ret != -EOPNOTSUPP && ret != -ENXIO) { + if (ret) /* other threshold is set? */ + goto out; + ret = get_real_thresh(bat, THRESH_START, NULL); + if (ret) /* this threshold is set? 
*/ + goto out; + if (other_thresh < thresh+MIN_THRESH_DELTA) { + /* move other thresh to keep it above this one */ + ret = set_thresh(bat, THRESH_STOP, + thresh+MIN_THRESH_DELTA); + if (ret) + goto out; + } + } + ret = set_thresh(bat, THRESH_START, thresh); +out: + up(&smapi_mutex); + return count; + +} + +/** + * store_battery_stop_charge_thresh - store battery_stop_charge_thresh attr + * Since this is a kernel<->user interface, we ensure a valid state for + * the hardware. We do this by clamping the requested threshold to the + * valid range and, if necessary, moving the other threshold so that + * it's MIN_THRESH_DELTA away from this one. + */ +static ssize_t store_battery_stop_charge_thresh(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + int thresh, other_thresh, ret; + int bat = attr_get_bat(attr); + + if (sscanf(buf, "%d", &thresh) != 1 || thresh < 1 || thresh > 100) + return -EINVAL; + + if (thresh < MIN_THRESH_STOP) /* clamp up to MIN_THRESH_STOP */ + thresh = MIN_THRESH_STOP; + + down(&smapi_mutex); + ret = get_thresh(bat, THRESH_START, &other_thresh); + if (ret != -EOPNOTSUPP && ret != -ENXIO) { /* other threshold exists? */ + if (ret) + goto out; + /* this threshold exists? */ + ret = get_real_thresh(bat, THRESH_STOP, NULL); + if (ret) + goto out; + if (other_thresh >= thresh-MIN_THRESH_DELTA) { + /* move other thresh to be below this one */ + ret = set_thresh(bat, THRESH_START, + thresh-MIN_THRESH_DELTA); + if (ret) + goto out; + } + } + ret = set_thresh(bat, THRESH_STOP, thresh); +out: + up(&smapi_mutex); + return count; +} + +static ssize_t show_battery_inhibit_charge_minutes(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int minutes; + int bat = attr_get_bat(attr); + int ret = get_inhibit_charge_minutes(bat, &minutes); + if (ret) + return ret; + return sprintf(buf, "%d\n", minutes); /* units: minutes */ +} + +static ssize_t store_battery_inhibit_charge_minutes(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + int ret; + int minutes; + int bat = attr_get_bat(attr); + if (sscanf(buf, "%d", &minutes) != 1 || minutes < 0) { + TPRINTK(KERN_ERR, "inhibit_charge_minutes: " + "must be a non-negative integer"); + return -EINVAL; + } + ret = set_inhibit_charge_minutes(bat, minutes); + if (ret) + return ret; + return count; +} + +static ssize_t show_battery_force_discharge(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int enabled; + int bat = attr_get_bat(attr); + int ret = get_force_discharge(bat, &enabled); + if (ret) + return ret; + return sprintf(buf, "%d\n", enabled); /* type: boolean */ +} + +static ssize_t store_battery_force_discharge(struct device *dev, + struct device_attribute *attr, const char *buf, size_t count) +{ + int ret; + int enabled; + int bat = attr_get_bat(attr); + if (sscanf(buf, "%d", &enabled) != 1 || enabled < 0 || enabled > 1) + return -EINVAL; + ret = set_force_discharge(bat, enabled); + if (ret) + return ret; + return count; +} + +static ssize_t show_battery_installed( + struct device *dev, struct device_attribute *attr, char *buf) +{ + int bat = attr_get_bat(attr); + int ret = power_device_present(bat); + if (ret < 0) + return ret; + return sprintf(buf, "%d\n", ret); /* type: boolean */ +} + +static ssize_t show_battery_state( + struct device *dev, struct device_attribute *attr, char *buf) +{ + u8 row[TP_CONTROLLER_ROW_LEN]; + const char *txt; + int ret; + int bat = attr_get_bat(attr); + if (bat_has_status(bat) != 1) + return 
sprintf(buf, "none\n"); + ret = read_tp_ec_row(1, bat, 0, row); + if (ret) + return ret; + switch (row[1] & 0xf0) { + case 0xc0: txt = "idle"; break; + case 0xd0: txt = "discharging"; break; + case 0xe0: txt = "charging"; break; + default: return sprintf(buf, "unknown (0x%x)\n", row[1]); + } + return sprintf(buf, "%s\n", txt); /* type: string from fixed set */ +} + +static ssize_t show_battery_manufacturer( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: string. SBS spec v1.1 p34: ManufacturerName() */ + return show_tp_ec_bat_str(4, 2, TP_CONTROLLER_ROW_LEN-2, attr, buf); +} + +static ssize_t show_battery_model( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: string. SBS spec v1.1 p34: DeviceName() */ + return show_tp_ec_bat_str(5, 2, TP_CONTROLLER_ROW_LEN-2, attr, buf); +} + +static ssize_t show_battery_barcoding( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: string */ + return show_tp_ec_bat_str(7, 2, TP_CONTROLLER_ROW_LEN-2, attr, buf); +} + +static ssize_t show_battery_chemistry( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: string. SBS spec v1.1 p34-35: DeviceChemistry() */ + return show_tp_ec_bat_str(6, 2, 5, attr, buf); +} + +static ssize_t show_battery_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV. SBS spec v1.1 p24: Voltage() */ + return show_tp_ec_bat_u16(1, 6, 1, NULL, attr, buf); +} + +static ssize_t show_battery_design_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV. SBS spec v1.1 p32: DesignVoltage() */ + return show_tp_ec_bat_u16(3, 4, 1, NULL, attr, buf); +} + +static ssize_t show_battery_charging_max_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV. SBS spec v1.1 p37,39: ChargingVoltage() */ + return show_tp_ec_bat_u16(9, 8, 1, NULL, attr, buf); +} + +static ssize_t show_battery_group0_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV */ + return show_tp_ec_bat_u16(0xA, 12, 1, NULL, attr, buf); +} + +static ssize_t show_battery_group1_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV */ + return show_tp_ec_bat_u16(0xA, 10, 1, NULL, attr, buf); +} + +static ssize_t show_battery_group2_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV */ + return show_tp_ec_bat_u16(0xA, 8, 1, NULL, attr, buf); +} + +static ssize_t show_battery_group3_voltage( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mV */ + return show_tp_ec_bat_u16(0xA, 6, 1, NULL, attr, buf); +} + +static ssize_t show_battery_current_now( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mA. SBS spec v1.1 p24: Current() */ + return show_tp_ec_bat_s16(1, 8, 1, 0, attr, buf); +} + +static ssize_t show_battery_current_avg( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mA. SBS spec v1.1 p24: AverageCurrent() */ + return show_tp_ec_bat_s16(1, 10, 1, 0, attr, buf); +} + +static ssize_t show_battery_charging_max_current( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mA. SBS spec v1.1 p36,38: ChargingCurrent() */ + return show_tp_ec_bat_s16(9, 6, 1, 0, attr, buf); +} + +static ssize_t show_battery_power_now( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mW. 
SBS spec v1.1: Voltage()*Current() */ + return show_tp_ec_bat_power(1, 6, 8, attr, buf); +} + +static ssize_t show_battery_power_avg( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mW. SBS spec v1.1: Voltage()*AverageCurrent() */ + return show_tp_ec_bat_power(1, 6, 10, attr, buf); +} + +static ssize_t show_battery_remaining_percent( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: percent. SBS spec v1.1 p25: RelativeStateOfCharge() */ + return show_tp_ec_bat_u16(1, 12, 1, NULL, attr, buf); +} + +static ssize_t show_battery_remaining_percent_error( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: percent. SBS spec v1.1 p25: MaxError() */ + return show_tp_ec_bat_u16(9, 4, 1, NULL, attr, buf); +} + +static ssize_t show_battery_remaining_charging_time( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: minutes. SBS spec v1.1 p27: AverageTimeToFull() */ + return show_tp_ec_bat_u16(2, 8, 1, "not_charging", attr, buf); +} + +static ssize_t show_battery_remaining_running_time( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: minutes. SBS spec v1.1 p27: RunTimeToEmpty() */ + return show_tp_ec_bat_u16(2, 6, 1, "not_discharging", attr, buf); +} + +static ssize_t show_battery_remaining_running_time_now( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: minutes. SBS spec v1.1 p27: RunTimeToEmpty() */ + return show_tp_ec_bat_u16(2, 4, 1, "not_discharging", attr, buf); +} + +static ssize_t show_battery_remaining_capacity( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mWh. SBS spec v1.1 p26. */ + return show_tp_ec_bat_u16(1, 14, 10, "", attr, buf); +} + +static ssize_t show_battery_last_full_capacity( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mWh. SBS spec v1.1 p26: FullChargeCapacity() */ + return show_tp_ec_bat_u16(2, 2, 10, "", attr, buf); +} + +static ssize_t show_battery_design_capacity( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: mWh. SBS spec v1.1 p32: DesignCapacity() */ + return show_tp_ec_bat_u16(3, 2, 10, "", attr, buf); +} + +static ssize_t show_battery_cycle_count( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: ordinal. SBS spec v1.1 p32: CycleCount() */ + return show_tp_ec_bat_u16(2, 12, 1, "", attr, buf); +} + +static ssize_t show_battery_temperature( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* units: millicelsius. SBS spec v1.1: Temperature()*10 */ + return show_tp_ec_bat_s16(1, 4, 100, -273100, attr, buf); +} + +static ssize_t show_battery_serial( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: int. SBS spec v1.1 p34: SerialNumber() */ + return show_tp_ec_bat_u16(3, 10, 1, "", attr, buf); +} + +static ssize_t show_battery_manufacture_date( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: YYYY-MM-DD. SBS spec v1.1 p34: ManufactureDate() */ + return show_tp_ec_bat_date(3, 8, attr, buf); +} + +static ssize_t show_battery_first_use_date( + struct device *dev, struct device_attribute *attr, char *buf) +{ + /* type: YYYY-MM-DD */ + return show_tp_ec_bat_date(8, 2, attr, buf); +} + +/** + * show_battery_dump - show the battery's dump attribute + * The dump attribute gives a hex dump of all EC readouts related to a + * battery. 
Some of the enumerated values don't really exist (i.e., the + * EC function just leaves them untouched); we use a kludge to detect and + * denote these. + */ +#define MIN_DUMP_ARG0 0x00 +#define MAX_DUMP_ARG0 0x0a /* 0x0b is useful too but hangs old EC firmware */ +static ssize_t show_battery_dump( + struct device *dev, struct device_attribute *attr, char *buf) +{ + int i; + char *p = buf; + int bat = attr_get_bat(attr); + u8 arg0; /* first argument to EC */ + u8 rowa[TP_CONTROLLER_ROW_LEN], + rowb[TP_CONTROLLER_ROW_LEN]; + const u8 junka = 0xAA, + junkb = 0x55; /* junk values for testing changes */ + int ret; + + for (arg0 = MIN_DUMP_ARG0; arg0 <= MAX_DUMP_ARG0; ++arg0) { + if ((p-buf) > PAGE_SIZE-TP_CONTROLLER_ROW_LEN*5) + return -ENOMEM; /* don't overflow sysfs buf */ + /* Read raw twice with different junk values, + * to detect unused output bytes which are left unchaged: */ + ret = read_tp_ec_row(arg0, bat, junka, rowa); + if (ret) + return ret; + ret = read_tp_ec_row(arg0, bat, junkb, rowb); + if (ret) + return ret; + for (i = 0; i < TP_CONTROLLER_ROW_LEN; i++) { + if (rowa[i] == junka && rowb[i] == junkb) + p += sprintf(p, "-- "); /* unused by EC */ + else + p += sprintf(p, "%02x ", rowa[i]); + } + p += sprintf(p, "\n"); + } + return p-buf; +} + + +/********************************************************************* + * sysfs attribute I/O, other than batteries + */ + +static ssize_t show_ac_connected( + struct device *dev, struct device_attribute *attr, char *buf) +{ + int ret = power_device_present(0xFF); + if (ret < 0) + return ret; + return sprintf(buf, "%d\n", ret); /* type: boolean */ +} + +/********************************************************************* + * The the "smapi_request" sysfs attribute executes a raw SMAPI call. + * You write to make a request and read to get the result. The state + * is saved globally rather than per fd (sysfs limitation), so + * simultaenous requests may get each other's results! So this is for + * development and debugging only. + */ +#define MAX_SMAPI_ATTR_ANSWER_LEN 128 +static char smapi_attr_answer[MAX_SMAPI_ATTR_ANSWER_LEN] = ""; + +static ssize_t show_smapi_request(struct device *dev, + struct device_attribute *attr, char *buf) +{ + int ret = snprintf(buf, PAGE_SIZE, "%s", smapi_attr_answer); + smapi_attr_answer[0] = '\0'; + return ret; +} + +static ssize_t store_smapi_request(struct device *dev, + struct device_attribute *attr, + const char *buf, size_t count) +{ + unsigned int inEBX, inECX, inEDI, inESI; + u32 outEBX, outECX, outEDX, outEDI, outESI; + const char *msg; + int ret; + if (sscanf(buf, "%x %x %x %x", &inEBX, &inECX, &inEDI, &inESI) != 4) { + smapi_attr_answer[0] = '\0'; + return -EINVAL; + } + ret = smapi_request( + inEBX, inECX, inEDI, inESI, + &outEBX, &outECX, &outEDX, &outEDI, &outESI, &msg); + snprintf(smapi_attr_answer, MAX_SMAPI_ATTR_ANSWER_LEN, + "%x %x %x %x %x %d '%s'\n", + (unsigned int)outEBX, (unsigned int)outECX, + (unsigned int)outEDX, (unsigned int)outEDI, + (unsigned int)outESI, ret, msg); + if (ret) + return ret; + else + return count; +} + +/********************************************************************* + * Power management: the embedded controller forgets the battery + * thresholds when the system is suspended to disk and unplugged from + * AC and battery, so we restore it upon resume. 
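As a usage illustration for the raw smapi_request attribute described above, a hypothetical userspace helper might look like this; the four input register values are placeholders for whatever SMAPI function one wants to poke, no particular function code is assumed:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    #define SMAPI_REQ_ATTR "/sys/devices/platform/smapi/smapi_request"

    /* Hypothetical debugging helper: issue one raw SMAPI call and print the
     * reply line ("outEBX outECX outEDX outEDI outESI ret 'message'").
     * in_regs holds the four input registers in hex, e.g. "1234 100 0 0". */
    static int raw_smapi(const char *in_regs)
    {
        char reply[128];
        ssize_t n;
        int fd = open(SMAPI_REQ_ATTR, O_WRONLY);

        if (fd < 0)
            return -1;
        if (write(fd, in_regs, strlen(in_regs)) < 0) {
            close(fd);
            return -1;
        }
        close(fd);

        fd = open(SMAPI_REQ_ATTR, O_RDONLY);
        if (fd < 0)
            return -1;
        n = read(fd, reply, sizeof(reply) - 1);
        close(fd);
        if (n <= 0)
            return -1;
        reply[n] = '\0';
        printf("%s", reply);
        return 0;
    }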
+ */ + +static int saved_threshs[4] = {-1, -1, -1, -1}; /* -1 = don't know */ + +static int tp_suspend(struct platform_device *dev, pm_message_t state) +{ + int restore = (state.event == PM_EVENT_HIBERNATE || + state.event == PM_EVENT_FREEZE); + if (!restore || get_real_thresh(0, THRESH_STOP , &saved_threshs[0])) + saved_threshs[0] = -1; + if (!restore || get_real_thresh(0, THRESH_START, &saved_threshs[1])) + saved_threshs[1] = -1; + if (!restore || get_real_thresh(1, THRESH_STOP , &saved_threshs[2])) + saved_threshs[2] = -1; + if (!restore || get_real_thresh(1, THRESH_START, &saved_threshs[3])) + saved_threshs[3] = -1; + DPRINTK("suspend saved: %d %d %d %d", saved_threshs[0], + saved_threshs[1], saved_threshs[2], saved_threshs[3]); + return 0; +} + +static int tp_resume(struct platform_device *dev) +{ + DPRINTK("resume restoring: %d %d %d %d", saved_threshs[0], + saved_threshs[1], saved_threshs[2], saved_threshs[3]); + if (saved_threshs[0] >= 0) + set_real_thresh(0, THRESH_STOP , saved_threshs[0]); + if (saved_threshs[1] >= 0) + set_real_thresh(0, THRESH_START, saved_threshs[1]); + if (saved_threshs[2] >= 0) + set_real_thresh(1, THRESH_STOP , saved_threshs[2]); + if (saved_threshs[3] >= 0) + set_real_thresh(1, THRESH_START, saved_threshs[3]); + return 0; +} + + +/********************************************************************* + * Driver model + */ + +static struct platform_driver tp_driver = { + .suspend = tp_suspend, + .resume = tp_resume, + .driver = { + .name = "smapi", + .owner = THIS_MODULE + }, +}; + + +/********************************************************************* + * Sysfs device model + */ + +/* Attributes in /sys/devices/platform/smapi/ */ + +static DEVICE_ATTR(ac_connected, 0444, show_ac_connected, NULL); +static DEVICE_ATTR(smapi_request, 0600, show_smapi_request, + store_smapi_request); + +static struct attribute *tp_root_attributes[] = { + &dev_attr_ac_connected.attr, + &dev_attr_smapi_request.attr, + NULL +}; +static struct attribute_group tp_root_attribute_group = { + .attrs = tp_root_attributes +}; + +/* Attributes under /sys/devices/platform/smapi/BAT{0,1}/ : + * Every attribute needs to be defined (i.e., statically allocated) for + * each battery, and then referenced in the attribute list of each battery. + * We use preprocessor voodoo to avoid duplicating the list of attributes 4 + * times. The preprocessor output is just normal sysfs attributes code. 
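For orientation, this is roughly what the macros defined below expand to for one read-only attribute of battery 0 (hand-expanded and slightly simplified; the real lists contain one such pair per attribute named in FOREACH_BAT_ATTR):

    /* Hand expansion of DEFINE_BAT_ATTR_R(0, temperature) and
     * REF_BAT_ATTR(0, temperature): */
    static struct bat_device_attribute dev_attr_temperature_0 = {
        .dev_attr = __ATTR(temperature, 0644, show_battery_temperature, 0),
        .bat = 0
    };

    static struct attribute *tp_bat0_attributes[] = {
        /* ...one entry like this per attribute... */
        &dev_attr_temperature_0.dev_attr.attr,
        NULL
    };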
+ */ + +/** + * FOREACH_BAT_ATTR - invoke the given macros on all our battery attributes + * @_BAT: battery number (0 or 1) + * @_ATTR_RW: macro to invoke for each read/write attribute + * @_ATTR_R: macro to invoke for each read-only attribute + */ +#define FOREACH_BAT_ATTR(_BAT, _ATTR_RW, _ATTR_R) \ + _ATTR_RW(_BAT, start_charge_thresh) \ + _ATTR_RW(_BAT, stop_charge_thresh) \ + _ATTR_RW(_BAT, inhibit_charge_minutes) \ + _ATTR_RW(_BAT, force_discharge) \ + _ATTR_R(_BAT, installed) \ + _ATTR_R(_BAT, state) \ + _ATTR_R(_BAT, manufacturer) \ + _ATTR_R(_BAT, model) \ + _ATTR_R(_BAT, barcoding) \ + _ATTR_R(_BAT, chemistry) \ + _ATTR_R(_BAT, voltage) \ + _ATTR_R(_BAT, group0_voltage) \ + _ATTR_R(_BAT, group1_voltage) \ + _ATTR_R(_BAT, group2_voltage) \ + _ATTR_R(_BAT, group3_voltage) \ + _ATTR_R(_BAT, current_now) \ + _ATTR_R(_BAT, current_avg) \ + _ATTR_R(_BAT, charging_max_current) \ + _ATTR_R(_BAT, power_now) \ + _ATTR_R(_BAT, power_avg) \ + _ATTR_R(_BAT, remaining_percent) \ + _ATTR_R(_BAT, remaining_percent_error) \ + _ATTR_R(_BAT, remaining_charging_time) \ + _ATTR_R(_BAT, remaining_running_time) \ + _ATTR_R(_BAT, remaining_running_time_now) \ + _ATTR_R(_BAT, remaining_capacity) \ + _ATTR_R(_BAT, last_full_capacity) \ + _ATTR_R(_BAT, design_voltage) \ + _ATTR_R(_BAT, charging_max_voltage) \ + _ATTR_R(_BAT, design_capacity) \ + _ATTR_R(_BAT, cycle_count) \ + _ATTR_R(_BAT, temperature) \ + _ATTR_R(_BAT, serial) \ + _ATTR_R(_BAT, manufacture_date) \ + _ATTR_R(_BAT, first_use_date) \ + _ATTR_R(_BAT, dump) + +/* Define several macros we will feed into FOREACH_BAT_ATTR: */ + +#define DEFINE_BAT_ATTR_RW(_BAT,_NAME) \ + static struct bat_device_attribute dev_attr_##_NAME##_##_BAT = { \ + .dev_attr = __ATTR(_NAME, 0644, show_battery_##_NAME, \ + store_battery_##_NAME), \ + .bat = _BAT \ + }; + +#define DEFINE_BAT_ATTR_R(_BAT,_NAME) \ + static struct bat_device_attribute dev_attr_##_NAME##_##_BAT = { \ + .dev_attr = __ATTR(_NAME, 0644, show_battery_##_NAME, 0), \ + .bat = _BAT \ + }; + +#define REF_BAT_ATTR(_BAT,_NAME) \ + &dev_attr_##_NAME##_##_BAT.dev_attr.attr, + +/* This provide all attributes for one battery: */ + +#define PROVIDE_BAT_ATTRS(_BAT) \ + FOREACH_BAT_ATTR(_BAT, DEFINE_BAT_ATTR_RW, DEFINE_BAT_ATTR_R) \ + static struct attribute *tp_bat##_BAT##_attributes[] = { \ + FOREACH_BAT_ATTR(_BAT, REF_BAT_ATTR, REF_BAT_ATTR) \ + NULL \ + }; \ + static struct attribute_group tp_bat##_BAT##_attribute_group = { \ + .name = "BAT" #_BAT, \ + .attrs = tp_bat##_BAT##_attributes \ + }; + +/* Finally genereate the attributes: */ + +PROVIDE_BAT_ATTRS(0) +PROVIDE_BAT_ATTRS(1) + +/* List of attribute groups */ + +static struct attribute_group *attr_groups[] = { + &tp_root_attribute_group, + &tp_bat0_attribute_group, + &tp_bat1_attribute_group, + NULL +}; + + +/********************************************************************* + * Init and cleanup + */ + +static struct attribute_group **next_attr_group; /* next to register */ + +static int __init tp_init(void) +{ + int ret; + printk(KERN_INFO "tp_smapi " TP_VERSION " loading...\n"); + + ret = find_smapi_port(); + if (ret < 0) + goto err; + else + smapi_port = ret; + + if (!request_region(smapi_port, 1, "smapi")) { + printk(KERN_ERR "tp_smapi cannot claim port 0x%x\n", + smapi_port); + ret = -ENXIO; + goto err; + } + + if (!request_region(SMAPI_PORT2, 1, "smapi")) { + printk(KERN_ERR "tp_smapi cannot claim port 0x%x\n", + SMAPI_PORT2); + ret = -ENXIO; + goto err_port1; + } + + ret = platform_driver_register(&tp_driver); + if (ret) + goto err_port2; + + 
pdev = platform_device_alloc("smapi", -1); + if (!pdev) { + ret = -ENOMEM; + goto err_driver; + } + + ret = platform_device_add(pdev); + if (ret) + goto err_device_free; + + for (next_attr_group = attr_groups; *next_attr_group; + ++next_attr_group) { + ret = sysfs_create_group(&pdev->dev.kobj, *next_attr_group); + if (ret) + goto err_attr; + } + + printk(KERN_INFO "tp_smapi successfully loaded (smapi_port=0x%x).\n", + smapi_port); + return 0; + +err_attr: + while (--next_attr_group >= attr_groups) + sysfs_remove_group(&pdev->dev.kobj, *next_attr_group); + platform_device_unregister(pdev); +err_device_free: + platform_device_put(pdev); +err_driver: + platform_driver_unregister(&tp_driver); +err_port2: + release_region(SMAPI_PORT2, 1); +err_port1: + release_region(smapi_port, 1); +err: + printk(KERN_ERR "tp_smapi init failed (ret=%d)!\n", ret); + return ret; +} + +static void __exit tp_exit(void) +{ + while (next_attr_group && --next_attr_group >= attr_groups) + sysfs_remove_group(&pdev->dev.kobj, *next_attr_group); + platform_device_unregister(pdev); + platform_driver_unregister(&tp_driver); + release_region(SMAPI_PORT2, 1); + if (smapi_port) + release_region(smapi_port, 1); + + printk(KERN_INFO "tp_smapi unloaded.\n"); +} + +module_init(tp_init); +module_exit(tp_exit); diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig index 4136633..ea75f8f 100644 --- a/drivers/scsi/Kconfig +++ b/drivers/scsi/Kconfig @@ -1613,4 +1613,6 @@ source "drivers/scsi/device_handler/Kconfig" source "drivers/scsi/osd/Kconfig" +source "drivers/scsi/vhba/Kconfig" + endmenu diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile index 1639bf8..9ec0dfa 100644 --- a/drivers/scsi/Makefile +++ b/drivers/scsi/Makefile @@ -155,6 +155,7 @@ obj-$(CONFIG_SCSI_ENCLOSURE) += ses.o obj-$(CONFIG_SCSI_OSD_INITIATOR) += osd/ obj-$(CONFIG_SCSI_HISI_SAS) += hisi_sas/ +obj-$(CONFIG_VHBA) += vhba/ # This goes last, so that "real" scsi devices probe earlier obj-$(CONFIG_SCSI_DEBUG) += scsi_debug.o diff --git b/drivers/scsi/vhba/Kconfig b/drivers/scsi/vhba/Kconfig new file mode 100644 index 0000000..7ccb7d8 --- /dev/null +++ b/drivers/scsi/vhba/Kconfig @@ -0,0 +1,9 @@ +config VHBA + tristate "Virtual (SCSI) Host Bus Adapter" + depends on SCSI + ---help--- + This is the in-kernel part of CDEmu, a CD/DVD-ROM device + emulator. + + This driver can also be built as a module. If so, the module + will be called vhba. diff --git b/drivers/scsi/vhba/Makefile b/drivers/scsi/vhba/Makefile new file mode 100644 index 0000000..a2a3f9d --- /dev/null +++ b/drivers/scsi/vhba/Makefile @@ -0,0 +1,4 @@ +VHBA_VERSION := 20170610 + +obj-$(CONFIG_VHBA) += vhba.o +ccflags-y := -DVHBA_VERSION=\"$(VHBA_VERSION)\" -Werror diff --git b/drivers/scsi/vhba/vhba.c b/drivers/scsi/vhba/vhba.c new file mode 100644 index 0000000..ff30e4c --- /dev/null +++ b/drivers/scsi/vhba/vhba.c @@ -0,0 +1,1076 @@ +/* + * vhba.c + * + * Copyright (C) 2007-2012 Chia-I Wu + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. 
+ * + * You should have received a copy of the GNU General Public License along + * with this program; if not, write to the Free Software Foundation, Inc., + * 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. + */ + +#include + +#include +#include +#include +#include +#if LINUX_VERSION_CODE >= KERNEL_VERSION(4, 11, 0) +#include +#else +#include +#endif +#include +#include +#include +#include +#ifdef CONFIG_COMPAT +#include +#endif +#include +#include +#include +#include +#include + +/* scatterlist.page_link and sg_page() were introduced in 2.6.24 */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 24) +#define USE_SG_PAGE +#include +#endif + +MODULE_AUTHOR("Chia-I Wu"); +MODULE_VERSION(VHBA_VERSION); +MODULE_DESCRIPTION("Virtual SCSI HBA"); +MODULE_LICENSE("GPL"); + +#ifdef DEBUG +#define DPRINTK(fmt, args...) printk(KERN_DEBUG "%s: " fmt, __FUNCTION__, ## args) +#else +#define DPRINTK(fmt, args...) +#endif + +/* scmd_dbg was introduced in 3.15 */ +#ifndef scmd_dbg +#define scmd_dbg(scmd, fmt, a...) \ + dev_dbg(&(scmd)->device->sdev_gendev, fmt, ##a) +#endif + +#ifndef scmd_warn +#define scmd_warn(scmd, fmt, a...) \ + dev_warn(&(scmd)->device->sdev_gendev, fmt, ##a) +#endif + +#define VHBA_MAX_SECTORS_PER_IO 256 +#define VHBA_MAX_ID 32 +#define VHBA_CAN_QUEUE 32 +#define VHBA_INVALID_ID VHBA_MAX_ID + +#define DATA_TO_DEVICE(dir) ((dir) == DMA_TO_DEVICE || (dir) == DMA_BIDIRECTIONAL) +#define DATA_FROM_DEVICE(dir) ((dir) == DMA_FROM_DEVICE || (dir) == DMA_BIDIRECTIONAL) + + +/* SCSI macros were introduced in 2.6.23 */ +#if LINUX_VERSION_CODE < KERNEL_VERSION(2, 6, 23) +#define scsi_sg_count(cmd) ((cmd)->use_sg) +#define scsi_sglist(cmd) ((cmd)->request_buffer) +#define scsi_bufflen(cmd) ((cmd)->request_bufflen) +#define scsi_set_resid(cmd, to_read) {(cmd)->resid = (to_read);} +#endif + +/* 1-argument form of k[un]map_atomic was introduced in 2.6.37-rc1; + 2-argument form was deprecated in 3.4-rc1 */ +#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 37) +#define vhba_kmap_atomic kmap_atomic +#define vhba_kunmap_atomic kunmap_atomic +#else +#define vhba_kmap_atomic(page) kmap_atomic(page, KM_USER0) +#define vhba_kunmap_atomic(page) kunmap_atomic(page, KM_USER0) +#endif + + +enum vhba_req_state { + VHBA_REQ_FREE, + VHBA_REQ_PENDING, + VHBA_REQ_READING, + VHBA_REQ_SENT, + VHBA_REQ_WRITING, +}; + +struct vhba_command { + struct scsi_cmnd *cmd; + int status; + struct list_head entry; +}; + +struct vhba_device { + uint id; + spinlock_t cmd_lock; + struct list_head cmd_list; + wait_queue_head_t cmd_wq; + atomic_t refcnt; +}; + +struct vhba_host { + struct Scsi_Host *shost; + spinlock_t cmd_lock; + int cmd_next; + struct vhba_command commands[VHBA_CAN_QUEUE]; + spinlock_t dev_lock; + struct vhba_device *devices[VHBA_MAX_ID]; + int num_devices; + DECLARE_BITMAP(chgmap, VHBA_MAX_ID); + int chgtype[VHBA_MAX_ID]; + struct work_struct scan_devices; +}; + +#define MAX_COMMAND_SIZE 16 + +struct vhba_request { + __u32 tag; + __u32 lun; + __u8 cdb[MAX_COMMAND_SIZE]; + __u8 cdb_len; + __u32 data_len; +}; + +struct vhba_response { + __u32 tag; + __u32 status; + __u32 data_len; +}; + +static struct vhba_command *vhba_alloc_command (void); +static void vhba_free_command (struct vhba_command *vcmd); + +static struct platform_device vhba_platform_device; + +static struct vhba_device *vhba_device_alloc (void) +{ + struct vhba_device *vdev; + + vdev = kzalloc(sizeof(struct vhba_device), GFP_KERNEL); + if (!vdev) { + return NULL; + } + + vdev->id = VHBA_INVALID_ID; + spin_lock_init(&vdev->cmd_lock); + 
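To make the role of struct vhba_request and struct vhba_response above concrete: userspace (in practice the CDEmu userspace side) read()s queued SCSI commands from /dev/vhba_ctl and write()s the completions back. A hypothetical minimal loop, which completes every command with good status and no data (a real emulator would decode req.cdb and return INQUIRY/READ data, sense, and so on), could look like this; the structure layout is assumed to match the kernel's:

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    /* Userspace mirrors of the structures above (layout assumed to match). */
    struct vhba_request  { uint32_t tag; uint32_t lun; uint8_t cdb[16]; uint8_t cdb_len; uint32_t data_len; };
    struct vhba_response { uint32_t tag; uint32_t status; uint32_t data_len; };

    /* One round trip: read() hands us a queued command plus any
     * write-direction payload; write() sends the completion back. */
    static void serve_one(int fd)
    {
        /* big enough for a request plus VHBA_MAX_SECTORS_PER_IO * 512 bytes */
        static unsigned char buf[256 * 512 + sizeof(struct vhba_request)];
        struct vhba_request req;
        struct vhba_response res;

        if (read(fd, buf, sizeof(buf)) < (ssize_t)sizeof(req))
            return;
        memcpy(&req, buf, sizeof(req));

        memset(&res, 0, sizeof(res));
        res.tag = req.tag;   /* echo the tag so the driver can match the command */
        res.status = 0;      /* good status, no data, no sense */
        res.data_len = 0;
        memcpy(buf, &res, sizeof(res));
        if (write(fd, buf, sizeof(res)) < 0)
            return;
    }

    int main(void)
    {
        int fd = open("/dev/vhba_ctl", O_RDWR);

        if (fd < 0)
            return 1;
        for (;;)
            serve_one(fd);
    }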
INIT_LIST_HEAD(&vdev->cmd_list); + init_waitqueue_head(&vdev->cmd_wq); + atomic_set(&vdev->refcnt, 1); + + return vdev; +} + +static void vhba_device_put (struct vhba_device *vdev) +{ + if (atomic_dec_and_test(&vdev->refcnt)) { + kfree(vdev); + } +} + +static struct vhba_device *vhba_device_get (struct vhba_device *vdev) +{ + atomic_inc(&vdev->refcnt); + + return vdev; +} + +static int vhba_device_queue (struct vhba_device *vdev, struct scsi_cmnd *cmd) +{ + struct vhba_command *vcmd; + unsigned long flags; + + vcmd = vhba_alloc_command(); + if (!vcmd) { + return SCSI_MLQUEUE_HOST_BUSY; + } + + vcmd->cmd = cmd; + + spin_lock_irqsave(&vdev->cmd_lock, flags); + list_add_tail(&vcmd->entry, &vdev->cmd_list); + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + wake_up_interruptible(&vdev->cmd_wq); + + return 0; +} + +static int vhba_device_dequeue (struct vhba_device *vdev, struct scsi_cmnd *cmd) +{ + struct vhba_command *vcmd; + int retval; + unsigned long flags; + + spin_lock_irqsave(&vdev->cmd_lock, flags); + list_for_each_entry(vcmd, &vdev->cmd_list, entry) { + if (vcmd->cmd == cmd) { + list_del_init(&vcmd->entry); + break; + } + } + + /* command not found */ + if (&vcmd->entry == &vdev->cmd_list) { + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + return SUCCESS; + } + + while (vcmd->status == VHBA_REQ_READING || vcmd->status == VHBA_REQ_WRITING) { + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + scmd_dbg(cmd, "wait for I/O before aborting\n"); + schedule_timeout(1); + spin_lock_irqsave(&vdev->cmd_lock, flags); + } + + retval = (vcmd->status == VHBA_REQ_SENT) ? FAILED : SUCCESS; + + vhba_free_command(vcmd); + + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + return retval; +} + +static inline void vhba_scan_devices_add (struct vhba_host *vhost, int id) +{ + struct scsi_device *sdev; + + sdev = scsi_device_lookup(vhost->shost, 0, id, 0); + if (!sdev) { + scsi_add_device(vhost->shost, 0, id, 0); + } else { + dev_warn(&vhost->shost->shost_gendev, "tried to add an already-existing device 0:%d:0!\n", id); + scsi_device_put(sdev); + } +} + +static inline void vhba_scan_devices_remove (struct vhba_host *vhost, int id) +{ + struct scsi_device *sdev; + + sdev = scsi_device_lookup(vhost->shost, 0, id, 0); + if (sdev) { + scsi_remove_device(sdev); + scsi_device_put(sdev); + } else { + dev_warn(&vhost->shost->shost_gendev, "tried to remove non-existing device 0:%d:0!\n", id); + } +} + +static void vhba_scan_devices (struct work_struct *work) +{ + struct vhba_host *vhost = container_of(work, struct vhba_host, scan_devices); + unsigned long flags; + int id, change, exists; + + while (1) { + spin_lock_irqsave(&vhost->dev_lock, flags); + + id = find_first_bit(vhost->chgmap, vhost->shost->max_id); + if (id >= vhost->shost->max_id) { + spin_unlock_irqrestore(&vhost->dev_lock, flags); + break; + } + change = vhost->chgtype[id]; + exists = vhost->devices[id] != NULL; + + vhost->chgtype[id] = 0; + clear_bit(id, vhost->chgmap); + + spin_unlock_irqrestore(&vhost->dev_lock, flags); + + if (change < 0) { + dev_dbg(&vhost->shost->shost_gendev, "trying to remove target 0:%d:0\n", id); + vhba_scan_devices_remove(vhost, id); + } else if (change > 0) { + dev_dbg(&vhost->shost->shost_gendev, "trying to add target 0:%d:0\n", id); + vhba_scan_devices_add(vhost, id); + } else { + /* quick sequence of add/remove or remove/add; we determine + which one it was by checking if device structure exists */ + if (exists) { + /* remove followed by add: remove and (re)add */ + dev_dbg(&vhost->shost->shost_gendev, "trying to 
(re)add target 0:%d:0\n", id); + vhba_scan_devices_remove(vhost, id); + vhba_scan_devices_add(vhost, id); + } else { + /* add followed by remove: no-op */ + dev_dbg(&vhost->shost->shost_gendev, "no-op for target 0:%d:0\n", id); + } + } + } +} + +static int vhba_add_device (struct vhba_device *vdev) +{ + struct vhba_host *vhost; + int i; + unsigned long flags; + + vhost = platform_get_drvdata(&vhba_platform_device); + + vhba_device_get(vdev); + + spin_lock_irqsave(&vhost->dev_lock, flags); + if (vhost->num_devices >= vhost->shost->max_id) { + spin_unlock_irqrestore(&vhost->dev_lock, flags); + vhba_device_put(vdev); + return -EBUSY; + } + + for (i = 0; i < vhost->shost->max_id; i++) { + if (vhost->devices[i] == NULL) { + vdev->id = i; + vhost->devices[i] = vdev; + vhost->num_devices++; + set_bit(vdev->id, vhost->chgmap); + vhost->chgtype[vdev->id]++; + break; + } + } + spin_unlock_irqrestore(&vhost->dev_lock, flags); + + schedule_work(&vhost->scan_devices); + + return 0; +} + +static int vhba_remove_device (struct vhba_device *vdev) +{ + struct vhba_host *vhost; + unsigned long flags; + + vhost = platform_get_drvdata(&vhba_platform_device); + + spin_lock_irqsave(&vhost->dev_lock, flags); + set_bit(vdev->id, vhost->chgmap); + vhost->chgtype[vdev->id]--; + vhost->devices[vdev->id] = NULL; + vhost->num_devices--; + vdev->id = VHBA_INVALID_ID; + spin_unlock_irqrestore(&vhost->dev_lock, flags); + + vhba_device_put(vdev); + + schedule_work(&vhost->scan_devices); + + return 0; +} + +static struct vhba_device *vhba_lookup_device (int id) +{ + struct vhba_host *vhost; + struct vhba_device *vdev = NULL; + unsigned long flags; + + vhost = platform_get_drvdata(&vhba_platform_device); + + if (likely(id < vhost->shost->max_id)) { + spin_lock_irqsave(&vhost->dev_lock, flags); + vdev = vhost->devices[id]; + if (vdev) { + vdev = vhba_device_get(vdev); + } + + spin_unlock_irqrestore(&vhost->dev_lock, flags); + } + + return vdev; +} + +static struct vhba_command *vhba_alloc_command (void) +{ + struct vhba_host *vhost; + struct vhba_command *vcmd; + unsigned long flags; + int i; + + vhost = platform_get_drvdata(&vhba_platform_device); + + spin_lock_irqsave(&vhost->cmd_lock, flags); + + vcmd = vhost->commands + vhost->cmd_next++; + if (vcmd->status != VHBA_REQ_FREE) { + for (i = 0; i < vhost->shost->can_queue; i++) { + vcmd = vhost->commands + i; + + if (vcmd->status == VHBA_REQ_FREE) { + vhost->cmd_next = i + 1; + break; + } + } + + if (i == vhost->shost->can_queue) { + vcmd = NULL; + } + } + + if (vcmd) { + vcmd->status = VHBA_REQ_PENDING; + } + + vhost->cmd_next %= vhost->shost->can_queue; + + spin_unlock_irqrestore(&vhost->cmd_lock, flags); + + return vcmd; +} + +static void vhba_free_command (struct vhba_command *vcmd) +{ + struct vhba_host *vhost; + unsigned long flags; + + vhost = platform_get_drvdata(&vhba_platform_device); + + spin_lock_irqsave(&vhost->cmd_lock, flags); + vcmd->status = VHBA_REQ_FREE; + spin_unlock_irqrestore(&vhost->cmd_lock, flags); +} + +static int vhba_queuecommand_lck (struct scsi_cmnd *cmd, void (*done)(struct scsi_cmnd *)) +{ + struct vhba_device *vdev; + int retval; + + scmd_dbg(cmd, "queue %lu\n", cmd->serial_number); + + vdev = vhba_lookup_device(cmd->device->id); + if (!vdev) { + scmd_dbg(cmd, "no such device\n"); + + cmd->result = DID_NO_CONNECT << 16; + done(cmd); + + return 0; + } + + cmd->scsi_done = done; + retval = vhba_device_queue(vdev, cmd); + + vhba_device_put(vdev); + + return retval; +} + +#ifdef DEF_SCSI_QCMD +DEF_SCSI_QCMD(vhba_queuecommand) +#else +#define 
vhba_queuecommand vhba_queuecommand_lck +#endif + +static int vhba_abort (struct scsi_cmnd *cmd) +{ + struct vhba_device *vdev; + int retval = SUCCESS; + + scmd_warn(cmd, "abort %lu\n", cmd->serial_number); + + vdev = vhba_lookup_device(cmd->device->id); + if (vdev) { + retval = vhba_device_dequeue(vdev, cmd); + vhba_device_put(vdev); + } else { + cmd->result = DID_NO_CONNECT << 16; + } + + return retval; +} + +static struct scsi_host_template vhba_template = { + .module = THIS_MODULE, + .name = "vhba", + .proc_name = "vhba", + .queuecommand = vhba_queuecommand, + .eh_abort_handler = vhba_abort, + .can_queue = VHBA_CAN_QUEUE, + .this_id = -1, + .cmd_per_lun = 1, + .max_sectors = VHBA_MAX_SECTORS_PER_IO, + .sg_tablesize = 256, +}; + +static ssize_t do_request (struct scsi_cmnd *cmd, char __user *buf, size_t buf_len) +{ + struct vhba_request vreq; + ssize_t ret; + + scmd_dbg(cmd, "request %lu, cdb 0x%x, bufflen %d, use_sg %d\n", + cmd->serial_number, cmd->cmnd[0], scsi_bufflen(cmd), scsi_sg_count(cmd)); + + ret = sizeof(vreq); + if (DATA_TO_DEVICE(cmd->sc_data_direction)) { + ret += scsi_bufflen(cmd); + } + + if (ret > buf_len) { + scmd_warn(cmd, "buffer too small (%zd < %zd) for a request\n", buf_len, ret); + return -EIO; + } + + vreq.tag = cmd->serial_number; + vreq.lun = cmd->device->lun; + memcpy(vreq.cdb, cmd->cmnd, MAX_COMMAND_SIZE); + vreq.cdb_len = cmd->cmd_len; + vreq.data_len = scsi_bufflen(cmd); + + if (copy_to_user(buf, &vreq, sizeof(vreq))) { + return -EFAULT; + } + + if (DATA_TO_DEVICE(cmd->sc_data_direction) && vreq.data_len) { + buf += sizeof(vreq); + + if (scsi_sg_count(cmd)) { + unsigned char buf_stack[64]; + unsigned char *kaddr, *uaddr, *kbuf; + struct scatterlist *sg = scsi_sglist(cmd); + int i; + + uaddr = (unsigned char *) buf; + + if (vreq.data_len > 64) { + kbuf = kmalloc(PAGE_SIZE, GFP_KERNEL); + } else { + kbuf = buf_stack; + } + + for (i = 0; i < scsi_sg_count(cmd); i++) { + size_t len = sg[i].length; + +#ifdef USE_SG_PAGE + kaddr = vhba_kmap_atomic(sg_page(&sg[i])); +#else + kaddr = vhba_kmap_atomic(sg[i].page); +#endif + memcpy(kbuf, kaddr + sg[i].offset, len); + vhba_kunmap_atomic(kaddr); + + if (copy_to_user(uaddr, kbuf, len)) { + if (kbuf != buf_stack) { + kfree(kbuf); + } + return -EFAULT; + } + uaddr += len; + } + + if (kbuf != buf_stack) { + kfree(kbuf); + } + } else { + if (copy_to_user(buf, scsi_sglist(cmd), vreq.data_len)) { + return -EFAULT; + } + } + } + + return ret; +} + +static ssize_t do_response (struct scsi_cmnd *cmd, const char __user *buf, size_t buf_len, struct vhba_response *res) +{ + ssize_t ret = 0; + + scmd_dbg(cmd, "response %lu, status %x, data len %d, use_sg %d\n", + cmd->serial_number, res->status, res->data_len, scsi_sg_count(cmd)); + + if (res->status) { + unsigned char sense_stack[SCSI_SENSE_BUFFERSIZE]; + + if (res->data_len > SCSI_SENSE_BUFFERSIZE) { + scmd_warn(cmd, "truncate sense (%d < %d)", SCSI_SENSE_BUFFERSIZE, res->data_len); + res->data_len = SCSI_SENSE_BUFFERSIZE; + } + + /* Copy via temporary buffer on stack in order to avoid problems + with PAX on grsecurity-enabled kernels */ + if (copy_from_user(sense_stack, buf, res->data_len)) { + return -EFAULT; + } + memcpy(cmd->sense_buffer, sense_stack, res->data_len); + + cmd->result = res->status; + + ret += res->data_len; + } else if (DATA_FROM_DEVICE(cmd->sc_data_direction) && scsi_bufflen(cmd)) { + size_t to_read; + + if (res->data_len > scsi_bufflen(cmd)) { + scmd_warn(cmd, "truncate data (%d < %d)\n", scsi_bufflen(cmd), res->data_len); + res->data_len = 
scsi_bufflen(cmd); + } + + to_read = res->data_len; + + if (scsi_sg_count(cmd)) { + unsigned char buf_stack[64]; + unsigned char *kaddr, *uaddr, *kbuf; + struct scatterlist *sg = scsi_sglist(cmd); + int i; + + uaddr = (unsigned char *)buf; + + if (res->data_len > 64) { + kbuf = kmalloc(PAGE_SIZE, GFP_KERNEL); + } else { + kbuf = buf_stack; + } + + for (i = 0; i < scsi_sg_count(cmd); i++) { + size_t len = (sg[i].length < to_read) ? sg[i].length : to_read; + + if (copy_from_user(kbuf, uaddr, len)) { + if (kbuf != buf_stack) { + kfree(kbuf); + } + return -EFAULT; + } + uaddr += len; + +#ifdef USE_SG_PAGE + kaddr = vhba_kmap_atomic(sg_page(&sg[i])); +#else + kaddr = vhba_kmap_atomic(sg[i].page); +#endif + memcpy(kaddr + sg[i].offset, kbuf, len); + vhba_kunmap_atomic(kaddr); + + to_read -= len; + if (to_read == 0) { + break; + } + } + + if (kbuf != buf_stack) { + kfree(kbuf); + } + } else { + if (copy_from_user(scsi_sglist(cmd), buf, res->data_len)) { + return -EFAULT; + } + + to_read -= res->data_len; + } + + scsi_set_resid(cmd, to_read); + + ret += res->data_len - to_read; + } + + return ret; +} + +static inline struct vhba_command *next_command (struct vhba_device *vdev) +{ + struct vhba_command *vcmd; + + list_for_each_entry(vcmd, &vdev->cmd_list, entry) { + if (vcmd->status == VHBA_REQ_PENDING) { + break; + } + } + + if (&vcmd->entry == &vdev->cmd_list) { + vcmd = NULL; + } + + return vcmd; +} + +static inline struct vhba_command *match_command (struct vhba_device *vdev, u32 tag) +{ + struct vhba_command *vcmd; + + list_for_each_entry(vcmd, &vdev->cmd_list, entry) { + if (vcmd->cmd->serial_number == tag) { + break; + } + } + + if (&vcmd->entry == &vdev->cmd_list) { + vcmd = NULL; + } + + return vcmd; +} + +static struct vhba_command *wait_command (struct vhba_device *vdev, unsigned long flags) +{ + struct vhba_command *vcmd; + DEFINE_WAIT(wait); + + while (!(vcmd = next_command(vdev))) { + if (signal_pending(current)) { + break; + } + + prepare_to_wait(&vdev->cmd_wq, &wait, TASK_INTERRUPTIBLE); + + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + schedule(); + + spin_lock_irqsave(&vdev->cmd_lock, flags); + } + + finish_wait(&vdev->cmd_wq, &wait); + if (vcmd) { + vcmd->status = VHBA_REQ_READING; + } + + return vcmd; +} + +static ssize_t vhba_ctl_read (struct file *file, char __user *buf, size_t buf_len, loff_t *offset) +{ + struct vhba_device *vdev; + struct vhba_command *vcmd; + ssize_t ret; + unsigned long flags; + + vdev = file->private_data; + + /* Get next command */ + if (file->f_flags & O_NONBLOCK) { + /* Non-blocking variant */ + spin_lock_irqsave(&vdev->cmd_lock, flags); + vcmd = next_command(vdev); + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + if (!vcmd) { + return -EWOULDBLOCK; + } + } else { + /* Blocking variant */ + spin_lock_irqsave(&vdev->cmd_lock, flags); + vcmd = wait_command(vdev, flags); + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + if (!vcmd) { + return -ERESTARTSYS; + } + } + + ret = do_request(vcmd->cmd, buf, buf_len); + + spin_lock_irqsave(&vdev->cmd_lock, flags); + if (ret >= 0) { + vcmd->status = VHBA_REQ_SENT; + *offset += ret; + } else { + vcmd->status = VHBA_REQ_PENDING; + } + + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + return ret; +} + +static ssize_t vhba_ctl_write (struct file *file, const char __user *buf, size_t buf_len, loff_t *offset) +{ + struct vhba_device *vdev; + struct vhba_command *vcmd; + struct vhba_response res; + ssize_t ret; + unsigned long flags; + + if (buf_len < sizeof(res)) { + return -EIO; + } + + if 
(copy_from_user(&res, buf, sizeof(res))) { + return -EFAULT; + } + + vdev = file->private_data; + + spin_lock_irqsave(&vdev->cmd_lock, flags); + vcmd = match_command(vdev, res.tag); + if (!vcmd || vcmd->status != VHBA_REQ_SENT) { + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + DPRINTK("not expecting response\n"); + return -EIO; + } + vcmd->status = VHBA_REQ_WRITING; + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + ret = do_response(vcmd->cmd, buf + sizeof(res), buf_len - sizeof(res), &res); + + spin_lock_irqsave(&vdev->cmd_lock, flags); + if (ret >= 0) { + vcmd->cmd->scsi_done(vcmd->cmd); + ret += sizeof(res); + + /* don't compete with vhba_device_dequeue */ + if (!list_empty(&vcmd->entry)) { + list_del_init(&vcmd->entry); + vhba_free_command(vcmd); + } + } else { + vcmd->status = VHBA_REQ_SENT; + } + + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + return ret; +} + +static long vhba_ctl_ioctl (struct file *file, unsigned int cmd, unsigned long arg) +{ + struct vhba_device *vdev = file->private_data; + struct vhba_host *vhost; + struct scsi_device *sdev; + + switch (cmd) { + case 0xBEEF001: { + vhost = platform_get_drvdata(&vhba_platform_device); + sdev = scsi_device_lookup(vhost->shost, 0, vdev->id, 0); + + if (sdev) { + int id[4] = { + sdev->host->host_no, + sdev->channel, + sdev->id, + sdev->lun + }; + + scsi_device_put(sdev); + + if (copy_to_user((void *)arg, id, sizeof(id))) { + return -EFAULT; + } + + return 0; + } else { + return -ENODEV; + } + } + } + + return -ENOTTY; +} + +#ifdef CONFIG_COMPAT +static long vhba_ctl_compat_ioctl (struct file *file, unsigned int cmd, unsigned long arg) +{ + unsigned long compat_arg = (unsigned long)compat_ptr(arg); + return vhba_ctl_ioctl(file, cmd, compat_arg); +} +#endif + +static unsigned int vhba_ctl_poll (struct file *file, poll_table *wait) +{ + struct vhba_device *vdev = file->private_data; + unsigned int mask = 0; + unsigned long flags; + + poll_wait(file, &vdev->cmd_wq, wait); + + spin_lock_irqsave(&vdev->cmd_lock, flags); + if (next_command(vdev)) { + mask |= POLLIN | POLLRDNORM; + } + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + return mask; +} + +static int vhba_ctl_open (struct inode *inode, struct file *file) +{ + struct vhba_device *vdev; + int retval; + + DPRINTK("open\n"); + + /* check if vhba is probed */ + if (!platform_get_drvdata(&vhba_platform_device)) { + return -ENODEV; + } + + vdev = vhba_device_alloc(); + if (!vdev) { + return -ENOMEM; + } + + if (!(retval = vhba_add_device(vdev))) { + file->private_data = vdev; + } + + vhba_device_put(vdev); + + return retval; +} + +static int vhba_ctl_release (struct inode *inode, struct file *file) +{ + struct vhba_device *vdev; + struct vhba_command *vcmd; + unsigned long flags; + + DPRINTK("release\n"); + + vdev = file->private_data; + + vhba_device_get(vdev); + vhba_remove_device(vdev); + + spin_lock_irqsave(&vdev->cmd_lock, flags); + list_for_each_entry(vcmd, &vdev->cmd_list, entry) { + WARN_ON(vcmd->status == VHBA_REQ_READING || vcmd->status == VHBA_REQ_WRITING); + + scmd_warn(vcmd->cmd, "device released with command %lu\n", vcmd->cmd->serial_number); + vcmd->cmd->result = DID_NO_CONNECT << 16; + vcmd->cmd->scsi_done(vcmd->cmd); + + vhba_free_command(vcmd); + } + INIT_LIST_HEAD(&vdev->cmd_list); + spin_unlock_irqrestore(&vdev->cmd_lock, flags); + + vhba_device_put(vdev); + + return 0; +} + +static struct file_operations vhba_ctl_fops = { + .owner = THIS_MODULE, + .open = vhba_ctl_open, + .release = vhba_ctl_release, + .read = vhba_ctl_read, + .write = 
vhba_ctl_write, + .poll = vhba_ctl_poll, + .unlocked_ioctl = vhba_ctl_ioctl, +#ifdef CONFIG_COMPAT + .compat_ioctl = vhba_ctl_compat_ioctl, +#endif +}; + +static struct miscdevice vhba_miscdev = { + .minor = MISC_DYNAMIC_MINOR, + .name = "vhba_ctl", + .fops = &vhba_ctl_fops, +}; + +static int vhba_probe (struct platform_device *pdev) +{ + struct Scsi_Host *shost; + struct vhba_host *vhost; + int i; + + shost = scsi_host_alloc(&vhba_template, sizeof(struct vhba_host)); + if (!shost) { + return -ENOMEM; + } + + shost->max_id = VHBA_MAX_ID; + /* we don't support lun > 0 */ + shost->max_lun = 1; + shost->max_cmd_len = MAX_COMMAND_SIZE; + + vhost = (struct vhba_host *)shost->hostdata; + memset(vhost, 0, sizeof(*vhost)); + + vhost->shost = shost; + vhost->num_devices = 0; + spin_lock_init(&vhost->dev_lock); + spin_lock_init(&vhost->cmd_lock); + INIT_WORK(&vhost->scan_devices, vhba_scan_devices); + vhost->cmd_next = 0; + for (i = 0; i < vhost->shost->can_queue; i++) { + vhost->commands[i].status = VHBA_REQ_FREE; + } + + platform_set_drvdata(pdev, vhost); + + if (scsi_add_host(shost, &pdev->dev)) { + scsi_host_put(shost); + return -ENOMEM; + } + + return 0; +} + +static int vhba_remove (struct platform_device *pdev) +{ + struct vhba_host *vhost; + struct Scsi_Host *shost; + + vhost = platform_get_drvdata(pdev); + shost = vhost->shost; + + scsi_remove_host(shost); + scsi_host_put(shost); + + return 0; +} + +static void vhba_release (struct device * dev) +{ + return; +} + +static struct platform_device vhba_platform_device = { + .name = "vhba", + .id = -1, + .dev = { + .release = vhba_release, + }, +}; + +static struct platform_driver vhba_platform_driver = { + .driver = { + .owner = THIS_MODULE, + .name = "vhba", + }, + .probe = vhba_probe, + .remove = vhba_remove, +}; + +static int __init vhba_init (void) +{ + int ret; + + ret = platform_device_register(&vhba_platform_device); + if (ret < 0) { + return ret; + } + + ret = platform_driver_register(&vhba_platform_driver); + if (ret < 0) { + platform_device_unregister(&vhba_platform_device); + return ret; + } + + ret = misc_register(&vhba_miscdev); + if (ret < 0) { + platform_driver_unregister(&vhba_platform_driver); + platform_device_unregister(&vhba_platform_device); + return ret; + } + + return 0; +} + +static void __exit vhba_exit(void) +{ + misc_deregister(&vhba_miscdev); + platform_driver_unregister(&vhba_platform_driver); + platform_device_unregister(&vhba_platform_device); +} + +module_init(vhba_init); +module_exit(vhba_exit); + diff --git a/drivers/tty/Kconfig b/drivers/tty/Kconfig index cc2b4d9..79b8b54 100644 --- a/drivers/tty/Kconfig +++ b/drivers/tty/Kconfig @@ -75,6 +75,19 @@ config VT_CONSOLE_SLEEP def_bool y depends on VT_CONSOLE && PM_SLEEP +config NR_TTY_DEVICES + int "Maximum tty device number" + depends on VT + range 12 63 + default 63 + ---help--- + This option is used to change the number of tty devices in /dev. + The default value is 63. The lowest number you can set is 12, + 63 is also the upper limit so we don't overrun the serial + consoles. + + If unsure, say 63. + config HW_CONSOLE bool depends on VT && !UML diff --git a/drivers/video/logo/Kconfig b/drivers/video/logo/Kconfig index 0037104..00944ea 100644 --- a/drivers/video/logo/Kconfig +++ b/drivers/video/logo/Kconfig @@ -15,71 +15,138 @@ config FB_LOGO_EXTRA depends on FB=y default y if SPU_BASE +config LOGO_RANDOM + bool "Select random available logo" + default y + help + Enable this option to use any available logo randomly at bootup. 
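The LOGO_RANDOM option above is what drives the fb_find_logo() rewrite in drivers/video/logo/logo.c later in this patch: with the option enabled, the boot logo is chosen by taking a random integer modulo the number of compiled-in logos, and with it disabled the last entry of the logo array is used, roughly matching the old last-#ifdef-wins behaviour. A minimal user-space sketch of that selection rule follows; pick_logo(), the logos[] array and the randomize flag are hypothetical stand-ins for illustration only, and the kernel code itself uses get_random_int() through the LOGO_INDEX() macro rather than rand().

#include <stdlib.h>

/* Sketch only: mirrors the LOGO_INDEX() rule added to logo.c below.
 * pick_logo() and its arguments are not part of this patch. */
static const char *pick_logo(const char *const logos[], unsigned int count,
                             int randomize)
{
        if (count == 0)
                return NULL;
        if (randomize)                   /* CONFIG_LOGO_RANDOM=y */
                return logos[rand() % count];
        return logos[count - 1];         /* CONFIG_LOGO_RANDOM=n */
}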
+ +comment "Available logos" + +config LOGO_PCK_CLUT224 + bool "224-color Zen Kernel/Meditating Tux logo" + default y + config LOGO_LINUX_MONO bool "Standard black and white Linux logo" - default y + default n config LOGO_LINUX_VGA16 bool "Standard 16-color Linux logo" - default y + default n config LOGO_LINUX_CLUT224 bool "Standard 224-color Linux logo" - default y + default n config LOGO_BLACKFIN_VGA16 bool "16-colour Blackfin Processor Linux logo" depends on BLACKFIN - default y + default n config LOGO_BLACKFIN_CLUT224 bool "224-colour Blackfin Processor Linux logo" depends on BLACKFIN - default y + default n + +config LOGO_OLDPCK_CLUT224 + bool "224-color Old Zen Kernel logo" + depends on LOGO + default n + +config LOGO_ARCH_CLUT224 + bool "224-color Arch Linux logo" + depends on LOGO + default n + +config LOGO_GENTOO_CLUT224 + bool "224-color Gentoo Linux logo" + depends on LOGO + default n + +config LOGO_EXHERBO_CLUT224 + bool "224-color Exherbo Linux logo" + depends on LOGO + default n + +config LOGO_SLACKWARE_CLUT224 + bool "224-color Slackware Linux logo" + depends on LOGO + default n + +config LOGO_DEBIAN_CLUT224 + bool "224-color Debian Linux logo" + depends on LOGO + default n + +config LOGO_FEDORASIMPLE_CLUT224 + bool "224-color Fedora Simple Linux logo" + depends on LOGO + default n + +config LOGO_FEDORAGLOSSY_CLUT224 + bool "224-color Fedora Glossy Linux logo" + depends on LOGO + default n + +config LOGO_TITS_CLUT224 + bool "224-color Tits logo" + depends on LOGO + default n + +config LOGO_BSD_CLUT224 + bool "224-color BSD Devil logo" + depends on LOGO + default n + +config LOGO_FBSD_CLUT224 + bool "224-color FreeBSD logo" + depends on LOGO + default n config LOGO_DEC_CLUT224 bool "224-color Digital Equipment Corporation Linux logo" depends on MACH_DECSTATION || ALPHA - default y + default n config LOGO_MAC_CLUT224 bool "224-color Macintosh Linux logo" depends on MAC - default y + default n config LOGO_PARISC_CLUT224 bool "224-color PA-RISC Linux logo" depends on PARISC - default y + default n config LOGO_SGI_CLUT224 bool "224-color SGI Linux logo" depends on SGI_IP22 || SGI_IP27 || SGI_IP32 - default y + default n config LOGO_SUN_CLUT224 bool "224-color Sun Linux logo" depends on SPARC - default y + default n config LOGO_SUPERH_MONO bool "Black and white SuperH Linux logo" depends on SUPERH - default y + default n config LOGO_SUPERH_VGA16 bool "16-color SuperH Linux logo" depends on SUPERH - default y + default n config LOGO_SUPERH_CLUT224 bool "224-color SuperH Linux logo" depends on SUPERH - default y + default n config LOGO_M32R_CLUT224 bool "224-color M32R Linux logo" depends on M32R - default y + default n endif # LOGO diff --git a/drivers/video/logo/Makefile b/drivers/video/logo/Makefile index 6194373..501b363 100644 --- a/drivers/video/logo/Makefile +++ b/drivers/video/logo/Makefile @@ -7,6 +7,18 @@ obj-$(CONFIG_LOGO_LINUX_VGA16) += logo_linux_vga16.o obj-$(CONFIG_LOGO_LINUX_CLUT224) += logo_linux_clut224.o obj-$(CONFIG_LOGO_BLACKFIN_CLUT224) += logo_blackfin_clut224.o obj-$(CONFIG_LOGO_BLACKFIN_VGA16) += logo_blackfin_vga16.o +obj-$(CONFIG_LOGO_PCK_CLUT224) += logo_zen_clut224.o +obj-$(CONFIG_LOGO_OLDPCK_CLUT224) += logo_oldzen_clut224.o +obj-$(CONFIG_LOGO_ARCH_CLUT224) += logo_arch_clut224.o +obj-$(CONFIG_LOGO_GENTOO_CLUT224) += logo_gentoo_clut224.o +obj-$(CONFIG_LOGO_EXHERBO_CLUT224) += logo_exherbo_clut224.o +obj-$(CONFIG_LOGO_SLACKWARE_CLUT224) += logo_slackware_clut224.o +obj-$(CONFIG_LOGO_DEBIAN_CLUT224) += logo_debian_clut224.o 
+obj-$(CONFIG_LOGO_FEDORASIMPLE_CLUT224) += logo_fedorasimple_clut224.o +obj-$(CONFIG_LOGO_FEDORAGLOSSY_CLUT224) += logo_fedoraglossy_clut224.o +obj-$(CONFIG_LOGO_TITS_CLUT224) += logo_tits_clut224.o +obj-$(CONFIG_LOGO_BSD_CLUT224) += logo_bsd_clut224.o +obj-$(CONFIG_LOGO_FBSD_CLUT224) += logo_fbsd_clut224.o obj-$(CONFIG_LOGO_DEC_CLUT224) += logo_dec_clut224.o obj-$(CONFIG_LOGO_MAC_CLUT224) += logo_mac_clut224.o obj-$(CONFIG_LOGO_PARISC_CLUT224) += logo_parisc_clut224.o diff --git a/drivers/video/logo/logo.c b/drivers/video/logo/logo.c index 4d50bfd..d3a1335 100644 --- a/drivers/video/logo/logo.c +++ b/drivers/video/logo/logo.c @@ -1,26 +1,127 @@ /* - * Linux logo to be displayed on boot - * - * Copyright (C) 1996 Larry Ewing (lewing@isc.tamu.edu) - * Copyright (C) 1996,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz) - * Copyright (C) 2001 Greg Banks - * Copyright (C) 2001 Jan-Benedict Glaw - * Copyright (C) 2003 Geert Uytterhoeven - */ +* Linux logo to be displayed on boot +* +* Copyright (C) 1996 Larry Ewing (lewing@isc.tamu.edu) +* Copyright (C) 1996,1998 Jakub Jelinek (jj@sunsite.mff.cuni.cz) +* Copyright (C) 2001 Greg Banks +* Copyright (C) 2001 Jan-Benedict Glaw +* Copyright (C) 2003 Geert Uytterhoeven +*/ #include <linux/linux_logo.h> #include <linux/stddef.h> #include <linux/module.h> +#ifdef CONFIG_LOGO_RANDOM +#include <linux/random.h> +#endif + #ifdef CONFIG_M68K #include <asm/setup.h> #endif + static bool nologo; module_param(nologo, bool, 0); MODULE_PARM_DESC(nologo, "Disables startup logo"); +/* Monochromatic logos */ +static const struct linux_logo *logo_mono[] = { +#ifdef CONFIG_LOGO_LINUX_MONO + &logo_linux_mono, /* Generic Linux logo */ +#endif +#ifdef CONFIG_LOGO_SUPERH_MONO + &logo_superh_mono, /* SuperH Linux logo */ +#endif +}; + +/* 16-colour logos */ +static const struct linux_logo *logo_vga16[] = { +#ifdef CONFIG_LOGO_LINUX_VGA16 + &logo_linux_vga16, /* Generic Linux logo */ +#endif +#ifdef CONFIG_LOGO_BLACKFIN_VGA16 + &logo_blackfin_vga16, /* Blackfin processor logo */ +#endif +#ifdef CONFIG_LOGO_SUPERH_VGA16 + &logo_superh_vga16, /* SuperH Linux logo */ +#endif +}; + +/* 224-colour logos */ +static const struct linux_logo *logo_clut224[] = { +#ifdef CONFIG_LOGO_LINUX_CLUT224 + &logo_linux_clut224, /* Generic Linux logo */ +#endif +#ifdef CONFIG_LOGO_BLACKFIN_CLUT224 + &logo_blackfin_clut224, /* Blackfin Linux logo */ +#endif +#ifdef CONFIG_LOGO_DEC_CLUT224 + &logo_dec_clut224, /* DEC Linux logo on MIPS/MIPS64 or ALPHA */ +#endif +#ifdef CONFIG_LOGO_MAC_CLUT224 + &logo_mac_clut224, /* Macintosh Linux logo on m68k */ +#endif +#ifdef CONFIG_LOGO_PARISC_CLUT224 + &logo_parisc_clut224, /* PA-RISC Linux logo */ +#endif +#ifdef CONFIG_LOGO_SGI_CLUT224 + &logo_sgi_clut224, /* SGI Linux logo on MIPS/MIPS64 */ +#endif +#ifdef CONFIG_LOGO_SUN_CLUT224 + &logo_sun_clut224, /* Sun Linux logo */ +#endif +#ifdef CONFIG_LOGO_SUPERH_CLUT224 + &logo_superh_clut224, /* SuperH Linux logo */ +#endif +#ifdef CONFIG_LOGO_M32R_CLUT224 + &logo_m32r_clut224, /* M32R Linux logo */ +#endif +#ifdef CONFIG_LOGO_PCK_CLUT224 + &logo_zen_clut224, /* Zen-Kernel logo */ +#endif +#ifdef CONFIG_LOGO_OLDPCK_CLUT224 + &logo_oldzen_clut224, /* Old Zen-Kernel logo */ +#endif +#ifdef CONFIG_LOGO_ARCH_CLUT224 + &logo_arch_clut224, /* Arch Linux logo */ +#endif +#ifdef CONFIG_LOGO_GENTOO_CLUT224 + &logo_gentoo_clut224, /* Gentoo Linux logo */ +#endif +#ifdef CONFIG_LOGO_EXHERBO_CLUT224 + &logo_exherbo_clut224, /* Exherbo Linux logo */ +#endif +#ifdef CONFIG_LOGO_SLACKWARE_CLUT224 + &logo_slackware_clut224, /* Slackware Linux logo */ +#endif +#ifdef CONFIG_LOGO_DEBIAN_CLUT224 +
&logo_debian_clut224, /* Debian Linux logo */ +#endif +#ifdef CONFIG_LOGO_FEDORASIMPLE_CLUT224 + &logo_fedorasimple_clut224, /* Fedora Simple logo */ +#endif +#ifdef CONFIG_LOGO_FEDORAGLOSSY_CLUT224 + &logo_fedoraglossy_clut224, /* Fedora Glossy logo */ +#endif +#ifdef CONFIG_LOGO_TITS_CLUT224 + &logo_tits_clut224, /* Tits logo */ +#endif +#ifdef CONFIG_LOGO_BSD_CLUT224 + &logo_bsd_clut224, /* BSD logo */ +#endif +#ifdef CONFIG_LOGO_FBSD_CLUT224 + &logo_fbsd_clut224, /* Free BSD logo */ +#endif +}; + +#ifdef CONFIG_LOGO_RANDOM +#define LOGO_INDEX(s) (get_random_int() % s) +#else +#define LOGO_INDEX(s) (s - 1) +#endif + /* * Logos are located in the initdata, and will be freed in kernel_init. * Use late_init to mark the logos as freed to prevent any further use. @@ -43,75 +144,30 @@ late_initcall_sync(fb_logo_late_init); const struct linux_logo * __ref fb_find_logo(int depth) { const struct linux_logo *logo = NULL; + const struct linux_logo **array = NULL; + unsigned int size; if (nologo || logos_freed) return NULL; + /* Select logo array */ if (depth >= 1) { -#ifdef CONFIG_LOGO_LINUX_MONO - /* Generic Linux logo */ - logo = &logo_linux_mono; -#endif -#ifdef CONFIG_LOGO_SUPERH_MONO - /* SuperH Linux logo */ - logo = &logo_superh_mono; -#endif + array = logo_mono; + size = ARRAY_SIZE(logo_mono); } - if (depth >= 4) { -#ifdef CONFIG_LOGO_LINUX_VGA16 - /* Generic Linux logo */ - logo = &logo_linux_vga16; -#endif -#ifdef CONFIG_LOGO_BLACKFIN_VGA16 - /* Blackfin processor logo */ - logo = &logo_blackfin_vga16; -#endif -#ifdef CONFIG_LOGO_SUPERH_VGA16 - /* SuperH Linux logo */ - logo = &logo_superh_vga16; -#endif + array = logo_vga16; + size = ARRAY_SIZE(logo_vga16); } - if (depth >= 8) { -#ifdef CONFIG_LOGO_LINUX_CLUT224 - /* Generic Linux logo */ - logo = &logo_linux_clut224; -#endif -#ifdef CONFIG_LOGO_BLACKFIN_CLUT224 - /* Blackfin Linux logo */ - logo = &logo_blackfin_clut224; -#endif -#ifdef CONFIG_LOGO_DEC_CLUT224 - /* DEC Linux logo on MIPS/MIPS64 or ALPHA */ - logo = &logo_dec_clut224; -#endif -#ifdef CONFIG_LOGO_MAC_CLUT224 - /* Macintosh Linux logo on m68k */ - if (MACH_IS_MAC) - logo = &logo_mac_clut224; -#endif -#ifdef CONFIG_LOGO_PARISC_CLUT224 - /* PA-RISC Linux logo */ - logo = &logo_parisc_clut224; -#endif -#ifdef CONFIG_LOGO_SGI_CLUT224 - /* SGI Linux logo on MIPS/MIPS64 */ - logo = &logo_sgi_clut224; -#endif -#ifdef CONFIG_LOGO_SUN_CLUT224 - /* Sun Linux logo */ - logo = &logo_sun_clut224; -#endif -#ifdef CONFIG_LOGO_SUPERH_CLUT224 - /* SuperH Linux logo */ - logo = &logo_superh_clut224; -#endif -#ifdef CONFIG_LOGO_M32R_CLUT224 - /* M32R Linux logo */ - logo = &logo_m32r_clut224; -#endif + array = logo_clut224; + size = ARRAY_SIZE(logo_clut224); } + + /* We've got some logos to display */ + if (array && size) + logo = array[LOGO_INDEX(size)]; + return logo; } EXPORT_SYMBOL_GPL(fb_find_logo); diff --git b/drivers/video/logo/logo_arch_clut224.ppm b/drivers/video/logo/logo_arch_clut224.ppm new file mode 100644 index 0000000..e4d8daa --- /dev/null +++ b/drivers/video/logo/logo_arch_clut224.ppm @@ -0,0 +1,43204 @@ +P3 +# CREATOR: GIMP PNM Filter Version 1.1 +120 120 +255 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 
+0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +4 +7 +0 +4 +7 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +10 +110 +160 +33 +122 +166 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +7 +146 +208 +27 +151 +213 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +4 +45 +68 +13 +147 +209 +13 +147 +209 +17 +73 +101 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 
+0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +24 +137 +199 +13 +147 +209 +13 +147 +209 +54 +155 +212 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +7 +10 +13 +147 +209 +13 +147 +209 +13 +147 +209 +48 +164 +219 +3 +23 +31 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +33 +133 +182 +13 +147 +209 +13 +147 +209 +13 +147 +209 +24 +150 +212 +40 +160 +215 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 
+2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +65 +166 +216 +5 +11 +14 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +3 +76 +109 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +65 +166 +216 +54 +136 +181 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +42 +161 +216 +48 +164 +219 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 
+0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +9 +18 +24 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +17 +148 +210 +48 +164 +219 +40 +90 +118 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +37 +132 +189 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +65 +166 +216 +48 +164 +219 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +4 +7 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +48 +164 +219 +48 +164 +219 +11 +35 +49 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 
+2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +25 +87 +120 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +27 +151 +213 +62 +163 +214 +62 +163 +214 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +65 +166 +216 +48 +164 +219 +0 +4 +7 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +4 +7 +17 +148 +210 +17 +148 +210 +17 +148 +210 +17 +148 +210 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +13 +147 +209 +62 +163 +214 +48 +164 +219 +36 +86 +115 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 
+0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +10 +110 +160 +21 +149 +211 +21 +149 +211 +17 +148 +210 +17 +148 +210 +17 +148 +210 +17 +148 +210 +17 +148 +210 +13 +147 +209 +13 +147 +209 +13 +147 +209 +49 +151 +208 +48 +164 +219 +65 +166 +216 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +17 +148 +210 +24 +150 +212 +21 +149 +211 +13 +147 +209 +13 +147 +209 +21 +149 +211 +17 +148 +210 +17 +148 +210 +17 +148 +210 +17 +148 +210 +17 +148 +210 +13 +147 +209 +65 +166 +216 +65 +166 +216 +11 +35 +49 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +74 +106 +31 +155 +211 +24 +150 +212 +24 +150 +212 +27 +151 +213 +17 +148 +210 +21 +149 +211 +13 +147 +209 +13 +147 +209 +49 +151 +208 +21 +149 +211 +17 +148 +210 +17 +148 +210 +48 +164 +219 +65 +166 +216 +62 +163 +214 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 
+2 +0 +0 +2 +0 +24 +150 +212 +17 +148 +210 +31 +155 +211 +17 +148 +210 +17 +148 +210 +2 +145 +206 +31 +155 +211 +31 +155 +211 +27 +151 +213 +21 +149 +211 +49 +151 +208 +21 +149 +211 +21 +149 +211 +49 +151 +208 +65 +166 +216 +65 +166 +216 +5 +18 +28 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +2 +27 +39 +27 +151 +213 +27 +151 +213 +49 +151 +208 +24 +150 +212 +24 +150 +212 +31 +155 +211 +24 +150 +212 +21 +149 +211 +24 +150 +212 +27 +151 +213 +27 +151 +213 +2 +145 +206 +21 +149 +211 +21 +149 +211 +72 +171 +221 +62 +163 +214 +48 +154 +203 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +143 +204 +31 +155 +211 +31 +155 +211 +24 +150 +212 +31 +155 +211 +27 +151 +213 +24 +150 +212 +24 +150 +212 +31 +155 +211 +31 +155 +211 +21 +149 +211 +31 +155 +211 +31 +155 +211 +21 +149 +211 +27 +151 +213 +62 +163 +214 +67 +167 +217 +67 +167 +217 +0 +4 +7 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +4 +7 +31 +155 +211 +31 +155 +211 +31 +155 +211 +31 +155 
+211 +24 +150 +212 +24 +150 +212 +24 +150 +212 +21 +149 +211 +31 +155 +211 +29 +152 +214 +21 +149 +211 +17 +148 +210 +31 +155 +211 +17 +148 +210 +24 +150 +212 +49 +151 +208 +67 +167 +217 +67 +167 +217 +49 +132 +177 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +29 +108 +153 +51 +153 +210 +51 +153 +210 +31 +155 +211 +31 +155 +211 +24 +150 +212 +24 +150 +212 +31 +155 +211 +31 +155 +211 +24 +150 +212 +24 +150 +212 +27 +151 +213 +27 +151 +213 +49 +151 +208 +21 +149 +211 +21 +149 +211 +27 +151 +213 +67 +167 +217 +67 +167 +217 +67 +167 +217 +0 +4 +7 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +49 +151 +208 +31 +155 +211 +51 +153 +210 +29 +152 +214 +32 +153 +215 +32 +153 +215 +29 +152 +214 +31 +155 +211 +24 +150 +212 +24 +150 +212 +24 +150 +212 +31 +155 +211 +24 +150 +212 +31 +155 +211 +31 +155 +211 +27 +151 +213 +24 +150 +212 +62 +163 +214 +68 +168 +218 +68 +168 +218 +40 +90 +118 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +71 +103 +33 +156 +212 +31 +155 +211 +31 +155 +211 +31 +155 +211 +49 +151 +208 
+29 +152 +214 +31 +155 +211 +31 +155 +211 +32 +153 +215 +24 +150 +212 +24 +150 +212 +31 +155 +211 +31 +155 +211 +24 +150 +212 +31 +155 +211 +31 +155 +211 +24 +150 +212 +29 +152 +214 +78 +167 +212 +68 +168 +218 +65 +166 +216 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +31 +155 +211 +31 +155 +211 +33 +156 +212 +32 +153 +215 +32 +153 +215 +51 +153 +210 +51 +153 +210 +31 +155 +211 +49 +151 +208 +29 +152 +214 +29 +152 +214 +31 +155 +211 +32 +153 +215 +31 +155 +211 +31 +155 +211 +24 +150 +212 +24 +150 +212 +24 +150 +212 +31 +155 +211 +68 +168 +218 +69 +169 +219 +71 +170 +220 +16 +56 +73 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +11 +35 +49 +33 +156 +212 +33 +156 +212 +53 +154 +211 +53 +154 +211 +31 +155 +211 +31 +155 +211 +31 +155 +211 +51 +153 +210 +31 +155 +211 +31 +155 +211 +31 +155 +211 +51 +153 +210 +51 +153 +210 +29 +152 +214 +51 +153 +210 +31 +155 +211 +31 +155 +211 +24 +150 +212 +31 +155 +211 +31 +155 +211 +69 +169 +219 +69 +169 +219 +65 +166 +216 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +49 +151 +208 +32 +153 +215 +33 +156 +212 +32 +153 +215 +54 +155 
+212 +31 +155 +211 +32 +153 +215 +32 +153 +215 +33 +156 +212 +31 +155 +211 +32 +153 +215 +29 +152 +214 +51 +153 +210 +32 +153 +215 +31 +155 +211 +31 +155 +211 +29 +152 +214 +31 +155 +211 +51 +153 +210 +31 +155 +211 +31 +155 +211 +68 +168 +218 +82 +170 +215 +69 +169 +219 +12 +30 +39 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +18 +23 +54 +155 +212 +54 +155 +212 +33 +156 +212 +33 +156 +212 +33 +156 +212 +33 +156 +212 +38 +159 +214 +53 +154 +211 +53 +154 +211 +33 +156 +212 +31 +155 +211 +31 +155 +211 +33 +156 +212 +32 +153 +215 +32 +153 +215 +31 +155 +211 +31 +155 +211 +51 +153 +210 +31 +155 +211 +31 +155 +211 +31 +155 +211 +44 +162 +217 +82 +170 +215 +71 +170 +220 +62 +163 +214 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +41 +147 +203 +42 +161 +216 +38 +159 +214 +54 +155 +212 +38 +159 +214 +38 +159 +214 +32 +153 +215 +33 +156 +212 +32 +153 +215 +32 +153 +215 +54 +155 +212 +54 +155 +212 +32 +153 +215 +31 +155 +211 +31 +155 +211 +32 +153 +215 +32 +153 +215 +33 +156 +212 +31 +155 +211 +49 +151 +208 +31 +155 +211 +49 +151 +208 +32 +153 +215 +72 +171 +221 +71 +170 +220 +71 +170 +220 +8 +27 +35 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +2 +9 +12 +42 
+161 +216 +42 +161 +216 +54 +155 +212 +54 +155 +212 +42 +161 +216 +54 +155 +212 +54 +155 +212 +38 +159 +214 +38 +159 +214 +38 +159 +214 +54 +155 +212 +38 +159 +214 +33 +156 +212 +33 +156 +212 +33 +156 +212 +53 +154 +211 +32 +153 +215 +32 +153 +215 +31 +155 +211 +33 +156 +212 +32 +153 +215 +32 +153 +215 +32 +153 +215 +62 +163 +214 +82 +170 +215 +72 +171 +221 +55 +159 +209 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +42 +139 +189 +44 +162 +217 +44 +162 +217 +54 +155 +212 +42 +161 +216 +42 +161 +216 +54 +155 +212 +54 +155 +212 +42 +161 +216 +38 +159 +214 +54 +155 +212 +54 +155 +212 +40 +160 +215 +54 +155 +212 +33 +156 +212 +38 +159 +214 +54 +155 +212 +54 +155 +212 +33 +156 +212 +33 +156 +212 +53 +154 +211 +33 +156 +212 +31 +155 +211 +31 +155 +211 +54 +155 +212 +72 +171 +221 +72 +171 +221 +71 +170 +220 +5 +11 +14 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +4 +7 +44 +162 +217 +55 +159 +209 +54 +155 +212 +54 +155 +212 +54 +155 +212 +54 +155 +212 +54 +155 +212 +42 +161 +216 +42 +161 +216 +42 +161 +216 +42 +161 +216 +42 +161 +216 +54 +155 +212 +38 +159 +214 +38 +159 +214 +38 +159 +214 +38 +159 +214 +38 +159 +214 +33 +156 +212 +54 +155 +212 +54 +155 +212 +33 +156 +212 +53 +154 +211 +32 +153 +215 +33 +156 +212 +68 +168 +218 +74 +172 +223 +72 +171 +221 +48 +154 +203 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 
+2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +13 +51 +74 +40 +160 +215 +55 +159 +209 +42 +161 +216 +42 +161 +216 +54 +155 +212 +40 +160 +215 +62 +163 +214 +44 +162 +217 +42 +161 +216 +54 +155 +212 +54 +155 +212 +42 +161 +216 +54 +155 +212 +54 +155 +212 +54 +155 +212 +54 +155 +212 +54 +155 +212 +38 +159 +214 +54 +155 +212 +40 +160 +215 +32 +153 +215 +54 +155 +212 +33 +156 +212 +38 +159 +214 +33 +156 +212 +31 +155 +211 +83 +172 +217 +74 +172 +223 +82 +170 +215 +0 +7 +10 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +24 +70 +92 +48 +164 +219 +62 +163 +214 +54 +155 +212 +54 +155 +212 +55 +159 +209 +46 +163 +218 +54 +155 +212 +46 +163 +218 +55 +159 +209 +54 +155 +212 +54 +155 +212 +42 +161 +216 +42 +161 +216 +42 +161 +216 +42 +161 +216 +42 +161 +216 +54 +155 +212 +38 +159 +214 +54 +155 +212 +38 +159 +214 +40 +160 +215 +32 +153 +215 +54 +155 +212 +54 +155 +212 +33 +156 +212 +72 +171 +221 +74 +172 +223 +74 +172 +223 +53 +145 +195 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +23 +77 +105 +44 +162 +217 +48 +164 +219 +44 +162 +217 +54 +155 +212 +42 +161 +216 +46 +163 +218 +62 +163 +214 +40 +160 +215 +55 +159 +209 +44 +162 +217 +40 +160 +215 +33 +156 +212 +54 +155 +212 +54 +155 +212 +42 +161 +216 +54 +155 +212 +42 +161 +216 +42 +161 +216 +42 +161 +216 +54 +155 +212 +38 +159 +214 +38 +159 +214 +40 +160 +215 +40 +160 +215 +33 +156 +212 +74 +172 +223 +83 +172 +217 +74 +172 +223 +0 +7 +10 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 +0 +2 +0 
+[... raw image data omitted: this hunk adds several hundred lines of ASCII RGB triplets (e.g. "0 2 0", "62 163 214"), apparently the pixel listing of a PPM-style logo image bundled with the patch; the values were collapsed onto a few lines during extraction and are reproduced here only as this placeholder ...]