Brain Fuck Scheduler
The Brain Fuck Scheduler (BFS) is a process scheduler designed for the Linux kernel in August 2009 as an alternative to the Completely Fair Scheduler (CFS) and the O(1) scheduler. BFS was created by veteran kernel programmer Con Kolivas.
The objective of BFS, compared to other schedulers, is a simpler algorithm that does not require adjustment of heuristics or tuning parameters to tailor performance to a specific type of computational workload. Kolivas asserted that these tunable parameters were difficult for the average user to understand, especially in terms of interactions of multiple parameters with each other, and claimed that the use of such tuning parameters could often result in improved performance in a specific targeted type of computation at the cost of worse performance in the general case. BFS has been reported to improve responsiveness on Linux desktop computers with fewer than 16 cores.
Shortly following its introduction, the new scheduler made headlines within the Linux community, appearing on Slashdot, with reviews in Linux Magazine and Linux Pro Magazine. Although reviews of its performance and responsiveness varied, Con Kolivas never intended for BFS to be integrated into the mainline kernel.
Theoretical design and efficiency
BFS uses a doubly linked list data structure, but treats it like a queue. Task insertion is O(1).:ln 119-120 Searching for the next task to execute is O(n) worst case.:ln 209 It uses a single global runqueue which all CPUs use. Tasks with higher scheduling priorities get executed first.:ln 4146–4161 Tasks are ordered (or distributed) and chosen based on the virtual deadline formula in all policies except for the realtime and Isochronous priority classes.
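A minimal sketch (not BFS source) of that behavior, assuming a bare circular doubly linked list keyed only by virtual deadline (the real queue also orders by priority tiers): insertion appends in O(1), while picking the next task scans all n entries.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative task node in the single global runqueue. */
struct task {
    uint64_t deadline;        /* virtual deadline in niffies */
    struct task *next, *prev;
};

struct runqueue {
    struct task head;         /* sentinel of the circular doubly linked list */
};

static void rq_init(struct runqueue *rq)
{
    rq->head.next = rq->head.prev = &rq->head;
}

/* O(1): append a woken task at the tail of the queue. */
static void enqueue(struct runqueue *rq, struct task *t)
{
    t->prev = rq->head.prev;
    t->next = &rq->head;
    rq->head.prev->next = t;
    rq->head.prev = t;
}

/* O(n) worst case: scan the whole queue for the earliest virtual deadline. */
static struct task *pick_next(struct runqueue *rq)
{
    struct task *best = NULL;
    for (struct task *t = rq->head.next; t != &rq->head; t = t->next)
        if (!best || t->deadline < best->deadline)
            best = t;
    return best;
}
```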
The execution behavior is still a weighted variation of the round-robin scheduler, especially when tasks have the same priority below the Isochronous policy.:ln 1193-1195, 334-335 The user-tuneable round robin interval (time slice) is 6 milliseconds by default, chosen as the minimal jitter just below that detectable by humans.:ln 306 Kolivas claimed that anything below 6 ms was pointless and anything above 300 ms for the round robin timeslice is fruitless in terms of throughput.:ln 314-320 This important tuneable can tailor the round robin scheduler as a trade-off between throughput and latency.:ln 229–320 All tasks get the same time slice, with the exception of realtime FIFO, which is assumed to have an infinite time slice.:ln 1646, 1212–1218, 4062, 3910
Kolivas explained that the reason he chose the doubly linked list mono-runqueue over the multi-runqueue (round robin:par. 3) priority array per CPU used in his RSDL scheduler was to make fairness across multiple CPUs simple and to remove the complexity of each runqueue in a multi-runqueue design having to maintain its own latencies and [task] fairness.:ln 81-92 He claimed that deterministic latencies were guaranteed with BFS in his later iteration, MuQSS.:ln 471–472 He also recognized a possible lock contention problem (related to the altering, removal, and creation of task node data):ln 126–144 with increasing CPUs, and the overhead of the O(n) next-task lookup.:ln 472–478 MuQSS tried to resolve those problems.
The virtual deadline formula yields a future deadline time: the round robin timeslice scaled by the nice level, offset by the current time (in niffy units, or nanosecond jiffies, an internal kernel time counter).:ln 4023, 4063 The virtual deadline only suggests the order but doesn't guarantee that a task will run exactly at the scheduled future niffy.:ln 161-163
First a prio ratios lookup table is created.:ln 8042-8044 It is based on a recursive sequence that increases 10% with each nice level.:ln 161 It follows a parabolic pattern if graphed, and the niced tasks are distributed as a moving squared function from 0 to 39 (corresponding to highest through lowest nice priority) as the domain and 128 to 5089 as the range.:ln 177-179, 120, 184-185 The moving part comes from the variable in the virtual deadline formula, as Kolivas hinted.
The task's nice-to-index mapping function maps nice −20…19 to index 0…39, used as the input to the prio ratio lookup table. This mapping function is the TASK_USER_PRIO() macro in sched.h in the kernel header. The internal kernel implementation differs slightly, with a static priority range of 100–139, but users see it as −20…19 nice.
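A minimal sketch of the table construction and mapping just described, using the 10% integer recurrence cited above (the helper names are illustrative; the kernel expresses the mapping via the TASK_USER_PRIO() macro):

```c
#include <stdio.h>

#define MAX_RT_PRIO 100  /* static priorities 0-99 are reserved for realtime */
#define PRIO_RANGE   40  /* nice -20..19 maps to indices 0..39 */

static int prio_ratios[PRIO_RANGE];

/* Build the lookup table: 128 at nice -20, growing ~10% per nice level
 * in integer math, reaching 5089 at nice +19. */
static void init_prio_ratios(void)
{
    prio_ratios[0] = 128;
    for (int i = 1; i < PRIO_RANGE; i++)
        prio_ratios[i] = prio_ratios[i - 1] * 11 / 10;
}

/* Nice-to-index mapping: nice -20..19 -> static prio 100..139 -> index 0..39,
 * mirroring the TASK_USER_PRIO() macro. */
static int task_user_prio(int nice)
{
    int static_prio = MAX_RT_PRIO + 20 + nice;
    return static_prio - MAX_RT_PRIO;
}

int main(void)
{
    init_prio_ratios();
    printf("nice -20 -> ratio %d\n", prio_ratios[task_user_prio(-20)]); /* 128 */
    printf("nice   0 -> ratio %d\n", prio_ratios[task_user_prio(0)]);   /* 836 */
    printf("nice  19 -> ratio %d\n", prio_ratios[task_user_prio(19)]);  /* 5089 */
    return 0;
}
```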
$$\mathrm{VD}(nice) = niffies + prio\_ratio\big(\mathrm{TASK\_USER\_PRIO}(nice)\big) \cdot rr\_interval \cdot M$$

or, in the alternative form*,

$$\mathrm{VD}(nice) \approx niffies + \frac{prio\_ratio\big(\mathrm{TASK\_USER\_PRIO}(nice)\big)}{128} \cdot rr\_interval \cdot 10^{6}$$

where $\mathrm{VD}$ is the virtual deadline in u64 integer nanoseconds as a function of nice, $niffies$ is the current time in niffies (as in nanosecond jiffies, an internal kernel time counter), $prio\_ratio$ is the prio ratio table lookup as a function of index, $\mathrm{TASK\_USER\_PRIO}$ is the task's nice-to-index mapping function, $rr\_interval$ is the round robin timeslice in milliseconds, and $M$ is the constant of 1 millisecond in terms of nanoseconds divided by 128, a latency-reducing approximation of the conversion factor $10^6/2^7$; Kolivas uses the base-2 constant $2^{20}/2^7 = 2^{13}$, which has approximately that scale.:ln 1173–1174 Smaller values of $prio\_ratio$ mean that the virtual deadline is earlier, corresponding to negative nice values; larger values push the virtual deadline later, corresponding to positive nice values. The scheduler applies this formula whenever the timeslice expires.:ln 5087
128, a power of two, plays the role in base 2 that 100 plays in base 10, possibly a "pseudo 100";:ln 3415 similarly, 115 out of 128 corresponds to roughly 90 out of 100. Kolivas uses 128 for "fast shifts,":ln 3846, 1648, 3525 since division by a power of two is a right shift.
* The alternative formula is presented for ease of understanding. All math is done in integer math, so precision loss would otherwise be great; this is possibly why Kolivas deferred the division by 128 to the largest constant involved, $2^{20}$, which is an exact multiple of 128 and leaves no remainder.
Representative values, computed from the formula above with the default 6 ms timeslice:

|Nice|Virtual deadline in timeslices relative to niffies|Virtual deadline in exact seconds relative to niffies|
|−20|1.0|0.006|
|−10|2.53|0.0152|
|0|6.53|0.0392|
|+10|16.88|0.1013|
|+19|39.76|0.2386|
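As a worked check of the nice 0 row (ratio 836 from the prio ratios table, default 6 ms timeslice), a minimal computation under Kolivas' base-2 constant:

$$\mathrm{VD} - niffies = 836 \times 6 \times 2^{13}\ \mathrm{ns} = 41{,}091{,}072\ \mathrm{ns} \approx 41.1\ \mathrm{ms}$$

versus the exact form $\tfrac{836}{128} \times 6\ \mathrm{ms} \approx 39.2\ \mathrm{ms}$ shown in the table; the base-2 approximation overshoots by a factor of $2^{20}/10^6 \approx 1.049$.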
Scheduling policies
BFS uses scheduling policies to determine how much of the CPU tasks may use. It uses four scheduling tiers (called scheduling policies or scheduling classes), ordered from best to worst, which determine how tasks are selected,:ln 4146-4161 with the ones on top executed first.
Each task has a special value called a prio. In the v0.462 edition (used in the -ck 4.0 kernel patchset), there are a total of 103 "priority queues" (aka prio), or allowed values that it can take. No actual special data structure was used as the priority queue, only the doubly linked list runqueue itself. A lower prio value means a task is more important and gets executed first.
Realtime policy
The realtime policy was designed for realtime tasks. This policy implies that running tasks cannot be interrupted (i.e. preempted) by less important tasks or lower priority policy tiers. Priority classes considered under the realtime policy by the scheduler are those marked SCHED_RR and SCHED_FIFO.:ln 351, 1150 The scheduler treats realtime round robin (SCHED_RR) and realtime FIFO (SCHED_FIFO) differently.:ln 3881-3934
On fork, the process priority is demoted to the normal policy.:ln 2708
Isochronous policy
The Isochronous policy was designed for near-realtime performance for non-root users.:ln 325
A task under this policy can be demoted to the normal policy:ln 336 when it exceeds a tuneable resource-handling percentage (70% by default:ln 343, 432) of 5 seconds scaled to the number of online CPUs and the timer resolution, plus 1 tick.:ln 343, 3844–3859, 1167, 338, 1678, 4770–4783, 734 The formula was altered in MuQSS due to the multi-runqueue design. The exact formulas are:

$$T_{\mathrm{iso}} = F \cdot 5 \cdot N + 1 \qquad \text{(BFS)}$$

$$T_{\mathrm{iso}} = F \cdot 5 + 1 \qquad \text{(MuQSS, per runqueue)}$$

where $T_{\mathrm{iso}}$ is the total number of isochronous ticks, $F$ is the timer frequency, $N$ is the number of online CPUs, and $P$ is the tuneable resource-handling percentage, not in decimal but as a whole number; a task is demoted once its isochronous ticks exceed $T_{\mathrm{iso}} \cdot P / 100$. The timer frequency is set to 250 by default and editable in the kernel, but usually tuned to 100 Hz for servers and 1000 Hz for interactive desktops; 250 is the balanced value. Setting $P$ to 100 made tasks behave as realtime, 0 made them not pseudo-realtime, and anything in the middle was pseudo-realtime.:ln 346–348
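As a worked arithmetic check (assuming a hypothetical 4-CPU machine with the defaults $F = 250$ Hz and $P = 70$):

$$T_{\mathrm{iso}} = 250 \cdot 5 \cdot 4 + 1 = 5001 \text{ ticks}, \qquad T_{\mathrm{iso}} \cdot P/100 \approx 3500 \text{ ticks}$$

that is, about 14 s of CPU time within the rolling 5-second window, 70% of the 20 CPU-seconds the 4 CPUs provide in that window, after which an Isochronous task is demoted to the normal policy.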
The task with the earliest virtual deadline is chosen for execution, but when multiple Isochronous tasks exist, they are scheduled round robin, allowing each task in turn to run for the tuneable round robin interval (6 ms by default) with a fair, equal chance, without considering the nice level.:ln 334
Normal policy
The design laid out one priority queue, and tasks are chosen for execution based on the earliest virtual deadline.
Idle priority policy
The idle priority policy was designed for background processes such as distributed programs and transcoders, so that foreground processes or those in higher scheduling policies can run uninterrupted.:ln 363–368
Preemption
Preemption can occur when a newly ready task with a higher-priority policy (i.e. a lower prio value) has an earlier virtual deadline than the currently running task, which will be descheduled and put at the back of the queue.:ln 169–175 Descheduled means that its virtual deadline is updated.:ln 165–166 A task's time is refilled to the maximum round robin quantum when it has used up all its time.:ln 4057–4062, 5856 If the scheduler finds a task at a higher prio with the earliest virtual deadline, it will execute in place of the less important currently running task only if all logical CPUs (including hyperthreaded cores / SMT threads) are busy. The scheduler will delay preemption as long as possible if there are unused logical CPUs.
If a task is marked with the idle priority policy, it cannot preempt at all, not even other idle-policy tasks; instead it relies on cooperative multitasking.:ln 2341–2344
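A minimal sketch (not BFS source) of the preemption test described above, assuming all logical CPUs are busy (otherwise the task is placed on an idle CPU and preemption is delayed); field names are illustrative:

```c
#include <stdbool.h>
#include <stdint.h>

struct task {
    int prio;            /* lower value = more important policy tier */
    bool idle_policy;    /* idle-policy tasks never preempt anything */
    uint64_t deadline;   /* virtual deadline in niffies */
};

/* Returns true if the newly ready task should preempt the running one. */
static bool should_preempt(const struct task *ready, const struct task *running)
{
    if (ready->idle_policy)
        return false;                    /* cooperative multitasking only */
    if (ready->prio < running->prio)
        return true;                     /* more important policy tier */
    if (ready->prio == running->prio &&
        ready->deadline < running->deadline)
        return true;                     /* earlier virtual deadline */
    return false;
}
```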
Task placement, multiple cores
When the scheduler discovers a waking task on a non-unicore system, it needs to determine which logical CPU to run it on. The scheduler favors most the idle hyperthreaded cores (or idle SMT threads) on the same CPU that the task last executed on,:ln 261 then the other idle core of a multicore CPU,:ln 262 then other CPUs on the same NUMA node,:ln 267, 263–266, 255–258 then busy hyperthreaded cores / SMT threads / logical CPUs that can be preempted on the same NUMA node,:ln 265–267 then the other (remote) NUMA node;:ln 268–270 candidates are ranked on a preference list.:ln 255–274 This special scan exists to minimize the latency overhead resulting from migrating the task.:ln 245, 268–272
The preemption order is similar: hyperthreaded core / SMT units on the same multicore CPU first, then the other core in the multicore CPU, then the other CPUs on the same NUMA node.:ln 265-267 When the scheduler scans for a task to preempt on the other, remote NUMA node, the candidate is any busy thread with equal or lower prio or a later virtual deadline, assuming that all logical CPUs (including hyperthreaded cores / SMT threads) in the machine are busy.:ln 270 The scheduler has to scan for a suitable task with a lower or possibly equal priority policy (with a later virtual deadline if necessary) to preempt, and must avoid logical CPUs running a task with a higher priority policy, which it cannot preempt. Local preemption ranks higher than scanning for a remote idle NUMA unit.:ln 265–269
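A minimal sketch (not BFS source) of the locality-ranked placement scan in the two paragraphs above; the locality levels and names are illustrative assumptions:

```c
#include <stdbool.h>

/* Preference ranking, most local (best) first. */
enum cpu_locality {
    LOC_SAME_SMT = 0,    /* idle SMT sibling of the CPU the task last ran on */
    LOC_SAME_CORE,       /* other idle core in the same multicore package */
    LOC_SAME_NODE,       /* idle CPU on the same NUMA node */
    LOC_BUSY_SAME_NODE,  /* busy logical CPU on the same node (preemption) */
    LOC_REMOTE_NODE,     /* remote NUMA node, considered last */
};

struct cpu_info {
    bool idle;
    int locality;        /* enum cpu_locality, relative to the waking task */
};

/* Pick the lowest-ranked (most local) idle CPU; fall back to preemption. */
static int pick_cpu(const struct cpu_info *cpus, int ncpus)
{
    int best = -1;
    int best_rank = LOC_REMOTE_NODE + 1;

    for (int i = 0; i < ncpus; i++) {
        if (cpus[i].idle && cpus[i].locality < best_rank) {
            best = i;
            best_rank = cpus[i].locality;
        }
    }
    return best;  /* -1 means no idle CPU: scan for a task to preempt */
}
```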
When a task is involuntarily preempted while the CPU is slowed down as a result of kernel-mediated CPU frequency scaling (aka the CPU frequency governor), the task is specially marked "sticky", except those marked as realtime policy.:ln 2085 Marked sticky indicates that the task still has unused time and is restricted to executing on the same CPU.:ln 233–243 The task is marked sticky whenever the CPU scaling governor has scaled the CPU to a slower speed.:ln 2082–2107, 8840–8848 The idled sticky task will return either to executing at full speed by chance or to being rescheduled on the best idle CPU that is not the CPU the task ran on.:ln 2082–2086, 239–242, 2068–2079 It is not desirable to migrate the task elsewhere, but to let it idle instead, because of the increased latency brought about by the overhead of migrating the task to another CPU or NUMA node.:ln 228, 245 This sticky feature was removed in the last iteration of BFS (v0.512), corresponding to Kolivas' patchset 4.8-ck1, and did not exist in MuQSS.
A privileged user can change the priority policy of a process with the schedtool program,:ln 326, 373 or a program can do so itself.:ln 336 The priority class can be manipulated at the code level with a syscall like sched_setscheduler, which for realtime policies is available only to root and which schedtool uses.
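For illustration, a minimal user-space sketch of such a call; SCHED_ISO's numeric value is reserved as 4 in mainline headers but only implemented by BFS/MuQSS kernels, so treating it as 4 here is an assumption:

```c
#include <sched.h>
#include <stdio.h>

#ifndef SCHED_ISO
#define SCHED_ISO 4   /* assumption: value used by the -ck patches */
#endif

int main(void)
{
    struct sched_param sp = { .sched_priority = 0 };  /* ISO ignores rt priority */

    /* pid 0 = the calling process; fails with EINVAL on kernels without BFS/MuQSS */
    if (sched_setscheduler(0, SCHED_ISO, &sp) == -1) {
        perror("sched_setscheduler");
        return 1;
    }
    return 0;
}
```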
In a contemporary study, the author compared BFS to CFS using Linux kernel v3.6.2 and several performance-based endpoints. The purpose of the study was to evaluate the Completely Fair Scheduler (CFS) in the vanilla Linux kernel and BFS in the corresponding kernel patched with the ck1 patchset. Seven different machines were used to see whether differences exist and to what degree they scale, using performance-based metrics. The number of logical CPUs ranged from 1 to 16. These endpoints were never factors in the primary design goals of BFS. The results were encouraging.
Kernels patched with the ck1 patchset, including BFS, outperformed the vanilla kernel using CFS at nearly all of the performance-based benchmarks tested. Further study with a larger test set could be conducted, but based on the small test set of 7 PCs evaluated, these increases in process queuing efficiency/speed are, on the whole, independent of CPU type (mono, dual, quad, hyperthreaded, etc.), CPU architecture (32-bit and 64-bit), and CPU multiplicity (mono or dual socket).
Moreover, on several "modern" CPUs, such as the Intel Core 2 Duo and Core i7, that represent common workstations and laptops, BFS consistently outperformed the CFS in the vanilla kernel at all benchmarks. Efficiency and speed gains were small to moderate.
Adoption
BFS is the default scheduler for the following desktop Linux distributions:
- Sabayon 7
- PCLinuxOS 2010
- Zenwalk 6.4
- GalliumOS
Additionally, BFS has been added to an experimental branch of Google's Android development repository. It was not included in the Froyo release after blind testing did not show an improved user experience.
MuQSS
Theoretical design and efficiency
MuQSS uses a bidirectional, statically arrayed, 8-level skip list, and tasks are ordered by static priority [queues] (referring to the scheduling policy) and a virtual deadline.:ln 519, 525, 537, 588, 608 Eight levels were chosen to fit the array in a cacheline.:ln 523 The doubly linked data structure design was chosen to speed up task removal: removing a task takes only O(1) with a doubly linked skip list, versus the original design by William Pugh, which takes O(n) worst case.:ln 458
Task insertion is O(log n).:ln 458 The next-task-for-execution lookup is O(k), where k is the number of CPUs.:ln 589–590, 603, 5125 The next task for execution is O(1) per runqueue,:ln 5124 but the scheduler examines every other runqueue to maintain task fairness among CPUs, for latency or balancing (to maximize CPU usage and cache coherency on the same NUMA node over accesses across NUMA nodes), so it is ultimately O(k).:ln 591–593, 497–501, 633–656 The maximum number of tasks it can handle is 64k tasks per runqueue per CPU.:ln 521 It uses multiple task runqueues, in some configurations one runqueue per CPU, whereas its predecessor BFS used only one task runqueue for all CPUs.
Tasks are ordered as a gradient in the skip list in such a way that realtime policy priority comes first and idle policy priority comes last.:ln 2356-2358 Normal and idle priority policy tasks still get sorted by virtual deadline, which uses nice values.:ln 2353 Realtime and Isochronous policy tasks are run in FIFO order, ignoring nice values.:ln 2350–2351 New tasks with the same key are placed in FIFO order, meaning that newer tasks get placed at the end of the list (i.e. the topmost node vertically), and tasks at the 0th level or at the front-bottom get executed first, before those nearest to the top vertically and those furthest from the head node.:ln 2351–2352, 590 The key used for insertion sorting is either the static priority:ln 2345, 2365 or the virtual deadline.:ln 2363
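A minimal sketch (not MuQSS source) of the lookup described above: each per-CPU runqueue keeps its tasks in a skip list keyed on (policy, virtual deadline), so the best local candidate is an O(1) peek at the bottom-level head, and choosing globally means comparing the heads of all k runqueues. All structure and field names are illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

struct skiplist_node {
    uint64_t key;                  /* encodes policy tier and virtual deadline */
    struct skiplist_node *next[8]; /* 8 levels, sized to fit a cacheline */
    struct skiplist_node *prev[8]; /* doubly linked for O(1) removal */
};

struct runqueue {
    struct skiplist_node head;     /* sentinel; head.next[0] is the best task */
};

/* O(k): peek the head of each of the k runqueues and take the smallest key. */
static struct skiplist_node *pick_next(struct runqueue *rqs, int k)
{
    struct skiplist_node *best = NULL;

    for (int i = 0; i < k; i++) {
        struct skiplist_node *cand = rqs[i].head.next[0];  /* O(1) peek */
        if (cand && (!best || cand->key < best->key))
            best = cand;
    }
    return best;
}
```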
The user can choose to share runqueues among multicore CPUs or have one runqueue per logical CPU.:ln 947–1006 The rationale for the shared-runqueue design was to reduce latency, at a trade-off in throughput.:ln 947–1006
A new behavior introduced by MuQSS was the use of the high resolution timer for sub-millisecond accuracy when timeslices are used up, resulting in rescheduling of tasks.:ln 618-630, 3829–3851, 3854-3865, 5316
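As a rough illustration of that mechanism, a minimal kernel-side sketch (not MuQSS source) of arming a Linux high resolution timer for timeslice expiry; the callback body and the surrounding runqueue plumbing are assumptions:

```c
#include <linux/hrtimer.h>
#include <linux/ktime.h>

static struct hrtimer slice_timer;

/* Fires when the running task's timeslice is used up. */
static enum hrtimer_restart slice_expired(struct hrtimer *t)
{
    /* mark the running task for rescheduling here (omitted) */
    return HRTIMER_NORESTART;
}

/* Arm the timer for the remaining timeslice with nanosecond resolution,
 * instead of waiting for the next scheduler tick. */
static void arm_slice_timer(u64 timeslice_ns)
{
    hrtimer_init(&slice_timer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
    slice_timer.function = slice_expired;
    hrtimer_start(&slice_timer, ns_to_ktime(timeslice_ns), HRTIMER_MODE_REL);
}
```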
- "-ck hacking: BFS version 0.512, linux-4.8-ck1, MuQSS for linux-4.8". ck-hack.blogspot.com. 2016-10-03. Retrieved 2016-11-10.
- "Con Kolivas Introduces New BFS Scheduler » Linux Magazine". Linuxpromagazine.com. 2009-09-02. Retrieved 2013-10-30.
- "FAQs about BFS v0.330". Ck.kolivas.org. Retrieved 2013-10-30.
- "CPU Schedulers Compared" (PDF). Repo-ck.com. Retrieved 2013-10-30.
- "Con Kolivas Returns, With a Desktop-Oriented Linux Scheduler". Slashdot. Retrieved 2013-10-30.
- "Ingo Molnar Tests New BF Scheduler". Linux Magazine. 2009-09-08. Retrieved 2013-10-30.
- "4.0-sched-bfs-462.patch". Con Kolivas. 2015-04-16. Retrieved 2019-01-29.
- "The Rotating Staircase Deadline Scheduler". corbet. 2007-03-06. Retrieved 2019-01-30.
- "sched-rsdl-0.26.patch". Con Kolivas. Retrieved 2019-01-30.
- "0001-MultiQueue-Skiplist-Scheduler-version-v0.173.patch". Con Kolivas. 2018-08-27. Retrieved 2019-01-29.
- "The Linux Scheduler". Moshe Bar. 2000-04-01. Retrieved 2019-01-29.
- "schedtool.c". freek. 2017-07-17. Retrieved 2019-01-30.
- "Sabayon 7 Brings Linux Heaven". Ostatic.com. Retrieved 2013-10-30.
- "2010 Edition is now available for download". PCLinuxOS. 2013-10-22. Retrieved 2013-10-30.
- "Zenwalk 6.4 is ready ! - Releases - News". Zenwalk.org. Archived from the original on 2013-10-23. Retrieved 2013-10-30.
- "About GalliumOS - GalliumOS Wiki". wiki.galliumos.org. Retrieved 2018-06-08.
- Archived September 22, 2009, at the Wayback Machine
- "CyanogenMod 5 for the G1/ADP1". Lwn.net. Retrieved 2013-10-30.
- "ck-hacking: linux-4.8-ck2, MuQSS version 0.114". ck-hack.blogspot.com. 2016-10-21. Retrieved 2016-11-10.