What is the MFQ scheduling algorithm

The operating system kernel's approach to thread scheduling is well worth studying, because its ideas and methods have counterparts in everyday life and in business system development, for example:

  • The supermarket checkout is divided into several lanes: a fast lane for customers carrying baskets and a slow lane for customers pushing full shopping carts, which reduces how long basket customers wait to pay. Imagine a customer who bought only a few items queued behind one with a full cart; they may well lose patience.

  • The bank hall's queue-ticket system: VIP customers and ordinary customers draw numbers into separate queues served by several counters, so VIP customers are served somewhat faster while ordinary customers see slower response.

  • In a content processing system, when a processing backlog builds up, we likewise schedule by article source, processing high-priority articles first.

  • In a converged storage system, host I/O requests from different services are given different priorities, so that the I/O performance (concurrency and latency) of important services is protected first.

Two interesting scenarios

  • We all know the saying: the process is the operating system's basic unit of "resource allocation", and the thread is its basic unit of "scheduling". Read literally, this suggests the process has little to do with scheduling, but in fact the kernel prefers to keep picking threads from the same process when it schedules (other conditions, e.g. priority, being equal). In business application development, the number of worker threads is usually set to the number of logical CPU cores so as to make maximal use of the CPUs. Without the kernel strategy just described, this setting would not mean much: a server rarely runs only one service, so the business threads of all the processes added together exceed the core count anyway.

  • The operating system provides a general-purpose scheduling capability, but in some special scenarios we care exactly how our threads are mapped onto the CPU cores. For example, in a database system, to guarantee that a high-priority thread (say, the transaction-processing thread) is handled quickly, we want it neither preempted by other threads (which fragments its time slices) nor migrated to another CPU core (which invalidates its CPU cache). Such threads are therefore bound to dedicated CPU cores, and only the remaining cores take part in general scheduling; a sketch of such binding is shown below.
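On Linux, this kind of binding is done with the sched_setaffinity system call. Below is a minimal Go sketch using the golang.org/x/sys/unix wrapper; the choice of core 3 is purely illustrative.

```go
package main

import (
	"fmt"
	"runtime"

	"golang.org/x/sys/unix"
)

// pinToCore binds the calling OS thread to a single CPU core.
func pinToCore(core int) error {
	// Keep this goroutine on its current OS thread, so the affinity
	// mask below stays in effect for the code that runs after it.
	runtime.LockOSThread()

	var set unix.CPUSet
	set.Zero()
	set.Set(core)
	// pid 0 means "the calling thread".
	return unix.SchedSetaffinity(0, &set)
}

func main() {
	if err := pinToCore(3); err != nil {
		fmt.Println("pin failed:", err)
		return
	}
	fmt.Println("this thread now runs only on core 3")
	// ... latency-critical work would go here ...
}
```

Note the runtime.LockOSThread call: without it, the Go runtime could move the goroutine to a different OS thread, and the affinity mask would no longer cover it.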

The operating system kernel's absolute control over the CPU

For the operating system kernel, the first question to answer is how it keeps the right to schedule, that is, how it prevents CPU cores from being occupied indefinitely by applications. In an earlier article (a summary of process fundamentals in operating systems) it was mentioned that the kernel regains control of the CPU through hardware interrupts (such as the timer interrupt) and then decides, according to a series of complex scheduling rules, whether to let the current thread continue or to switch to another thread (possibly from a different process).

I also wrote a small program that starts 8 threads, each executing a dead-loop computation. My laptop's CPU is a 2 GHz quad-core Intel Core i7: 1 CPU, 4 cores, and with Hyper-Threading enabled it can run 8 hardware threads at once. After the program starts, the top command shows its CPU usage holding around 700% for a long time, yet I can still write articles and browse the web without noticeable impact. A sketch of such a program follows.
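A minimal Go version of that experiment (illustrative, not the exact original program); the kernel's timer interrupt keeps preempting these busy threads, which is why the rest of the desktop stays responsive:

```go
package main

// spin burns CPU forever and never voluntarily yields.
func spin() {
	for i := uint64(0); ; i++ {
	}
}

func main() {
	// One spinning goroutine per hardware thread of a 4-core/8-thread CPU.
	for i := 0; i < 8; i++ {
		go spin()
	}
	select {} // park main forever while the workers keep all 8 hardware threads busy
}
```

Run it and watch `top`: the process can approach 800% on an 8-hardware-thread machine; the ~700% observed above leaves headroom for everything else running on the system.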

General scheduling algorithms

Strictly speaking, the object the OS kernel schedules is the thread (this is true even for Go: developers write goroutines, but the kernel's scheduling object is still the thread). However, describing scheduling algorithms in terms of "threads" clashes with actual thread behavior in some scenarios, and the algorithms themselves are general, so a more abstract term, the "task", is used to explain them. A scheduling algorithm is evaluated on several aspects: resource throughput, average response time (latency), fairness, and the extra overhead introduced by scheduling itself (overhead).

1. First-in, first-out algorithm (FIFO)

Schedule tasks in the order they arrive: run one task to completion, then run the next.

Advantages:

  • Minimal task-switch overhead (no switching happens during task execution, so switch overhead is zero)

  • Maximal throughput (with no task-switch overhead, throughput is the highest, other things being equal)

  • Simple and fair in its own way (first come, first served)

Disadvantage:

  • High average response time: a task that needs only 10 milliseconds, queued behind a task that takes 1000 milliseconds, completes after 1010 milliseconds; most of its time is spent waiting.

Applicable scenario: tasks in the queue all take roughly the same amount of time.
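A tiny worked version of that arithmetic, assuming the long task in front takes 1000 ms:

```go
package main

import "fmt"

func main() {
	durations := []int{1000, 10} // ms, in arrival order: the long task first
	clock, total := 0, 0
	for _, d := range durations {
		clock += d // FIFO: run each task to completion before starting the next
		total += clock
		fmt.Printf("%4d ms task finishes at t=%d ms\n", d, clock)
	}
	fmt.Printf("average response time: %d ms\n", total/len(durations))
}
```

The 10 ms task finishes at t=1010 ms, and the average response time is 1005 ms, almost all of it waiting.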

2. Shortest job first algorithm (SJF)

Schedule tasks by how long they take, shortest first. The premise of this algorithm is that every task's duration is known in advance, which is unrealistic in practice. Moreover, what really matters is the remaining execution time. For example, suppose an hour-long task is almost finished, with only 10 seconds left, when a 1-minute task arrives; the scheduler keeps running the hour-long task, because its remaining 10 seconds are shorter than 1 minute. For this reason the algorithm is also called shortest remaining time first (SRTF). It resolves the FIFO dilemma of short tasks waiting behind long ones.

Advantage:

  • Low average response time: this is a bit of a cheat, because long tasks are postponed indefinitely; the tasks that actually complete and enter the statistics are the quick ones, so the measured average response time is inevitably low.

Disadvantages:

  • Long tasks are postponed, which is unfair, and they can easily be starved

  • Frequent task switches add extra scheduling overhead

Applicable scenario: there is almost no suitable scenario.
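For completeness, the decision rule in the hour-long-task example boils down to "pick the smallest remaining time"; a minimal sketch:

```go
package main

import "fmt"

type task struct {
	name      string
	remaining int // seconds of work left
}

// pickNext returns the runnable task with the least remaining time,
// which is the entire SRTF decision rule.
func pickNext(runnable []task) task {
	best := runnable[0]
	for _, t := range runnable[1:] {
		if t.remaining < best.remaining {
			best = t
		}
	}
	return best
}

func main() {
	runnable := []task{
		{"hour-long task with 10 s left", 10},
		{"newly arrived 1-minute task", 60},
	}
	fmt.Println("run next:", pickNext(runnable).name)
	// Output: run next: hour-long task with 10 s left
}
```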

3. Time-slice rotation algorithm (round robin)

Give each task in the queue a time slice. The task at the head of the queue runs first; when its slice runs out, it is inserted at the tail of the queue and the scheduler switches to the next task. This solves the starvation of long tasks seen in SJF. The algorithm sits between FIFO and SJF: if the time slice is large enough, it degenerates into FIFO; if the slice is small enough (and we ignore task-switch overhead), tasks complete in order of duration, shortest to longest (compared with SJF, though, a task's absolute completion time also depends on how many tasks are in the queue).

Advantages:

  • Every task gets a fair share of CPU time

  • Short tasks finish reasonably quickly even when queued behind long tasks

Disadvantages:

  • The scheduling overhead from task switching is significant, since each switch must save and restore task context (the CPU cache especially: after a few switches the cached data is evicted and must be reloaded from memory, which is very time-consuming)

  • The time slice is hard to set well (too short, and the scheduling overhead is large; too long, and in the extreme the algorithm degenerates into FIFO)

Applicable scenario: tasks in the queue take roughly the same time (e.g. processing multiple video streams)

Inapplicable scenario: queues that mix compute tasks with I/O tasks
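A sketch of round robin with a fixed slice; task names and durations are illustrative:

```go
package main

import "fmt"

type task struct {
	name      string
	remaining int // ms of work left
}

func main() {
	const slice = 100 // ms time slice
	queue := []task{{"A", 250}, {"B", 50}, {"C", 120}}
	clock := 0
	for len(queue) > 0 {
		t := queue[0]
		queue = queue[1:]
		run := slice
		if t.remaining < run {
			run = t.remaining
		}
		clock += run
		t.remaining -= run
		if t.remaining > 0 {
			queue = append(queue, t) // slice used up: go to the back of the queue
		} else {
			fmt.Printf("%s finishes at t=%d ms\n", t.name, clock)
		}
	}
}
```

The short task B finishes at t=150 ms even though it arrived behind the 250 ms task A (which finishes last, at t=420 ms).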

4. Maximum-minimum fairness algorithm (Max-Min Fairness)

As the name suggests, this algorithm is designed to guarantee fairness; it originally comes from bandwidth allocation and control in communication networks. Example scenario: a resource has a capacity of 10 and four users (A, B, C, D) need it, demanding 2, 2.6, 4, and 5 respectively. Allocation takes several rounds:

  • Round 1: all four users (A, B, C, D) participate, each receiving an average of 2.5. A needs only 2, leaving a surplus of 0.5; B, C, and D did not get enough in this round and must wait for the next.

  • Round 2: three users (B, C, D) participate, sharing A's surplus, so each now has 2.5 + 0.5/3 ≈ 2.6667. B needs only 2.6, leaving about 0.0667 for the remaining two; C and D are still short and must wait.

  • Round 3: two users (C, D) participate, each now having 2.5 + 0.5/3 + 0.0667/2 ≈ 2.7. C and D each end up with about 2.7. (If new capacity becomes available later, it can go to C and D until their demands are met.)

In summary: share the resource evenly; whoever has more than they need returns the surplus for redistribution; whoever still does not have enough waits. To reflect differences in importance, weights are introduced on this basis, giving the weighted max-min fairness algorithm. Example scenario: a resource has a capacity of 16, users A, B, C, D have weights 5, 8, 1, 2 (the minimum weight granularity is 1; a larger value means a higher weight), and they demand 4, 2, 10, and 4 respectively. Allocation again takes several rounds (both examples are reproduced by the sketch after this list):

  • Round 1: A, B, C, D are allocated resources by weight: 5, 8, 1, 2. A has a surplus of 1 and B a surplus of 6; C and D are short, so the process continues.

  • Round 2: the combined surplus of 7 from A and B is split between C and D by weight, 7 × 1/3 and 7 × 2/3. Adding round 1, C has 1 + 7/3 ≈ 3.33 and D has 2 + 14/3 ≈ 6.67. C is still unsatisfied; D is satisfied, with a surplus of 2 + 14/3 - 4 ≈ 2.67 that can go to C.

  • Round 3: C now has 1 + 7/3 + 2.67 = 6, still short of its demand of 10, so it waits. (If new capacity arrives later, it can be allocated to C until its demand is met.)
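A short allocator reproduces both examples. This is a sketch under the assumption that the resource is divisible (floating point) and that surplus from satisfied users is recycled round by round:

```go
package main

import "fmt"

// maxMin shares capacity among users in proportion to weight; no user
// receives more than its demand, and surplus from satisfied users is
// redistributed among the unsatisfied ones in the next round.
func maxMin(capacity float64, demand, weight []float64) []float64 {
	alloc := make([]float64, len(demand))
	remaining := capacity
	for remaining > 1e-9 {
		var wsum float64
		for i := range demand {
			if alloc[i] < demand[i] {
				wsum += weight[i]
			}
		}
		if wsum == 0 {
			break // everyone is satisfied; leftover capacity stays unused
		}
		surplus := 0.0
		for i := range demand {
			if alloc[i] < demand[i] {
				share := remaining * weight[i] / wsum
				if alloc[i]+share > demand[i] {
					surplus += alloc[i] + share - demand[i] // recycle the excess
					alloc[i] = demand[i]
				} else {
					alloc[i] += share
				}
			}
		}
		remaining = surplus
	}
	return alloc
}

func main() {
	// Weighted example: capacity 16, demands 4/2/10/4, weights 5/8/1/2.
	fmt.Println(maxMin(16, []float64{4, 2, 10, 4}, []float64{5, 8, 1, 2}))
	// Output (up to rounding): [4 2 6 4]; C stays short of its demand of 10.

	// Equal weights reproduce the first example: capacity 10, demands 2/2.6/4/5.
	fmt.Println(maxMin(10, []float64{2, 2.6, 4, 5}, []float64{1, 1, 1, 1}))
	// Output (up to rounding): [2 2.6 2.7 2.7]
}
```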

5. Multi-level feedback queue (MFQ)

This algorithm balances response time, fairness, scheduling overhead, and starvation avoidance, and it underlies the kernel scheduling systems of Windows, macOS, and Linux. The algorithm's content (a toy sketch follows the list):

  • There are multiple queue levels; from top to bottom, the priority gets lower and lower while the time slice grows longer and longer.

  • A task at a higher-priority level can preempt a task at a lower-priority level.

  • A new task initially enters the highest-priority level. If it finishes within one time slice, it leaves the system normally. If its slice runs out before it finishes, it drops down one level. If it gives up the CPU to wait for I/O before its slice runs out, it stays at its current level (or moves up one level).

  • Tasks at the same level are scheduled with the round-robin algorithm.

  • To avoid a system with many I/O-bound tasks indefinitely delaying compute tasks, the MFQ algorithm monitors how much processing each task receives and guarantees it a fair resource allocation (following the max-min fairness algorithm above): looking across all the levels, a task that has received less than its fair share has its priority raised accordingly, and one that has received more has its priority lowered.
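A toy sketch of the demotion rules above; it models only CPU bursts (no I/O blocking and no fairness-based priority boosting), with illustrative level counts and slice lengths:

```go
package main

import "fmt"

type task struct {
	name      string
	remaining int // ms of CPU work left
}

func main() {
	slices := []int{10, 20, 40}           // the slice grows as the priority drops
	queues := make([][]task, len(slices)) // queues[0] has the highest priority
	queues[0] = []task{{"A", 35}, {"B", 8}}

	clock := 0
	for {
		level := -1
		for l := range queues { // always serve the highest non-empty level
			if len(queues[l]) > 0 {
				level = l
				break
			}
		}
		if level < 0 {
			break // all queues drained
		}
		t := queues[level][0]
		queues[level] = queues[level][1:]
		run := slices[level]
		if t.remaining < run {
			run = t.remaining
		}
		clock += run
		t.remaining -= run
		switch {
		case t.remaining == 0:
			fmt.Printf("%s done at t=%d ms (level %d)\n", t.name, clock, level)
		case level < len(queues)-1:
			queues[level+1] = append(queues[level+1], t) // used its whole slice: demote
		default:
			queues[level] = append(queues[level], t) // bottom level: plain round robin
		}
	}
}
```

The short task B finishes within its first slice at the top level, while the longer task A drifts down the levels and finishes at the bottom, which is exactly the intended division of labor.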

6. The MFQ algorithm in multi-CPU-core scenarios

In a multi-core scenario, if all CPU cores share a single MFQ, two problems appear:

  • As the number of CPU cores grows, contention for the MFQ's lock becomes more and more serious.

  • A task's most recent state is cached on the CPU core that last executed it. The core now running the task has to pull that latest data over from the remote core's cache and store it in its own local cache, and this takes time.

To solve these problems, each CPU core is given its own MFQ, and an affinity scheduling policy tries to ensure that the same thread keeps executing on the same CPU core (resuming there after interruptions). When some CPU cores become particularly busy, a rebalancing operation migrates some threads to other, idle cores. Threads can thus make maximal use of the CPU cache, avoiding frequent reloading of data into it. The Go runtime's scheduling of goroutines follows the same idea: it tries to keep related goroutines handled by the same underlying thread, likewise out of CPU-cache considerations. A sketch of per-core queues with a crude rebalancing step follows.
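A single-threaded toy of the per-core-queue idea (a real kernel would still need per-queue locks for stealing, and would rebalance on load metrics rather than on every empty queue):

```go
package main

import "fmt"

type core struct {
	id    int
	queue []string // this core's private run queue: no shared lock in the common case
}

// next pops a task from the local queue; when the local queue is empty,
// it steals half of the busiest peer's tasks (the rebalancing step).
func (c *core) next(all []*core) (string, bool) {
	if len(c.queue) == 0 {
		var busiest *core
		for _, p := range all {
			if p != c && (busiest == nil || len(p.queue) > len(busiest.queue)) {
				busiest = p
			}
		}
		if busiest == nil || len(busiest.queue) == 0 {
			return "", false // nothing to run anywhere
		}
		n := (len(busiest.queue) + 1) / 2
		c.queue = append(c.queue, busiest.queue[:n]...)
		busiest.queue = busiest.queue[n:]
	}
	t := c.queue[0]
	c.queue = c.queue[1:]
	return t, true
}

func main() {
	cores := []*core{
		{id: 0, queue: []string{"t1", "t2", "t3", "t4"}},
		{id: 1}, // idle core with an empty queue
	}
	if t, ok := cores[1].next(cores); ok {
		fmt.Println("core 1 stole work and runs:", t) // core 1 stole work and runs: t1
	}
}
```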

Scheduling practice at the operating system kernel level

The scheduling algorithms above all use the task (job) as their granularity, and tasks map onto threads. So does the operating system simply schedule all threads in the system uniformly according to those algorithms? As mentioned at the beginning, the actual scheduler must also take the process into account, namely which threads belong to the same process. There are two strategies that consider the process factor:

  • Gang Scheduling: try to schedule the threads of the same process at the same time, rather than picking threads at random from multiple processes.

  • Space Sharing: partition the CPU cores, so that when there are multiple processes, each process occupies only part of the CPU cores, rather than one process covering all the CPU cores during a given time slice.

To conclude

The scheduling system of an operating system kernel is very complicated, integrating many algorithms, strategies, and trade-offs; the summary above only scratches the surface, and there is much more to keep learning!