Scheduling

Overview

Different systems have different goals. A scheduler may optimize for throughput, turnaround time, response time, fairness, or meeting deadlines, depending on the kind of system.

Introduction to Scheduling

Process Behavior

CPU-intensive versus I/O-intensive processes

Over time, processes are becoming more I/O bound: CPUs are improving faster than disks.

When to Schedule

Scheduling decisions can be made when:

  1. A new process is created (fork)
  2. When a process exits
  3. When a process blocks on I/O, on a semaphore, etc.
  4. When an I/O interrupt occurs (a disk finishing a transfer, for example). This may also be a clock interrupt.

Nonpreemptive versus preemptive scheduling.
Do we ever stop a running process (one that has not blocked)? We need clock interrupts to do this.

Categories of Scheduling Algorithms

  1. Batch
  2. Interactive
  3. Real time

"Computer users are always in a big hurry."

Scheduling Algorithm Goals

Scheduling in Batch Systems

Think of who gets to check out next at the grocery store.

First-Come, First Served

A FIFO queue: processes get the CPU in the order in which they become ready.

This is the typical situation at the grocery store: whoever gets in line first gets to check out first.
This form of scheduling is annoying when a person with a giant shopping cart full of groceries arrives just before you, and you are only buying one item. You wait a long time to check out, whereas if the system had let you go first, the person with the full shopping cart would have been delayed by only a few seconds.
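
A minimal sketch in Python (the job names and run times are made up to mirror the grocery example):

    from collections import deque

    def fcfs(jobs):
        """First-come, first-served: run jobs in arrival order.
        `jobs` is a list of (name, run_time) pairs, already in arrival order.
        Returns each job's completion time."""
        queue = deque(jobs)
        clock = 0
        finish = {}
        while queue:
            name, run_time = queue.popleft()   # whoever got in line first...
            clock += run_time                  # ...runs to completion
            finish[name] = clock
        return finish

    # The one-item shopper waits behind the full cart:
    print(fcfs([("full_cart", 100), ("one_item", 1)]))
    # {'full_cart': 100, 'one_item': 101}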

Shortest Job First

Minimizes the average turnaround time, provided all the jobs are available at the same time.

Analogy: the person with the full shopping cart lets the person with only one item go ahead of them.
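
A rough sketch in Python, assuming all jobs are present at the start and their run times are known in advance:

    def sjf(jobs):
        """Shortest Job First: run the shortest job to completion, then the
        next shortest, and so on.  `jobs` maps name -> run time.
        Returns the average turnaround time."""
        clock = 0
        total_turnaround = 0
        for name, run_time in sorted(jobs.items(), key=lambda kv: kv[1]):
            clock += run_time
            total_turnaround += clock          # this job finished at `clock`
        return total_turnaround / len(jobs)

    print(sjf({"full_cart": 100, "one_item": 1}))   # 51.0, versus 100.5 under FCFS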

Shortest Remaining Time Next

Like the above, but compares the time left rather than the total run time, so a newly arrived short job can preempt a long job that is already running. The OS has to know a lot about the processes being run!
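
A toy simulation, one time unit per step (the arrival and run times are invented for illustration):

    def srtn(jobs):
        """Shortest Remaining Time Next, simulated one time unit at a time.
        `jobs` is a list of (name, arrival_time, run_time) tuples.
        Returns each job's completion time."""
        remaining = {name: run for name, _, run in jobs}
        finish = {}
        clock = 0
        while len(finish) < len(jobs):
            ready = [(remaining[name], name) for name, arrival, _ in jobs
                     if arrival <= clock and name not in finish]
            if ready:
                _, name = min(ready)       # least time left wins, even mid-run
                remaining[name] -= 1
                if remaining[name] == 0:
                    finish[name] = clock + 1
            clock += 1
        return finish

    # A short job arriving at t=2 preempts the long job started at t=0:
    print(srtn([("long", 0, 10), ("short", 2, 1)]))   # {'short': 3, 'long': 11}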

Scheduling in Interactive Systems

Round-Robin Scheduling

Each process runs for at most a fixed quantum of time; when the quantum expires, the process is halted and the next process on the list gets the CPU.

Setting the quantum correctly is the tricky bit: if it is too short, the system wastes too much time switching between processes; if it is too long, an important process may not get the CPU soon enough.
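
A sketch of the mechanism (the quantum and run times here are arbitrary):

    from collections import deque

    def round_robin(jobs, quantum):
        """Round-robin: each process runs for at most `quantum` time units,
        then goes to the back of the line.  `jobs` maps name -> run time."""
        queue = deque(jobs.items())
        clock = 0
        finish = {}
        while queue:
            name, remaining = queue.popleft()
            run = min(quantum, remaining)
            clock += run
            if remaining > run:
                queue.append((name, remaining - run))   # not done: re-queue it
            else:
                finish[name] = clock
        return finish

    print(round_robin({"a": 5, "b": 2, "c": 3}, quantum=2))
    # {'b': 4, 'c': 9, 'a': 10}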

Priority Scheduling

Higher-priority jobs run first.

We could have priority classes, and run round-robin within the classes.

Downside: a high-priority process may hog the CPU for extremely long lengths of time, starving low-priority processes.
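
A sketch of priority classes with round-robin inside each class (the class layout and names are assumptions for illustration; real systems differ):

    from collections import deque

    class PriorityScheduler:
        """Priority classes with round-robin inside each class: always serve
        the highest non-empty class, so lower classes can starve."""

        def __init__(self):
            self.classes = {}                  # priority -> deque of processes

        def add(self, process, priority):
            self.classes.setdefault(priority, deque()).append(process)

        def pick_next(self):
            for priority in sorted(self.classes, reverse=True):
                queue = self.classes[priority]
                if queue:
                    process = queue.popleft()
                    queue.append(process)      # round-robin within the class
                    return process
            return None                        # nothing is ready to run

    s = PriorityScheduler()
    s.add("editor", priority=3)
    s.add("backup", priority=1)
    print(s.pick_next())   # 'editor'; 'backup' waits as long as class 3 is busy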

Example: Linux scheduling (by Denny Matthew).

Multiple Queues

Don't worry about this one.

Shortest Process Next

In an interactive system, the OS can try to estimate which of the things the user has requested (clicked, typed, etc.) will take the least time to run, and then run that shortest process next.

So, if the user has clicked on two buttons, one of which launches a re-coloring of an hour-long video, and the other of which displays a list of files in the current directory, the OS runs the second process first.
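
The notes do not say how the OS forms its estimate; one common assumption is an exponentially weighted average of past run times, as in this sketch (the task names and times are invented):

    def update_estimate(old_estimate, measured, alpha=0.5):
        """Exponentially weighted average of past run times: recent behavior
        counts most, ancient history fades away."""
        return alpha * measured + (1 - alpha) * old_estimate

    def shortest_process_next(ready, estimates):
        """Run the ready process with the smallest estimated run time."""
        return min(ready, key=lambda p: estimates[p])

    estimates = {"recolor_video": 3600.0, "list_files": 0.05}
    print(shortest_process_next(["recolor_video", "list_files"], estimates))
    # 'list_files'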

Guaranteed Scheduling

We might guarantee each process an equal share of the CPU time. We then choose which process to run based on which one is the most "behind" in getting its guaranteed share. It then runs until it passes at least one other process in terms of getting its guaranteed share. Then the OS switches to that process, since it is now the one most behind.
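
A minimal sketch of the bookkeeping, assuming equal entitlements (the numbers are made up):

    def most_behind(processes, cpu_used, elapsed):
        """Guaranteed scheduling: with n processes, each is entitled to 1/n of
        the CPU time so far; run the one with the lowest used/entitled ratio."""
        entitled = elapsed / len(processes)
        return min(processes, key=lambda p: cpu_used[p] / entitled)

    cpu_used = {"a": 5.0, "b": 1.0, "c": 3.0}
    print(most_behind(["a", "b", "c"], cpu_used, elapsed=9.0))   # 'b' is most behind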

Lottery Scheduling

Give processes "lottery tickets" in proportion to the share of CPU time they should receive. To make a scheduling decision, the OS draws a ticket at random and runs the process holding it.

This way, the low priority processes get the CPU, but only in proportion to the number of lottery tickets they "own." They will not have to wait forever if some high-priority process is hogging the CPU.

Cooperating processes may exchange tickets in cases where that makes sense.
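
A sketch of one draw (the ticket counts are invented):

    import random

    def lottery_pick(tickets):
        """Lottery scheduling: draw one ticket at random.  A process holding
        k of the T outstanding tickets wins with probability k/T.
        `tickets` maps process -> number of tickets held."""
        processes = list(tickets)
        weights = [tickets[p] for p in processes]
        return random.choices(processes, weights=weights)[0]

    # The low-priority indexer still wins about 10% of the draws:
    print(lottery_pick({"video_server": 90, "background_indexer": 10}))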

Fair-Share Scheduling

Allocates CPU time by user, not by process. No one person can hog all of the CPU simply because they have kicked off multiple processes.
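
A sketch, assuming the scheduler tracks CPU time consumed per user (the users and numbers are made up):

    def fair_share_pick(ready, owner, cpu_by_user):
        """Fair-share sketch: first pick the user who has received the least
        CPU time so far, then pick one of that user's ready processes."""
        users = {owner[p] for p in ready}
        poorest = min(users, key=lambda u: cpu_by_user[u])
        return next(p for p in ready if owner[p] == poorest)

    owner = {"p1": "alice", "p2": "alice", "p3": "bob"}
    cpu_by_user = {"alice": 8.0, "bob": 1.0}
    print(fair_share_pick(["p1", "p2", "p3"], owner, cpu_by_user))   # 'p3'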

Scheduling in Real-Time Systems

Real-time systems are just an extreme case of interactive systems: a response must arrive not merely quickly, but before a hard deadline.

Policy Versus Mechanism

The mechanism for scheduling may be in the kernel, but the policy may be set by a user process setting parameters, e.g., a parent may set its children's priority level.

This can be helpful if the parent process has information about its children that the OS lacks.

Thread Scheduling

Usually, threads are scheduled at the user level.
No clock interrupt: must cooperate!
But everything is in a single process, so cooperation should be easier than between processes.
It is good coding practice to yield() occasionally.
But if threads behave badly, only one process (the one containing those threads) is harmed; the OS continues scheduling processes as usual.
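
A toy model of user-level, cooperative scheduling; Python generators stand in for threads that call yield() from time to time (real thread packages differ):

    from collections import deque

    def worker(name, steps):
        """A 'thread' that does a bit of work, then voluntarily yields."""
        for i in range(steps):
            print(f"{name}: step {i}")
            yield                       # hand control back to the scheduler

    def run_cooperatively(threads):
        """Round-robin over user-level threads.  There is no clock interrupt:
        if one thread never yields, the others in this process never run."""
        ready = deque(threads)
        while ready:
            thread = ready.popleft()
            try:
                next(thread)            # let it run until its next yield
                ready.append(thread)
            except StopIteration:
                pass                    # this thread has finished

    run_cooperatively([worker("a", 2), worker("b", 3)])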

Credits

External Links