# Scheduling algorithms

This is a printable version of the notes that you might want to have with you in class.

• The burst cycles of a process vary over time: compute, do I/O, compute, do I/O... CPU bursts (intervals with no I/O usage) are typically short.
• Waiting time: sum of time waiting in ready queue

## 1. FCFS: First Come, First Served

Non-preemptive, treats ready queue as a FIFO (first-in-first-out) queue.

Simple, but typically long/varying waiting time.

1.1. Example:

| Process | Burst time |
|---------|------------|
| P1      | 24         |
| P2      | 3          |
| P3      | 3          |

All Pi arrive at time 0 in order P1, P2, P3.

Gantt chart:

| P1   | P2    | P3    |
|------|-------|-------|
| 0-24 | 24-27 | 27-30 |

Average waiting time: (0+24+27)/3 = 17

If they arrive in order P2, P3, P1:

| P2  | P3  | P1   |
|-----|-----|------|
| 0-3 | 3-6 | 6-30 |

Average waiting time: (0+3+6)/3 = 3
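Both averages can be checked with a small sketch (the helper name `fcfs_waiting_times` is made up for illustration):

```python
def fcfs_waiting_times(bursts):
    """Waiting time per process under FCFS, assuming all processes
    arrive at time 0 and are listed in arrival order."""
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # time spent in the ready queue before starting
        clock += burst        # the CPU is then held for the whole burst
    return waits

# Order P1, P2, P3 (bursts 24, 3, 3):
print(fcfs_waiting_times([24, 3, 3]))   # [0, 24, 27], average 17
# Order P2, P3, P1:
print(fcfs_waiting_times([3, 3, 24]))   # [0, 3, 6], average 3
```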

1.2. Convoy effect:

Consider P2, P3 being I/O bound, and P1 being CPU bound:

P2 and P3 quickly give up the CPU after issuing their I/O requests (and end up in the waiting queue); then P1 holds the CPU for a long time. The I/O requests finish long before P1's burst does, so P2 and P3 wait in the ready queue for a long time.

One cause: FCFS is non-preemptive: P1 keeps the CPU as long as it needs.

## 2. SJF: Shortest Job First

Knowing the length of the next CPU burst of each process in RQ, give CPU to the one with the shortest next burst (use FCFS if equal).

Better name: shortest-next-CPU-burst first.

2.1. Example:

| Process | Burst |
|---------|-------|
| P1      | 6     |
| P2      | 8     |
| P3      | 7     |
| P4      | 3     |

Gantt chart:

| P4  | P1  | P3   | P2    |
|-----|-----|------|-------|
| 0-3 | 3-9 | 9-16 | 16-24 |

Average waiting time: (0+3+9+16)/4 = 7

Cf. FCFS for this example: 10.25

Optimal wrt. waiting time!
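A minimal sketch of non-preemptive SJF, assuming all processes arrive at time 0 (the function name is hypothetical): run the processes in order of increasing burst length and record when each starts.

```python
def sjf_waiting_times(bursts):
    """Non-preemptive SJF, all processes arriving at time 0:
    simply run them in order of increasing burst length."""
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, clock = [0] * len(bursts), 0
    for i in order:
        waits[i] = clock          # waiting time = start time here
        clock += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])   # P1..P4 from the example
print(waits, sum(waits) / len(waits))     # [3, 16, 9, 0] 7.0
```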

Problem: how to know the next burst?

• User specifies (e.g. for batch system)
• Guess/predict based on earlier bursts, using exponential average ("relative weight" between recent and earlier bursts)
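The exponential average is tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the length of the most recent burst and tau_n the previous prediction. A sketch (the values alpha = 0.5 and initial guess tau0 = 10 are assumptions, not fixed by the notes):

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    alpha is the relative weight of the most recent burst vs. history;
    alpha=0.5 and the initial guess tau0=10 are assumed values."""
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

print(predict_next_burst([6, 4, 6, 4]))        # 5.0
# alpha = 1: only the last burst counts; alpha = 0: only the initial guess.
print(predict_next_burst([6, 4], alpha=1.0))   # 4.0
```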

## 3. Preemptive SJF: SRTF

Shortest remaining time first:

When a process arrives in the RQ, sort it in and select the process with the shortest remaining time, including the currently running one, possibly preempting it. (Remember: plain SJF schedules a new process only when the running one has finished.)

3.1. Example:

| Process | Arrival | Burst |
|---------|---------|-------|
| P1      | 0       | 8     |
| P2      | 1       | 4     |
| P3      | 2       | 9     |
| P4      | 3       | 5     |

Gantt chart:

| P1  | P2  | P4   | P1    | P3    |
|-----|-----|------|-------|-------|
| 0-1 | 1-5 | 5-10 | 10-17 | 17-26 |

Average waiting time: ((10-1) + (1-1) + (17-2) + (5-3))/4 = 6.5

Cf. SJF: 7.75
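SRTF can be simulated one time unit at a time with a min-heap keyed on remaining time (a sketch; the function name and `(arrival, burst)` input format are assumptions):

```python
import heapq

def srtf_waiting_times(procs):
    """Shortest remaining time first, simulated one time unit at a time.
    procs: list of (arrival, burst) tuples. Returns per-process waiting times."""
    n = len(procs)
    remaining = [b for _, b in procs]
    finish = [0] * n
    ready = []                      # min-heap of (remaining_time, index)
    order = sorted(range(n), key=lambda i: procs[i][0])
    clock, done, arrived = 0, 0, 0
    while done < n:
        # Admit everything that has arrived by now (may preempt the runner,
        # since the runner is re-inserted into the heap each unit).
        while arrived < n and procs[order[arrived]][0] <= clock:
            i = order[arrived]
            heapq.heappush(ready, (remaining[i], i))
            arrived += 1
        if not ready:               # CPU idle until the next arrival
            clock += 1
            continue
        _, i = heapq.heappop(ready)
        remaining[i] -= 1
        clock += 1
        if remaining[i] == 0:
            finish[i] = clock
            done += 1
        else:
            heapq.heappush(ready, (remaining[i], i))
    # waiting time = finish - arrival - burst
    return [finish[i] - procs[i][0] - procs[i][1] for i in range(n)]

waits = srtf_waiting_times([(0, 8), (1, 4), (2, 9), (3, 5)])
print(waits, sum(waits) / len(waits))   # [9, 0, 15, 2] 6.5
```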

## 4. Generalise: priority scheduling

Give each process a priority, select the one with the highest (if equal, FCFS).
Note: sometimes (e.g. in book) low value means high priority.

Preemptive or not; fixed or variable priority.

4.1. Internal priority

Based on memory requirements, I/O or CPU burst lengths, etc.

Example: SJF: prio = 1/(next burst)

4.2. External priority

"Importance" (e.g. system processes), money paid...

4.3. Problem: starvation

A low-priority process on a heavily loaded system (with many higher-priority processes) may run much later, or never. MIT rumour: when a system was shut down in 1973, a process was found that had been submitted in 1967 and never run.

A solution: let process priority increase with age. The longer it has waited, the higher its prio grows, and eventually it gets to run (and drop priority).
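One way aging could enter the selection step, as a sketch: add the time spent waiting to the base priority (here higher value = higher priority; the function name, tuple format, and linear aging rate are all assumptions).

```python
def pick_next(ready, clock, aging_rate=1):
    """Priority scheduling with aging: effective priority = base priority
    plus aging_rate per time unit spent in the ready queue.
    ready: list of (base_priority, enqueue_time, name) tuples."""
    def effective(p):
        base, enqueued, _ = p
        return base + aging_rate * (clock - enqueued)
    return max(ready, key=effective)

# A freshly arrived high-priority process vs. a low-priority one
# that has been waiting since time 0:
ready = [(10, 20, "high"), (1, 0, "low")]
print(pick_next(ready, clock=22)[2])   # "low": 1 + 22 beats 10 + 2
```

Note that aging only helps because waiting low-priority processes keep aging while a stream of fresh high-priority processes comes and goes; two processes enqueued at the same time keep their relative order.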

## 5. RR: Round-Robin

"FCFS with preemption". RQ is a FIFO, but each process has a time quantum (10-100ms). New processes are added to the end of the RQ.

Select the first process in RQ, set a timer to expire at its quantum.

• Either the process suspends on I/O, the timer is cancelled and a new process is scheduled,
• or the timer expires, the running process is moved to the end of the RQ, and a new one is scheduled.

Waiting time typically long, but fair:

• no process waits more than (n-1)*q before getting its next quantum, for n processes and quantum q.
• each process gets 1/n of the CPU time

Turnaround time typically larger than SRTF, but better response time. Turnaround time does not necessarily improve with larger quantum.

Performance depends on q:

• small q => much overhead due to context switches (and scheduling)
q should be large wrt context-switching time. Typical q 10-100 ms, typical context switch 10 us.
• large q => FCFS
rule of thumb: 80% of bursts should be shorter than q (also improves turnaround time).

5.1. Example

| Process | Burst |
|---------|-------|
| P1      | 24    |
| P2      | 3     |
| P3      | 3     |

q=4 gives average waiting time 17/3 ≈ 5.67 (cf. FCFS: between 3 and 17 depending on arrival order).
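A sketch of RR with all processes arriving at time 0 (the helper name is made up; I/O suspension is left out, so every process runs until its quantum expires or it finishes):

```python
from collections import deque

def rr_waiting_times(bursts, q):
    """Round-robin with quantum q; all processes arrive at time 0
    in the given order. Returns per-process waiting times."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queue = deque(range(len(bursts)))   # the FIFO ready queue
    clock = 0
    while queue:
        i = queue.popleft()
        run = min(q, remaining[i])      # run until quantum expiry or completion
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = clock
        else:
            queue.append(i)             # back to the end of the RQ
    # waiting time = finish - burst (arrival is 0 for everyone)
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting_times([24, 3, 3], q=4)
print(waits, sum(waits) / len(waits))   # [6, 4, 7] 5.666...
```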

## 6. Multi-Level Queue scheduling

Different algorithms suit different types of processes (cf interactive and batch/background processes) - and systems are often not only running interactive or "batch" processes.

Multi-level queue scheduling splits RQ in several, each with its own scheduling algorithm, e.g.

• interactive processes: RR
• background processes: FCFS/SRTF

Need scheduling between the RQs: e.g. fixed-priority preemptive scheduling with priority to interactive processes.

More complex example:

1. System processes
2. Interactive
3. Interactive editing
4. Batch
5. Student processes

If a lower-prio queue is only used when higher-prio RQs are empty (and higher-prio processes preempt lower-prio), we risk starvation. Possible solution: give time-slices to each RQ - basically RR between the qs, with different quanta for each queue. Thus each queue gets a certain guaranteed slice of the CPU time.

6.1. MLFQ: Multi-Level Feedback Queue scheduling

With MLQ, normally each process is permanently assigned to one queue (based on type, priority etc). Generalise: add feedback, and allow processes to move between queues.

Example: let processes with long CPU bursts move down in the queues. Then I/O bound and interactive processes will be given priority - combine with aging principle to prevent starvation.

How can this be implemented?

6.2. Example:

Q1: Round-Robin with quantum 8
Q2: RR with quantum 16
Q3: FCFS

Qi has priority over, and preempts, Qi+1. New processes are added to Q1.

If a process in Q1 or Q2 does not finish within its quantum, it is moved down to the next queue.

Thus:

• short bursts (I/O bound and interactive proc) are served quickly;
• slightly longer are also served quickly but with less priority;
• long (CPU bound processes) are served when there is CPU to be spared.
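The three-queue example above can be sketched as follows, under the simplifying assumption that all processes arrive at time 0 (so preemption of a lower queue by a new arrival never triggers); the names and the trace format are made up:

```python
from collections import deque

def mlfq_run(bursts, quanta=(8, 16)):
    """Sketch of the MLFQ example: RR(8), RR(16), then FCFS.
    All processes arrive at time 0; a process that exhausts its
    quantum moves down one queue. Returns (finish_times, trace),
    where trace is a list of (process_index, start, end)."""
    n = len(bursts)
    remaining = list(bursts)
    queues = [deque(range(n)), deque(), deque()]   # Q1, Q2, Q3
    finish, trace, clock = [0] * n, [], 0
    while any(queues):
        level = next(l for l in range(3) if queues[l])  # highest non-empty queue
        i = queues[level].popleft()
        q = quanta[level] if level < 2 else remaining[i]  # Q3: run to completion
        run = min(q, remaining[i])
        trace.append((i, clock, clock + run))
        clock += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = clock
        else:
            queues[level + 1].append(i)   # demote after using the full quantum
    return finish, trace

finish, trace = mlfq_run([5, 30, 20])
print(finish)      # [5, 55, 49]: the short burst finishes first
print(trace[:3])   # [(0, 0, 5), (1, 5, 13), (2, 13, 21)]
```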

An MLFQ scheduler is very general, and can be parameterised by

• the number of queues
• scheduling algorithm for each queue
• method for moving a process UP
• method for moving a process DOWN
• method to select the initial queue for a process
• scheduling algorithm between the queues (time slices, priority, preemption, aging...)

Deciding the parameters optimally is usually difficult.

## 7. Multi-processor scheduling

Load sharing possible with multiple processors.

Different kinds of systems require different solutions:

• homogeneous vs heterogeneous (more loosely coupled)
• asymmetric vs symmetric multiprocessing (SMP)

7.1. Asymmetric:

Typically one processor does the scheduling (and perhaps I/O) and gives the others the work - less sharing, easier problem. But less efficient and less fault tolerant.

7.2. Symmetric:

Each processor does its own scheduling, using

• private queue: more problems with load sharing/balancing (but usually what's used)
• shared queue: easy load sharing, (more) problems of data sharing and synchronization (as we will see)

Processor affinity: try to keep a process on the same processor as last time, since its data is more likely still in that processor's cache. Soft affinity: the process may move to another processor; hard affinity: the process must stay on the same processor.