Non-preemptive, treats ready queue as a FIFO (first-in-first-out) queue.
Simple, but typically long/varying waiting time.
All Pi arrive at time 0 in order P1, P2, P3.
Average waiting time: (0+24+27)/3 = 17
If they arrive in order P2, P3, P1:
Average waiting time: (0+3+6)/3 = 3
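The two averages above can be reproduced with a few lines of Python; a minimal sketch assuming all processes arrive at time 0 and are served in the given order (burst lengths 24, 3, 3 as in the example):

```python
def fcfs_avg_wait(bursts):
    """Average waiting time under FCFS: each process waits for the
    sum of all bursts ahead of it (all arrivals at time 0)."""
    wait, elapsed = 0, 0
    for b in bursts:
        wait += elapsed          # time spent waiting before this burst starts
        elapsed += b
    return wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # order P1, P2, P3 → 17.0
print(fcfs_avg_wait([3, 3, 24]))  # order P2, P3, P1 → 3.0
```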
Consider P2, P3 being I/O bound, and P1 being CPU bound:
P2 and P3 quickly give up the CPU after issuing their I/O requests (ending up in the waiting queue), while P1 holds the CPU for a long time. Their I/O finishes long before P1's CPU burst does, so P2 and P3 then sit in the ready queue for a long time: the short processes are stuck behind one long one (the convoy effect).
One cause: FCFS is non-preemptive, so P1 keeps the CPU as long as it needs it.
Knowing the length of the next CPU burst of each process in RQ, give CPU to the one with the shortest next burst (use FCFS if equal).
Better name: shortest next CPU burst first.
Average waiting time: (0+3+9+16)/4 = 7
Cf. FCFS for this example: 10.25
Optimal wrt. waiting time!
Problem: how to know the next burst?
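The next burst cannot be known in advance, but it can be estimated. A common approach is exponential averaging of the measured bursts, tau_{n+1} = alpha*t_n + (1-alpha)*tau_n; the initial guess tau_0 = 10 and alpha = 0.5 below are illustrative values, not prescribed by the notes:

```python
def predict_next_burst(measured, tau0=10.0, alpha=0.5):
    """Exponential averaging: tau_{n+1} = alpha*t_n + (1-alpha)*tau_n.
    'measured' holds the observed burst lengths t_0, t_1, ...;
    returns the prediction for the burst after the last one."""
    tau = tau0
    for t in measured:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# Observed bursts 6, 4, 6, 4 with tau0=10, alpha=0.5:
print(predict_next_burst([6, 4, 6, 4]))  # → 5.0
```

With alpha = 0 the history dominates (the estimate never changes); with alpha = 1 only the most recent burst counts.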
Shortest remaining time first:
When a process arrives at the RQ, insert it in order and select the process with the shortest remaining time, the running process included — possibly preempting it. (Remember: SJF selects a new process only when the running one has finished.)
Average waiting time: ((10-1) + (1-1) + (17-2) + (5-3))/4 = 6.5
Cf. SJF: 7.75
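The 6.5 above can be checked with a unit-step simulation. The process set is not spelled out in the notes; the (arrival, burst) values below — (0,8), (1,4), (2,9), (3,5) — are an assumption, chosen because they reproduce exactly the waiting times (10-1), (1-1), (17-2), (5-3):

```python
def srtf_avg_wait(procs):
    """procs: list of (arrival_time, burst_length).
    Unit-step simulation of Shortest-Remaining-Time-First."""
    remaining = [b for _, b in procs]
    finish = [0] * len(procs)
    t = 0
    while any(remaining):
        # processes that have arrived and still have work left
        ready = [i for i, (a, _) in enumerate(procs)
                 if a <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda i: remaining[i])  # shortest remaining time
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = finish - arrival - burst
    waits = [finish[i] - a - b for i, (a, b) in enumerate(procs)]
    return sum(waits) / len(procs)

print(srtf_avg_wait([(0, 8), (1, 4), (2, 9), (3, 5)]))  # → 6.5
```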
Give each process a priority, select the one with the highest (if equal, FCFS).
Note: sometimes (e.g. in book) low value means high priority.
Preemptive or not; fixed or variable priority.
Based on memory requirements, I/O or CPU burst lengths, etc.
Example: SJF: prio = 1/(next burst)
"Importance" (e.g. system processes), money paid...
A low-priority process on a heavily loaded system (many processes with higher prio) may never run, or only much later. MIT rumour: a system shutdown in 1974 found a process submitted in 1967 which had never run.
A solution: let process priority increase with age. The longer it has waited, the higher its prio grows, and eventually it gets to run (and drop priority).
"FCFS with preemption". The RQ is a FIFO, but each process gets a time quantum (typically 10-100 ms). New processes are added to the tail of the RQ.
Select the first process in the RQ and set a timer to expire after one quantum.
Waiting time typically long, but fair:
Turnaround time typically larger than SRTF, but better response time. Turnaround time does not necessarily improve with larger quantum.
Performance depends on q:
q=4 gives average waiting time 5.66 (cf. FCFS for the same processes: 3 or 17, depending on arrival order)
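The 5.66 figure can be reproduced by simulating RR on the earlier burst set 24, 3, 3 (an assumption consistent with the FCFS figures quoted):

```python
from collections import deque

def rr_avg_wait(bursts, q):
    """Round-robin with quantum q; all processes arrive at time 0.
    Waiting time = completion time - burst length."""
    queue = deque((i, b) for i, b in enumerate(bursts))
    t = 0
    finish = [0] * len(bursts)
    while queue:
        i, rem = queue.popleft()
        run = min(q, rem)            # run one quantum, or less if finishing
        t += run
        if rem > run:
            queue.append((i, rem - run))   # back to the tail of the RQ
        else:
            finish[i] = t
    waits = [finish[i] - b for i, b in enumerate(bursts)]
    return sum(waits) / len(bursts)

print(round(rr_avg_wait([24, 3, 3], q=4), 2))  # → 5.67
```

With a very large q this degenerates into FCFS; with a very small q the context-switch overhead dominates.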
Different algorithms suit different types of processes (cf interactive and batch/background processes) - and systems are often not only running interactive or "batch" processes.
Multi-level queue scheduling splits RQ in several, each with its own scheduling algorithm, e.g.
Need scheduling between the RQs: e.g. fixed-priority preemptive scheduling, with priority to interactive processes.
More complex example:
If a lower-prio queue is only used when higher-prio RQs are empty (and higher-prio processes preempt lower-prio), we risk starvation. Possible solution: give time-slices to each RQ - basically RR between the qs, with different quanta for each queue. Thus each queue gets a certain guaranteed slice of the CPU time.
With MLQ, normally each process is permanently assigned to one queue (based on type, priority etc). Generalise: add feedback, and allow processes to move between queues.
Example: let processes with long CPU bursts move down in the queues. Then I/O bound and interactive processes will be given priority - combine with aging principle to prevent starvation.
How can this be implemented?
Q1: Round-Robin with quantum 8
Q2: RR with quantum 16
Qi has priority over, and preempts, Qi+1. New processes are added to Q1.
If a process in Q1 or Q2 does not finish within its quantum, it is moved down to the next queue.
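A minimal sketch of this demotion scheme. The notes imply a queue below Q2; following the classic textbook version of this example, Q3 is assumed here to run FCFS (an effectively unbounded quantum). Preemption of a running lower-queue process by new arrivals is omitted for brevity:

```python
from collections import deque

QUANTA = [8, 16, float('inf')]       # Q1, Q2, Q3 (Q3 = FCFS, assumed)

def mlfq_run(procs):
    """procs: list of (pid, burst). New processes enter Q1; a process
    that exhausts its quantum is demoted one level. A higher queue is
    always served before a lower one. Returns the completion order."""
    queues = [deque(), deque(), deque()]
    for pid, burst in procs:
        queues[0].append((pid, burst))
    done = []
    while any(queues):
        level = next(l for l, q in enumerate(queues) if q)
        pid, rem = queues[level].popleft()
        run = min(QUANTA[level], rem)
        if rem - run > 0:                      # quantum exhausted: demote
            queues[min(level + 1, 2)].append((pid, rem - run))
        else:
            done.append(pid)
    return done

# A needs 5 (short burst), B needs 30 (CPU bound), C needs 12:
print(mlfq_run([('A', 5), ('B', 30), ('C', 12)]))  # → ['A', 'C', 'B']
```

A finishes within its first Q1 quantum; C finishes in Q2; CPU-bound B sinks to Q3 and finishes last, which is exactly the intended bias towards short/interactive work.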
An MLFQ scheduler is very general, and can be parameterised by: the number of queues, the scheduling algorithm for each queue, the criteria for upgrading and demoting a process, and the queue a new process enters.
Deciding the parameters optimally is usually difficult.
Load sharing possible with multiple processors.
Different kinds of systems require different solutions:
Typically one processor does the scheduling (and perhaps I/O) and gives the others the work - less sharing, easier problem. But less efficient and less fault tolerant.
Each processor does its own scheduling, using either one common ready queue or a private ready queue per processor.
Processor affinity: try to keep a process on the same processor as last time, since its data is more likely to still be in that processor's cache. Soft affinity: the process may move to another processor; hard affinity: the process must stay on the same processor.
Load balancing and process migration:
Keep the workload evenly distributed over the processors (a problem with private queues). This requires migrating (moving) processes from one processor to another: with push migration a dedicated task periodically moves processes from overloaded to idle processors; with pull migration an idle processor pulls a waiting process from a busy one.
Often both variants are used (e.g. in Linux). Migration must be balanced against affinity.
CPUs with multiple "cores": sharing caches and the bus influences the affinity concept and thus scheduling. The OS can view each core as a CPU, but can gain additional benefit from hardware threads (e.g. running another thread while one stalls on memory).
These notes are originally from Björn Victor. Thanks.