Uniprocessor Scheduling
- Basic Concepts
- Scheduling Criteria
- Scheduling Algorithms
Types of Scheduling
Long- and Medium-Term Schedulers
Long-term scheduler
– Determines which programs are admitted to the system for processing (i.e., to become processes)
Medium-term scheduler
– More processes admitted means a smaller percentage of time each process is executed
Short-Term Scheduler
Invoked upon events such as:
– interrupts, operating system calls, signals, ...
Decides to stop one process and start another running; the dominating factors involve:
– switching context
– selecting the new process to dispatch
CPU–I/O Burst Cycle
Process execution consists of a cycle of:
– CPU execution and
– I/O wait
Processes can be characterized accordingly as:
– CPU-bound
– I/O-bound
Scheduling Criteria: Optimization Goals
CPU utilization – keep the CPU as busy as possible
Throughput – # of processes that complete their execution per time unit
Response time – amount of time from when a request was submitted until the first response is produced (execution + waiting time in ready queue)
Turnaround time – amount of time to execute a particular process (execution + all the waiting); involves I/O schedulers also
Fairness – watch priorities, avoid starvation, ...
Scheduler efficiency – overhead (e.g., context switching, computing priorities, ...)
Preemptive vs. Nonpreemptive
Nonpreemptive
– Once a process is in the Running state, it continues to execute until it terminates or blocks itself for I/O
Preemptive
– The currently running process may be interrupted and moved to the Ready state by the operating system
– Prevents any one process from monopolizing the processor for very long
First-Come-First-Served (FCFS)
[Gantt chart: processes A–E served in arrival order; time axis 0–20]
A short process may have to wait very long before it can execute (convoy effect)
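The convoy effect can be seen numerically. A minimal sketch (the burst lengths are hypothetical, not from the slides): under FCFS each process waits for the sum of all earlier bursts, so a single long burst at the front inflates everyone's waiting time.

```python
def fcfs_wait_times(bursts):
    """Return per-process waiting times when processes run in arrival order."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)   # each process waits for all earlier bursts
        elapsed += burst
    return waits

# One long burst in front: average wait (0+24+27)/3 = 17
print(fcfs_wait_times([24, 3, 3]))
# Same bursts, long one last: average wait (0+3+6)/3 = 3
print(fcfs_wait_times([3, 3, 24]))
```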
Round-Robin
[Gantt chart: processes A–E sharing the CPU in time slices; time axis 0–20]
Each process gets a small unit of CPU time (time slice or quantum, q, usually 10-100 msec)
Each process gets CPU time in chunks of at most q time units
– q large ⇒ behaves like FIFO
– q small ⇒ overhead can be high due to context switches
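A hedged sketch of the quantum's effect (burst lengths and quanta are hypothetical; all processes are assumed to arrive at t=0):

```python
from collections import deque

def round_robin(bursts, q):
    """Return the completion time of each process under Round-Robin."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(q, remaining[i])   # a chunk of at most q time units
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)          # back to the tail of the ready queue
        else:
            finish[i] = t
    return finish

print(round_robin([24, 3, 3], q=4))    # small q: short jobs finish early
print(round_robin([24, 3, 3], q=100))  # q larger than any burst: same as FIFO
```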
Shortest Process First
[Gantt chart: processes A–E ordered by burst length; time axis 0–20]
Favors short processes over longer processes
Shortest Remaining Time First
[Gantt chart: processes A–E under preemptive shortest-remaining-time scheduling; time axis 0–20]
Preemptive version of shortest process next
On SPF Scheduling
Gives high throughput and minimal average waiting time for a given set of processes
– Proof (non-preemptive): analyze the summation giving the waiting time
Estimating burst lengths:
– Can be done automatically (exponential averaging)
– If the estimated time for a process (given by the user in a batch system) is not correct, the operating system may abort it
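The optimality claim can be checked numerically. A sketch (the burst lengths are illustrative): running bursts in sorted order minimizes the waiting-time summation, since each burst is counted once for every process behind it.

```python
def avg_wait(bursts):
    """Average waiting time when bursts run in the given order (all arrive at t=0)."""
    total, elapsed = 0, 0
    for b in bursts:
        total += elapsed    # this process waited for all earlier bursts
        elapsed += b
    return total / len(bursts)

bursts = [6, 8, 7, 3]
print(avg_wait(bursts))            # FCFS order: 10.25
print(avg_wait(sorted(bursts)))    # SPF order: 7.0, provably minimal
```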
Determining Length of Next CPU Burst
Can be estimated from the lengths of previous CPU bursts, using exponential averaging:
1. t_n = actual length of the n-th CPU burst
2. τ_{n+1} = predicted value for the next CPU burst
3. α, 0 ≤ α ≤ 1
4. Define: τ_{n+1} = α t_n + (1 − α) τ_n
On Exponential Averaging
α = 0
– τ_{n+1} = τ_n
– history does not count, only the initial estimation counts
α = 1
– τ_{n+1} = t_n
– only the actual last CPU burst counts
Expanding the formula:
τ_{n+1} = α t_n + (1 − α) α t_{n−1} + … + (1 − α)^j α t_{n−j} + … + (1 − α)^{n+1} τ_0
Since both α and (1 − α) are at most 1, each successive term has less weight than its predecessor.
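The recurrence is easy to run. A sketch (τ_0, α, and the burst sequence are hypothetical):

```python
def predict_bursts(actual, tau0, alpha):
    """Return predictions tau_1..tau_n from actual bursts t_0..t_{n-1},
    using tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    taus, tau = [], tau0
    for t in actual:
        tau = alpha * t + (1 - alpha) * tau
        taus.append(tau)
    return taus

print(predict_bursts([6, 4, 6, 4], tau0=10, alpha=0.5))  # [8.0, 6.0, 6.0, 5.0]
print(predict_bursts([6, 4], tau0=10, alpha=0))          # alpha=0: stuck at tau0
print(predict_bursts([6, 4], tau0=10, alpha=1))          # alpha=1: last burst only
```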
Prediction of the Length of the Next CPU Burst
Priority Scheduling: General Rules
A priority is associated with each process; the CPU is allocated to the process with the highest priority
– can be preemptive or non-preemptive
– can have multiple ready queues to represent multiple levels of priority
SPF is priority scheduling where the priority is given by the predicted next CPU burst time
Problem: starvation – low-priority processes may never execute
Solution: aging – as time progresses, increase the priority of the process
Multilevel Queue
Ready queue is partitioned into separate queues, e.g.
– foreground (interactive)
– background (batch)
Each queue has its own scheduling algorithm, e.g.
– foreground – RR
– background – FCFS
Scheduling must also be done between the queues:
– Fixed, e.g., serve all from foreground, then from background. Possible starvation.
– Another solution: time slice – each queue gets a fraction of CPU time to divide amongst its processes, e.g., 80% to foreground in RR, 20% to background in FCFS
Multilevel Feedback Queue
A process can move between the various queues; aging can be implemented this way.
Defined by the following parameters:
– number of queues
– scheduling algorithm for each queue
– method to upgrade a process
– method to demote a process
– method to determine which queue a process will enter first
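A hedged two-level sketch of these parameters (the queue count, quanta, and demotion rule are hypothetical choices; all processes arrive at t=0 and a running process is not preempted mid-quantum):

```python
from collections import deque

def mlfq(bursts, quanta=(2, 4)):
    """Return finish times under a two-level feedback queue.
    A process that exhausts its quantum without finishing is demoted."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    queues = [deque(range(len(bursts))), deque()]  # queue 0 has priority
    t = 0
    while queues[0] or queues[1]:
        level = 0 if queues[0] else 1
        i = queues[level].popleft()
        run = min(quanta[level], remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] == 0:
            finish[i] = t
        else:
            queues[1].append(i)  # demote (bottom queue: stay there)
    return finish

print(mlfq([3, 1]))  # the 3-unit burst is demoted after its first quantum
```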
Multilevel Feedback Queues
[Figure: example multilevel feedback queue configuration]
Thread Scheduling
The thread library schedules user-level threads to run on an available LWP
– Known as process-contention scope (PCS) since scheduling competition is within the process
Kernel threads are scheduled with system-contention scope (SCS) – competition among all threads in the system
The API may allow specifying either PCS or SCS during thread creation
Fair-Share Scheduling
Can be implemented, e.g., as a variation of multilevel queues with priority recomputation
– application runs as a collection of processes (threads)
– concern: the performance of the application, user-groups, ... (i.e., a group of processes/threads)
– scheduling decisions are based on process sets rather than individual processes
Real-Time Scheduling
Real-Time Systems
React to events which occur in "real time"; the process must be able to keep up, e.g.
– control of laboratory experiments, robotics, air traffic control, drive-by-wire systems, tele/data-communications, military command and control systems
Correctness depends not only on the logical result of the computation but also on the time at which the results are produced, i.e., tasks or processes come with a deadline (for starting or completion)
Requirements may be hard or soft
Periodic Real-Time Tasks: Timing Diagram
A movie may consist of several files
Timing requirements may be different for each movie (or other process that requires time guarantees)
Scheduling in Real-Time Systems
Schedulable real-time system:
– m periodic events
– event i occurs within period P_i and requires C_i seconds
Then the load can only be handled if
Utilization = Σ_{i=1}^{m} C_i / P_i ≤ 1
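The condition above is a one-liner to check. A sketch (the task sets are illustrative, not from the slides):

```python
def schedulable(tasks):
    """tasks: list of (C_i, P_i) pairs. The load can only be handled
    if the total utilization sum(C_i / P_i) does not exceed 1."""
    return sum(c / p for c, p in tasks) <= 1

print(schedulable([(1, 4), (2, 5), (1, 10)]))  # U = 0.25 + 0.40 + 0.10 = 0.75
print(schedulable([(2, 4), (3, 5)]))           # U = 0.50 + 0.60 = 1.10, too much
```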
Scheduling with Deadlines: Earliest Deadline First
A set of tasks with deadlines is schedulable (i.e., can be executed in a way that no process misses its deadline) iff the EDF schedule is feasible
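For periodic tasks with deadline = period, feasibility can be checked by a unit-step simulation that always runs the ready job with the earliest deadline. A hedged sketch (the task sets are illustrative; a finite horizon stands in for the full hyperperiod analysis):

```python
def edf_feasible(tasks, horizon):
    """tasks: list of (C, P) with deadline = period, first release at t=0.
    Simulate EDF in unit steps; return False if any job misses its deadline."""
    jobs = []  # active jobs as [deadline, remaining]
    for t in range(horizon):
        for c, p in tasks:
            if t % p == 0:
                jobs.append([t + p, c])        # release a new job
        if jobs:
            job = min(jobs)                    # earliest deadline first
            job[1] -= 1
            if job[1] == 0:
                jobs.remove(job)
        if any(rem > 0 and d <= t + 1 for d, rem in jobs):
            return False                       # deadline reached with work left
    return True

print(edf_feasible([(1, 4), (2, 5)], 20))  # U = 0.65: feasible
print(edf_feasible([(3, 4), (2, 5)], 20))  # U = 1.15: a deadline is missed
```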
Rate Monotonic Scheduling
Assigns priorities to tasks on the basis of their periods (the shorter the period, the higher the priority)
EDF or RMS? (1)
EDF or RMS? (2)
Another example of real-time scheduling with RMS and EDF
EDF or RMS? (3)
RMS is only guaranteed to work up to a utilization bound of about 0.7:
Σ_{i=1}^{m} C_i / P_i ≤ m (2^{1/m} − 1), which approaches ln 2 ≈ 0.7 as m grows
– (recall: for EDF the bound is up to 1)
Still, RMS is widely used:
– main reason: stability is easier to meet with RMS; priorities are static, hence, under a transient period with deadline misses, critical tasks can be "saved" by being assigned higher (static) priorities
– it is OK for combinations of hard and soft RT tasks
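The RMS bound is straightforward to evaluate. A sketch (the task sets are illustrative):

```python
def rms_guaranteed(tasks):
    """Sufficient (not necessary) RMS test: U <= m * (2**(1/m) - 1).
    tasks: list of (C_i, P_i). Passing guarantees RMS schedulability;
    failing is inconclusive."""
    m = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    return utilization <= m * (2 ** (1 / m) - 1)

print(rms_guaranteed([(1, 4), (1, 5)]))  # U = 0.45, bound ~0.83 for m = 2
print(rms_guaranteed([(2, 4), (2, 5)]))  # U = 0.90 exceeds the bound
```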
Multiprocessor Scheduling
Bus-based shared-memory multiprocessors are not/hardly scalable
Possible solutions:
– reduce bus/network traffic by caching
– non-uniform memory access (NUMA) behaviour: the cache controller/MMU determines whether a reference is local or remote
Hyperthreading: HW gives the image of multiple (logical) processors per physical processor
The OS can be oblivious, but will benefit from knowing that it runs on such HW
Multicore Processors
Reason for multicores: physical limitations; high clock rates cause significant heat dissipation. Instead, parallelize within the same chip!
In addition to operating system (OS) support, adjustments to existing software are required to maximize utilization of the computing resources provided by multi-core processors.
Intel Core 2 dual core processor, with CPU-local Level 1 caches + shared Level 2 cache
Virtual machine approach again in focus
(cf. www.microsoft.com/licensing/highlights/multicore.mspx)
OS Design Issues (1): Who executes the OS/scheduler(s)?
Master/slave architecture: key kernel functions always run on a particular processor
– Master is responsible for scheduling; a slave sends service requests to the master
– Disadvantages: failure of the master brings down the whole system; the master can become a performance bottleneck
Peer architecture:
– Each processor does self-scheduling
– New issues for the operating system
[Figure: bus-based multiprocessor organizations]
Each CPU has its own operating system
Symmetric Multiprocessors
[Figure: SMP multiprocessor model]
Recall: Tightly Coupled Multiprocessing (SMPs)
– Processors share main memory
– Controlled by operating system
Different degrees of parallelism
– Independent and Coarse-Grained Parallelism
– Medium-Grained Parallelism
– Fine-Grained Parallelism
Design Issues (2): Assignment of Processes to Processors
Per-processor ready-queues vs. global ready-queue
Per-processor queues:
– less overhead
– a processor could be idle while another processor has a backlog
Global queue:
– can become a bottleneck
– task migration is not cheap (cf. NUMA and scheduling)
Processor affinity: a process has affinity for the processor on which it is currently running
– soft affinity
– hard affinity
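On Linux, hard affinity can be requested from user space. A sketch using Python's wrapper around the `sched_setaffinity` system call (guarded, since the call is Linux-specific):

```python
import os

# Hard affinity sketch: pin the current process to a single CPU.
if hasattr(os, "sched_getaffinity"):          # Linux-only API
    allowed = os.sched_getaffinity(0)         # CPUs this process may run on
    print("eligible CPUs:", sorted(allowed))
    os.sched_setaffinity(0, {min(allowed)})   # pin to one CPU
    print("pinned to:", sorted(os.sched_getaffinity(0)))
else:
    print("sched_setaffinity not available on this platform")
```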
Multiprocessor Scheduling: per-processor or per-partition RQ
– multiple threads at the same time across multiple CPUs
Multiprocessor Scheduling: Load Sharing / Global Ready Queue
– note the use of a single data structure for scheduling
Multiprocessor Scheduling, Load Sharing: a problem
Problem with communication between two threads:
– both belong to process A
– both running out of phase
Design Issues (3): Multiprogramming on processors?
Experience shows:
– Threads running on separate processors (to the extent of dedicating a processor to a thread) yields dramatic gains in performance
– Allocating processors to threads ~ allocating pages to processes (can use working set model?)
– The specific scheduling discipline is less important with more than one processor
Gang Scheduling
1. Groups of related threads are scheduled as a unit (a gang)
2. All members of a gang run simultaneously
3. All gang members start and end their time slices together
Gang Scheduling: another option
Multiprocessor Thread Scheduling: Dynamic Scheduling
Number of threads in a process can be altered during the course of execution
– the application exposes its parallelism
– the OS adjusts the load to improve use, e.g., by having the application adjust its number of threads
Solution by architecture: hyperthreading. Needs OS awareness, though, to get the corresponding efficiency.
Summary: Multiprocessor Thread Scheduling
Load sharing: processors/threads are not assigned to particular processors
– needs a central queue; may be a bottleneck
– a preempted thread is unlikely to resume on the same processor; cache use is less efficient
Gang scheduling: assigns threads to particular processors (simultaneous scheduling of the threads that make up a process)
– may waste processor cycles when part of the application is not running (due to synchronization)
Dedicated processor assignment (no multiprogramming of processors)
Kernel preemptible by RT tasks in multiprocessors (unless interrupts are disabled); à la multilevel feedback queues
Two priority ranges: time sharing and real-time
One queue per processor/core
List of Tasks Indexed According to Priorities