Transcript Document
Kaplan University
IT320
OPERATING SYSTEM CONCEPTS
Unit 6: Processor Scheduling
September 2012
Upcoming Topics
Unit 6: Processor Scheduling
Unit 7: File Management
Unit 8: Computer Security Risks & Data Protection
Unit 9: Distributed Computing and Networking
Unit 10: Final Project (due Tuesday, October 30 by 11:59 pm ET)
Unit 6 Overview
Readings
Discussion Questions
Review Unit 6 Assignments
NOTE: You have 2 assignments this week
Submit each to a separate dropbox
Lecture on Processor Scheduling
Preview of Final Project
Unit 6: Reading & Assignments
Textbook Reading
Chapter 9 – Uniprocessor Scheduling
Chapter 10 – Multiprocessor & Real-Time Scheduling
Start with chapter summaries first!
Web Articles Reading
2 Discussion Questions
2 assignments – both papers
Unit 6 - Discussion Question 1
Run the simulation found at the link below, then discuss your experience with the simulation, your observations, and your thoughts on the different scheduling algorithms. Which is the most efficient and why?
Use the following URL to run the simulations:
http://courses.cs.vt.edu/~csonline/OS/Lessons/Processes/index.html
Unit 6 – Discussion Question 2
You are the designer of a multi-core processor system.
Based on your knowledge of scheduling methods, which would you recommend for a highly multitasking system, and why?
Unit 6: Assignment 1
Compare the four processor scheduling algorithms:
First Come First Served (FCFS)
Round Robin (RR)
Shortest Process Next (SPN)
Shortest Remaining Time (SRT)
Write a 2-3 page paper defining and comparing all four algorithms.
Provide an assessment of when you should use each type of algorithm, or whether some should never be used, and why.
You can use the results from the simulations to supplement your paper.
Unit 6: Writing Assignment 2
The Rate Monotonic Scheduling (RMS) algorithm is said to resolve multitasking scheduling conflicts for periodic tasks.
Write a 1-page summary explaining how RMS works and how it resolves scheduling conflicts.
Unit 6: Grading Rubric
Assignment 1 (50 points possible)
15 pts – Explored the concept of processor scheduling
20 pts – Examined and compared scheduling algorithms, and provided an assessment of when each algorithm should be used
15 pts – Used APA format, included a title page, cited references correctly, and at least 2 pages in length
Assignment 2 (20 points possible)
5 pts – Used APA format, included a title page, cited references correctly, and at least 1 full page in length
10 pts – Included a detailed description of RMS
5 pts – Demonstrated superior organization; well ordered, logical, and unified
Chapter 9 – Uniprocessor Scheduling
Processor Scheduling
The purpose of processor scheduling is "to assign processes to be executed by the processor or processors over time, in a way that meets system objectives, such as response time, throughput, and processor efficiency" (Stallings, 2009, p. 406).
Scheduling Criteria (p. 411)
User Oriented
Turnaround Time
Response Time
Deadlines
Predictability
System Oriented
Throughput
Processor Utilization
Fairness
Enforcing Priorities
Balancing Resources
Scheduling Policies
First Come, First Served (FCFS)
Round Robin
Shortest Process Next (SPN)
Shortest Remaining Time (SRT)
Highest Response Ratio Next
Feedback
Unit 6 Paper
Focus on these scheduling policies:
First Come, First Served (FCFS) (same as FIFO)
Round Robin
Shortest Process Next (SPN)
Shortest Remaining Time (SRT)
Textbook, p. 413
Simulations will help you visualize the process
Also review p. 415 (overview of all policies)
First-Come-First-Served
Each process joins the Ready queue
When the current process ceases to execute, the process that has been in the Ready queue the longest is selected
First-Come-First-Served
A short process may have to wait a very long time before it can execute
Favors CPU-bound processes
I/O-bound processes have to wait until the CPU-bound process completes
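To make this concrete, here is a minimal FCFS sketch in Python (not from the textbook or the simulation; the process names, arrival times, and service times are invented for illustration):

```python
# Minimal FCFS sketch: processes run to completion in arrival order.
# Each tuple is (name, arrival_time, service_time); values are illustrative.
processes = [("A", 0, 3), ("B", 1, 6), ("C", 2, 1)]

clock = 0
for name, arrival, service in sorted(processes, key=lambda p: p[1]):
    start = max(clock, arrival)      # wait for the CPU to become free
    finish = start + service         # no preemption: run to completion
    print(f"{name}: waited {start - arrival}, turnaround {finish - arrival}")
    clock = finish
```

Notice that the short process C sits behind the long process B, which is exactly the weakness described above.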
Round Robin
Uses preemption based on a clock
Also known as time slicing, because each process is given a slice of time before being preempted
Round Robin
Clock interrupt is generated at periodic intervals
When an interrupt occurs, the currently running process is placed in the ready queue
Next ready job is selected
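A rough round-robin sketch in Python, assuming a fixed quantum of 2 time units (the quantum and the job sizes are invented):

```python
from collections import deque

# Round-robin sketch: each process gets at most one quantum, then is
# preempted and placed at the back of the ready queue if work remains.
quantum = 2
ready = deque([("A", 5), ("B", 3), ("C", 1)])   # (name, remaining service time)

clock = 0
while ready:
    name, remaining = ready.popleft()
    run = min(quantum, remaining)
    clock += run
    if remaining > run:
        ready.append((name, remaining - run))   # preempted: back of the queue
    else:
        print(f"{name} finishes at t={clock}")
```

A smaller quantum improves response time for short jobs but adds context-switch overhead.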
Shortest Process Next
Nonpreemptive policy
Process with the shortest expected processing time is selected next
Short processes jump ahead of longer processes
Shortest Process Next
Predictability of longer processes is reduced
If the estimated time for a process is not correct, the operating system may abort it
Possibility of starvation for longer processes
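A minimal SPN sketch in Python, assuming the expected service times are already known (they are invented here; in practice the OS must estimate them, which is where the problems above come from):

```python
# SPN sketch: whenever the CPU is free, pick the waiting process with the
# shortest expected service time; it then runs to completion (nonpreemptive).
waiting = [("A", 8), ("B", 2), ("C", 4)]   # (name, expected service time)

clock = 0
while waiting:
    waiting.sort(key=lambda p: p[1])       # shortest expected time first
    name, service = waiting.pop(0)
    clock += service
    print(f"{name} finishes at t={clock}")
```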
Shortest Remaining Time
Preemptive version of shortest process next policy
Must estimate processing time and choose the process with the shortest remaining time; the running process is preempted when a new process with a shorter remaining time arrives
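A sketch of SRT in Python, assuming known arrival and service times (all values invented). The difference from SPN is that an arriving process can preempt the one that is currently running:

```python
import heapq

# SRT sketch: preemptive SPN. Whenever a process arrives, the scheduler
# re-checks which ready process has the least remaining time and runs it.
arrivals = [(0, "A", 7), (2, "B", 4), (3, "C", 1)]   # (arrival, name, service)
ready = []        # min-heap of (remaining_time, name)
clock, i = 0, 0

while i < len(arrivals) or ready:
    if not ready:                                    # CPU idle: jump to next arrival
        clock = max(clock, arrivals[i][0])
    while i < len(arrivals) and arrivals[i][0] <= clock:
        _, name, service = arrivals[i]               # admit newly arrived processes
        heapq.heappush(ready, (service, name))
        i += 1
    remaining, name = heapq.heappop(ready)
    # Run until the next arrival or until this process finishes, whichever is first.
    next_arrival = arrivals[i][0] if i < len(arrivals) else float("inf")
    run = min(remaining, next_arrival - clock)
    clock += run
    if remaining > run:
        heapq.heappush(ready, (remaining - run, name))
    else:
        print(f"{name} finishes at t={clock}")
```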
Highest Response Ratio Next
Choose the next process with the greatest response ratio, where the ratio is (time spent waiting + expected service time) / expected service time
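As a small illustration (values invented), the scheduler computes the ratio for every waiting process and dispatches the one with the largest value:

```python
# HRRN sketch: the ratio rewards processes that have waited a long time,
# so short jobs still go quickly but long jobs cannot starve indefinitely.
def response_ratio(waited, expected_service):
    return (waited + expected_service) / expected_service

candidates = [("A", 10, 8), ("B", 4, 2), ("C", 20, 12)]  # (name, time waited, expected service)
chosen = max(candidates, key=lambda p: response_ratio(p[1], p[2]))
print("dispatch", chosen[0])   # B wins: (4 + 2) / 2 = 3.0
```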
Feedback Scheduling
Penalize jobs that have been running longer
Don't know the remaining time a process needs to execute
Feedback Performance
Variations exist; the simple version preempts periodically, similar to round robin
But this can lead to starvation
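A simple multilevel feedback sketch in Python (the number of queues, the quantum, and the job sizes are invented): a job that uses up its time slice is demoted to a lower-priority queue, which is how longer-running jobs get penalized:

```python
from collections import deque

# Feedback sketch: queue 0 is the highest priority. A preempted process is
# demoted one level, so jobs that keep running drift to the bottom queue.
quantum = 2
queues = [deque(), deque(), deque()]
queues[0].extend([("A", 5), ("B", 1)])          # (name, remaining service time)

clock = 0
while any(queues):
    level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
    name, remaining = queues[level].popleft()
    run = min(quantum, remaining)
    clock += run
    if remaining > run:
        queues[min(level + 1, len(queues) - 1)].append((name, remaining - run))
    else:
        print(f"{name} finishes at t={clock}")
```

If new short jobs keep arriving in the top queue, a job stuck in the bottom queue may never run, which is the starvation problem noted above.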
Chapter 10: Multiprocessor Scheduling
Multiprocessor Scheduling
What do I mean by “multiprocessor”?
Can you name some chip examples?
Multiprocessor Scheduling
Multiprocessors can be classified as one of the following:
Loosely coupled or distributed multiprocessors (also known as clusters) (Chapter 16)
Functionally specialized processors (Chapter 11)
Tightly coupled multiprocessors (Chapter 10)
Scheduling Design Issues
Scheduling on a multiprocessor involves three interrelated issues:
1. Assignment of processes to processors
2. Use of multiprogramming on individual processors
3. Actual dispatching of a process
The approach taken will depend on the degree of granularity of applications and the number of processors available
Assignment of Processes to Processors
Assuming all processors are equal, it is simplest to treat processors as a pooled resource and assign processes to processors on demand.
Should the assignment be static or dynamic, though?
Static Assignment
Permanently assign each process to a processor
Dedicate a short-term queue to each processor
Less overhead
Allows the use of ‘group’ or ‘gang’ scheduling (see later)
But may leave a processor idle while others have a backlog
Solution: use a common queue
Dynamic Assignment
Threads are moved from a queue for one processor to a queue for another processor
Process Scheduling
Usually processes are not dedicated to processors
A single queue is used for all processes
Or multiple queues are used for priorities
All queues feed the common pool of processors
The specific scheduling discipline is less important with two or more processors than with one
Thread Scheduling
Threads execute separately from the rest of the process
An application can be a set of threads that cooperate and execute concurrently in the same address space
Dramatic gains in performance are possible in multiprocessor systems compared to running in uniprocessor systems
Multiprocessor Approaches
Load Sharing
Global queue of threads
Each processor selects threads when idle
Gang Scheduling
Set of related threads is scheduled to run on a set of processors at the same time
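A toy load-sharing sketch in Python (the worker count and task list are invented): a single global queue feeds whichever processor is idle, which is what distinguishes load sharing from static assignment:

```python
import queue
import threading

# Load-sharing sketch: one global ready queue shared by all processors.
# Each idle processor (modeled here as a thread) pulls the next ready task.
global_queue = queue.Queue()
for task_id in range(6):
    global_queue.put(task_id)

def processor(cpu_id):
    while True:
        try:
            task = global_queue.get_nowait()
        except queue.Empty:
            return                                  # nothing left to run
        print(f"CPU {cpu_id} runs task {task}")

workers = [threading.Thread(target=processor, args=(i,)) for i in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```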
Multiprocessor Approaches
Dedicated Processor Assignment
The opposite of load sharing
Each program is allocated a number of processors equal to its number of threads
Dynamic Scheduling
The number of threads in a process can be altered during execution
Rate Monotonic Scheduling
Rate Monotonic Scheduling (RMS)
RMS assigns priorities to periodic tasks based on their periods
The highest priority goes to the task with the shortest period (highest rate)
The second highest priority goes to the task with the second shortest period
Chapter 10 – pp. 476-478
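A hedged sketch of the RMS idea in Python (the task set is invented): priorities follow the task periods, and the classic utilization bound n(2^(1/n) - 1) gives a quick schedulability check:

```python
# RMS sketch: priority is fixed and depends only on the task's period --
# the shorter the period (the higher the rate), the higher the priority.
tasks = [("engine_control", 0.020, 0.005),    # (name, period s, compute time s) -- invented values
         ("display_update", 0.100, 0.020)]

for rank, (name, period, compute) in enumerate(sorted(tasks, key=lambda t: t[1]), start=1):
    print(f"priority {rank}: {name} (period {period * 1000:.0f} ms)")

# Classic RMS schedulability test: total utilization <= n * (2**(1/n) - 1).
n = len(tasks)
utilization = sum(compute / period for _, period, compute in tasks)
bound = n * (2 ** (1 / n) - 1)
print(f"utilization {utilization:.2f} vs bound {bound:.2f} -> schedulable: {utilization <= bound}")
```

If total utilization stays under the bound, the periodic task set is guaranteed to be schedulable under RMS; your Assignment 2 summary should explain this idea in prose.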
Additional Reading Sources
Intel.com
http://software.intel.com/en-us/articles/operating-systems-issues/
http://www.intel.com/consumer/products/processors/core-family.htm
RMS Web Reading
Introduction to Rate Monotonic Scheduling
http://www.eetimes.com/discussion/beginner-s-corner/4023927/Introduction-to-RateMonotonic-Scheduling
What Every Engineer needs to know about Rate Monotonic Scheduling: A Tutorial
http://www.imd.uni-rostock.de/ma/gol/bsys/pdf/whatevereyeng.pdfa
(focus on first several pages of this article & ignore formulas)
Scheduling Algorithms
http://wiki.osdev.org/Scheduling_Algorithms
Preview – Final Project
Final Project
Due Tuesday, October 30
Final Project is worth 100 points
Write a 5-10 page essay explaining how a mainstream modern (Linux or Windows) operating system is designed to integrate all components of the operating system.
At least 3 outside references
Include topics on the next page
Final Project
The following list of topics is a starting point for your essay.
You may include other topics if you feel they are important.
Processes and threads
Memory management
Scheduling (including deadlock prevention)
File management
Input and output devices
Security issues (discuss current malware threats & prevention techniques)
Data protection (RAID & Clusters)
Any Questions?
Pam Van Hook
Remember: 2 assignments due this week
Start work on Final Project
Email: [email protected]