lecture11-sep22


Operating Systems
CMPSC 473
Processes (5)
September 22 2008 - Lecture 11
Instructor: Bhuvan Urgaonkar
Announcements
• Quiz 1 will be out tonight and due in a week
• Suggested reading: Chapter 4 of SGG
• If you want to do more work/learn more things
  – Get in touch with us
  – We can provide more work in your projects
    • Honors credits?
    • Honors thesis?
    • Just for fun?!
    • Impress me and get good letters when you apply to grad school?
Overview of Process-related Topics
• How a process is born
  – Parent/child relationship
  – fork, clone, …
• How it leads its life
  – Loaded: Later in the course
  – Executed
    • CPU scheduling
    • Context switching
• Where a process “lives”: Address space
  – OS maintains some info. for each process: PCB
  – Process = Address Space + PCB
• How processes request services from the OS
  – System calls
• How processes communicate
• Some variants of processes: LWPs and threads
• How processes die
[Status markers on the slide: Done, Done, Done, Partially done, Start today]
The notion of a thread
[Figure: a single-threaded process, showing its code, data, and files sections together with one set of registers, a stack, a heap, and a single thread]
• Roughly, a flow of execution that is a basic unit of CPU utilization
  – E.g., a process, a KCP
• Note: this is not showing an address space (Fig. 4.1 from SGG)
Multi-process Applications
• Many applications need to do multiple activities simultaneously
  – E.g., Web browser
    • Parse requested URL and find IP address from DNS server
    • Use system calls to send request to some Web server
    • Receive response
    • Assemble response and display it
  – Can you give another example?
• Approach #1: Write a multi-process application as follows (a fork/pipe sketch follows this slide):
  – Fork off multiple processes, each responsible for a certain “flow of execution”
    • Programmer’s choice/decision
  – Employ IPC mechanisms for these processes to communicate (coming up soon)
    • We already know about signals; how many have used pipes? Shared memory?
  – Employ synchronization (coming up in a few lectures)
• We would like these “flows of execution” (and not just the initiating process or the entire application) to be the basic unit across which the CPU is partitioned (schedulable entity)
  – Why?
  – What about resources other than the CPU? (Will discuss this again for VMM and I/O)
  – The OS design we have studied so far already achieves this
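As a concrete illustration of Approach #1 (my sketch, not from the slides), the fragment below forks one child per “flow of execution” and uses a pipe for IPC; the “network reception” role and the message contents are hypothetical stand-ins.

/* Sketch of Approach #1: one process per flow of execution, a pipe for IPC.
 * Illustrative only; error handling is abbreviated. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];                          /* pipe: child writes, parent reads */
    if (pipe(fd) == -1) { perror("pipe"); exit(1); }

    pid_t pid = fork();                 /* fork off a "network reception" process */
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {                     /* child: pretend to receive a response */
        close(fd[0]);
        const char *msg = "response bytes";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                       /* parent: "display" process reads the result */
    char buf[64];
    if (read(fd[0], buf, sizeof buf) > 0)
        printf("parent got: %s\n", buf);
    close(fd[0]);
    wait(NULL);                         /* reap the child */
    return 0;
}

Note that the two processes here have completely separate address spaces, which is exactly the redundancy the next slide points out.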
Approach #1: Writing a multi-process Application
[Figure: four separate processes in virtual memory – a URL parsing process, a network sending process, a network reception process, and a process that interprets the response, composes media together, and displays it on the browser screen; each has its own code, data, files, heap, registers, and stack]
• E.g., a Web browser
• What’s wrong with (or lacking in) this approach to programming?
  – Hint: Approach #1 has performance problems, although it is great for the programmer (why?)
• Potentially, a lot of redundancy in the code and data segments!
  – Virtual memory wastage => More contention for precious RAM => More work for the memory manager => Reduction in the computer’s throughput
Approach #2: Share code, data, files!
[Figure: a single copy of the code, data, files, and heap in virtual memory; each of the four processes (URL parsing, network sending, network reception, and the one that interprets the response, composes media together, and displays it on the browser screen) keeps its own registers and stack]
• E.g., a Web browser
• Share code, data, files (mmap’ed), via shared memory mechanisms (coming up); a small mmap/fork sketch follows this slide
  – Burden on the programmer
• Better yet, let the kernel or a user-level library handle sharing of these parts of the address spaces and let the programmer deal with synchronization issues
  – User-level and kernel-level threads
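By way of illustration (my sketch, not the slide’s code), one way to share data between related processes by hand is an anonymous shared mapping set up with mmap before fork; MAP_ANONYMOUS here is a Linux/BSD extension.

/* Sketch of Approach #2: parent and child share one data region via mmap.
 * Illustrative only; a real application would also arrange to share code
 * and open files. MAP_ANONYMOUS is a Linux/BSD extension. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void)
{
    /* Anonymous shared mapping: visible to both parent and child after fork. */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) { perror("mmap"); exit(1); }
    *shared = 0;

    pid_t pid = fork();
    if (pid == -1) { perror("fork"); exit(1); }

    if (pid == 0) {             /* child writes into the shared region */
        *shared = 42;
        _exit(0);
    }

    wait(NULL);                 /* parent sees the child's write */
    printf("shared value = %d\n", *shared);    /* prints 42 */
    munmap(shared, sizeof(int));
    return 0;
}

The burden the slide mentions is visible here: the programmer decides what goes into the shared region and must coordinate all access to it.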
Approach #3: User or kernel support to automatically share code, data, files!
[Figure: one address space in virtual memory with shared code, data, files, and heap, and four threads – URL parsing, network sending, network reception, and one that interprets the response, composes media together, and displays it on the browser screen – each thread with its own registers and stack]
• E.g., a Web browser
• Share code, data, files (mmap’ed), via shared memory mechanisms (coming up)
  – Burden on the programmer
• Better yet, let the kernel or a user-level library handle sharing of these parts of the address spaces and let the programmer deal only with synchronization issues (a short pthreads sketch follows this slide)
  – User-level and kernel-level threads
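A minimal pthreads sketch of Approach #3 (my illustration; build with -pthread): the global counter lives in the shared data segment, each thread gets its own stack and registers, and synchronization remains the programmer’s job.

/* Sketch of Approach #3: threads share globals/heap automatically;
 * each thread has its own stack. Build with: cc -pthread sketch.c */
#include <stdio.h>
#include <pthread.h>

int shared_counter = 0;                       /* shared data segment */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *flow(void *arg)                  /* one "flow of execution" */
{
    (void)arg;
    pthread_mutex_lock(&lock);                /* synchronization is still */
    shared_counter++;                         /* the programmer's job     */
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t[3];                           /* e.g., parse, send, receive */
    for (int i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, flow, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    printf("counter = %d\n", shared_counter); /* prints 3 */
    return 0;
}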
Multi-threading Models
• User-level thread libraries
  – E.g., the one provided with Project 1
  – Implementation: You are expected to gain this understanding as you work on Project 1
  – Pop quiz: Context-switch overhead is smaller. Why? (See the swapcontext sketch after this slide.)
  – What other overheads are reduced? Creation? Removal?
• Kernel-level threads
• There must exist some relationship between user threads and kernel threads
  – Why?
• Which is better?
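One way to see why user-level switches are cheap (my sketch, using the glibc ucontext API, which a user-level thread library could build on): the switch itself is just saving and restoring registers and the stack pointer in user space, with no trap into the kernel.

/* Sketch: a user-space "context switch" with ucontext (glibc).
 * Illustrative only; a real user-level thread library adds a scheduler,
 * many contexts, and per-thread state. */
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;

static void user_thread(void)
{
    printf("user-level thread running\n");
    swapcontext(&thread_ctx, &main_ctx);   /* "yield" back to main */
}

int main(void)
{
    char *stack = malloc(64 * 1024);       /* this thread's private stack */
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = 64 * 1024;
    thread_ctx.uc_link = &main_ctx;
    makecontext(&thread_ctx, user_thread, 0);

    printf("switching to user-level thread\n");
    swapcontext(&main_ctx, &thread_ctx);   /* user-space context switch */
    printf("back in main\n");
    free(stack);
    return 0;
}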
Multi-threading Models: Many-to-one Model
[Figure: many user threads mapped onto a single kernel thread (k)]
• Thread management done by a user library
• Context switching, creation, removal, etc. are efficient (if designed well)
• A blocking call blocks the entire process
• No parallelism on multiprocessors? Why?
• Example: the Green threads library on Solaris
Multi-threading Models: One-to-one Model
[Figure: each user thread mapped to its own kernel thread (k)]
• Each user-level thread mapped to one kernel-level thread
• Allows more concurrency
  – If one thread blocks, another ready thread can run
  – Can exploit parallelism on multiprocessors
• Popular: Linux, several Windows versions (NT, 2000, XP); a gettid sketch after this slide shows the 1:1 mapping on Linux
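On Linux the 1:1 mapping can be observed directly (my sketch; Linux-specific): each pthread is backed by its own kernel task, so every thread reports a different gettid() while sharing one getpid().

/* Sketch: each pthread on Linux has its own kernel thread id (gettid),
 * while all threads share the process id (getpid). Build with -pthread. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>
#include <sys/syscall.h>

static void *report(void *arg)
{
    (void)arg;
    printf("pid=%ld tid=%ld\n",
           (long)getpid(), (long)syscall(SYS_gettid));
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, report, NULL);
    report(NULL);                 /* main thread's ids */
    pthread_join(t, NULL);
    return 0;
}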
Multi-threading Models: Many-to-many Model
[Figure: several user threads multiplexed onto a smaller or equal number of kernel threads (k)]
• # of user-level threads >= # of kernel-level threads
• Best of both previous approaches?