
System Structuring with Threads
Example: A Transcoding Web Proxy Appliance
[diagram: clients — transcoding proxy — servers]

“Proxy”
Interposed between Web (HTTP) clients and servers.
Masquerade as (represent) the server to the client.
Masquerade as (represent) the client to the server.

Cache
Store fetched objects (Web pages) on local disk.
Reduce network overhead if objects are fetched again.

Transcoding
“Distill” images to size/resolution that’s right for the client.
Encrypt/decrypt as needed for security on the Internet.

Appliance
Serves one purpose only; no general-purpose OS.
Using Threads to Structure the Proxy Server
[diagram: thread structure of the proxy — a network driver thread feeds HTTP request handler threads with distill and encrypt workers; an object cache manager drives a disk driver thread; long-term periodic threads (cache scrubber, stats gatherer) and a logging thread run in the background. The scrubber “scrubs” the cache for expired (old) objects.]
worker threads for specific objects
  distiller: compresses/shrinks images
  encrypter/decrypter
device controller threads
  one thread for each disk
  one thread for the network interface
server threads
  request handlers
logging thread
Thread Family Tree for the Proxy Server
[diagram: thread family tree — a main thread forks the network driver, the HTTP request handlers (which fork distill/encrypt workers), the file/cache manager (with its disk driver), and the scrubber, stats, and logging threads]
main thread; waiting for child termination
periodic threads; waiting for timer to fire
server threads; waiting on queues of data messages
or pending requests (e.g., device interrupts)
worker threads; waiting for data to be produced/consumed
Periodic Threads and Timers
The scrubber and stats-gathering threads must wake up
periodically to do their work.
These “background” threads are often called daemons or sleepers.
scrubber/stats thread body:

while (systemActive) {
    do my work;
    alarm->Pause(10000);
}

AlarmClock::Pause(int howlong);  /* called by waiting threads */
Puts the calling thread to sleep.
Maintains a collection of threads waiting for time to pass.

AlarmClock::Tick();  /* called by the clock interrupt handler */
Wakes up any waiting threads whose wait times have elapsed.
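The AlarmClock interface above can be sketched in Python (the slide code is pseudocode, so the names here are stand-ins): `pause` plays the role of AlarmClock::Pause and `tick` the role of AlarmClock::Tick, driven by a logical tick counter rather than a real clock interrupt.

```python
import heapq
import threading

class AlarmClock:
    """Keeps a collection of sleeping threads; tick() is a software
    stand-in for the clock interrupt handler."""
    def __init__(self):
        self._lock = threading.Lock()
        self._now = 0          # logical time, in ticks
        self._seq = 0          # tie-breaker so the heap never compares Events
        self._waiters = []     # heap of (wake_time, seq, event)

    def pause(self, howlong):
        """Called by waiting threads: sleep for howlong ticks."""
        wakeup = threading.Event()
        with self._lock:
            heapq.heappush(self._waiters,
                           (self._now + howlong, self._seq, wakeup))
            self._seq += 1
        wakeup.wait()          # block until tick() releases us

    def tick(self):
        """Called on each clock tick: wake threads whose time has elapsed."""
        with self._lock:
            self._now += 1
            while self._waiters and self._waiters[0][0] <= self._now:
                _, _, wakeup = heapq.heappop(self._waiters)
                wakeup.set()
```

A scrubber daemon would then loop: do its work, then call `alarm.pause(10000)` and sleep until enough ticks have passed.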
Interfacing with the Network
[diagram: network interface — the sending and receiving paths pass through the TCP/IP protocol stack and the NIC device driver, sharing a buffer pool in host memory; the Network Interface Card sits on the I/O bus, with NetTx and NetRcv queues feeding the network link]
Network Reception
TCP/IP reception thread:

while (systemActive) {
    packetArrival->P();
    disable interrupts;
    pkt = GetRcvPacket();
    enable interrupts;
    HandleRcvPacket(pkt);
}

This example illustrates use of a semaphore by an interrupt handler to pass incoming data to waiting threads: the receive interrupt handler signals packetArrival->V(), and the TCP/IP reception thread blocks in packetArrival->P() until a packet is available, then passes the data up toward the HTTP request handler.
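A Python sketch of the same hand-off (names hypothetical; the "interrupt handler" here is just another caller): the producer does the V() after queuing a packet, and the reception loop does the P() before dequeuing.

```python
import threading
from collections import deque

packet_queue = deque()                    # filled at "interrupt" time
packet_arrival = threading.Semaphore(0)   # counts queued packets
handled = []

def receive_interrupt(pkt):
    """Stand-in for the receive interrupt handler."""
    packet_queue.append(pkt)
    packet_arrival.release()              # V(): wake a waiting thread

def reception_loop(npackets):
    """Stand-in for the TCP/IP reception thread."""
    for _ in range(npackets):
        packet_arrival.acquire()          # P(): wait until a packet exists
        # the slide brackets the dequeue with disable/enable interrupts;
        # in user space a lock would play that role
        pkt = packet_queue.popleft()
        handled.append(pkt)               # "HandleRcvPacket(pkt)"
```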
Inter-Thread Messaging with Send/Receive
[diagram: network receive → HTTP request handler ⇄ file/cache manager → network send]

file/cache manager:

get request for object from thread;
while (more data in object) {
    read data from object;
    thread->send(data);
}

HTTP request handler:

while (systemActive) {
    object = GetNextClientRequest();
    find object in cache or Web server;
    while (more data in object) {
        currentThread->receive(data);
        transmit data to client;
    }
}
This example illustrates use of blocking send/receive primitives to pass a stream of
messages or commands to a specific thread, connection, or “port”.
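The thread->send / currentThread->receive pair can be modeled with a per-thread mailbox. A Python sketch (the Mailbox class and end-of-object marker are assumptions, not from the slides); a capacity-1 mailbox makes send block until the receiver catches up, as a blocking primitive should:

```python
import queue
import threading

class Mailbox:
    """Per-thread message queue: send blocks when the box is full,
    receive blocks when it is empty."""
    def __init__(self, capacity=1):
        self._q = queue.Queue(maxsize=capacity)

    def send(self, msg):
        self._q.put(msg)       # blocks if the receiver is behind

    def receive(self):
        return self._q.get()   # blocks until a message arrives

def cache_manager(handler_box, object_data):
    """Streams an object's data to the request handler, as in the
    file/cache manager loop on the slide."""
    for chunk in object_data:
        handler_box.send(chunk)
    handler_box.send(None)     # end-of-object marker (an assumption)

received = []

def request_handler(my_box):
    """Receives the stream and 'transmits' it to the client."""
    while True:
        chunk = my_box.receive()
        if chunk is None:
            break
        received.append(chunk)
```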
Request/Response with Send/Receive
HTTP request handler:

Thread* cache;
....
cache->send(request);
response = currentThread->receive();
...

file/cache manager:

while (systemActive) {
    currentThread->receive(request);
    ...
    requester->send(response);
}
The Need for Multiple Service Threads
Each new request will involve a stream of messages passing through
dedicated server thread(s) in each service module.
But what about new requests flowing into the system?
A system with single-threaded service modules could only handle one
request at a time, even if most time is spent waiting for slow devices.
[diagram: requests flow from the network through the HTTP request handler to the file/cache manager]

Solution: multithreaded service modules.
Using Ports for Multithreaded Servers
HTTP request handler:

Port* cachePort;
....
cachePort->send(request);
response = currentThread->receive();
...

file/cache manager (one of several threads receiving on the port):

while (systemActive) {
    cachePort->receive(request);
    ...
    requester->send(response);
}
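A port is just a queue that several interchangeable server threads receive from. One Python sketch of the idea (names hypothetical), with each requester supplying its own reply mailbox so the response finds its way back to the right thread:

```python
import queue
import threading

cache_port = queue.Queue()       # the shared request port

def cache_server():
    """One of several interchangeable file/cache manager threads."""
    while True:
        request = cache_port.get()
        if request is None:      # shutdown marker (an assumption)
            break
        reply_box, obj = request
        reply_box.put(f"data:{obj}")   # requester->send(response)

# start a small work crew on the same port
crew = [threading.Thread(target=cache_server) for _ in range(3)]
for t in crew:
    t.start()

def fetch(obj):
    """HTTP request handler side: send on the port, wait for the reply."""
    reply_box = queue.Queue()
    cache_port.put((reply_box, obj))      # cachePort->send(request)
    return reply_box.get()                # currentThread->receive()
```

Because any crew member may pick up any request, the system can serve several requests at once even while some threads block on slow devices.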
Producer/Consumer Pipes
char inbuffer[1024];
char outbuffer[1024];
int inbytes = 1;

while (inbytes != 0) {
    inbytes = input->read(inbuffer, 1024);
    outbytes = process data from inbuffer to outbuffer;
    output->write(outbuffer, outbytes);
}

[diagram: file/cache manager → pipe filter → network]
This example illustrates one important use of the producer/consumer bounded buffer in Lab #3.
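The filter loop can be modeled over a bounded buffer. A Python sketch (the Pipe class and EOF convention are assumptions; the bounded capacity is the point of the Lab #3 buffer):

```python
import queue
import threading

class Pipe:
    """Bounded buffer of data chunks; an empty bytes chunk marks EOF."""
    def __init__(self, capacity=4):
        self._q = queue.Queue(maxsize=capacity)

    def write(self, data):
        self._q.put(data)        # producer blocks when the buffer is full

    def read(self):
        return self._q.get()     # consumer blocks when the buffer is empty

def filter_stage(inpipe, outpipe):
    """One pipeline stage: read, process, write, until EOF."""
    while True:
        data = inpipe.read()
        if data == b"":
            outpipe.write(b"")   # propagate EOF downstream
            break
        outpipe.write(data.upper())   # "process data" (toy transform)
```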
Forking and Joining Workers
[diagram: HTTP handler forks two workers — input → distiller → pipe → decrypter → output]

distiller = new Thread();
distiller->Fork(Distill());
decrypter = new Thread();
decrypter->Fork(Decrypt());
pipe = new Pipe();

/* give workers their input */
distiller->Send(input);
decrypter->Send(pipe);

/* give workers their output */
distiller->Send(pipe);
decrypter->Send(output);

/* wait for workers to finish */
distiller->Join();
decrypter->Join();
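The fork/join sequence above, sketched in Python with plain queues standing in for the pipe and the Distill/Decrypt bodies reduced to toy transforms (all names hypothetical):

```python
import queue
import threading

def distill(inq, outq):
    """Worker: consume input, produce 'distilled' data until end marker."""
    while True:
        item = inq.get()
        if item is None:
            outq.put(None)        # forward the end marker downstream
            break
        outq.put(f"distilled({item})")

def decrypt(inq, outq):
    """Worker: consume the pipe, produce 'decrypted' output."""
    while True:
        item = inq.get()
        if item is None:
            break
        outq.put(f"decrypted({item})")

input_q, pipe, output_q = queue.Queue(), queue.Queue(), queue.Queue()

# fork the workers, wired input -> distiller -> pipe -> decrypter -> output
distiller = threading.Thread(target=distill, args=(input_q, pipe))
decrypter = threading.Thread(target=decrypt, args=(pipe, output_q))
distiller.start()
decrypter.start()

# give the workers their input
input_q.put("img")
input_q.put(None)          # end-of-input marker

# wait for the workers to finish
distiller.join()
decrypter.join()
```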
A Serializer for Logging
[diagram: multiple threads → log record queue → logging thread → disk driver]
Multiple threads enqueue log records on a single queue without blocking for log write
completion; a single logging thread writes the records into a stream, so log records are
not interleaved.
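A minimal Python sketch of the serializer (names hypothetical): producers enqueue whole records without waiting for the write, and a single logger thread drains the queue, so records never interleave.

```python
import queue
import threading

log_queue = queue.Queue()
log_stream = []                  # stands in for the log file

def logger():
    """The single logging thread: writes records one at a time."""
    while True:
        record = log_queue.get()
        if record is None:       # shutdown marker (an assumption)
            break
        log_stream.append(record)   # each record is written whole

logger_thread = threading.Thread(target=logger)
logger_thread.start()

def log(record):
    """Called by any thread; never blocks on the log write itself."""
    log_queue.put(record)
```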
Summary of “Paradigms” for Using Threads
• main thread or initiator
• sleepers or daemons (background threads)
• I/O service threads
listening on network or user interface
• server threads or Work Crews
waiting for requests on a message queue, work queue, or port
• filters or transformers
one stage of a pipeline processing a stream of bytes
• serializers
Threads vs. Events
Review: Thread-Structured Proxy Server
[diagram: thread family tree for the proxy, as before — the main thread forks the network driver, HTTP request handlers (with distill/encrypt workers), the file/cache manager (with its disk driver), and the scrubber, stats, and logging threads]
main thread; waiting for child termination
periodic threads; waiting for timer to fire
server threads; waiting on queues of data messages
or pending requests (e.g., device interrupts)
worker threads; waiting for data to be produced/consumed
Thread Priority
Many systems allow assignment of priority values to threads.
Each job in the ready pool has an associated priority value; the
scheduler favors jobs with higher priority values.
• Assigned priorities reflect external preferences for particular
users or tasks.
“All jobs are equal, but some jobs are more equal than others.”
• Example: running user interface threads (interactive) at
higher priority improves the responsiveness of the system.
• Example: Unix nice system call to lower priority of a task.
• Example: Urgent tasks in a real-time process control system.
Keeping Your Priorities Straight
Priorities must be handled carefully when there are
dependencies among tasks with different priorities.
• A task with priority P should never impede the progress of a
task with priority Q > P.
This is called priority inversion, and it is to be avoided.
• The basic solution is some form of priority inheritance.
When a task with priority Q waits on some resource, the holder
(with priority P) temporarily inherits priority Q if Q > P.
Inheritance may also be needed when tasks coordinate with IPC.
• Inheritance is useful to meet deadlines and preserve low-jitter execution, as well as to honor priorities.
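The inheritance rule can be illustrated with a toy, single-threaded model (not a real mutex — priorities here are just integers on task objects, and "waiting" is modeled by a False return):

```python
class Task:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority

class PIMutex:
    """Lock that boosts its holder to the priority of the
    highest-priority waiter (basic priority inheritance)."""
    def __init__(self):
        self.holder = None
        self._base = None        # holder's original priority

    def acquire(self, task):
        if self.holder is None:
            self.holder, self._base = task, task.priority
            return True
        # contended: the holder inherits the waiter's priority if higher,
        # so no medium-priority task can preempt it indefinitely
        if task.priority > self.holder.priority:
            self.holder.priority = task.priority
        return False             # caller must wait

    def release(self):
        self.holder.priority = self._base   # drop any inherited priority
        self.holder = None
```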
Multithreading: Pros and Cons
Multithreaded structure has many advantages...
Express different activities cleanly as independent thread bodies,
with appropriate priorities.
Activities succeed or fail independently.
It is easy to wait/sleep without affecting other activities: e.g., I/O
operations may be blocking.
Extends easily to multiprocessors.
...but it also has some disadvantages.
Requires support for threads or processes.
Requires more careful synchronization.
Imposes context-switching overhead.
May consume lots of space for stacks of blocked threads.
Alternative: Event-Driven Systems
Structure the code as a single thread that
responds to a series of events, each of which
carries enough state to determine what is
needed and “pick up where we left off”.
The thread polls for a new event whenever it
completes the previous one.
If handling some event requires waiting for
I/O to complete, the thread arranges for
another event to notify it of completion, and
keeps right on going, e.g., asynchronous
non-blocking I/O.
Question: in what order should events be
delivered?
while (TRUE) {
    event = GetNextEvent();
    switch (event) {
    case IncomingPacket:
        HandlePacket();
        break;
    case DiskCompletion:
        HandleDiskCompletion();
        break;
    case TimerExpired:
        RunPeriodicTasks();
        break;
    /* etc. etc. etc. */
    }
}
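The dispatch loop can be sketched in Python with a handler table in place of the switch (event names and record format are hypothetical):

```python
from collections import deque

events = deque()        # pending events, in delivery order
handled = []

def handle_packet(evt):
    handled.append(("packet", evt["data"]))

def handle_disk_completion(evt):
    handled.append(("disk", evt["data"]))

# the "switch": one handler per event type
handlers = {
    "IncomingPacket": handle_packet,
    "DiskCompletion": handle_disk_completion,
}

def event_loop():
    """Single thread: repeatedly get the next event and dispatch it;
    each event record carries the state needed to pick up where we left off."""
    while events:
        event = events.popleft()
        handlers[event["type"]](event)
```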
Example: Unix Select Syscall
A thread/process with multiple network connections or open
files can initiate nonblocking I/O on all of them.
The Unix select system call supports such a polling model:
• files are identified by file descriptors (open file numbers)
• pass a bitmask for which descriptors to query for readiness
• returns a bitmask of descriptors ready for reading/writing
• reads and/or writes on these descriptors will not block
Select has fundamental scaling
limitations in storing, passing,
and traversing the bitmaps.
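In Python, the same polling model is exposed through the select module; a small demonstration on a connected socket pair (the descriptor lists play the role of the bitmasks the slide describes):

```python
import select
import socket

# a connected pair of sockets stands in for two open descriptors
rsock, wsock = socket.socketpair()

# nothing written yet: a zero-timeout select reports the reader not ready
readable, _, _ = select.select([rsock], [], [], 0)
ready_before = bool(readable)

wsock.send(b"hello")

# now a read will not block, so select reports readiness
readable, _, _ = select.select([rsock], [], [], 0)
ready_after = bool(readable)
data = rsock.recv(1024)

rsock.close()
wsock.close()
```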
Event Notification with Upcalls
Problem: what if an event requires a more “immediate”
notification?
• What if a high-priority event occurs while we are executing
the handler for a low-priority event?
• What about exceptions relating to the handling of an event?
We need some way to preemptively “break in” to the
execution of a thread and notify it of events.
upcalls
example: NT Asynchronous Procedure Calls (APCs)
example: Unix signals
Preemptive event handling raises synchronization issues similar
to interrupt handling.
Example: Unix Signals
Signals notify processes of internal or external events.
• the Unix software equivalent of interrupts/exceptions
• only way to do something to a process “from the outside”
• Unix systems define a small set of signal types
Examples of signal generation:
• keyboard ctrl-c and ctrl-z signal the foreground process
• synchronous fault notifications, syscall errors
• asynchronous notifications from other processes via kill
• IPC events (SIGPIPE, SIGCHLD)
• alarm notifications
signal == “upcall”
Handling Unix Signals
1. Each signal type has a system-defined default action.
abort and dump core (SIGSEGV, SIGBUS, etc.)
ignore, stop, exit, continue
2. A process may choose to block (inhibit) or ignore some
signal types.
useful for synchronizing with signal handlers: inhibit signals
before executing code shared with the signal handler
3. The process may choose to catch some signal types by
specifying a (user mode) handler procedure.
system passes interrupted context to handler
handler may munge and/or return to interrupted context
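A small Python demonstration of catching a signal (POSIX only): install a user-mode handler for SIGUSR1, then deliver the signal to our own process via kill.

```python
import os
import signal

caught = []

def handler(signum, frame):
    """User-mode handler: the system passes the interrupted
    context (here, the frame object) along with the signal number."""
    caught.append(signum)

# catch SIGUSR1 instead of taking the default action
signal.signal(signal.SIGUSR1, handler)

# asynchronous notification "from the outside" via kill
os.kill(os.getpid(), signal.SIGUSR1)
```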
Summary
1. Threads are a useful tool for structuring complex systems.
Separate the code to handle concurrent activities that are
logically separate, with easy handling of priority.
Interaction primitives integrate synchronization, data transfer,
and possibly priority inheritance.
2. Many systems include an event handling mechanism.
Useful in conjunction with threads, or may be viewed as an
alternative to threads for structuring concurrent systems.
Examples: Unix signals, NT APCs, GetNextEvent()
3. Event-structured systems may require less direct handling
of concurrency.
But must synchronize with handlers if they are preemptive.