
Lecture 6
Concurrent Java
EN 600.320/420
Instructor: Randal Burns
20 February 2012
Department of Computer Science, Johns Hopkins University
A Basic Concurrent Program

Objects/Classes
– Runnable
– Thread
Methods
– Runnable.run()
– Thread.start()
See SimpleConc.java
Lecture 3: Parallel Architectures
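SimpleConc.java is not reproduced here; a minimal sketch in its spirit (class and message names are mine) showing the two objects and two methods from the slide:

```java
// Minimal sketch, not the course's SimpleConc.java: wrap work in a
// Runnable, hand it to a Thread, and start() it.
public class SimpleConcSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() {                 // the thread's body
                System.out.println("worker running");
            }
        });
        t.start();                              // invokes run() on a new thread
        t.join();                               // wait for the worker to finish
        System.out.println("main done");
    }
}
```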
Thread versus Runnable

Multiple ways to create threads
– Implement the Runnable interface
– Inherit from/extend Thread
Runnable is an interface
– Allows for inheritance from other classes
Thread is a class
– Cannot extend Thread and another class, e.g., Applet
– Should only inherit when you want to extend the base class's functionality
Use Runnable: it's preferable
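The two creation styles side by side; this is my own sketch (class names are mine), not a course example:

```java
// Style 1: extend Thread (uses up the single superclass slot).
class ExtendsThread extends Thread {
    public void run() { System.out.println("via subclass"); }
}

// Style 2: implement Runnable (leaves the class free to extend anything).
class ImplementsRunnable implements Runnable {
    public void run() { System.out.println("via Runnable"); }
}

public class TwoStyles {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new ExtendsThread();
        Thread b = new Thread(new ImplementsRunnable());
        a.start(); a.join();
        b.start(); b.join();
    }
}
```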
Anonymous Objects?

Simple example uses an anonymous object
– No performance value
– Less code, but of little simplifying value
RB: shorter code is often not better code
– e.g., compilers are really good at removing unused names, parsing {}'s, etc.
Thread object variables allow for the manipulation of thread objects
Awaiting thread completion

join() a thread
– Wait for it to finish
– Returns immediately if already finished
– Finished threads are not GCed, because of the reference
See VolatileWorks.java
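A sketch of the join() properties above (my own example, not a course file): the second join() on a finished thread returns immediately, and the reference keeps the Thread object reachable:

```java
public class JoinSketch {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() { System.out.println("worker done"); }
        });
        t.start();
        t.join();    // blocks until the worker finishes
        t.join();    // already finished: returns immediately
        // The reference t keeps the finished thread object from being GCed.
        System.out.println("alive after join? " + t.isAlive());
    }
}
```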
Synchronization Constructs

A real simple first look at synchronization
– Some dos and don'ts
– The volatile declaration specifier
– Synchronized blocks
– java.util.concurrent.atomic
– wait() and notify()
– ReentrantLock: condition variables
Some Definitions

Atomic: the all-or-nothing property
– In transactions, either all actions happen or none happen
– For sequential programs it refers to operations: an operation is executed by a processor as an indivisible unit that cannot be interrupted.
– java.util.concurrent.atomic: not really atomic, but lock-free, thread-safe encapsulation of fundamental types
Synchronize: poorly defined, informally used
– v. To make two or more events happen at exactly the same time or at the same rate
– In Java, a synchronized block is accessed by only one thread at a time
  – Always associated with an object
  – Controls access to shared state
volatile: Does it work?

See VolatileWorks.java
In Java, a volatile variable "is guaranteed to have memory synchronized on each access"
– Plus atomic reads and writes to long and double
– All other built-in types are already atomic
volatile: Almost useless

In Java, a volatile variable "is guaranteed to have memory synchronized on each access"
– Plus atomic reads and writes to long and double
– All other built-in types are already atomic
While the underlying types are atomic, any operation performed against them is not
– Increment is not atomic!
– Any reason to declare variables volatile?
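A sketch of the problem (my own example, in the spirit of VolatileWorks.java): volatile makes each read and write visible, but count++ is still a read-modify-write sequence, so concurrent increments can be lost.

```java
public class VolatileRace {
    static volatile int count = 0;   // visible, but ++ is still not atomic

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) count++;  // read, add, write
            }
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        a.join(); b.join();
        // Updates interleave, so count is often BELOW 200000; it can
        // never exceed it. Printing the inequality keeps output stable.
        System.out.println(count <= 200000);
    }
}
```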
Synchronized blocks

A Java synchronized block:
– Has only one thread accessing the block at a time
– Is reconciled with memory at the start and end of the block
  – Allows the compiler not to write values back to memory during the block (unlike volatile)
– Is implemented with locking
An easy abstraction
– Locking and unlocking performed implicitly
– See SynchronizedWorks.java
Has performance/parallelism implications
– A synchronized block is a single-lane road that parallel threads go through one at a time
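A sketch in the spirit of SynchronizedWorks.java (names are mine): the synchronized block serializes the read-modify-write, so no increments are lost.

```java
public class SynchronizedCounter {
    static int count = 0;
    static final Object lock = new Object();   // the object we synchronize on

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    synchronized (lock) {      // one thread in here at a time
                        count++;
                    }
                }
            }
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count);             // always 200000
    }
}
```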
Synchronized Danger!!

synchronized only applies to an object (or class)
– A frequent mistake is to synchronize on one object and assume it will synchronize all objects of that class.
– This type of error is quite difficult to find and debug
See SynchronizedBug.java
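A sketch of the pitfall (my own example, not the course's SynchronizedBug.java): each instance synchronizes on itself, so two instances touching shared static state still race.

```java
public class PerObjectLock {
    static int shared = 0;

    void bump() {
        synchronized (this) {   // guards THIS instance only...
            shared++;           // ...but the state is shared across instances
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final PerObjectLock x = new PerObjectLock(), y = new PerObjectLock();
        Thread a = new Thread(new Runnable() {
            public void run() { for (int i = 0; i < 100000; i++) x.bump(); }
        });
        Thread b = new Thread(new Runnable() {
            public void run() { for (int i = 0; i < 100000; i++) y.bump(); }
        });
        a.start(); b.start();
        a.join(); b.join();
        // Two different locks: the final value is often below 200000.
        System.out.println(shared <= 200000);
    }
}
```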
java.util.concurrent.atomic

See AtomicIntegerWorks.java
Implements non-blocking concurrency constructs for Java 32-bit types
– Atomic increment
– Compare and set
Performance is HW dependent
– Uses processor compare/swap or test/set instructions
Often too low level
– Synchronization needs to occur on data structures, not fundamental types
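A sketch in the spirit of AtomicIntegerWorks.java (names are mine): incrementAndGet() is a lock-free atomic increment, built on the processor's compare-and-swap.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicSketch {
    static final AtomicInteger count = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++)
                    count.incrementAndGet();   // atomic read-modify-write
            }
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count.get());       // always 200000, no locks
    }
}
```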
On Performance

Rank the performance of the example programs using these constructs:
– AtomicInteger
– Synchronized blocks
– Volatile variables (which don't work)
You may ask me questions about their implementation
On Performance

Volatile
– Every use of a variable does a read; every modification does a write
– Prevents the compiler from amortizing memory accesses over multiple operations
Synchronized blocks
– Allow for operations in high-level caches when threads are scheduled. But not in this example.
– Performance indicates that threads are able to reacquire the synchronized lock efficiently or lazily flush the cache
AtomicInteger
– Just slow
State checking

Scenario: a thread continues if some condition holds
– Message to be processed
– Buffer fill
– Item enqueued
Paradigm: check for the condition and continue when true
Problem: the check and execution must be atomic
– Otherwise: state may change after the check, before the action
– E.g., another thread processed the message you're waiting on
wait() and the wait set

Check the condition in a synchronized block:
– If true, continue execution
– If false, call wait()
Properties of wait()
– Releases the implicit lock (of the synchronized block)
– Threads having called wait() can't be scheduled
notify()

Activate threads in the wait set
– Single: notify()
– All: notifyAll()
Why does this need to be synchronized{}?
Comments about wait()/notify()

The implementation of the wait set performs queuing
– I'm assuming that it's fair…could support priority
The while() loop ensures that the condition is checked prior to the action
Waiting threads consume no (ok, few) resources
wait() is a blocking call
Must use a try clause
– Can be interrupted by an exception
wait()/notify() is the simplest programming interface to conditions
– Used only in simple situations
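A sketch of the full pattern above (my own example): a one-slot mailbox. The while loop re-checks the condition after every wakeup, wait() sits inside a synchronized method and a try for InterruptedException, and notifyAll() wakes the wait set.

```java
public class Mailbox {
    private String message;                    // null means empty

    public synchronized void put(String m) throws InterruptedException {
        while (message != null) wait();        // re-check after each wakeup
        message = m;
        notifyAll();                           // wake waiting consumers
    }

    public synchronized String take() throws InterruptedException {
        while (message == null) wait();        // releases the implicit lock
        String m = message;
        message = null;
        notifyAll();                           // wake waiting producers
        return m;
    }

    public static void main(String[] args) throws InterruptedException {
        final Mailbox box = new Mailbox();
        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {                          // wait() forces the try clause
                    System.out.println(box.take());
                } catch (InterruptedException e) { }
            }
        });
        consumer.start();
        box.put("ping");
        consumer.join();
    }
}
```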
Problems with wait()/notify()

Can't check lock state before blocking
– Can't implement patterns such as:
    if (condition)
        do some work that requires condition
    else
        do some other work
Awkward to wait on multiple conditions
– Must use notifyAll() in this case
– Awakes all threads, which check whether the other conditions are met
– Negative performance consequences
ReentrantLock

Solves the previous problems, with a little complexity
– e.g., can implement a synchronized block with a lock
Locks must be instantiated
Locks must be unlocked explicitly
– This is a source of errors
– The programmer must make sure that ALL code paths unlock
– Typically done in a finally clause
More on ReentrantLock

Has more interfaces
– Check the status of the lock
– Check the number of waiters
– Find the owner
– Etc.
Can create condition variables associated with a lock
– Multiple conditions on the same lock
– But must have an instantiated lock
– Different interface than wait()/notify()
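A sketch of the ReentrantLock pattern from the last two slides (my own example): an instantiated lock, a Condition created from it, await()/signalAll() in place of wait()/notifyAll(), and unlock() in a finally clause so every code path releases the lock.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LockedSlot {
    private final ReentrantLock lock = new ReentrantLock();  // must instantiate
    private final Condition filled = lock.newCondition();    // tied to the lock
    private String value;

    public void put(String v) {
        lock.lock();
        try {
            value = v;
            filled.signalAll();               // analogue of notifyAll()
        } finally {
            lock.unlock();                    // ALL paths unlock here
        }
    }

    public String take() throws InterruptedException {
        lock.lock();
        try {
            while (value == null) filled.await();  // analogue of wait()
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        final LockedSlot slot = new LockedSlot();
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    System.out.println(slot.take());
                } catch (InterruptedException e) { }
            }
        });
        t.start();
        slot.put("pong");
        t.join();
    }
}
```

The same lock could hand out several Conditions (e.g., one for "not empty" and one for "not full"), which is exactly what wait()/notify() cannot express.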