Transcript: Lecture 1

CS4402 – Parallel Computing
Lecture 1:
Classification of Parallel Computers
Classification of Parallel Computation
Important Laws of Parallel Computation
How I used to make breakfast...
How to set the family to work...
How I finally got to the office in time...
What is Parallel Computing?
In the simplest sense, parallel computing is the simultaneous use of
multiple computing resources to solve a problem.
Parallel computing is the solution for "Grand Challenge Problems":
weather and climate
biology, the human genome
chemical and nuclear reactions
Parallel Computing is a necessity for some commercial applications:
parallel databases, data mining
computer-aided diagnosis in medicine
Ultimately, parallel computing is an attempt to minimize time.
Grand Challenge Problems
List of Supercomputers
Find this information at
http://www.top500.org/
Reason 1: Speedup
Reason 2: Economy
Resources already available.
Taking advantage of non-local resources
Cost savings - using multiple "cheap" computing resources instead of
paying for time on a supercomputer.
A parallel system can be cheaper than a single, faster processor.
Limits to serial computing: transmission speeds, limits to miniaturization, economic limitations.
Reason 3: Scalability
Types of || Computers
Parallel Computers
Hardware: shared memory, distributed memory, hybrid memory
Software: SIMD, MIMD
The Banking Analogy
Tellers: parallel processors
Customers: tasks
Transactions: operations
Accounts: data
Vector/Array
Each teller/processor gets a very fine-grained task
Use pipeline parallelism
Good for handling batches when operations can be broken down into fine-grained stages (see the sketch below)
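As a rough illustration of the pipeline idea (not part of the lecture), the sketch below pushes a batch of items through three fine-grained stages, each running in its own Python thread; the stage functions and the batch are invented for the example.

```python
import threading
import queue

def stage(fn, q_in, q_out):
    """One pipeline stage: apply fn to every item until the sentinel arrives."""
    while True:
        item = q_in.get()
        if item is None:          # sentinel: propagate it and shut this stage down
            q_out.put(None)
            break
        q_out.put(fn(item))

# Three made-up fine-grained stages of one "transaction".
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]

queues = [queue.Queue() for _ in range(len(stages) + 1)]
workers = [threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]))
           for i, fn in enumerate(stages)]
for w in workers:
    w.start()

for item in range(10):            # feed a batch into the first stage
    queues[0].put(item)
queues[0].put(None)               # end-of-batch sentinel

results = []
while (out := queues[-1].get()) is not None:
    results.append(out)
for w in workers:
    w.join()
print(results)                    # every item has passed through all three stages
```

While one item is in the last stage, later items can already occupy the earlier stages, which is exactly the overlap that pipeline parallelism exploits.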
SIMD (Single-Instruction, Multiple-Data)
All processors do the same thing or idle
Phase 1: data partitioning and distribution
Phase 2: data-parallel processing
Efficient for big, regular data sets (see the sketch below)
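A minimal sketch of the two SIMD phases, assuming Python's multiprocessing module stands in for the processors; the chunk count and the work function are illustrative, not taken from the lecture.

```python
from multiprocessing import Pool
import numpy as np

def work(chunk):
    # Phase 2: every worker executes the *same* instructions on its own piece of data
    return np.sqrt(chunk) * 2.0

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.float64)
    chunks = np.array_split(data, 4)        # Phase 1: partition the data set...
    with Pool(processes=4) as pool:         # ...and distribute it to 4 workers
        partial = pool.map(work, chunks)    # data-parallel processing
    result = np.concatenate(partial)
    print(result[:5])
```

The pattern pays off exactly for big, regular data sets, because every partition keeps its worker equally busy.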
Systolic Array
Combination of SIMD and pipeline parallelism
2-D array of processors with memory at the boundary
Tighter coordination between processors
Achieves very high speeds by circulating data among processors before returning to memory
MIMD (Multiple-Instruction, Multiple-Data)
Each processor (teller) operates independently
Needs a synchronization mechanism: message passing or mutual exclusion with locks (see the sketch below)
Best suited for large-grained problems
Exploits less parallelism than data-flow approaches
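A hedged illustration of MIMD-style coordination through mutual exclusion, using Python threads and a lock; the shared counter standing in for an account balance is invented for the example.

```python
import threading

balance = 0                      # shared data (the "account")
lock = threading.Lock()          # mutual-exclusion lock

def teller(n_transactions):
    """Each thread (teller) runs its own instruction stream independently."""
    global balance
    for _ in range(n_transactions):
        with lock:               # synchronization: one teller updates the account at a time
            balance += 1

threads = [threading.Thread(target=teller, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)                   # 40000; without the lock, updates could be lost
```

In a distributed-memory setting the same coordination would be done with message passing instead of a shared lock.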
Important Laws of || Computing.
Important Consequences
S(n) = n / (1 + (n-1)·f)
f = 0 when there is no serial part ⇒ S(n) = n, perfect speedup.
f = 1 when everything is serial ⇒ S(n) = 1, no parallel code.
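A small check of these two extreme cases, using the formula above; the helper name amdahl_speedup is ours, not the lecture's.

```python
def amdahl_speedup(n, f):
    """Amdahl's law: speedup on n processors when a fraction f of the work is serial."""
    return n / (1 + (n - 1) * f)

print(amdahl_speedup(8, 0.0))    # 8.0 -> f = 0: perfect speedup, S(n) = n
print(amdahl_speedup(8, 1.0))    # 1.0 -> f = 1: all serial, S(n) = 1
```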
Important Consequences
S(n) = n / (1 + (n-1)·f)
S(n) is increasing when n is increasing.
S(n) is decreasing when f is increasing.
Important Consequences
S(n) = n / (1 + (n-1)·f) < 1/f
No matter how many processors are being used, the speedup cannot increase above 1/f.
Examples:
f = 5% ⇒ S(n) < 20
f = 10% ⇒ S(n) < 10
f = 20% ⇒ S(n) < 5.
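The same formula can be tabulated to watch S(n) grow with n while staying below the 1/f ceiling; the helper name is again illustrative.

```python
def amdahl_speedup(n, f):
    """Amdahl's law: S(n) = n / (1 + (n-1)*f) for serial fraction f."""
    return n / (1 + (n - 1) * f)

for f in (0.05, 0.10, 0.20):
    speedups = [round(amdahl_speedup(n, f), 2) for n in (10, 100, 1000, 10**6)]
    print(f"f = {f:.0%}: S = {speedups}, bound 1/f = {1 / f:.0f}")
# f = 5%  -> S approaches but never reaches 20
# f = 10% -> S approaches but never reaches 10
# f = 20% -> S approaches but never reaches 5
```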
Gustafson’s Law - More
Gustafson’s Speed-up
S(n) = Sequential Time / Parallel Time = (s·T + n·p·T) / T = s + n·p
When s + p = 1:
S(n) = s + n·(1 - s) = n + (1 - n)·s
Important Consequences:
1) S(n) is increasing when n is increasing.
2) S(n) is decreasing when s is increasing.
3) There is no upper bound for the speedup.
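For contrast with Amdahl's bound, a short sketch of Gustafson's scaled speedup; the function name and the chosen serial fraction s = 5% are illustrative.

```python
def gustafson_speedup(n, s):
    """Gustafson's law: S(n) = s + n*(1 - s) = n + (1 - n)*s."""
    return n + (1 - n) * s

for n in (10, 100, 1000):
    print(n, gustafson_speedup(n, 0.05))   # 9.55, 95.05, 950.05
# the scaled speedup keeps growing with n: there is no 1/s ceiling here
```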
To read:
1. John L. Gustafson, Re-evaluating Amdahl's Law,
   http://www.scl.ameslab.gov/Publications/Gus/AmdahlsLaw/Amdahls.html
2. Yuan Shi, Re-evaluating Amdahl's and Gustafson's Laws,
   http://www.cis.temple.edu/~shi/docs/amdahl/amdahl.html
3. Wilkinson's book:
   1. the sections on the laws of parallel computing
   2. the sections about types of parallel machines and computation