Poster transcript
Independent Study of Parallel Programming Languages
An Independent Study By: Haris Ribic, Computer Science - Theoretical
Independent Study Advisor: Professor Daniel Bennett
WHY THIS STUDY?
“In 2007, the degree of parallelism for personal computing in current desktop systems
such as Linux and Windows Vista is nil, which either indicates the impossibility of the
task or the inadequacy of our creativity.” – Gordon Bell in Communications of the ACM
Everybody has seen these advertisements, but what do they mean?
MPI - Message Passing Interface
A standards-based library that allows many computers to communicate with one another.
UPC - Unified Parallel C
An extension of the C programming language designed for high-performance computing on large-scale parallel machines.
OpenMP
An application programming interface that supports multi-platform shared-memory multiprocessing programming.
Charm++
A parallel object-oriented programming language based on C++.
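The measurements below use MPI and OpenMP; purely to illustrate UPC's flavor, a hypothetical dart-throwing kernel in UPC might look like the following sketch (names, seeds, and counts are illustrative, not from the study):

#include <stdio.h>
#include <stdlib.h>
#include <upc.h>

#define DARTS 1000000            /* darts per thread (illustrative) */

shared int hits[THREADS];        /* one counter with affinity to each UPC thread */

int main(void) {
    srand48(MYTHREAD + 1);       /* per-thread seed (illustrative) */
    hits[MYTHREAD] = 0;
    for (int i = 0; i < DARTS; i++) {
        double x = drand48();
        double y = drand48();
        if (x * x + y * y < 1.0) /* inside the quarter circle */
            hits[MYTHREAD]++;
    }
    upc_barrier;                 /* wait until every thread has finished */
    if (MYTHREAD == 0) {
        long total = 0;
        for (int t = 0; t < THREADS; t++)
            total += hits[t];    /* thread 0 sums the shared counters */
        printf("pi ~= %f\n", 4.0 * total / ((double)DARTS * THREADS));
    }
    return 0;
}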
... moreover, we have seen an increase in DUAL-CORE processors, but still, what does it all mean?
Simple schematic of a DUAL-CORE processor built by Intel and AMD: two CPUs, each running its own thread, sharing a System BUS.
Preliminary results using the Monte Carlo method for calculating Pi (π) in the MPI and OpenMP languages on a computer cluster and a symmetric multiprocessing computer.
Simple schematic of a computer cluster like the one used by the Computer Science Department.
Monte Carlo Method
- Generate random numbers x and y ranging from 0 to 1
- Count a hit inside the circle whenever x^2 + y^2 < 1
- P(hit) = (area of quarter circle) / (area of unit square) = π/4
- π ≈ 4 × (number of hits in circle / total darts)
- Generating more numbers increases the accuracy of π
- However, more numbers slow down the computation
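As a minimal sequential sketch in C, the method above might look like this (the dart count and seed are illustrative, not values from the study):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long darts = 10000000;          /* illustrative trial count */
    long hits = 0;
    srand48(12345);                 /* seed drand48() */
    for (long i = 0; i < darts; i++) {
        double x = drand48();       /* random point in the unit square */
        double y = drand48();
        if (x * x + y * y < 1.0)    /* inside the quarter circle */
            hits++;
    }
    printf("pi ~= %f\n", 4.0 * hits / darts);
    return 0;
}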
[Charts: Timing vs. Number of Trials for the MPI and OpenMP programs; the MPI series compare Sequential, 2 PC's, 4 PC's, and 6 PC's, the OpenMP series compare single HT and quad runs, over trial counts from 5E+06 to 1E+09]
[Diagram: a Program divides its work among Process 1, Process 2, …, Process K]
Why DUAL-CORE?
- Ability to parallelize programs
- CPU uses less energy and delivers more performance
- Better system responsiveness and multi-tasking capability
“For example you could have your Internet browser open along with a virus scanner running in the background, while using Media Player to stream your favorite radio station and the dual-core processor will handle the multiple tasks without the decrease of performance and efficiency.” – www.intel.com/core2duo
Why Computer Cluster?
- Ability to parallelize programs
- A more cost-effective supercomputer
- Used in scientific research
A computer cluster is a group of loosely coupled computers that work together closely so that in many respects they can be viewed as though they are a single computer. The components of a cluster are commonly connected to each other through fast local area networks.
MPI
- Efficiency increases when using more nodes
- Language independent: can be used from C, C++, or FORTRAN
- Difficult to use
[Diagram: Master Node 0 distributes work to Nodes 1, 2, …, K]
MPI_Init(&argc, &argv)
MPI_Comm_rank(MPI_COMM_WORLD, &my_rank)
MPI_Comm_size(MPI_COMM_WORLD, &p)
MPI_Barrier(MPI_COMM_WORLD)
Master Node 0 gets the number of darts
MPI_Bcast(&totalDarts, 1, MPI_INT, src, MPI_COMM_WORLD)
Each node performs its share of the calculation
MPI_Gather(&time, 1, MPI_INT, aryTime, 1, MPI_INT, src, MPI_COMM_WORLD)
MPI_Reduce(&hits, &allHits, 1, MPI_INT, MPI_SUM, src, MPI_COMM_WORLD)
Master Node 0 prints the results
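A minimal sketch of how these calls could form a complete program (variable names follow the poster; the dart count, per-rank seeding, and the omission of the timing MPI_Gather are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int my_rank, p, src = 0;
    int totalDarts = 0, hits = 0, allHits = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (my_rank == src)
        totalDarts = 100000000;          /* master chooses the number of darts */

    MPI_Barrier(MPI_COMM_WORLD);         /* line everyone up before timing */
    MPI_Bcast(&totalDarts, 1, MPI_INT, src, MPI_COMM_WORLD);

    /* each node throws its share of the darts */
    int myDarts = totalDarts / p;
    srand48(my_rank + 1);                /* per-rank seed (illustrative) */
    for (int i = 0; i < myDarts; i++) {
        double x = drand48();
        double y = drand48();
        if (x * x + y * y < 1.0)
            hits++;
    }

    /* sum every node's hit count on the master */
    MPI_Reduce(&hits, &allHits, 1, MPI_INT, MPI_SUM, src, MPI_COMM_WORLD);

    if (my_rank == src)
        printf("pi ~= %f\n", 4.0 * allHits / ((double)myDarts * p));

    MPI_Finalize();
    return 0;
}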
OpenMP
- Easy to write
- Depends on operating-system scheduling
#pragma omp parallel private(i, trdID, x_value, y_value) shared(rndAry, hits, darts)
{
    trdID = omp_get_thread_num();       /* this thread's ID */
    srand48(rndAry[trdID]);             /* seed the 48-bit generator, not rand() */
    #pragma omp for reduction(+:hits)
    for (i = 0; i < darts; i++) {
        x_value = drand48();
        y_value = drand48();
        if ( ((x_value * x_value) + (y_value * y_value)) <= 1 )
            hits = hits + 1;
    }
}
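Assuming this snippet sits in a file such as pi_omp.c with the surrounding declarations from the sequential version, it might be compiled and run along these lines (file name and thread count are illustrative):

    gcc -fopenmp -o pi_omp pi_omp.c
    OMP_NUM_THREADS=4 ./pi_omp

One caveat: srand48()/drand48() share a single hidden state across threads, so the per-thread streams are not truly independent; erand48() with a per-thread seed array is the usual portable fix.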