Introduction to parallel processing
Task : a set of work/instructions to be performed.
Process/thread : the computing entity that performs a task.
Critical section : a subset of instructions that can be executed by only one process at a time.
ex : counter = counter + 1 (see the sketch after these definitions)
Barrier : a global synchronization point.
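To make these definitions concrete, here is a minimal sketch in C using OpenMP (our choice of API; the text above does not name any particular threading interface) in which each thread passes through a critical section and then a barrier :

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int counter = 0;

        #pragma omp parallel
        {
            /* Critical section : only one thread at a time may run
               this statement; otherwise two threads could read the
               same old value of counter and one update would be lost. */
            #pragma omp critical
            counter = counter + 1;

            /* Barrier : a global synchronization point; no thread
               continues past this line until every thread has reached it. */
            #pragma omp barrier

            /* One thread reports the result; counter now equals
               the number of threads in the team. */
            #pragma omp single
            printf("counter = %d (threads = %d)\n",
                   counter, omp_get_num_threads());
        }
        return 0;
    }

Compiled with OpenMP enabled (for example cc -fopenmp), removing the critical directive makes the final value of counter unpredictable, which is exactly why counter = counter + 1 is the textbook example of a critical section.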
Why try to parallelize programs ?

Many scientific applications use a large amount of computer resources. On a single-processor computer this can turn into applications running for months. There are three possibilities to make up for this :
- Reduce the model used, but this is generally not acceptable.
- Buy a computer with a faster CPU, which is not always possible.
- Try to have many CPUs working on the same problem at the same time.
This last solution seems to be the easiest to use, but there is more to it than first appears.
Two ways to parallelize a process :

There are two types of multi-processor computers : those with shared memory (all processes share the same address space and communicate through it) and those with distributed memory (each process has its own memory space, disjoint from the others; information is shared through explicit message exchange).
The computer we used, an SGI
Origin 2000, uses shared memory.
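For contrast, on a distributed-memory machine the same kind of sharing must go through explicit messages. Here is a minimal sketch using MPI (the standard message-passing interface; again our choice, not something named in the text) :

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Process 0 owns the value and sends a copy to process 1. */
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* Process 1 has its own disjoint memory; the only way to
               see the value is to receive it as a message. */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("process 1 received %d\n", value);
        }

        MPI_Finalize();
        return 0;
    }

Run with at least two processes (for example mpirun -np 2 ./a.out); no address is ever shared between the two processes.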
Influence of the code :

In a code, a critical section forces the tasks to take turns : each task must wait until no other task is executing the section before it can enter it. This shows that while programming, if the code is ever to be ported to a parallel computer, the coder should be very careful.
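Being careful often means keeping critical sections rare and small. As one illustration (again assuming OpenMP), the counter update can be written as a reduction, so that each thread accumulates a private copy and the copies are combined only once, instead of every increment serializing the threads :

    #include <stdio.h>

    int main(void) {
        int counter = 0;
        int i;

        /* reduction(+:counter) gives each thread a private copy of
           counter; the copies are summed once at the end of the loop,
           so no critical section is entered inside the loop body. */
        #pragma omp parallel for reduction(+:counter)
        for (i = 0; i < 1000000; i++) {
            counter = counter + 1;
        }

        printf("counter = %d\n", counter);
        return 0;
    }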
At best, a program can more than halve its execution time when the number of processors doubles. One can easily understand how it can be halved, and cache memory effects can reduce the execution time even further. But it must be kept in mind that at worst a program will only get slower if you allocate more processors to it. If nothing is parallelizable, nothing is gained, and time is only lost by trying to use more processors.
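This trade-off is commonly quantified by Amdahl's law (our addition; the text does not name it) : if a fraction p of a program can be parallelized over N processors, the overall speed-up is

    S(N) = 1 / ((1 - p) + p/N)

With p = 0 nothing is gained (S = 1 whatever N is), and even for p close to 1 the serial fraction (1 - p) eventually limits the speed-up; the cache effects mentioned above are what occasionally push a real program beyond this bound.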