Introduction to parallel processing.


Vocabulary:

Task: a set of work/instructions to be performed.
Process/thread: a computing entity that performs a task.
Critical section: a subset of instructions that can be executed by only one process at a time.
                ex: counter = counter + 1
Barrier: a global synchronization point, where every process waits until all processes have reached it.

Why try to parallelize programs?

    Many scientific applications use a large amount of computer resources. On a single-processor computer this can mean applications running for months. There are three possibilities to make up for this, the last of them being to parallelize the program. This last solution seems to be the easiest to use, but there is more to it than first appears.

Two ways to parallelize a process:

    There are two types of multi-processor computers: those with shared memory (all processes share the same address space and communicate through it) and those with distributed memory (each process has its own memory space, disjoint from the others; information is shared through explicit message exchange).
    The computer we used, an SGI Origin 2000, uses shared memory.

Influence of the code:

    In a program, a critical section can be entered by only one process at a time, so all other processes must wait their turn before executing it. This shows that, if the code is to be ported to a parallel computer, the programmer must be very careful while writing it.
    At best, a program can more than halve its execution time as the number of processors increases. One can easily understand how the time can be halved, and cache-memory effects can reduce it even further. But it must be kept in mind that, at worst, a program will only run slower if you allocate more processors to it. If nothing is parallelizable, nothing is gained, and time is only lost by trying to use many processors.