Programming of Clusters of Multiprocessors

Looking at parallel programming paradigms, we can identify two extremes. First, there is the message-passing paradigm: each processor has its own local memory and communicates with other processors by exchanging messages. This represents a "shared-nothing" approach. Second, there is the shared-memory programming paradigm, in which all data can be accessed by all processors. The programmer has to synchronize concurrent accesses in order to avoid race conditions, deadlocks, and inconsistencies. This is the "shared-everything" approach.

On clusters of multiprocessors, both paradigms can be used. Without optimization, however, peak performance cannot be achieved, because neither paradigm takes the existing parallel hierarchy into account. Nevertheless, there are efforts to adapt standard message-passing libraries and distributed shared-memory systems to clusters of multiprocessors. Recent work in this area has led to a mixed-mode programming model: shared-memory programming is used inside each node, while message passing is used between the nodes. This is a very promising approach, because it matches the architecture.

The following three subsections deal with programming clusters of multiprocessors. The first covers message passing and reports which optimizations have been made to the standard message-passing library MPI [24][25] in order to improve its performance on clusters of multiprocessors. The second covers shared-memory programming; the standard for programming shared-memory systems is OpenMP [26], and the subsection summarizes how OpenMP has been adapted to clusters of multiprocessors. The third subsection covers mixed-mode programming; most of the work described there uses MPI together with OpenMP and compares the resulting performance with that of pure MPI programs.
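To make the mixed-mode idea concrete, the following is a minimal sketch in C of a hybrid MPI/OpenMP program; it is an illustration under stated assumptions, not code from the surveyed work. One MPI process is assumed per node, OpenMP threads perform a node-local reduction, and MPI_Reduce combines the per-node results between the nodes. MPI_Init_thread assumes an MPI-2 implementation with at least MPI_THREAD_FUNNELED support.

    /* hybrid.c (hypothetical example): shared memory inside the node,
     * message passing between the nodes. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(int argc, char **argv)
    {
        int rank, size, provided;
        double local_sum = 0.0, global_sum = 0.0;
        static double data[N];

        /* MPI_THREAD_FUNNELED suffices: only the master thread
         * makes MPI calls in this program. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Fill this node's local data. */
        for (long i = 0; i < N; i++)
            data[i] = (double)(rank + 1);

        /* Shared-memory parallelism inside the node: OpenMP threads
         * reduce over the node-local array. */
        #pragma omp parallel for reduction(+:local_sum)
        for (long i = 0; i < N; i++)
            local_sum += data[i];

        /* Message passing between the nodes: combine per-node results. */
        MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
                   MPI_COMM_WORLD);

        if (rank == 0)
            printf("global sum = %f (%d ranks, %d threads each)\n",
                   global_sum, size, omp_get_max_threads());

        MPI_Finalize();
        return 0;
    }

Such a program would typically be built with the MPI compiler wrapper and the compiler's OpenMP flag (e.g., mpicc -fopenmp hybrid.c) and started with one process per node, so that the OpenMP threads occupy the processors within each node, as the mixed-mode model intends.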


