Next: Hierarchical Parallel Computation Models
Up: Overall structure: Hierarchy-Aware techniques
Previous: Topics of the chapters
The topics can be classified according to different criteria.
In a general parallel architecture we find several memory layers to
which we can apply hierarchical memory (HM) techniques:
(1) processor cache,
(2) local memory (physically shared),
(3) remote memory (either virtually shared, or accessed by message passing),
(4) local disk device(s), and
(5) remote disks.
There are other distinctions we can make:
- From the point of view of expressing parallelism and of the related
overheads, there is a trade-off between threads and processes, and
between the use of shared memory and that of explicit message passing.
The two approaches are mixed when working with SMP clusters.
- Are the access methods and caching policies fixed by the
hardware, or are they user-definable?
- Some tools allow the user to define parallel HM computations,
while other tools implicitly exploit parallelism to improve HM
access (e.g. parallel file systems, PFS).
We have agreed on a first organization of the material, which has
resulted in a set of four chapters:
- SMP cluster techniques -- (Martin)
- hierarchical parallel computation and libraries to exploit it
-- ``Hierarchical computation models and software tools'' (Massimo)
- file systems -- ``File Systems: state of the art'' (Florin)
- storage management techniques -- ``Algorithmic and implementation
of storage networks'' (Kay)
Within this general scheme, I outline my contribution in the following section.
Massimo Coppola
2002-02-08