Invited Talks


Dennis Gannon - Dept. Computer Science - Indiana University

Title: Building Grid Applications and Portals: An Approach Based on Components, Web Services and Workflow Tools
Date: 1 Sept.

Abstract

Large-scale Grid applications are often composed of a distributed collection of parallel simulation codes, instrument monitors, data miners, and rendering and visualization tools. For example, consider a severe-storm prediction system driven by a grid of weather sensors. Because these applications are typically very complex to build, users interact with them through a Grid portal front end. This talk outlines an approach to building these applications and their portal interfaces based on a web service component architecture. We illustrate how a traditional parallel application can be wrapped by a web service factory and integrated into complex workflows. Additional issues addressed include grid security, web service tools, and workflow composition tools. The talk will outline several important classes of unsolved problems and possible new research directions for building grid applications.
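The factory-and-workflow idea above can be sketched in a few lines. This is a toy illustration, not any specific grid toolkit: the names AppFactory, StormSimulation and run_workflow are our own, and a real factory service would launch a parallel code on remote resources rather than compute inline.

```python
# Hypothetical sketch: a "factory" wraps a legacy simulation code so a
# portal or workflow engine can create and invoke service instances.

class StormSimulation:
    """Stands in for a wrapped parallel simulation code."""
    def run(self, sensor_data):
        # A real service would launch the parallel application; here we
        # just compute a toy "forecast" from the sensor readings.
        return {"forecast": max(sensor_data)}

class AppFactory:
    """Creates service instances on demand, as a web service factory would."""
    def __init__(self, app_class):
        self.app_class = app_class

    def create_instance(self):
        return self.app_class()

def run_workflow(stages, data):
    """Feed the output of each service instance into the next stage."""
    for factory in stages:
        service = factory.create_instance()
        data = service.run(data)
    return data

result = run_workflow([AppFactory(StormSimulation)], [12, 30, 7])
print(result)  # {'forecast': 30}
```

A portal front end would play the role of the caller here, composing factories into a workflow on the user's behalf.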



Manuel Hermenegildo

Title: Some Techniques for Automated, Resource-Aware Distributed and Mobile Computing in a Multi-Paradigm Programming System
Date: 3 Sept.

Abstract

Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling these tasks so that each one involves sufficient computational cost compared with the cost of task creation, communication, and other practical overheads. On the receiving side, an important issue is having some assurance of the correctness and characteristics of the code received, as well as of the kind of load a particular task will pose; such guarantees can be specified by means of certificates. In this talk we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as for certifying programs with resource-consumption assurances and efficiently checking such certificates.
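The granularity-control problem on the distributing side can be sketched as a simple cost comparison. The constants and the cost function below are illustrative assumptions of ours (Ciao derives such cost functions automatically at compile time): a task is shipped to a remote node only when its estimated work exceeds the task-creation and communication overheads.

```python
# Illustrative granularity-control sketch (names and numbers are ours,
# not Ciao's): spawn a task remotely only when its estimated cost
# exceeds the combined creation and communication overheads.

SPAWN_OVERHEAD = 50      # assumed fixed cost of creating a remote task
COMM_COST_PER_ITEM = 2   # assumed cost of shipping one input item

def estimated_cost(n):
    # Stand-in for a statically inferred cost function, here O(n^2) work.
    return n * n

def should_spawn(n):
    overhead = SPAWN_OVERHEAD + COMM_COST_PER_ITEM * n
    return estimated_cost(n) > overhead

print(should_spawn(5))   # False: 25 <= 60, so run the task locally
print(should_spawn(20))  # True: 400 > 90, so distributing pays off
```

The same comparison, evaluated against a resource certificate shipped with the code, is what lets a receiving node judge the load a task will pose before accepting it.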


Mateo Valero - DAC-UPC - Barcelona

Title: Kilo-instruction Processors
Date: 1 Sept.

Abstract

A promising approach for dealing with very long latency memory accesses (cache misses to main memory) is to dramatically increase the number of in-flight instructions in an out-of-order processor. Current processors commit instructions in program order, so a huge quantity of resources is needed to keep thousands of instructions in flight. We need research into new techniques oriented toward better use of these resources. We observe that many inefficiencies can be eliminated if we change the model of in-order commit: we need to design processors that support some form of out-of-order commit of instructions while, of course, still maintaining precise exceptions.
To implement out-of-order commit, we propose checkpointing a few very specific instructions, with the objective of reducing and better managing all the critical resources in the architecture, such as the ROB, the register file, and the instruction queues. We apply checkpointing, for example, to long-latency load instructions and/or hard-to-predict branch instructions. In this talk, we will discuss papers describing these mechanisms and open new topics for research.
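The recovery idea behind selective checkpointing can be illustrated with a toy model. This is our own drastic simplification, not the proposed microarchitecture: register state is snapshotted only at chosen "risky" instructions (e.g. a hard-to-predict branch), later work commits eagerly, and a misprediction rolls the machine back to the last checkpoint, preserving precise state.

```python
# Toy sketch of checkpoint-based recovery (an illustrative
# simplification, not the actual hardware design).

import copy

class Core:
    def __init__(self):
        self.regs = {}          # architectural register state
        self.checkpoints = []   # stack of saved register snapshots

    def checkpoint(self):
        # Selective checkpoint, taken only at a few risky instructions.
        self.checkpoints.append(copy.deepcopy(self.regs))

    def execute(self, reg, value):
        self.regs[reg] = value  # result committed without waiting in a ROB

    def rollback(self):
        # Restore precise state from the most recent checkpoint.
        self.regs = self.checkpoints.pop()

core = Core()
core.execute("r1", 10)
core.checkpoint()        # just before a hard-to-predict branch
core.execute("r2", 99)   # speculative work beyond the branch
core.rollback()          # branch mispredicted: discard r2
print(core.regs)         # {'r1': 10}
```

Because only a handful of checkpoints exist at any time, the resources freed by not tracking every in-flight instruction individually are what make thousands of in-flight instructions affordable.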



Murray Cole - School of Informatics - Edinburgh UK

Title: Why structured parallel programming matters
Date: 3 Sept 2004

Abstract

Simple parallel programming frameworks such as Pthreads, or the six-function core of MPI, are universal in the sense that they support the expression of arbitrarily complex patterns of computation and interaction between concurrent activities. Pragmatically, their descriptive power is constrained only by the programmer's creativity and capacity for attention to detail.

Meanwhile, as our understanding of the structure of parallel algorithms has developed, it has become clear that many parallel applications can be characterized and classified by their adherence to one or more of a number of generic patterns. For example, many diverse applications share the underlying control and data flow of the pipeline paradigm, whether expressed in terms of message passing or of constrained access to shared data. A number of research programs, using terms such as skeleton, template, archetype and pattern, have sought to exploit this phenomenon by allowing the programmer to express such meta-knowledge explicitly in the program source, through the use of new libraries, annotations and control constructs, rather than leaving it implicit in the interplay of more primitive universal mechanisms.

While early work stressed productivity and portability (the programmer is no longer required to repeatedly "reinvent the wheel"), we argue that the true significance of this approach lies in the capture of complex algorithmic knowledge which would be impossible to determine by static examination of an equivalent unstructured source. This enables developments in a number of areas. With respect to low-level performance, it allows the run-time system, library code or compiler to make clever optimizations based on detailed foreknowledge of the evolving computation. With respect to high-level performance, it enables a methodology of improvement through powerful restructuring transformations. Similarly, with respect to program correctness, it allows arguments to be pursued at a much coarser, more tractable grain than would otherwise be possible.
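The division of labour the abstract describes can be sketched with a minimal pipeline skeleton. This is an illustration of the general idea, not any particular skeleton library: the programmer supplies only the stage functions, while the skeleton fixes the control and data flow, which is exactly the meta-knowledge a runtime could exploit to map stages onto processes, threads or message-passing ranks.

```python
# Minimal pipeline skeleton sketch (illustrative names, not a real
# skeleton library): the skeleton owns the pattern, the user owns the
# stage bodies.

def pipeline(*stages):
    """Compose stage functions into a single streaming computation."""
    def run(items):
        for item in items:
            for stage in stages:
                item = stage(item)
            yield item
    return run

# Three stages of a toy detection pipeline supplied by the programmer.
normalize = lambda x: x / 10
threshold = lambda x: 1 if x > 0.5 else 0
label     = lambda x: "object" if x else "background"

run = pipeline(normalize, threshold, label)
print(list(run([3, 7, 9])))  # ['background', 'object', 'object']
```

Because the pipeline structure is explicit in the call to pipeline(), an implementation is free to overlap the stages on different items concurrently, an optimization that would be hard to recover by static analysis of equivalent unstructured code.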



Last modified: Fri Nov 28 17:00:22 CET 2003