The Second International Workshop on Comparative Evaluation in Requirements Engineering (CERE'04)
Kyoto, Japan
September 7, 2004

Held in conjunction with the
12th IEEE International Conference on Requirements Engineering (RE'04).
Overview

The need to assess the progress made in RE research is increasingly felt across the RE community. A number of requirements and specification exemplars have appeared over the years (e.g., the meeting scheduler, the London ambulance computer-aided dispatch system, the light control system). These exemplars have been useful for illustrating new RE tools, techniques, and methods, and for identifying potential lines of research. However, the commonly used exemplars in RE all lack well-defined evaluation criteria, making it impossible to compare the effectiveness of different approaches.

Some of the more mature methods and tools in RE have been subjected to pilot studies in real organizations. While such studies provide a good indicator of the utility and effectiveness of these methods and tools, they tend to focus on improving the technique under study rather than providing a basis for comparison with competing techniques.

There are now many signs that research in RE is becoming mature enough for the community to begin making detailed comparative evaluations of alternative techniques. For example, although RE processes are extremely rich and varied, it is possible to identify areas that are sufficiently well understood to allow the definition of benchmarks. The utility of such benchmarks for both research and industry has been clearly demonstrated by analogous efforts in other fields, e.g., the TREC competition in text retrieval or RoboCup (robot soccer) in robotics. By their very nature, successful benchmarks require a community effort to define and establish. In seeking to agree on a benchmark, research communities often experience a great leap forward, both in collaboration and consensus among researchers and in technical results. This workshop seeks to spark a community initiative in this direction.