The Second International Workshop on Comparative Evaluation in Requirements Engineering
Kyoto, Japan
September 7, 2004

Held in conjunction with the
12th IEEE International Conference on Requirements Engineering (RE'04).

Workshop Schedule

Tuesday September 7, 2004

9:00-9:15    Welcome and introduction
9:15-10:30   Keynote talk by Donald C. Gause
10:30-10:45  Coffee break
10:45-11:45  Technical Papers - Session I
             A New Paradigm for Planning and Evaluating Requirements Engineering Research by Alan M. Davis and Ann M. Hickey
             Research Methods in Requirements Engineering by Alistair Sutcliffe
             Discussant reply
             Open discussion
11:45-12:40  Technical Papers - Session II
             Evaluating the structure of research papers: A case study by Roel Wieringa and Hans Heerkens
             Requirements Metrics: Scaling Up by Kimberly S. Wasson
             Discussant reply
             Open discussion
14:00-14:15  Organization of break-out groups
14:15-16:15  Break-out group meetings
16:15-17:00  Group reports

Workshop Program

Keynote Speaker

Prof. Donald C. Gause, Binghamton University

Is it too much too soon, too little too late, or just the right amount at just the right time?


Technical Program

The following papers have been selected for presentation at the workshop:

  • Evaluating the structure of research papers: A case study
    Roel Wieringa and Hans Heerkens
This paper is triggered by a concern for the methodological soundness of research papers in RE. We propose a number of criteria for methodological soundness and apply them to a random sample of 37 submissions to the RE'03 conference. From this application, we draw a number of conclusions that we claim are valid for a larger sample than just these 37 submissions. Our major observation is that most submissions in our sample are solution-oriented: they present a solution and illustrate it with a problem, rather than search for a solution to a given problem class; and most papers do not analyze why and when a solution works or does not work. We end with a discussion of the need to improve the methodological soundness of research papers in RE.
  • Research Methods in Requirements Engineering
    Alistair Sutcliffe
Research methods and approaches to validating research results in RE and two related disciplines, Human-Computer Interaction (HCI) and Information Systems (IS), are compared. The potential lessons that RE might take from HCI and IS are reviewed.
  • A New Paradigm for Planning and Evaluating Requirements Engineering Research
    Alan M. Davis and Ann M. Hickey
Due to the existence of the US Food and Drug Administration (FDA), new drugs are not made available for widespread use until their effectiveness, risks, and limitations are thoroughly understood as the result of rigorous evaluation of research and clinical trials. By contrast, hundreds of new requirements engineering (RE) research results are produced every year and made available for public use with little to no data concerning their effectiveness, risks, and limitations. It should not be surprising, therefore, that most of these research results are ignored by the user community. This paper proposes the adaptation of many FDA practices to the RE world. If adopted, these practices would enable the comparison and evaluation of RE research results, and thus increase the successful transfer of some of these results to practice.
  • Requirements Metrics: Scaling Up
    Kimberly S. Wasson
Establishing the relative value of results within a field of study contributes to the advancement of that field. To compare large numbers of results, the methods and metrics used must be scaled up from existing studies in which the number of subjects or cases is small. This scaling presents specific challenges: for example, metrics used in small studies are often shaped by factors of the local environment. Standardization is required to enable aggregation of data across multiple distributed environments. Further, standardization of metrics is both non-trivial and insufficient. Complex linguistic factors must be accounted for in order to maximize consistency of metric interpretation and use, and a number of other issues must be addressed to ensure that the metrics are interesting as well as practical. This position paper elaborates these issues, sets forth criteria for benchmark-friendly metrics, and proposes a community activity designed to establish a foundational set of requirements benchmark metrics.