The First International Workshop on Comparative Evaluation in Requirements Engineering
Monterey Bay, California, USA
September 8, 2003

Held in conjunction with the
11th IEEE International Requirements Engineering Conference (RE'03).

Call for Papers


COMPARATIVE EVALUATION IN REQUIREMENTS ENGINEERING
(CERE'03)

http://www.di.unipi.it/CERE03

September 8, 2003, Monterey Bay, California, USA
co-located with RE'03
IEEE International Requirements Engineering Conference
http://www.re03.org

Overview

The need for an assessment of the progress made in RE research is increasingly felt across the RE community. A number of requirements and specification exemplars have appeared over the years, e.g. the meeting scheduler, the London ambulance computer-aided dispatch system, and the light control system. These exemplars have been useful for illustrating new RE tools, techniques and methods, and for identifying potential lines of research. Unfortunately, the commonly used exemplars in RE all lack well-defined evaluation criteria, so different approaches cannot be compared directly. With the development of performance measures for these exemplars, however, it becomes possible to benchmark different RE technologies.

Although RE processes are extremely rich and varied, it is possible to identify areas that are sufficiently well understood to allow the definition of benchmarks. The utility of such benchmarks for both research and industry has been clearly demonstrated by analogous efforts in other fields (e.g. TREC in text retrieval or Robot Soccer in robotics). By their very nature, successful benchmarks require a community effort to define and establish. In seeking to define an agreed benchmark, research communities often experience a great leap forward, both in terms of collaboration and consensus among researchers, and in terms of technical results. This workshop seeks to spark a community initiative in this direction.

So, how should we assess progress in Requirements Engineering research? As a young, multi-disciplinary field, we still lack any broad consensus on appropriate research methodology and evaluation criteria. Yet we need to perform comparative evaluation of our research efforts if we wish to develop and mature as a scientific discipline. Such comparisons are also a crucial component of technology transfer. This workshop will examine the research methods we currently use in RE, and will investigate how we might improve our ability to evaluate and compare our research results.

Themes

  • Research method and research validation in RE:
    • How do we choose our research goals?
    • How do we evaluate success?
    • How do we measure the impact/importance of a research program?
    • Should we be more explicit about our research methods?
  • The role of comparative evaluation in RE:
    • Establishing the necessary consensus on how to compare research results
    • Strengths and weaknesses of various comparative evaluation approaches
    • Experience of these evaluation approaches in other fields
  • Determining which sub-areas in RE are ready for comparative evaluation:
    • Identifying task samples and evaluation criteria
    • Proposing potential benchmarks for specific RE activities
  • Reporting on the results of empirical studies and comparative evaluation of RE techniques, methods and tools.

Format

CERE will be a discussion-oriented workshop to promote interaction and exchange of ideas among participants. In the first half of the workshop, there will be presentations and discussion of submitted papers. During the afternoon, there will be a plenary session to synthesize the ideas and examine the way forward for comparative evaluation. An informal committee will be formed to continue the work after the workshop. Results will be used to set up a TREC-style competition in 2004.

Submissions

Two types of papers can be submitted to the workshop: short position papers and full technical papers. Both types should address one of the topics or questions from the Themes section above.

  • Position papers (max 4 pages)
    Short papers stating the position of the author(s) on any of the topics within the scope of the workshop. For example, position papers could describe experience with a particular research evaluation method, or could propose an area of RE that is ripe for benchmarking. Position papers will be evaluated on their potential for generating discussion and on the originality of the positions expressed.
  • Technical papers (max 8 pages)
    Full papers either describing experience of comparative evaluation or reporting its results. For example, a full paper might describe how a comparative evaluation of RE techniques was performed in practice, whether in research labs or in industrial settings; or it might present results on the actual performance of RE tools, methods or processes, obtained in lab-based experiments or in field trials.

Important dates

Deadline for submission of papers:  June 27, 2003
Notice of acceptance to authors:    July 18, 2003
Camera-ready copy submission:       August 15, 2003
Workshop date:                      September 8, 2003

Organisers

Dr. Vincenzo Gervasi
Dipartimento di Informatica
University of Pisa
Email: gervasi@di.unipi.it
Web: http://www.di.unipi.it/~gervasi
Dr. Didar Zowghi
Faculty of Information Technology
University of Technology, Sydney
Email: didar.zowghi@uts.edu.au
Web: http://www-staff.it.uts.edu.au/~didar
Prof. Steve Easterbrook
Dept of Computer Science
University of Toronto
Email: sme@cs.toronto.edu
Web: http://www.cs.toronto.edu/~sme
Susan Elliott Sim
Dept of Computer Science
University of Toronto
Email: simsuz@cs.toronto.edu
Web: http://www.cs.toronto.edu/~simsuz

Program Committee

Annie I. Antón
Daniel M. Berry
Egon Börger
Dave Bustard
Daniela Damian
Steve Easterbrook
Khaled El Emam
Martin S. Feather
Donald C. Gause
Vincenzo Gervasi
Constance Heitmeyer
Ann Hickey
Marina Jirotka
Barbara Kitchenham
Michel Lemoine
Neil Maiden
Bashar Nuseibeh
Klaus Pohl
Björn Regnell
Colette Rolland
Susan Sim
Roel Wieringa
Didar Zowghi