Special issue of the IEEE Transactions on Knowledge and Data Engineering



 
Connectionist Models for Learning in Structured Domains
Guest Editors: Paolo Frasconi, Marco Gori, and Alessandro Sperduti


Submission deadline: July 30, 1999

 
BACKGROUND

Structured representations are ubiquitous in fields such as knowledge representation, language modeling, and pattern recognition. Although many of the most successful connectionist models are designed for "flat" (vector-based) or sequential representations, recursive or nested representations are preferable in several situations. One obvious setting is concept learning when objects in the instance space are graphs or can be conveniently represented as graphs. Terms in first-order logic, blocks in document processing, patterns in structural and syntactic pattern recognition, chemical compounds, proteins in molecular biology, and even World Wide Web sites are all entities that are best represented as graphical structures, and they cannot easily be handled by vector-based architectures. In other cases (e.g., language processing), the process underlying the data has a (hidden) recursive nature, but only a flat representation is left as an observation. Still, the architecture should be able to deal with recursive representations in order to correctly model the mechanism that generated the observations.

The interest in developing connectionist architectures capable of dealing with these rich representations can be traced back to the late 1980s. Early approaches include Touretzky's BoltzCONS, Pollack's RAAM model, and Hinton's recursive distributed representations. More recent techniques include labeled RAAMs, holographic reduced representations, and recursive neural networks. Today, more than ten years after the explosion of interest in connectionism, research on architectures and algorithms for learning structured representations still has much to explore, and no definitive answers have emerged. The major difficulty with connectionist models seems to be not just representing symbols, but devising proper ways of learning when the examples are data structures, i.e., labeled graphs that describe relationships among symbols (or, more generally, combinations of symbols and continuous-valued attributes).
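As a concrete illustration of learning from data structures, the sketch below (not part of the original call; all names, dimensions, and weights are illustrative assumptions) shows in Python how a recursive neural network of the kind mentioned above can encode a labeled tree bottom-up into a fixed-size vector, which a standard readout layer could then use for classification or regression.

    # Minimal sketch of a recursive neural network encoder for labeled trees.
    # Dimensions, weight initialization, and node format are assumptions.
    import numpy as np

    LABEL_DIM, STATE_DIM, MAX_CHILDREN = 4, 8, 2   # assumed sizes
    rng = np.random.default_rng(0)
    W_label = rng.standard_normal((STATE_DIM, LABEL_DIM)) * 0.1
    W_child = [rng.standard_normal((STATE_DIM, STATE_DIM)) * 0.1
               for _ in range(MAX_CHILDREN)]        # one matrix per child position
    b = np.zeros(STATE_DIM)

    def encode(node):
        """node = (label_vector, [child_nodes]); returns the node's state vector."""
        label, children = node
        h = W_label @ label + b
        for i, child in enumerate(children[:MAX_CHILDREN]):
            h = h + W_child[i] @ encode(child)      # recursive, position-dependent combination
        return np.tanh(h)

    # Example: a tiny tree with two leaves under one root.
    leaf = lambda: (rng.standard_normal(LABEL_DIM), [])
    root = (rng.standard_normal(LABEL_DIM), [leaf(), leaf()])
    print(encode(root))   # fixed-size encoding of the whole structure

In an actual learning setting the weights would of course be trained (e.g., by backpropagation through the structure) rather than fixed at random; the sketch only illustrates how a variable-shaped labeled graph can be mapped to a vector that conventional connectionist machinery can consume.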

 

TOPICS
 
The aim of this special issue is to solicit and publish papers that provide a clear picture of the state of the art in this area. We encourage submissions addressing, in addition to other relevant issues, the following topics:
 
 
INSTRUCTIONS
 
We encourage e-mail submissions (PostScript, RTF, and PDF are the only acceptable formats). For hard-copy submission, please send 6 copies of the manuscript to Prof. Marco Gori. Manuscripts should not exceed 30 double-spaced pages (excluding figures and tables). The title and abstract should be sent separately in ASCII format, even before the final submission, so that reviewers can be contacted in a timely manner.
 
IMPORTANT DATES
 
 
 
  • Submission of title and abstract (e-mail): July 15, 1999
  • Submission deadline: July 30, 1999
  • Notification of acceptance: December 31, 1999
  • Expected publication date: mid-to-late 2000
     
ADDRESSES

Paolo Frasconi
DIEE, University of Cagliari, Piazza d'Armi, 09123 Cagliari (ITALY)
Phone: +39 0339 5399 648
E-mail: paolo@diee.unica.it

Marco Gori
DII, University of Siena, Via Roma 56, 53100 Siena (ITALY)
Phone: +39 577 263 610
E-mail: marco@ing.unisi.it

Alessandro Sperduti
DI, University of Pisa, Corso Italia 40, 56125 Pisa (ITALY)
Phone: +39 50 887 213
E-mail: perso@di.unipi.it