Pisa - Dipartimento di Informatica - Research Evaluation Exercise 1999


Web Computing
and
Adaptive Agents



Proposer: Giuseppe Attardi

  1. Participants

     Giuseppe Attardi, 100%
     Maria Simi, 100%

  2. Collaborators
     1. Students

        Antonio Cisternino, undergraduate
        Massimo Di Giorgio, undergraduate
        Barbara Centini, undergraduate
        Giovanni Zorzetti, undergraduate
        Alessandro Tommasi, undergraduate
        Filippo Tanganelli, undergraduate

     2. External contacts

        Antonio Gullí, Ideare srl
        Domenico Dato, Ideare srl
        Tito Flagella, Link srl
        Antonio Converti, Italia On Line
        Carlo Traverso, Dipartimento di Matematica

  3. Keywords: web computing, adaptive agents, text categorization, knowledge management, language analysis

  4. State of the Art and Trends

    Developing the sophisticated computer systems of the future will require suitable infrastructures (Web computing) and technologies (adaptive agents).

    1. Web Computing

      The Web is primarily a document transfer system, later extended to support limited stateless client/server communication through mechanisms such as CGI server extensions and HTTP POST. As Web applications that use this channel grow more complicated, the architecture is reaching its design limits. The Web should evolve from an infrastructure for accessing static pages into an architecture supporting Web objects that interact with each other. Web computing must be supported by a suitable Web object model: simpler than object models like COM and CORBA, based on native Web standards such as HTTP and XML, and neutral with respect to architecture and language. Java has most of these features except the last one. Such a unified distributed computing model for the Web will encompass both document publishing and distributed object communication.
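      As a rough illustration of the kind of interaction such a Web object model would support, here is a minimal Java sketch that invokes a method on a remote object by POSTing an XML-encoded call over HTTP; the endpoint URL, the XML vocabulary and the object and method names are invented for illustration.

        import java.io.*;
        import java.net.*;

        // Sketch: invoke a method on a "Web object" by exchanging XML over HTTP.
        public class WebObjectCall {
            public static void main(String[] args) throws IOException {
                String request =                                   // hypothetical XML vocabulary
                    "<methodCall>"
                    + "<target>urn:example:catalogue</target>"     // hypothetical object name
                    + "<method>lookup</method>"
                    + "<arg>adaptive agents</arg>"
                    + "</methodCall>";
                URL url = new URL("http://example.org/webobjects");    // hypothetical endpoint
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "text/xml");
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(request.getBytes("UTF-8"));
                }
                // The reply is itself an XML document describing the result.
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                    for (String line; (line = in.readLine()) != null; ) {
                        System.out.println(line);
                    }
                }
            }
        }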

      Work in this direction is being pursued by the WebBroker and Infosphere projects. WebBroker is developing an XML-based mechanism for distributed object communication on the Web. Infosphere is concerned with the theory and implementation of compositional systems that support peer-to-peer communication among persistent multithreaded distributed objects.

      Infosphere brings to the Web objects that are automatic, self-aware, and intelligent.

      Even though tied to Java, Jini is an interesting connection technology: it consists of an infrastructure and a programming model that enable devices to connect with each other, forming an impromptu community. Jini technology uses Java RMI protocols to move code around the network. Devices and applications use a process known as discovery to register with the network. Once registered, the device or application places itself in the lookup service, the equivalent of a bulletin board for all services on the network.
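      The following schematic Java sketch shows these discovery-and-join steps, assuming the Jini lookup API (packages net.jini.discovery and net.jini.core.lookup); the PrinterProxy service published on the lookup service is a stand-in for a real service proxy.

        import net.jini.core.lookup.ServiceItem;
        import net.jini.core.lookup.ServiceRegistrar;
        import net.jini.discovery.DiscoveryEvent;
        import net.jini.discovery.DiscoveryListener;
        import net.jini.discovery.LookupDiscovery;

        public class JiniJoin {
            // Stand-in for the (serializable) proxy a device would publish.
            static class PrinterProxy implements java.io.Serializable { }

            public static void main(String[] args) throws Exception {
                // Multicast discovery of lookup services in any group.
                LookupDiscovery disco = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
                disco.addDiscoveryListener(new DiscoveryListener() {
                    public void discovered(DiscoveryEvent ev) {
                        for (ServiceRegistrar registrar : ev.getRegistrars()) {
                            try {
                                // Place the service on the "bulletin board",
                                // with a one-hour lease to be renewed.
                                registrar.register(
                                    new ServiceItem(null, new PrinterProxy(), null),
                                    60 * 60 * 1000L);
                            } catch (Exception e) {
                                e.printStackTrace();
                            }
                        }
                    }
                    public void discarded(DiscoveryEvent ev) { }
                });
            }
        }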

       

      The goal of our research on Web computing is to design, implement and experiment with new Web computing architectures, in particular for building adaptive systems.

    2. Adaptive Agents

     

    Traditional computer science is based on logic formalisms. This implies that computer systems must deal with well-structured or coherently arranged information (e.g. databases, consistent logic theories, discrete or separable signals). Research in knowledge representation has shown that such coherence of information is often impossible to achieve: researchers have discussed inconclusively for years how to deal with anomalies and exceptions by means of logic formalisms like non-monotonic reasoning, default logic or circumscription.

    Natural languages exhibit a flexibility of usage that eludes any attempt to cast the concepts used to express knowledge into a fixed universal structure: words are used in a far more varied and expressive way to represent concepts than a logic theory can capture. Probably there is no simple notion of meaning for words that can be captured by predicates or set theory. In his explorations of the origin of language, Luc Steels postulates that the meaning of a word is related to the word's ability to discriminate between objects, rather than being an association with an abstract notion. For instance, the word "tree" is useful to discriminate between different types of plants or between types of diagrams, rather than to represent an abstract and elusive notion of "treeness".

    Moreover, in many situations the patterns of information that computers must deal with are so complex that a logic-based approach is no longer appropriate. Providing computer agents with intelligent abilities requires building programs or knowledge bases so complex that they are impossible to handcraft.

    On the other hand, large collections of training data have become available, as well as the computing power to process them.

    A challenge for research lies in building intelligent systems by combining knowledge acquired through training with prior knowledge from established bodies of knowledge.

     

    An adaptive agent uses information extracted from interacting with its environment to improve its behavior.

    Adaptive agents can play a role in achieving the vision of the computer of the future as an intelligent assistant. This vision requires going beyond the current desktop metaphor of interacting with a computer. The metaphor has become pervasive and has reached a high degree of sophistication, enabling users to perform a large variety of tasks. Unfortunately, a GUI based on direct manipulation (point-and-click interaction) intrinsically limits user actions to precise and simple clerical tasks.

    Introducing other modalities of interaction, like speech input or gesture recognition, is unlikely to bring significant improvements: issuing commands by voice to open menus or click on items does not reduce the time required to perform a task. In general it makes the task more cumbersome and slow, owing to language and context ambiguity and to the fact that natural language descriptions are often more verbose than pointing directly at what one wants. This is a consequence of the fact that the user interface is visual: objects are displayed in a way that is simple for the eye to recognize.

    The advent of small handheld devices without keyboards or pointing devices poses new challenges to the desktop metaphor. The current approaches are some form of handwriting or speech recognition. Speech recognition is not widely used, despite considerable improvements in recognition technologies, which are now capable of good recognition rates even for continuous speech. Despite this, user experiments have measured that performing a task by voice commands is considerably slower than by other means, even for tasks like dictation. While improvements in speech recognition might make dictation competitive with typing, voice input will not be helpful for other tasks unless the presentation and interaction metaphor changes significantly.

    Voice input performs better in restricted contexts: for instance, prototypes of voice input have been built on handheld devices, where the number of possible tasks is limited and only one task at a time is presented to the user's attention. One can say: "reply to John", "meeting on Monday at noon". The first sentence activates the mail composer and fills in the recipient with the address for John, extracted from the address book. The second sentence fills in the subject field of the message, opens the calendar at the proper date and time, and adds the meeting with John.
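    A toy Java sketch of such a restricted command vocabulary: with only a few tasks available, utterances can be matched against a handful of patterns. The patterns, actions and address-book lookup below are invented for illustration.

      import java.util.regex.*;

      public class VoiceCommands {
          public static void handle(String utterance) {
              Matcher reply = Pattern.compile("reply to (\\w+)").matcher(utterance);
              Matcher meet = Pattern.compile("meeting on (\\w+) at (\\w+)").matcher(utterance);
              if (reply.matches()) {
                  // Activate the mail composer and fill in the recipient.
                  System.out.println("compose to " + lookupAddress(reply.group(1)));
              } else if (meet.matches()) {
                  // Open the calendar at the proper date and time.
                  System.out.println("calendar: " + meet.group(1) + " at " + meet.group(2));
              } else {
                  System.out.println("not understood: " + utterance);
              }
          }

          private static String lookupAddress(String name) {
              return name + "@example.org";   // stand-in for the address book
          }
      }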

     

    There is a more compelling reason to look for ways to go beyond the desktop metaphor.

    The adoption of computer tools based on this metaphor has turned many users into clerks, transferring to them tasks previously performed by clerical staff. This transfer of tasks has eliminated intermediate-level personnel, but has required people devoted to other activities to acquire clerical skills: people have become expert at typesetting, drawing, software installation, accounting. In some cases this shift has happened within a company, with apparent benefits in terms of personnel count, but sometimes also across companies. For instance, remote banking forces customers to undertake the tasks of bank clerks, in practice turning them into employees of the bank: customers who have to perform bank transfers need to figure out which form to fill in, which bank codes are needed, and so on. By contrast, we entertain a higher level of interaction with an investment broker, to whom we state goals and preferences, rather than directly performing stock purchases, bond swaps or other financial operations.

    The most intuitive way to raise the level of interaction is to exploit natural language. While full language understanding is still out of reach, simple forms of text analysis are already being applied successfully, for instance in data mining, document retrieval and categorization, and knowledge management.

    Statistical tools are currently in wide use in linguistics and information retrieval. They are simple to build, efficient, and can achieve a good degree of accuracy (80-90%). It seems quite difficult, however, to improve their accuracy simply through more sophisticated statistical analysis: many attempts in this direction show improvements of a few percent at best, if any. Deeper semantic analysis seems required to improve the quality of language analysis tools.

     

    The goal of our research on adaptive agents is to develop models, software technology, libraries, knowledge bases, linguistic corpora and tools, presentation metaphors, and learning algorithms to produce agents that provide a higher level of interaction with people.

  5. Relevant Activities at the Department
    1. European projects
    2. Funding
    3. Equipment grants
    4. Network Computing

In 1986, within the ESPRIT project CHAMELEON, we developed an architecture for software migration. The architecture was based on a virtual machine called ACM (Abstract Common Machine), which included mechanisms for migrating code across the network, support for objects (including method dispatch, serialization and reflection), garbage collection, multithreading, and interfaces to the graphics and system facilities of the host machine.

In 1994 these same ideas formed the basis for Java. Beyond these similarities, for the original implementation of Java the development team at Sun Microsystems used CMM, a customizable conservative garbage collector for C++ that we developed in the PoSSo project.

 

Once the technology had been established, over the last few years we have been exploring its capabilities and limitations by building several applications:

  1. Mod740 is an applet that helps in filling out the form for the Italian Internal Revenue Service. This application stresses the capabilities of Java and of its implementations in various browsers, in order to prove the viability of the technology for building complex applications.
  2. The current incarnation, Unico99, exploits the latest security mechanisms of Java, so that the applet can perform the full task of preparation and submission, including certifying the applet itself and allowing the user to digitally sign the form before submitting it over the network.
  3. CompAss is a tool that assists students in preparing their plan of study. Both its graphical interface and the code for checking compliance of the plan with faculty regulations are automatically generated by a compiler for a suitable constraint language. CompAss is currently in official use at the University of Pisa.

    5. Network Design

Attardi is involved in the organization, design and evolution of the national research network GARR, as a member of the ministry committee OTS GARR and of the national university committee NTGC. He is also involved in the direction of SerRA, the networking center of the University of Pisa.

    6. Distributed Object Models and Software Components
      1. Unified distributed object model

        A software component is a piece of a program that can be customized at design time and embedded into an application complying with an interface framework.

        Software components are often built according to an object model. CORBA and DCOM are two competing distributed object models. Both have drawbacks: DCOM does not support inheritance and is mostly available on Microsoft platforms; CORBA is oriented towards interfacing with distributed services rather than objects (it has no standard notion of class instantiation), and does not provide support for software components.

        Luckily, both CORBA and DCOM use an IDL (Interface Definition Language) to express interfaces in a language-independent way.

        We have developed a unified object model, which fully supports objects with inheritance, and which can be translated both into DCOM and CORBA. The translation into DCOM adds the annotations required to create ActiveX components.

      2. Adding templates to CORBA IDL

        IDL does not provide parametric types, which are essential in many situations, in particular in computer algebra. We have designed an extension of IDL with templates, together with its standard translation into C and C++. Our IDL templates are expressive but, unlike C++ templates, they are not Turing complete; this ensures that compilation time is bounded.

    7. Categorization of Web documents

Assistance in retrieving documents on the World Wide Web is provided either by search engines, through keyword-based queries, or by catalogues, which organize documents into hierarchical collections. Maintaining catalogues manually is becoming increasingly difficult due to the sheer amount of material on the Web, so it is necessary to resort to techniques for automatic classification of documents. Classification is traditionally performed by extracting the information for indexing a document from the document itself. Categorization by context is a novel technique for automatic categorization based on the following hypotheses (a sketch of the extraction step follows the list):

  1. a Web page which refers to a document must contain enough hints about its content to induce someone to read it;
  2. such hints are sufficient to classify the document.
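As a concrete illustration of the first hypothesis, the following Java sketch collects, for every link in a referring page, the anchor text plus a window of surrounding text, producing the "context" on which classification can then operate. The regular-expression HTML handling and the window size are simplifications for illustration.

    import java.util.*;
    import java.util.regex.*;

    // Sketch: extract, for each link in a page, the snippet of text around it.
    public class ContextExtractor {
        private static final Pattern ANCHOR = Pattern.compile(
            "<a\\s+[^>]*href=\"([^\"]+)\"[^>]*>(.*?)</a>",
            Pattern.CASE_INSENSITIVE | Pattern.DOTALL);

        /** Maps each linked URL to the anchor text plus `window` characters around it. */
        public static Map<String, String> contexts(String html, int window) {
            Map<String, String> result = new LinkedHashMap<>();
            Matcher m = ANCHOR.matcher(html);
            while (m.find()) {
                String before = html.substring(Math.max(0, m.start() - window), m.start());
                String after  = html.substring(m.end(), Math.min(html.length(), m.end() + window));
                String context = strip(before) + " " + strip(m.group(2)) + " " + strip(after);
                result.put(m.group(1), context.replaceAll("\\s+", " ").trim());
            }
            return result;
        }

        private static String strip(String s) {     // crude removal of residual markup
            return s.replaceAll("<[^>]*>", " ");
        }
    }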

Within the EUROsearch project we developed Theseus, an automated classifier which infers the context from the structure of HTML documents and performs categorization.

We also developed SearchTone, which performs categorization by content and is already in production use within Arianna, the largest search engine for the Italian Web space.

Within these tools we have already applied linguistic analysis techniques. We improved TreeTagger, an existing statistical part-of-speech (POS) tagger, revising its memory management and building an Italian lexicon. The POS tagger plays a fundamental role in detecting noun phrases, identifying the lemmas of words, and determining common or stop words, so that the amount of text that needs to be analyzed for categorization is significantly reduced.
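The following Java sketch shows how POS information can reduce the text to be analyzed: only the lemmas of content words are kept, while determiners, prepositions and other stop classes are dropped. The Token record and the tag names are illustrative assumptions, not TreeTagger's actual tagset.

    import java.util.*;

    public class PosFilter {
        // A tagged token as produced by an external POS tagger.
        record Token(String form, String lemma, String tag) { }

        // Assumed content-word tags; a real tagset would be richer.
        private static final Set<String> CONTENT_TAGS = Set.of("NOUN", "PROPN", "ADJ", "VERB");

        /** Keeps only the lemmas of content words, discarding stop classes. */
        public static List<String> contentLemmas(List<Token> tagged) {
            List<String> lemmas = new ArrayList<>();
            for (Token t : tagged) {
                if (CONTENT_TAGS.contains(t.tag())) {
                    lemmas.add(t.lemma());
                }
            }
            return lemmas;
        }
    }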

  6. Short Term Plans and Expected Results

    Filing and retrieving documents is a task where a higher level of interaction through adaptive agents can be explored: users do not need to know how and where documents are stored, as long as they can retrieve them easily and quickly when needed.

    In particular we plan to continue working on search and categorization of documents, refining the techniques with the addition of semantic linguistic analysis.

    1. Categorization

Categorization is a fundamental operation in many aspects of human activity: from low-level activities like signal analysis, speech recognition and handwriting recognition, to higher-level tasks of information filtering, knowledge representation and reasoning.

We will explore techniques for categorization in the area of Web documents.

Among the aspects to be studied are:

      1. Concept Learning

In our research we have studied various issues of knowledge representation: description logics, taxonomic reasoning, contextual reasoning. Conceptual taxonomies play an important role in organizing knowledge and supporting reasoning; viewpoints (or contexts) are the basis for representing the knowledge of several agents and their interactions.

 

Formalizing the communication between agents by means of viewpoints led us to formulate the principle of referent sharing in communication: agents can communicate with each other using phrases that refer only to manifest constants. A manifest constant refers to some object that has been pointed at during a direct face-to-face conversation with the other agent, or is an expression in which the only constants appearing are manifest. In other words, the basis for agents to understand each other is ostension. Language and concepts can then be learned simply from interaction with other agents.

 

We are applying this idea to learning concepts for categorization from experience. The experience consists of acts in which the agent is told the subject of a phrase. More precisely, the phrase is a document description (the context of the URL, including its surrounding text) and the subject is one of the categories to which the document belongs.

 

Starting from a set of classified documents, we build conceptual representations for categories, taking representations of documents as examples. The concept associated with a category is derived according to a principle of economy. Two mechanisms are involved:

 

The learning algorithm incrementally builds representations of the categories as sets of prototypes. The representation used for documents, derived from the context of the document rather than from its contents, guarantees that the representation we obtain for the categories is compact and semantically perspicuous.
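A minimal Java sketch of this kind of incremental prototype learning, assuming bag-of-words vectors, cosine similarity and a fixed similarity threshold (all illustrative choices, not the actual algorithm): an example either reinforces the nearest prototype of its category or, if too distant from all of them, founds a new prototype, in accordance with the principle of economy.

    import java.util.*;

    public class PrototypeLearner {
        // Each category is represented by a set of prototypes (bag-of-words vectors).
        private final Map<String, List<Map<String, Double>>> categories = new HashMap<>();
        private static final double THRESHOLD = 0.3;   // assumed similarity cut-off

        public void learn(String category, Map<String, Double> example) {
            List<Map<String, Double>> protos =
                categories.computeIfAbsent(category, c -> new ArrayList<>());
            Map<String, Double> best = null;
            double bestSim = -1;
            for (Map<String, Double> p : protos) {
                double s = cosine(p, example);
                if (s > bestSim) { bestSim = s; best = p; }
            }
            if (best != null && bestSim >= THRESHOLD) {
                // Blend the example into the nearest prototype (shared terms averaged,
                // new terms added at half weight).
                for (Map.Entry<String, Double> e : example.entrySet()) {
                    best.merge(e.getKey(), e.getValue() / 2, (old, half) -> old / 2 + half);
                }
            } else {
                // No prototype is close enough: found a new one.
                protos.add(new HashMap<>(example));
            }
        }

        private static double cosine(Map<String, Double> a, Map<String, Double> b) {
            double dot = 0, na = 0, nb = 0;
            for (Map.Entry<String, Double> e : a.entrySet()) {
                na += e.getValue() * e.getValue();
                Double v = b.get(e.getKey());
                if (v != null) dot += e.getValue() * v;
            }
            for (double v : b.values()) nb += v * v;
            return (na == 0 || nb == 0) ? 0 : dot / Math.sqrt(na * nb);
        }
    }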

      2. Taxonomy Building

      Our research will attack the challenge of combining training with expertise in the task of building taxonomies for use in the categorization of documents.

      The relevant structures to be used in text categorization and understanding are lexicons, thesauri and ontologies. Lexicons contain lexical information about the words of a language, including root lemmas and categories. Thesauri describe relations such as synonymy and antonymy. Ontologies describe the concepts used in particular domains and their relations, such as hyponymy, hypernymy, modality and causality.

      Our research will explore how to build or extend these resources from experience.

      Terminologies can be extracted from document collections by analyzing the text with a POS tagger and detecting recurrent phrases. Phrases are then related to topics in the available ontologies. Simple word matching is not sufficient in this step, so we must exploit information from thesauri: for instance, "violent conflict" should match "fight" and "struggle".
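      A minimal Java sketch of the extraction step, under the assumption that simple noun-phrase patterns (ADJ NOUN, NOUN NOUN) over tagged lemmas approximate candidate terms; the tag names and the frequency threshold are illustrative.

        import java.util.*;

        public class TermExtractor {
            /** Counts two-word noun-phrase patterns and keeps the recurrent ones. */
            public static Map<String, Integer> recurrentPhrases(
                    List<String> lemmas, List<String> tags, int minFreq) {
                Map<String, Integer> counts = new HashMap<>();
                for (int i = 0; i + 1 < lemmas.size(); i++) {
                    boolean np = (tags.get(i).equals("ADJ") || tags.get(i).equals("NOUN"))
                              && tags.get(i + 1).equals("NOUN");
                    if (np) {
                        counts.merge(lemmas.get(i) + " " + lemmas.get(i + 1), 1, Integer::sum);
                    }
                }
                counts.values().removeIf(c -> c < minFreq);   // keep only recurrent phrases
                return counts;
            }
        }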

      Ontologies can be extended through the analysis of documents: we performed a simple experiment in which is-a relations are inferred from the analysis of Web documents. The prototype was quite effective, in particular for extracting information about proper nouns and acronyms.

      We plan to explore techniques like Support Vector Machines, which appear promising for learning how to separate data into clusters in high-dimensional spaces with non-linear boundaries.

      3. Discovery of Categories

      As the material grows and evolves, the category tree itself must be revised. Adaptive agents should help in discovering the emergence of meaningful and useful new categories by performing cluster analysis. Having categories described in a structured way, as collections of prototypes obtained via concept learning, may help toward this goal.

      4. Link Analysis

      Link analysis can provide useful information for categorization. In particular, one can rank pages by popularity according to the number of links pointing to a page. Sites can be ranked as authoritative on a subject if they are linked from many pages devoted to that subject. A page is devoted to a subject if it contains many links related to the subject, i.e. the contexts of its links relate to the subject.

      Link analysis within a site can help identify structural links (e.g. links to the home page) or individual home pages in a large community site.
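      The following Java sketch illustrates both rankings, assuming links have already been extracted together with their contexts (as in categorization by context); the majority test for "devoted to a subject" is an illustrative simplification of the categorizer.

        import java.util.*;

        public class LinkRank {
            record Link(String from, String to, String context) { }

            /** Popularity: number of links pointing to each page. */
            public static Map<String, Integer> popularity(List<Link> links) {
                Map<String, Integer> inLinks = new HashMap<>();
                for (Link l : links) inLinks.merge(l.to(), 1, Integer::sum);
                return inLinks;
            }

            /** Authority on a subject: in-links coming from pages devoted to it. */
            public static Map<String, Integer> authority(List<Link> links, String subject) {
                Map<String, List<Link>> byPage = new HashMap<>();
                for (Link l : links) {
                    byPage.computeIfAbsent(l.from(), k -> new ArrayList<>()).add(l);
                }
                // A page is "devoted" to the subject if most of its link contexts
                // mention it (subject is expected in lower case here).
                Set<String> devoted = new HashSet<>();
                for (Map.Entry<String, List<Link>> e : byPage.entrySet()) {
                    long hits = e.getValue().stream()
                        .filter(l -> l.context().toLowerCase().contains(subject)).count();
                    if (hits * 2 > e.getValue().size()) devoted.add(e.getKey());
                }
                Map<String, Integer> score = new HashMap<>();
                for (Link l : links) {
                    if (devoted.contains(l.from())) score.merge(l.to(), 1, Integer::sum);
                }
                return score;
            }
        }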

    2. Applications

The following are possible applications of the techniques of adaptive agents:

      1. Web Computing
         1. Web Object Model

            In the area of Web Computing we plan to complete our work on a unified distributed object model, including support for polymorphic interfaces (IDL templates). We plan to deliver an open-source implementation of a compiler for IDL templates.

         2. Interactive Discussion Forum

The Web is driven by the simple metaphor of navigation. In this metaphor Web users are mostly left alone, with occasional help from tools of limited intelligence. It is essential to develop the Web towards supporting interpersonal communication, facilitating interactions with other subjects rather than only with objects. This entails developing simple interfaces for communication services and extending the means of accessing the Web through new devices, in particular handheld and wireless devices.

We would like to explore tools that enable convenient and purposeful interactions among people, focusing in particular on tools for Interactive Discussion Forums. An Interactive Discussion Forum is a means to involve a community in an issue important to it and to reach decisions about how to address the issue. A forum is a form of structured discussion, more purposeful than informal communications such as chats or mailing lists, and can have an important role in achieving a sense of community and participation. A forum can also be used as part of the decision-making process within organizations.

An Interactive Discussion Forum infrastructure can be beneficial to a community of people, providing them with new means to communicate, exchange opinions and achieve their common goals.

A Forum is typically organized to discuss a certain issue. A moderator invites a few experts to present a position statement and background material for the discussion. The forum involves an audience (either live or not) whose members take part in the discussion. To take part, each participant notifies his or her intention or directly transmits his or her contribution. To enable coordination of the discussion, organization of the contributions and tracking of the decision process, the interventions must follow simple rules of dialectics: each participant must classify his or her intervention according to a well-defined set of categories:

Since the whole forum is archived, asynchronous participation in a Forum is possible. The list of interventions is part of an argumentation structure that keeps track of the relations among interventions and among issues (for instance, when an issue subsumes another or is composed of sub-issues).
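A minimal Java sketch of such an archived argumentation structure; the set of intervention categories below is an illustrative assumption, not the set referred to above.

    import java.util.*;

    public class Forum {
        // Assumed dialectic categories for interventions.
        enum Kind { POSITION, SUPPORT, OBJECTION, QUESTION, CLARIFICATION }

        record Intervention(int id, String author, Kind kind, String text,
                            OptionalInt inReplyTo) { }

        private final List<Intervention> archive = new ArrayList<>();

        /** Records a classified intervention, optionally linked to an earlier one. */
        public Intervention post(String author, Kind kind, String text, OptionalInt inReplyTo) {
            Intervention i = new Intervention(archive.size(), author, kind, text, inReplyTo);
            archive.add(i);   // the whole forum is archived for asynchronous participation
            return i;
        }

        /** Reconstructs part of the relation structure: all replies to an intervention. */
        public List<Intervention> replies(int id) {
            List<Intervention> out = new ArrayList<>();
            for (Intervention i : archive) {
                if (i.inReplyTo().isPresent() && i.inReplyTo().getAsInt() == id) out.add(i);
            }
            return out;
        }
    }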

  7. Long Term Scenarios

Among the technical research priorities suggested by the US President's Information Technology Advisory Committee, we plan to contribute in the following areas:

  8. Resources

The activities of the group have so far been carried out by just two people from the Department, relying largely on personnel paid through contracts from externally funded projects. This severely limits the extent of the activities, since new personnel must be recruited and trained from scratch, without the possibility of building up expertise and critical mass. All the outstanding people involved in our past projects have been recruited by leading research institutes (both national and international) or offered lucrative jobs in companies.

 

We estimate that for the planned research at least three graduate student fellowships and two research assistant positions are necessary.

Funding for the activity will be obtained through the following contracts:

Equipment grants will hopefully continue from Sun Microsystems (Java Campus initiative) and from Hewlett-Packard (Internet Philanthropic Initiative).

New collaborations are being established:

  9. Short CVs
    1. Giuseppe Attardi

      Giuseppe Attardi is professor of Computer Science at the Dipartimento di Informatica, where he currently teaches Computer Graphics, Java Programming and Java Security.
      Prof. Attardi is involved with the Internet both at the local level, as responsible for the center SerRA of the University of Pisa, and at the national level, as a member of OTS GARR, the steering committee of the Italian national research network GARR.
      Prof. Attardi is responsible for the national Web Cache service and the national News service.
      He is a member of a working group of the Italian Ministry of Communications on the initiative for the Internet and the development of the information society.
      He was a visiting scientist for three years at the MIT Artificial Intelligence Laboratory, where he developed Omega, a calculus of descriptions for knowledge representation based on taxonomies of concepts, and participated in the development of the first graphics window system at MIT.
      He has been a senior visitor at the International Computer Science Institute in Berkeley and at the Sony Research Laboratory in Paris.
      He has been project leader of the ESPRIT project P440 (MADS) and of project CHAMELEON, and group leader in the projects APHRODITE, ITHACA, TROPICS, PoSSo, FRISCO and EUROsearch.
      He has worked on actor languages and concurrency, and developed ECoLisp, an Embeddable Common Lisp.
      He is active in the design and implementation of object-oriented languages, including CLOS as part of ECoLisp.
      In the ESPRIT project PoSSo, he was responsible for the development of CMM (Customisable Memory Manager), a dynamic memory management system for C++.
      Prof. Attardi is an editor of Computational Intelligence and has served as a member of several program committees, including IJCAI, ECAI, ECOOP and KR.
      Prof. Attardi is a member of the board of directors of the Java Italian Association.

    2. Maria Simi

Maria Simi is associate professor of Artificial Intelligence at the University of Pisa. From 1978 to 1981 she was a visiting scientist at the MIT AI Lab, where she worked in the Message Passing Group headed by Prof. Carl Hewitt. She was a co-founder of DELPHI SpA and a team member of the ESPRIT project MADS, "Message Passing Architectures and Description Systems" (1984-1989), and of COST-13/21, "Advanced Issues in Knowledge Representation". From 1989 to 1992 she was associate professor of "Informatics for Documentation" at the University of Udine. She is one of the founding members of the Italian Association for Artificial Intelligence (AI*IA) and was a member of its steering committee from 1988 to 1991. She has organized scientific events at the national and international level. She is a member of the advisory board of the journal ESRA/Expert Systems Research and Application and of the editorial board of the journal Archivi & Computers. She is the coordinator of the Computer Science subject area of the ERASMUS/SOCRATES programme at the University of Pisa. She has been doing research in the following areas:

  10. Publications

  1. G. Attardi and M. Simi, A formalisation of viewpoints, Fundamenta Informaticae, 23(2,3,4), 149-174, 1995.
  2. G. Attardi and T. Flagella, Memory Management in the PoSSo Solver, Journal of Symbolic Computing, 21, 293-311, 1996.
  3. G. Attardi and C. Traverso, Strategy-accurate parallel Buchberger algorithms, Journal of Symbolic Computing, 22, 1-15, 1996.
  4. G. Attardi, M. Gaspari, Multilanguage Interoperability, Computers and Artificial Intelligence, 15(6), 531-554, 1996.
  5. G. Attardi, M. Simi, Communication across Viewpoints, Journal of Logic, Language and Information, 7, 53-75, 1998.
  6. G. Attardi, T. Flagella and P. Iglio, A customisable memory management framework for C++, Software: Practice and Experience, 28(11), 1143-1183, 1998.
  7. G. Attardi and P. Iglio, Software Components for Computer Algebra, Proc. of ISSAC '98, 1998.
  8. G. Attardi, A. Cisternino, and M. Simi, Web-based Configuration Assistants, Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 12, 321-331, 1998.
  9. G. Attardi, S. Di Marco, D. Salvi, Categorisation by context, Journal of Universal Computer Science, 4(9), 719-736, 1998.
