Proposers: Antonina Starita, Alessandro Sperduti
The proposers have experience in Artificial Intelligence methodologies, ranging from Expert Systems to Soft-Computing approaches such as Neural Networks, Evolutionary Computation and Fuzzy Systems. This knowledge has led to the development of new methodologies, which have been exploited for the design of successful systems in different application domains. On the basis of their experience, the proposers recognize the need for better integration between symbolic and sub-symbolic techniques. This is particularly true in view both of the fast growth of information sources due to the explosion in the use of the Internet and of the new sensory capabilities of advanced robotic devices. Besides pursuing improvements of current technologies for integrating the symbolic and sub-symbolic paradigms, in the short term the proposers will investigate the possibility of extending the computational capabilities of neural networks to the treatment of data structures, as well as issues concerning the emerging Evolutionary Computation paradigm. The long-term aim is to define computational frameworks for the development of intelligent systems in different application fields, able to describe and exploit all the significant results of Soft-Computing, including its integration with symbolic methodologies.
State of the Art and Trends:
In recent years, the field of AI has shown a renewed, growing interest in the symbolic representation of knowledge, mainly due to the development of the theory of agents and multi-agent systems, which has revitalized the field and posed new challenges for developing formalisms, methodologies and languages suitable for these systems. At the same time, the diffusion of telematic networking has allowed an extraordinary growth of information sources, so that the combination of distributed and heterogeneous information has become a critical problem. A promising solution to the management, search and integration of these information sources seems to be the design and development of systems based on intelligent hybrid agents, able to extract knowledge from data of different nature and to reason about the acquired knowledge. In these systems each agent is specialized to interact with a specific source of information, but it is also able to cooperate with the other components of the system to satisfy the users' requirements, retrieving information even when the actors are different and located in different places. The technologies available to build such systems span from conceptual models to multi-agent systems, to the languages and architectures necessary to implement them, to data mining and knowledge extraction methodologies, which are more and more based on statistical methods and biologically inspired computational paradigms such as neural networks, genetic algorithms and evolutionary programming.
The final goal would be to develop autonomous intelligent agents endowed with nearly human capabilities, i.e., able to solve all the classes of problems where the information is complex and not clearly stated, and where the management of uncertainty is important. In order to reach this goal, methodologies incorporating both conceptual and sub-conceptual processes are needed, naturally leading to the integration of symbolic, connectionist and/or other paradigms that are developing in the field of machine learning. This implies filling the gap that now exists between the symbolic approach, mainly developed to face symbolic tasks, and the other approaches, whose main results so far address the low-level interaction with the environment. Three possible lines to pursue this goal have been suggested. The first one is to develop fully integrated hybrid systems exploiting the potentialities of both the symbolic and sub-symbolic methodologies now at hand. The second is to investigate new connectionist methodologies so as to develop new neural architectures and algorithms allowing the treatment of structured data and their representation at higher levels than the current ones. The development of this approach can be greatly enhanced by exploiting graphical models, which are going to play an increasingly important role in the design and analysis of machine learning algorithms. The third line is to exploit the potentialities of Evolutionary Computation, a paradigm growing very fast within machine learning.
The first line of research has been developing for some years, and the fruits of this approach are starting to emerge. It must be observed, however, that the methodologies and systems developed so far do not exploit a full integration between the symbolic and sub-symbolic components, and thus are successful only on specific tasks. This is particularly true when considering application domains where the information sources are of different nature, such as in medical applications, where images, numerical measurements, and medical knowledge are present and must be integrated, as well as in robotics, where information from the different sensory modalities must be integrated with action planning and movement control.
On the other hand, it must be taken into account that the sub-symbolic approach is evolving, considering that structured representations are ubiquitous in different fields such as knowledge representation, language modeling, and pattern recognition. Although many of the most successful connectionist models are designed for "flat" (vector-based) or sequential representations, recursive or nested representations should be preferred in several situations. One obvious setting is concept learning when objects in the instance space are graphs or can be conveniently represented as graphs. Terms in first-order logic, blocks in document processing, patterns in structural and syntactic pattern recognition, chemical compounds, proteins in molecular biology, and even world wide web sites are all entities which are best represented as graphical structures, and they cannot easily be dealt with by vector-based architectures. In other cases (e.g., language processing) the process underlying the data has a (hidden) recursive nature, but only a flat representation is available to the observer. Still, the architecture should be able to deal with recursive representations in order to model correctly the mechanism that generated the observations. The interest in developing connectionist architectures capable of dealing with these rich representations can be traced back to the end of the 80's. Today, after more than ten years since the explosion of interest in connectionism, research in architectures and algorithms for learning structured representations still has a lot to explore, and no definitive answers have emerged. It seems that the major difficulty with connectionist models is not just representing symbols, but rather devising proper ways of learning when examples are data structures, i.e., labeled graphs that can be used for describing relationships among symbols (or, more generally, combinations of symbols and continuously-valued attributes).
A different perspective has been developing rapidly during the last few years: Evolutionary Computation. Evolutionary Computation includes a variety of adaptive algorithms for search and optimization, inspired by the Darwinian theory of natural evolution. The best-known Evolutionary Algorithms (EAs) are Genetic Algorithms, Evolution Strategies, and Evolutionary Programming, which differ in the kind of structures they operate on and in the emphasis they put on alternative evolutionary operators. EAs solve problems that can be characterized as the search for an optimal solution, according to a given criterion, in a space of alternative solutions. As heuristic methods, EAs are not guaranteed to find the optimal solution to the given problem. However, they have proven very effective at solving difficult problems for which little domain-specific knowledge is available and no ad hoc solving methods are known. The only knowledge required by EAs is an evaluation function that pairs each element of the search space with a scalar value expressing the element's rating according to the search criterion.
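The skeleton common to these algorithms can be sketched as follows. This is a minimal illustration only: the bit-string encoding, the tournament selection scheme, the `one-max' evaluation function and all parameter values are illustrative assumptions, not part of any system discussed in this proposal.

```python
import random

def evolve(fitness, n_bits=20, pop_size=30, generations=60, seed=0):
    """Minimal evolutionary algorithm: the only problem knowledge used is
    the evaluation (fitness) function mapping each candidate to a scalar."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection: the better-rated of two random candidates.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, n_bits)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:              # point mutation
                i = rng.randrange(n_bits)
                child[i] ^= 1
            offspring.append(child)
        pop = offspring
    return max(pop, key=fitness)

# Illustrative `one-max' criterion: the rating is simply the number of ones.
best = evolve(fitness=sum)
```

Note how the loop never inspects the problem itself: all domain knowledge is confined to the evaluation function passed in, which is exactly the property that makes EAs applicable when ad hoc methods are unknown.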
A very active research area of Evolutionary Computation is the use of EAs for Artificial Intelligence and Machine Learning. When detailed domain-specific knowledge is not available or is too expensive to obtain, hand-coding the system's behavior or applying symbolic methods for reasoning and deduction can be very difficult, if not infeasible. In such task domains, learning and adaptation through EAs allow the automatic definition of flexible and robust intelligent systems. Genetic Algorithms (GAs) are the EAs that have been most widely used as learning methods, generating a research area known as Genetics-Based Machine Learning (GBML). The most important areas within GBML are: Classifier Systems (CSs), Genetic Programming (GP), Evolutionary Neural Networks (ENN), and Evolutionary Robotics.
Moreover, GAs have been successfully used to evolve classification theories for concept learning, decision trees, and Fuzzy systems (evolution of membership functions and/or rules).
GBML is a relatively new field and offers a variety of research opportunities, both at the theory level, since GAs still lack a solid theoretical background, and at the application level, e.g., robotics, sensory-motor control, navigation, signal processing, feature extraction, prediction, data mining, classification, concept learning, and decision making in general (from personal assistants to Internet search agents).
The proposers have experience in all three lines of research discussed above. Starting from this experience, they aim to contribute to the development of better soft-computing-based methodologies for the design of intelligent systems in a variety of application domains, including medical applications, chemistry, and robotics, and to establish to what extent the cooperation and integration of these paradigms can be realized through suitable methods, formalisms and languages.
Relevant research activities at the Department
The proposers have developed several systems that integrate multi-paradigm techniques. Specifically:
· Diabetic Retina Analyzer (DRA) System.
The aim of the DRA System is to automatically locate anomalies of the retina in diabetic patients. The current version of the system can detect and localize microaneurysms, hard exudates, and cotton-wool spots. In the system, image processing techniques are integrated with neural networks. The performance of the system is boosted by a committee of networks: the use of committees in the automatic detection of microaneurysms and hard exudates leads to a significant improvement of the system's performance. Moreover, we further increased the performance by using invariants, new selection algorithms, and sequential cooperation among single classifiers. This system has been developed in collaboration with the ``Centro di Diabetologia'', Pisa University. The latest version of the system will be used by the ``Clinica di Diabetologia'' of the University of Siena, under a grant from the local Health Service (ASL).
· A System for Paleographic Inspections (SPI).
The morphological analysis of scripts is an important component of Paleography; however, because of the lack of any explicit and well-stated definition of which morphological feature should be attributed to which writing style, no universally accepted set of rules exists that characterizes this kind of analysis. We developed an adaptive support system that, on the basis of images of ancient documents, generates models of different writing styles, which are then used to relate new, unknown documents to the ones already elaborated by the system. The techniques used by the system are based on Tangent Distance, which allows the system both to be invariant with respect to local transformations and to capture, through learning procedures, the morphological features that characterize the writing style of a specific document. New learning algorithms involving Tangent Distance have been devised and implemented in the system. This system has been developed in collaboration with the ``Dipartimento di Medievistica'' of Pisa University and supported by a three-year grant from the University of Pisa.
· NEUREX: a tutorial expert system for the diagnosis of neurogenic diseases.
This system is a tutorial expert system for neurological clinics which can emulate the diagnostic process followed by an expert neurologist for neurogenic diseases; assist users, who may be students of the Neurological Clinic or inexperienced medical personnel, in planning the optimal sequence of NG and EMG tests; interpret the results of these tests; and help the users reach the most suitable diagnosis. The system is an integrated hybrid system made of symbolic and sub-symbolic agents cooperating with each other to solve the problem posed by the user, following the cycle hypothesis-test-proposed solution until a satisfactory answer has been found. This system has been developed within a national three-year project funded by the Italian National Council of Research, and it is now under evaluation for industrial exploitation by Oxford Medical Instruments, Old Woking, UK.
The main distinctive features of our work in adaptive processing of structured information can be summarized in the following points:
· New approach to adaptive processing of data structures by neural networks.
We have shown that neural networks can represent and classify structured patterns. The key idea underpinning our approach is the use of recursive neurons, which are essentially a generalization to structures of recurrent neurons. By using recursive neurons, all the supervised networks developed for the classification of sequences, such as Back-Propagation Through Time networks, Real-Time Recurrent networks, Simple Recurrent Networks, Recurrent Cascade Correlation networks, and Neural Trees, can be generalized to the treatment of DOAGs (Directed Ordered Acyclic Graphs).
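The recursive-neuron idea can be sketched as follows: the state of a node is computed from its label and from the states of its ordered children, so the whole structure is processed frontier-to-root, just as a recurrent neuron processes a sequence element by element. This is a minimal sketch under our own illustrative assumptions (scalar states and labels, a tanh activation, two children at most); it is not the exact formulation of any of the models cited above.

```python
import math

def node_state(label, child_states, w_label, w_child, max_children=2):
    """Recursive neuron: the state of a node depends on its own label and on
    the states of its (ordered) children; absent children contribute zero."""
    s = w_label * label
    for i in range(max_children):
        child = child_states[i] if i < len(child_states) else 0.0
        s += w_child[i] * child
    return math.tanh(s)

def encode(tree, w_label, w_child):
    """Frontier-to-root traversal of an ordered tree: children are encoded
    first, then the parent.  A tree is written as (label, [subtrees])."""
    label, children = tree
    child_states = [encode(c, w_label, w_child) for c in children]
    return node_state(label, child_states, w_label, w_child)

# A small ordered tree: a root with label 1.0 and two labeled leaves.
tree = (1.0, [(0.5, []), (-0.5, [])])
state = encode(tree, w_label=0.8, w_child=[0.3, 0.3])
```

With a sequence (a degenerate tree in which every node has one child) the same update reduces to that of a recurrent neuron, which is precisely the sense in which recursive neurons generalize recurrent ones.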
· Theoretical study of neural networks for structures.
We have studied different computational and complexity aspects of the new proposed neural network models for the processing of structured data. Specifically, we studied the computational capabilities of some neural network models with respect to the classification of structures by relating them to Frontier-to-Root Tree Automata (FRA). Moreover, we gave upper and lower bounds on the number of units needed to implement a given FRA with output (FRAO). Finally, we have analyzed the efficiency of learning the membership of DOAGs (Directed Ordered Acyclic Graphs) in terms of local minima of the error surface, giving sufficient conditions for the absence of local minima of the error surface, based on the presence of either discriminant symbols or pointers.
· Real-world application of neural networks for structures.
We applied neural networks for structures to the Quantitative Structure-Property Relationship (QSPR) of Alkanes (predicting the boiling point) and to the Quantitative Structure-Activity Relationship (QSAR) of a class of Benzodiazepines (predicting the biological activity towards the Benzodiazepine/GABA A receptor). The results obtained show that applying neural networks for structures to QSAR/QSPR tasks allows different computational tasks to be treated using the same basic representations for chemical compounds, yielding improved prediction results with respect to traditional equational approaches in QSAR, and results competitive with `ad hoc' designed representations and MLP networks in QSPR.
· Unified approach based on graphical models.
We suggested a framework for unifying adaptive models such as artificial neural networks and belief networks for the problem of processing structured information. In our framework, relations between data variables are expressed by directed acyclic graphs, where both numerical and categorical values coexist. In particular, we studied the supervised learning problem as the problem of learning transductions from an input structured space to an output structured space, where transductions are assumed to admit a recursive hidden state-space representation. We introduced a graphical formalism for representing this class of adaptive transductions by means of recursive networks, i.e., cyclic graphs where nodes are labeled by variables and edges are labeled by generalized delay elements. Structures are processed by unfolding the recursive network into an acyclic graph called the encoding network. In so doing, inference and learning algorithms can easily be inherited from the corresponding algorithms for artificial neural networks or probabilistic graphical models.
The research in this field is pursued within the project ``Adaptive Processing of Data Structures'', funded by the Australian Research Council (ARC) through a Large Grant for 1998/2000. The application of neural networks for structures to Chemistry is developed in collaboration with the Pharmaceutical Department of the University of Pisa, and a proposal for a national project also involving the Chemistry Department of the same University is under evaluation.
Our research within GBML has focused mainly on CSs. Part of our research activity has been devoted to the credit assignment problem in CSs, and to the integration of CSs with the connectionist paradigm, as well as with Reinforcement Learning methods developed independently of Genetics-Based Machine Learning. We analyzed the rule sharing problem in CSs and developed a credit assignment method, the Q-Credit Assignment (QCA), based on Q-learning. As an extension of this work, we analyzed and experimented with the use of neural-based feature extraction techniques to improve the QCA. A further area of our research on CSs has concerned the study of parallel CSs. We defined and analyzed a general parallel CS model, named the Parallel Cooperative Classifier System, where a pool of CSs learn in parallel and cooperate by exchanging rules or sets of rules. In particular, we analyzed the relationship between cooperation among the CSs in the pool and cooperation among the rules within the knowledge base of individual CSs. We then developed specialized cooperation mechanisms among CSs that take into account and promote rule cooperation. Recently, we have extended our research interests to GP; in particular, we have analyzed the problem of Automatic Subroutine Discovery.
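The Q-learning rule underlying Q-style credit assignment can be sketched as follows. This is a generic one-step Q-learning update, not the QCA method itself; the state and rule names, and the tabular representation, are illustrative assumptions.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """One-step Q-learning: the estimated value of firing a rule (action)
    in a state moves toward the received reward plus the discounted value
    of the best action available in the successor state."""
    best_next = max(Q[next_state].values()) if Q.get(next_state) else 0.0
    Q.setdefault(state, {}).setdefault(action, 0.0)
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
    return Q[state][action]

# Two rules compete in state 's0'; only 'r1' leads to the valued state 's1'.
Q = {'s0': {'r1': 0.0, 'r2': 0.0}, 's1': {'r1': 1.0}}
v = q_update(Q, 's0', 'r1', reward=0.0, next_state='s1')
```

The key property for credit assignment is that value propagates backwards along chains of rule activations, so a rule that merely sets the stage for a later payoff is still rewarded.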
Finally, the proposers are involved in European Projects for the exploitation of the Internet for tutoring and dissemination of medical knowledge.
Short Term Plans and Expected Results:
In the short term we would like to extend the described systems and improve their performance by incorporating emerging Pattern Recognition and Machine Learning techniques, such as Support Vector Machines and Boosting algorithms. For example, the DRA system can be extended to the treatment of additional anomalies, and its performance should profit from the application of the mentioned techniques.
Concerning the adaptive processing of data structures, we will address the following issues:
· Unsupervised Learning.
Up to now we have studied the generalization of supervised neural network algorithms to the treatment of structured data, showing how to exploit emerging theoretical concepts, such as graphical models, to extend this style of computation to probabilistic models in general. A very promising line of research is to include the unsupervised learning paradigm in this study. There are several good reasons for doing so: when there is no external teacher or critic to oversee the learning process, unsupervised learning can be used to capture statistical regularities of the input data. The specific type of regularities to be captured can be controlled by the usual approaches, i.e., Competitive Learning, Self-Organizing Maps, information-theoretic learning (e.g., maximization of mutual information), and Independent Component Analysis (ICA). Basically, the output of applying an unsupervised learning technique can be understood as the construction of a metric on the input domain. When considering structured data, such a metric can be used to define a ``quality'' metric on structured objects. There are several application domains where it is very difficult to define such a metric formally and a priori, while it is possible in principle to apply an unsupervised learning algorithm to a data set to build an approximate solution. To mention one example, one of the major goals of software engineering is to evaluate the quality of software. This evaluation is usually based on metrics that are correlated with properties of interest. A number of metrics (see, e.g., McCabe complexity) have been developed which try to codify the above properties of a (portion of a) program numerically. These features are usually based on an intermediate representation, which has the advantage of being (in some sense) independent of the specific language, while preserving the essential static and dynamic aspects of the program.
One example of intermediate representation is given by dependence graphs (e.g., control flow graphs, control dependence graphs, data dependence graphs, instance dependence graphs). Once the set of input graphs has been defined, a metric can be learned on several possible programs and used to estimate the complexity of a portion of software. The aim is to use this metric as an indicator of the quality, testability, reusability, and maintainability of the program.
Another example of an application domain where this type of metric can be useful is robotics, where conceptual graphs representing information about complex objects present in the environment need to be compared by analogy for a correct interaction of the robot with the environment. The extension of unsupervised learning algorithms to the treatment of structured data does not present, in principle, any difficulty for some learning techniques, such as Competitive Learning and Self-Organizing Maps, since the use of recursive neurons in place of standard neurons does not radically modify the learning schemata used. However, techniques based on Information Theory and ICA deserve deeper study in order to understand how they can be adapted to the treatment of structured data.
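The sense in which an unsupervised technique constructs a metric can be illustrated with plain winner-take-all Competitive Learning. In this sketch the fixed-size numerical codes standing in for encodings of structured objects, the deterministic initialization, and all parameter values are our own illustrative assumptions.

```python
def competitive_learning(data, n_prototypes=2, epochs=50, lr=0.1):
    """Winner-take-all competitive learning: each input pulls only its
    nearest prototype toward itself, so the prototypes come to quantize
    the input space and thereby induce a (dis)similarity metric on it."""
    # Deterministic initialization from the first inputs (illustrative).
    protos = [list(data[i]) for i in range(n_prototypes)]
    for _ in range(epochs):
        for x in data:
            # The winner is the prototype at minimal squared distance.
            win = min(protos,
                      key=lambda p: sum((pi - xi) ** 2 for pi, xi in zip(p, x)))
            for i, xi in enumerate(x):
                win[i] += lr * (xi - win[i])
    return protos

# Fixed-size codes standing in for encodings of structured objects.
data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
protos = sorted(competitive_learning(data))
```

After training, the distance of a new code to its nearest prototype provides exactly the kind of learned, a posteriori ``quality'' measure discussed above; with recursive neurons producing the codes, the same scheme would apply to structured inputs.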
· New Supervised Learning Algorithms.
New learning algorithms, exploiting already well-understood techniques, can be developed in the short term. For example, the Expectation-Maximization (EM) algorithm can be extended to the processing of data structures and also applied within a neural network context. Moreover, it is possible to extend the neural network learning algorithms defined for data structures to the learning of DOAG-to-tree transducers.
· Study of Non-stationary and Non-causal Models.
In their original formulation, neural networks for the processing of structures are based on two assumptions: stationarity and causality. These assumptions, however, may be restrictive for problems that do not satisfy them. Non-stationary models could be particularly suited to treating problems in which the representation of the input structure presents different levels of abstraction, and thus to multiple-context processing. Non-causal models, on the other hand, can be useful for prediction problems where the computational task is to study the role of a specific sub-component, within a given family of structures, according to the context in which it is found. Non-stationary models can be defined by making the weights dependent on the input or, more generally, on the computational context. The strategy we want to adopt can be understood as the adaptive definition of categorization functions for the input, allowing the dynamical selection of the processing parameters (i.e., the weights) and, where appropriate, the selection of a different architecture for the network, according to the specific data in input. Examples of categorization functions are: recognition of the class of membership of the label attached to the current graph node; recognition of the depth level at which the current graph node is found; recognition of substructures corresponding to substituents which are not sampled enough and/or which must be treated in a special way. Moreover, as already mentioned, the present neural network models for the processing of structures are causal, i.e., the output associated with a given node of the graph depends (is caused) only on the nodes processed up to that moment which are in a specific topological relationship (descendant nodes) with the input node.
Unfortunately, the causality assumption does not always hold, since the role played by a graph node may depend on the global context (i.e., both ancestors and descendants) in which it is found. The strategy we will adopt for the realization of a non-causal model is to extend the network architecture with connections gathering information from all the nodes of the input structure. This is technically possible for constructive algorithms, which build up the network architecture by incrementally updating partially defined networks that are by themselves able to represent and process the whole input structure. When considering non-constructive algorithms, specific topologies for the network must be considered.
This research will start soon; however, we expect that its full development will require several years. In the short term we will define the categorization functions and will experiment with a non-causal version of the Cascade-Correlation algorithm.
· Analysis of the internal representations.
It is well known that the internal functioning of neural networks is not easily understandable by human experts. Thus, even having available a neural network able to perfectly predict the physical or biological property of interest, the knowledge acquired by the neural network is not directly accessible to external observers (black box). In the neural network literature, different approaches for the extraction and representation of knowledge from neural networks have been developed. These approaches refer to standard neural network models, so they need to be modified and adapted to our models. Specifically, the following techniques promise to return positive results in the short term: PCA, sensitivity analysis, and extraction of symbolic rules. We have already started to experiment with PCA and will soon apply the remaining techniques.
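The use of PCA for this purpose can be sketched as follows: the principal components of the hidden-unit activations collected over a data set reveal the directions along which the network actually discriminates. The power-iteration implementation and the toy activation matrix below are illustrative assumptions, not the analysis pipeline of our systems.

```python
def first_principal_component(X, iters=200):
    """Power iteration on the covariance matrix of the rows of X: returns
    the unit direction of maximal variance.  Here the rows of X stand in
    for hidden-unit activation vectors recorded from a trained network."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    # Sample covariance matrix (biased normalization, for simplicity).
    C = [[sum((row[i] - mean[i]) * (row[j] - mean[j]) for row in X) / n
          for j in range(d)] for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(C[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Illustrative activations varying almost only along the first hidden unit.
X = [[0.0, 0.0], [1.0, 0.1], [2.0, 0.2], [3.0, 0.1]]
pc = first_principal_component(X)
```

Projecting the activations onto the first few such components gives a low-dimensional view of the internal representations that an external observer can inspect, which is the first step we have already taken.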
As testbed for the extensions we propose to study in the short term, we will use a set of problems from Chemistry. Specifically, the QSAR of the following classes of compounds will be considered: (1) Bdz derivatives showing non-classical activity, (2) Non-Bdz ligands for the GABAA/Bdz receptor (indol derivatives acting as agonists, inverse agonists and antagonists), (3) adenosine and 8-aza-adenosine derivatives acting as antagonists of the human A1 adenosine receptor.
In the field of Evolutionary Computation we want to face a fundamental problem: in CSs there is the need to create and maintain a variety of alternative rules, which solve distinct subtasks and cooperate to define a complex global behavioral strategy. This problem has been tackled in the literature by extending the GA with niching methods. Encouraging results have been reported with stimulus-response (i.e., memoryless) CSs and with CSs whose memory is based on an alternative model. As an extension of our research on CSs, an interesting direction is the investigation of such techniques in the classical general-purpose model with memory. Besides experimenting with known niching methods for GAs, we are interested in investigating new strategies to promote an automatic partition of the rule base into cooperating rule subsets. In the short term, we will experiment with a Pitt-style CS (evolution at the level of rule sets) where the rules are partitioned into fixed-size classes according to their condition and their action (external/internal). Our goals are: (1) to develop techniques for a dynamic and automatic resizing of these classes, and (2) to apply these methodologies to Michigan-style CSs (evolution at the level of rules).
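A classical niching method of the kind referred to above is fitness sharing, which can be sketched as follows. The triangular sharing kernel, the scalar individuals and the constant raw fitness are illustrative assumptions chosen only to make the niche effect visible.

```python
def shared_fitness(pop, fitness, distance, sigma=2.0):
    """Fitness sharing: each individual's raw fitness is divided by its
    niche count (the sum of a sharing kernel over the population), so
    crowded regions are penalized and distinct niches -- here, distinct
    rule subsets -- can coexist in the population."""
    out = []
    for x in pop:
        niche = sum(max(0.0, 1.0 - distance(x, y) / sigma) for y in pop)
        out.append(fitness(x) / niche)
    return out

# Two identical individuals must share their niche; the third stands alone.
pop = [0.0, 0.0, 5.0]
fit = lambda x: 10.0
dist = lambda a, b: abs(a - b)
sf = shared_fitness(pop, fit, dist)
```

Although all three individuals have the same raw fitness, the isolated one receives a higher shared fitness, which is exactly the pressure that maintains a variety of cooperating rules instead of letting one dominate.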
Long Term Scenarios:
In the long term, we expect that the number of different techniques and methods belonging to the three research lines discussed above will grow very rapidly. We therefore foresee the need for a synthesis of the most significant results, both within and across the different research lines.
For example, several different neural network models have been proposed for processing static and dynamic data (sequences and, more recently, data structures). Some of them are not computationally sound; however, they are very efficient at performing specific tasks. Other models are particularly suited to dealing with well-known learning problems, such as long-term dependencies, while others have been developed to allow the user to easily insert a priori knowledge into the network. Most of them assume stationarity and/or causality. When looking at this variety of models and working assumptions, it is not difficult to realize that in several real-world applications not all the assumptions (e.g., stationarity and causality) are valid, and almost all the nice features of the different models are needed at once. Thus the need is clear for a systematic way of unifying the experience accumulated during these years of study on recurrent (and, more recently, recursive) neural networks. Our suggestion is to try to develop a formal and precise computational model for neural processing, i.e., to define a Neural Abstract Machine. The advantage of developing such a computational tool would be the possibility of defining and running a generic neural network model by simply writing a few lines of ``neural code''. This would allow the user to easily insert a priori knowledge, as well as to control, at different levels of abstraction, the learning rule(s) to be adopted in a specific model. New models could be developed by simply specifying, in graphical form, the dependencies among hidden and observable variables. As a further extension, it would be valuable to extend the definition of this machine to the adaptive processing framework already defined by the proposers, so as to include probabilistic techniques such as Bayesian networks. This would facilitate the development of models combining statistical and symbolic techniques with neural processing.
It must be stressed that even though many neural network simulators have been developed (e.g., Aspirin/Migraines, Rochester Connectionist Simulator, NNSYSID, Stuttgart Neural Network Simulator), as well as programming languages for neural networks (e.g., EpsiloNN, Neural Simulation Language), they implement a restricted set of specific models, dealing mostly with static data.
Another example comes from the field of modeling perceptual information. We work in cooperation with the Advanced Robotics Group at the Scuola Superiore di Studi S. Anna of Pisa, where a new robotic device is now under development, constituted by a mobile base on which a robotic head is mounted, equipped with a "binocular" vision system and a control system for the movements of both the mobile base and the artificial "head". This artificial head may constitute a stimulating testbed for experimenting with the computational models we are investigating. We are interested in developing hybrid systems (evolutionary + neural + fuzzy) to process the sensory information and solve low-level and high-level decision tasks on such a device, cooperating with intelligent agents driving the decision process. Alternative models can cooperate at the same level (e.g., evolution of rules, or fuzzy systems, or NNs to solve the task at hand) or at different levels (e.g., a rule-based evolutionary system could be used to solve a decision task on input information previously processed by symbolic, neural and/or fuzzy systems).
CV of Antonina Starita
She has been an Associate Professor at the Computer Science Department of the University of Pisa since 1980. Her research interests are in the area of Artificial Intelligence and Robotics, with special attention to: Knowledge Engineering, Knowledge Acquisition, Knowledge Representation, and AI methodologies in medical applications; Neural Networks: supervised and unsupervised learning algorithms, representation of data structures by neural networks (reduced representations), and gating connections; hybrid systems: integration of symbolic and connectionist processing; integration of sensory-motor data in artificial systems; genetic algorithms and evolutionary computation.
She has been responsible for several research projects at national and European level. She is currently responsible for the VAMA project, in collaboration with the Istituto Superiore di Sanità, a contractor of the European project Woman (HC 4026) within the IV Framework Programme, and an associate contractor of the European project MODASPECTRA, within the same programme.
1. Sona D., Sperduti A., Starita A.: Discriminant Pattern Recognition Using Transformation Invariant Neurons, accepted for publication in Neural Computation.
2. Dessì A., Giani A., Starita A.: An Analysis of Automatic Subroutine Discovery in Genetic Programming, accepted for presentation at GECCO-99.
3. Sperduti A., Starita A.: Supervised Neural Networks for Classification of Structures, IEEE Transactions on Neural Networks, Vol. 8, No. 3, pp. 714-735, 1997.
4. Starita A., Majidi D., Giordano A., Battaglia M., Cioni R.: NEUREX: A tutorial expert system for the diagnosis of neurogenic diseases of the lower limbs, AIM Journal, 7, pp. 25-36, 1995, Elsevier Publishing Co., The Netherlands.
5. Sperduti A., Starita A.: A Neural Network Model for Associative Access of Structures, International Journal of Neural Systems, Suppl. Vol. 6, pp. 189-194, 1995.
CV of Alessandro Sperduti
Alessandro Sperduti received his education at the University of Pisa, Italy ("laurea" and Doctoral degrees in 1988 and 1993, respectively, both in Computer Science).
Since 1998 he has been an Associate Professor at the Computer Science Department of the University of Pisa.
Prof. Sperduti has participated in several national and European research projects. Currently he is involved in the project ``Adaptive Processing of Data Structures'', funded by the Australian Research Council (ARC) with a Large Grant for 1998/2000. Within the University of Pisa, Prof. Sperduti is involved in several interdisciplinary projects. His research interests include pattern recognition, image processing, neural networks, and hybrid systems. In the field of hybrid systems his work has focused on the integration of symbolic and connectionist systems for the processing of structured information. He has contributed to the organization of several workshops on this subject and has served on the program committees of conferences on Neural Networks. Alessandro Sperduti is the author of around 60 refereed papers, mainly in the areas of Neural Networks, Fuzzy Systems, Pattern Recognition, and Image Processing. Moreover, he has taught several tutorials within international schools and conferences, such as IJCAI-97 and IJCAI-99.
1. Gori M., Kuechler A., Sperduti A., On the Implementation of Frontier-to-Root Tree Automata in Recursive Neural Networks, to appear in IEEE Transactions on Neural Networks.
2. Bianucci A.M., Micheli A., Sperduti A., Starita A., Application of Recursive Cascade-Correlation Networks to Chemical Structures, to appear in Applied Intelligence.
3. Frasconi P., Gori M., Sperduti A., A general framework for adaptive processing of data structures, IEEE Transactions on Neural Networks, Vol. 9, No. 5, pp. 768-786, 1998.
4. Sperduti A., Starita A., Supervised Neural Networks for the Classification of Structures, IEEE Transactions on Neural Networks, Vol. 8, No. 3, pp. 714-735, 1997.
5. Sperduti A., On the Computational Power of Neural Networks for Structures, Neural Networks, Vol. 10, No. 3, pp. 395-400, 1997.
CV of Fabrizio Baiardi
Since February 1989 he has been an associate professor at the Dipartimento di Informatica, Università di Pisa. His main research interest is the definition of general-purpose parallel architectures. In particular, his research focuses on the design and evaluation of run-time environments and other programming tools for concurrent programming paradigms, and on the integration of hardware and firmware mechanisms in the run-time environment of concurrent programming paradigms. He has been responsible for the research activity of the Dipartimento di Informatica in the National CNR Program on "Computer Systems and Parallel Computing", subproject "Parallel Architectures", and, in the same program, he coordinated the activities on "Highly Parallel Architectures and Operating Systems". He currently coordinates the activities on the distributed shared memory subsystem in the PQE2000 project.
He has published more than 40 papers on parallel and high performance computing.
1. F. Baiardi, A. Candelieri, L. Ricci, Massively Parallel Execution of Logic Programs: A Static Approach, Journal of Systems Architecture, February 1996.
2. F. Baiardi, M. Jazayeri, P3M: A Virtual Machine Approach to Massively Parallel Computation, 1993 International Conference on Parallel Computing, Chicago, USA, August 1993.
3. F. Baiardi, L. Ricci, A Static Analysis to Order Instructions of a Concurrent Program, International Conference on Parallel and Distributed Techniques and Applications, Athens, Georgia (USA), November 1995.
4. F. Baiardi, A. Candelieri, L. Ricci, Congestion Prevention by Bounding in Distributed Memory Systems, World Transputer Conference, September 1994.
CV of Diego Sona
Diego Sona is a PhD student (XIII cycle) working on Pattern Analysis and Recognition, Neural Networks, Probabilistic Graphical Models, Data Structure Analysis, and Internet Technologies.
1. Sona D., Sperduti A., Starita A., Discriminant Pattern Recognition Using Transformation Invariant Neurons, accepted for publication in Neural Computation.
2. Sona D., Sperduti A., Starita A., Tangent Models: Comparing the TD-Neuron Constructive Algorithm versus SVD Based Algorithms, WIRN-98, Proceedings of the 10th Italian Workshop on Neural Networks, pp. 377-382, 21-23 May 1998.
3. Sona D., Sperduti A., Starita A., A Constructive Learning Algorithm for Discriminant Tangent Models, Advances in Neural Information Processing Systems (NIPS), Vol. 9, pp. 786-792, MIT Press, Cambridge MA, 1997.
4. Masulli F., Sona D., Sperduti A., Starita A., Zaccagnini G., A System for the Automatic Morphological Analysis of Medieval Manuscripts, Journal of Forensic Document Examination, Vol. 9, pp. 45-55, Fall 1996.
Collaborators (by contracts on research funds): Antonella Giani, Elena Palanca, Katuscia Cerbioni, Alessio Micheli, Fabio Aiolli.
External Collaborators: Prof. Paolo Dario, dr. Eugenio Guglielmelli, dr. Davide Taddeucci (SSSUP); Prof. Marco Gori (Dip. di Ing. Inf., Univ. di Siena); Prof. Paolo Frasconi (Dip. di Ing. Elettr. ed Elettron., Univ. di Cagliari); dr.ssa A.M. Bianucci (Dip. di Sci. Farma., Univ. di Pisa); dr.ssa A.M. Colla (ELSAG, Genova); dr.ssa D. Majidi (Synapsis srl); dr. R. Cioni, dr. F. Giannini (Clinica Neurol., Univ. di Siena); Prof.ssa Francesca Rossi (Dip. di Mat. Pura ed Appl., Univ. di Padova).
Keywords : Neural Computation, Evolutionary Computation, Pattern Recognition, Artificial Intelligence, Robotics