Having originated in the 1970s, the security of computer systems soon became an essential requirement for all applications, and especially so in the last decade, owing to the widespread diffusion of distributed systems and networks. Since it is concerned with the dynamic behaviour of systems, the security problem is in general undecidable, and several techniques have been devised to study and establish properties sufficient to guarantee that a system is secure. Indeed, a great variety of properties and measures have been defined, even though security cannot be reduced to their simple sum.
A particularly relevant class of security (and secrecy) properties concerns establishing that some information, intended to be confidential, can neither be made available to, nor accessed, nor altered by unauthorized users. Typically, confidentiality is at risk when data are exchanged between principals (users or computers) during the execution of a protocol. A classical way of ensuring this kind of property is to encrypt the exchanged messages, so that only the owners of the relevant cryptographic keys can send and read them, according to the protocol they trust. This also raises the problem of authentication, i.e., of ascertaining a principal's claimed identity or the origin of a message. However, in order to guarantee security, besides relying on cryptography, one should also analyse and prove the correctness of the protocol used. Indeed, an increasing number of protocols, both academic proposals and industrial applications, are reported to have security flaws.
The large number of meetings devoted to security issues and to their foundations witnesses the vitality of the field. Among them, we mention the following well-established conferences: the IEEE Computer Security Foundations Workshop, IEEE Security and Privacy, and ACM Computer and Communication Security. Special events have also been organized, like the DIMACS Workshop on Design and Formal Verification of Security Protocols, and many workshops have been hosted by broader-in-scope conferences, like the one on Security Protocol Verification, a satellite of LICS.
Our claim is that a careful formal design and analysis, and tools supporting them, will greatly help in the production of protocols and also of programming languages for distributed, concurrent and mobile systems.
Current research on the use of formal methods for the analysis of cryptographic protocols and applications, for the study of secure mobile code, as well as for information flow security uses at least the following formal approaches: Process Algebras, Static Analysis and Type Theory, Modal Logics, First Order Logic with Induction, Model Checking and Theorem Proving. Below, we will mainly concentrate on Process Algebras and Static Analysis.
A cryptographic protocol is described as a process in a process algebra, such as CSP, CCS, LOTOS or, more recently, the π-calculus  and the spi-calculus . The desired security property is then studied by checking its specification on all its computations. Remarkably, the semantics of process algebras is often given in a logical style, by defining a transition system (akin to a graph with labelled arcs, whose nodes represent states). Computations are thus formally defined as paths in the transition system. Also, there are various notions of (bisimulation based) behavioural equivalence defined on transition systems, which facilitate both the statement of properties and their verification. Moreover, the transition system associated with a protocol often has a finite number of states, in which case the analysis is mechanizable and provides complete answers. Considerable research has been done in recent years using various process algebras (without mobility) and equivalences, e.g., to establish properties about the information flow and to detect flaws in protocols [9,11]. More recent work extends the above to study the foundations of secure mobile code; see e.g. the papers on the spi-calculus  and on the ambient-calculus .
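To fix intuitions, the following sketch shows a toy labelled transition system and a naive strong-bisimilarity check, computed as a greatest fixed point. All state and label names are illustrative, and the algorithm is deliberately simple, not the partition-refinement procedure used in actual verification tools.

```python
# A toy labelled transition system: each state maps to its outgoing
# (label, successor) pairs. P and Q below have the same traces but are
# not bisimilar: P = a.(b + c), while Q = a.b + a.c.
lts = {
    'P': [('a', 'P1')], 'P1': [('b', '0'), ('c', '0')],
    'Q': [('a', 'Q1'), ('a', 'Q2')], 'Q1': [('b', '0')], 'Q2': [('c', '0')],
    '0': [],
}

def bisimilar(lts, s, t):
    """Naive strong-bisimilarity check: start from the full relation on
    states and repeatedly discard pairs violating the transfer condition,
    until a greatest fixed point is reached."""
    rel = {(p, q) for p in lts for q in lts}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = (all(any(a == b and (p2, q2) in rel for (b, q2) in lts[q])
                      for (a, p2) in lts[p])
                  and all(any(a == b and (p2, q2) in rel for (a, p2) in lts[p])
                          for (b, q2) in lts[q]))
            if not ok:
                rel.discard((p, q))
                changed = True
    return (s, t) in rel
```

Here bisimilar(lts, 'P', 'Q') is False although both states can perform exactly the traces ab and ac, which is precisely why bisimulation-based equivalences distinguish more than trace equivalence.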
Recently, types have been used to express the security features that the objects of a system should have. E.g., identifiers representing data are assigned the type secret or public; identifiers representing principals have types representing the capabilities they may hold. Among the many proposals made in recent years, particularly relevant here are Abadi's  and Hennessy & Riely's [1,10], who typecheck mobile code, and Volpano & Smith's , who cope with syntax-based information flow analysis of programs.
Control Flow Analysis  is intimately connected with Type Systems: both aim at ensuring the well-formedness of programs at compile-time. Essentially, Control Flow Analysis predicts safe and computable approximations to the set of values that the objects of a program may assume during its execution. These techniques have recently been applied to check the security of mobile code in , co-authored by the proposer.
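The constraint-solving core of such analyses can be sketched in a few lines: each program point gets a set variable, constraints state which values may flow where, and the least solution is computed by iterating to a fixed point. The three constraint forms and the tiny example are illustrative inventions, not the analysis of any particular calculus.

```python
# A toy Control Flow Analysis solver over set constraints.
def solve(constraints, variables):
    """constraints is a list of
         ('in',  value, X)         -- value may reach X
         ('sub', X, Y)             -- everything reaching X reaches Y
         ('cond', value, X, Y, Z)  -- if value reaches X, then Y <= Z
    Returns the least map from variables to sets of values."""
    sol = {x: set() for x in variables}
    changed = True
    while changed:
        changed = False
        for c in constraints:
            if c[0] == 'in':
                before = len(sol[c[2]])
                sol[c[2]].add(c[1])
            elif c[0] == 'sub':
                before = len(sol[c[2]])
                sol[c[2]] |= sol[c[1]]
            else:  # conditional constraint
                before = len(sol[c[4]])
                if c[1] in sol[c[2]]:
                    sol[c[4]] |= sol[c[3]]
            changed |= len(sol[c[-1]]) != before
    return sol

# E.g.: a key 'k' flows to A, A flows to B, and if 'k' ever reaches B
# then A flows to C as well; the least solution gives {'k'} everywhere.
constraints = [('in', 'k', 'A'), ('sub', 'A', 'B'), ('cond', 'k', 'B', 'A', 'C')]
solution = solve(constraints, ['A', 'B', 'C'])
```

The solution safely over-approximates the run-time values: it may report values that never actually occur, but never misses one, which is the sense in which the analysis is "safe and computable".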
While Type Systems are intended more as a specification of some aspects of a system, notably security, Control Flow Analysis is more oriented towards the construction of tools that automatically support code analysis and optimization, in our case the security checks. Also, program analysis is generally more efficient than program verification, and for that reason more approximate, because the focus is on the fully automatic processing of large programs.
Together with researchers at the Universities of Venezia and Verona (I), the proposer also studied message authentication . This property demands that the receiver of a message be able to ascertain its origin. We took a dynamic point of view, using an enhanced semantics of the π-calculus that offers a kind of ``authentication primitive''. Indeed, the new semantics provides each sequential process of a whole system with its own local space of names. The availability of localized names, which cannot be forged by users, guarantees by construction that a message has been generated by a given entity. Message authentication is therefore immediate.
Pierpaolo Degano and his Italian associates work on security issues in national projects, supported by the C.N.R. and by the Italian Government. The proposer is a member of the newly established IFIP Working Group on Theoretical Foundations of Security Analysis and Design.
Our research proposal will address security problems both from a static and from a dynamic point of view. The common framework is the operational approach to semantics and the related notions of behavioural equivalences. The first stream will find its basic techniques in the theory of Control Flow Analysis, and more generally in Flow Logic, and in other static techniques; the second one will exploit certain forms of enhanced operational semantics. We start with the line about static analysis.
The properties of confinement and of no read-up/no write-down mentioned above are part of more general models for security, which aim at protecting classified information and at shielding a system from unauthorized accesses . We plan to extend the properties considered in [5,6] and to exploit Control Flow Analysis for statically studying other properties, analysed in the literature mainly using dynamic techniques.
An example of a stronger property that we wish to investigate is the full notion of the no read-up/no write-down property. The general formulation requires that data, too, have levels of classification. Processes with low-level clearance are then not allowed to access highly classified data, i.e. they can neither possess, nor send, nor receive them. While this kind of property can be studied in the framework settled in our previous papers, other properties require the use and development of more elaborate techniques.
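The two checks underlying the general formulation can be stated in a few lines. The sketch below uses an invented two-level lattice of clearances and attaches a level to data as well as to processes, in the spirit of the classical Bell–LaPadula discipline; it is a specification of the property, not our static analysis for it.

```python
# No read-up / no write-down with classification levels on data too.
CLEARANCE = {'low': 0, 'high': 1}

def may_read(proc_level, data_level):
    # no read-up: a process may only read data at or below its clearance
    return CLEARANCE[proc_level] >= CLEARANCE[data_level]

def may_write(proc_level, data_level):
    # no write-down: a process may only write data at or above its level,
    # so highly classified information cannot leak to low containers
    return CLEARANCE[proc_level] <= CLEARANCE[data_level]
```

Under this discipline a low-clearance process can neither possess, nor send, nor receive high data: may_read('low', 'high') fails, and any attempt by a high process to pass such data downwards fails the may_write check.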
In particular, we are interested in tracing more accurately the flow of information and in checking the so-called noninterference property. In its most drastic version, this property requires that at run-time a secure principal is not affected by communications with a potential attacker, and that the attacker cannot deduce any information about the principal by observing its computations. In other words, the principal behaves well, no matter the context in which it is plugged. A static formulation of the noninterference property may require stronger analyses than the reachability analysis typical of the Control Flow Analysis used so far. These more elaborate analyses could benefit, e.g., from some information about the contexts in which the objects of a protocol are active. Also, we could use behavioural equivalences, e.g., the ones based on bisimulation or testing, to compare the static formulation of the noninterference property against the dynamic one.
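The dynamic reading of noninterference can be illustrated by a toy experiment: run a program twice with the same public input but different secrets, and observe whether the public output changes. The two functions below are invented examples; note that testing finitely many secrets can only refute noninterference, whereas the static formulations we aim at would establish it for all contexts.

```python
# A toy dynamic check of noninterference: the attacker observes only the
# low (public) output, so the secret must not influence it.
def noninterferent(program, low, highs):
    """True iff the low output is the same for every high (secret) input."""
    outs = {program(low, h) for h in highs}
    return len(outs) == 1

secure   = lambda low, high: low * 2           # ignores the secret entirely
insecure = lambda low, high: low + high % 2    # leaks the secret's parity
```

Here the secure program passes the check for any set of secrets, while the insecure one is detected as soon as two secrets of different parity are tried.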
Another research direction will consider the spi-calculus, a process algebra with explicit primitives for encryption and decryption. We consider this line of work particularly relevant, because in the spi-calculus one can naturally express and study cryptographic protocols. This calculus neatly distinguishes between data, which may represent self-contained secrets, and channels, which may represent communication capabilities. Because of this, a static analysis for the spi-calculus should incorporate some components typical of Data Flow Analysis. Indeed, one has to check the way data are constructed, encrypted, decrypted and tested. Work in progress shows that the part of the analysis focussing on control is instead quite similar to the one developed for the π-calculus. A typical property that we intend to study is message authentication, besides variants of the properties mentioned above, among which the confinement property. At this point a natural problem arises of comparing these results with Abadi's, in particular with his notion of ``no leaks'', which is checked using a type system. Expected results will answer questions like: ``which technique gives more precise approximations?'' or ``which technique has a better complexity?''. This issue is part of a more ambitious research goal: comparing Control Flow Analysis with Type Systems and with other static techniques, in primis Abstract Interpretation .
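The treatment of data that the analysis must track can be sketched with spi-calculus-style message terms: a datum may be encrypted under a key, and decryption succeeds only when the very same key is supplied, mirroring the perfect-cryptography assumption. The term representation and the names below are illustrative.

```python
# Toy spi-calculus-style message terms under perfect cryptography.
def encrypt(msg, key):
    return ('enc', msg, key)          # the ciphertext term {msg}key

def decrypt(term, key):
    """Return the plaintext, or None if the term is not a ciphertext
    built with exactly this key -- no partial information leaks."""
    if isinstance(term, tuple) and term[0] == 'enc' and term[2] == key:
        return term[1]
    return None

m = encrypt('nonce-A', 'kAB')
decrypt(m, 'kAB')   # the owner of kAB recovers 'nonce-A'
decrypt(m, 'kAS')   # anyone else learns nothing: None
```

A static analysis for the spi-calculus has to approximate exactly this kind of data construction and testing: which ciphertexts may reach which points, and which keys may be available there to open them.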
The second research line will exploit structural operational semantics enhanced with features that make it easier to study specific behavioural aspects of processes. We shall continue the work on authentication mentioned in the previous section, still using the semantics offering localized names. More in detail, we would like to carry over to the spi-calculus the way names are kept localized and our authentication primitive. By using them, one could specify a protocol that enjoys the wanted authentication property by construction, albeit at a rather abstract level. The specification can then be stepwise refined until we obtain a correct protocol that does not use our primitive. So, our proposal could be seen as a reference for the analysis of ``real'' protocols. We plan to study in which cases the derivation of secure protocols can be mechanized, although we are aware that this is not always possible. Therefore, alongside mechanical derivation, we plan to exploit proof techniques and behavioural equivalences, like those devised for the spi-calculus. The idea is to adapt those equivalences to our processes; the check for authentication should then be done by comparing specifications level by level, starting from the ``secure by construction'' version of the protocol, where the authentication primitive occurs.
The research lines discussed above open up several possibilities, both in the field of program and system semantics and analysis, and in the development of programming languages.
As said above, the techniques for statically analysing processes can be made more accurate. One way is to enrich them with information about data flow; another is to use contextual information to confine the effects and the values of the objects of a system, be they data or control points. A third, longer-term line of research concerns a more faithful description of cryptographic protocols so as to analyse the risk of security violations; typically, one should devise static analyses that take into account the probability that keys can be broken.
Enhancing the precision of the obtained results can however affect the efficiency of the mechanical tools that perform the analysis (the control flow analysis for the π-calculus has a low polynomial complexity ). Having efficient tools is mandatory, both for constructing the solutions to the analysis, which often are solutions of large sets of constraints, and for experimenting with the actual applicability of the analyses we propose on simple real cases. A careful study is therefore needed to design efficient tools and to balance their efficiency with the precision of the analysis.
For similar reasons, we should compare the precision and the efficiency of our proposal with those of other static techniques. We already mentioned above the comparison with Type Systems and Abstract Interpretation; other analysis techniques should also be examined.
Another topic to be investigated concerns assessing whether our approach can also deal with other calculi and languages. Preliminary results on the spi-calculus and on the Mobile Ambients calculus  suggest that it is robust. Further research is needed to confirm our first impressions, and even more to consider real languages, like Java.
Preliminary to the study of real languages is the extension of our techniques to calculi that support explicit mobility of code. These extensions will permit the study of security properties connected with capabilities. For example, one may wish to statically check that a mobile piece of code will never act maliciously when hosted at a site; or, symmetrically, one would like to guarantee that a mobile agent will never be attacked by its host.
Most of the research topics sketched above have aspects that require the development of a suitable operational semantics and related dynamic techniques; think, for example, of those extensions of standard transition systems where each action has an associated duration rate, drawn from a probability distribution. The design of efficient and increasingly precise analyses may also benefit from enhanced operational semantics. E.g., causality should help give less approximate results, because the flow of information between one sub-part of a process and another can be traced only when the first affects the behaviour of the second. Finally, the design and implementation of a prototype tool is foreseen. Of course, experiments are needed on the feasibility of mechanically deriving specifications of secure protocols, and to support reasoning on them.
Results from these studies might influence the design of programming languages that offer secure ways of exchanging information and permit mobile and nomadic computing. This is our ultimate, very long term goal.
Pierpaolo Degano participates only in this project; his research interests also cover other basic topics, mainly Security of concurrent and mobile systems, Semantics of concurrency, Methods and tools for program verification, and Programming tools.
Pierpaolo Degano served as
- reviewer for many international journals and as guest editor of Theoretical Computer Science and the ACM Computing Surveys;
- Program Committee member of several conferences and as Program Chair of ICALP'97 and of ICTCS'98;
- member and often Chair of the Organizing Committee of many conferences;
- member of the Steering Committees of TAPSOFT, of ETAPS, and of the Italian Chapter of the EATCS.
Also, Pierpaolo Degano has been and is involved, often as a project leader, in national projects, supported by the C.N.R. and by the M.U.R.S.T., and in international projects, among which: ESPRIT - Parallel Architectures and Languages, n. 415; ESPRIT BRAs CEDISYS and LOMAPS; CEC Working Groups GRA-GRA and MASK; joint project Università di Pisa and HP.
Presently, he is a member of the IFIP TC1 WG 1.7 on Theoretical Foundations of Security Analysis and Design.
Chiara Bodei participates only in the present project; she is mainly interested in Security of concurrent and mobile systems and Semantics of concurrency.
Chiara Bodei served as a referee for many conferences and as a member of the Organizing Committee of ICALP'97; she is involved in national projects supported by the C.N.R. and the M.U.R.S.T., and has been involved in the ESPRIT BRA LOMAPS.