### List of technical reports of year 1996

Boraso, M. and Montangero, Carlo and Sedehi, H.
The accurate prediction of software development costs can have a large economic impact. As a consequence, considerable research attention is now directed at better understanding the software development process. The objective of this paper is to provide an experimental evaluation of the applicability, universality, and accuracy of some algorithmic software cost estimating models (COCOMO, TUCOMO, PUTNAM, COPMO, ESSE, and Function Points). Data on nine Italian Management Information Systems projects were collected and used to evaluate the performance of the models. The evaluation of the estimates was based on the Mean Magnitude of Relative Error and Prediction at level 25% criteria. Results indicate that the models perform reasonably well, and better when recalibrated with local data.
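The two evaluation criteria mentioned above have standard definitions, sketched below; the effort figures in the example are illustrative, not the paper's project data.

```python
def mmre(actual, predicted):
    """Mean Magnitude of Relative Error: the mean of |actual - predicted| / actual."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return sum(errors) / len(errors)

def pred(actual, predicted, level=0.25):
    """Pred(25): the fraction of estimates whose relative error is within `level`."""
    hits = [abs(a - p) / a <= level for a, p in zip(actual, predicted)]
    return sum(hits) / len(hits)

# Illustrative effort values (e.g. person-months), not data from the paper.
actual = [100.0, 200.0, 400.0]
estimated = [110.0, 140.0, 390.0]
print(mmre(actual, estimated))  # mean of relative errors 0.10, 0.30, 0.025
print(pred(actual, estimated))  # 2 of the 3 estimates are within 25%
```

A model is commonly considered acceptable when MMRE is low and Pred(25) is high over the project set.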

Gemignani, Luca
Consider an $n \times n$ lower triangular matrix $L$ whose $(i+1)$-st row is defined by the coefficients of the real polynomial $p_i(x)$ of degree $i$, such that $\{ p_i(x)\}$ is a set of orthogonal polynomials satisfying a standard three-term recurrence relation. If $H$ is an $n\times n$ real Hankel matrix with nonsingular leading principal submatrices, then $\widehat{H}=L H L^T$ will be referred to as a strongly nonsingular Hankel matrix with respect to the orthogonal polynomial basis $\{p_i(x)\}$. In this paper we develop an efficient $O(n^2)$ algorithm for the solution of a system of linear equations with a real symmetric coefficient matrix $\widehat{H}$ which is a Hankel matrix with respect to a suitable orthogonal polynomial basis. We then apply our method to the solution of a weighted finite moment problem.

Gemignani, Luca
By representing the $LR$ algorithm of Rutishauser and its variants in a polynomial setting, we derive numerical methods for approximating either all of the roots or a number $k$ of the roots of minimum modulus of a given polynomial $p(t)$ of degree $n$. These methods share the convergence properties of the $LR$ matrix iteration but, unlike it, they can be arranged to produce parallel and sequential algorithms which are highly efficient, especially in the case where $k \ll n$.

Cappanera, Paola and Frangioni, Antonio
We study the coarse-grained parallelization of a Bundle-based cost decomposition algorithm for the solution of linear Multicommodity Min Cost Flow problems, which has already been shown to be effective in a sequential setting. The aim of the study was to investigate whether exploiting the natural parallelism inherent in the cost decomposition method, i.e. the simultaneous solution of the Min Cost Flow subproblems, was enough to obtain acceptable efficiencies without modifying the basic approach: as a result, we obtained a highly portable and flexible code which can be used on different machines. The computational results show that satisfactory efficiencies can be obtained even with many processors on large, difficult MMCF problems with many commodities, which are exactly the class of instances where the sequential code attains its best results. We also show how to exploit a common characteristic of today's supercomputer facilities, i.e. the side-by-side availability of massively parallel machines and powerful vector supercomputers, by implementing an asymmetric decomposition algorithm where each architecture is used for the tasks to which it is best suited.

Ambriola, Vincenzo and Cignoni, Giovanni A.
Under Italian law, monitoring is a form of quality control that must be performed in the enactment of all contracts of great importance related to the information systems of the Italian Public Administration. In this paper, the authors describe and analyse the aspects of monitoring as a tool to guarantee the quality of software production and, as a consequence, to assure customer satisfaction. The paper presents the administrative context where monitoring is applied, its purpose and application areas, and its relationship with other forms of quality control. In particular, we emphasise the novelty of monitoring and its characteristic of being a customer initiative.

Carraresi, Paolo and Farinaccio, Fernanda and Malucelli, Federico
The issue tackled is testing whether a given solution of a quadratic 0-1 problem is optimal. The paper presents an algorithm based on the necessary and sufficient optimality condition introduced by Hiriart-Urruty for general convex problems. A measure of the quality of the solution is provided. Computational results show the applicability of the method. The method is extended to constrained quadratic 0-1 problems such as quadratic assignment and quadratic knapsack.

Gemignani, Luca
We present an iterative numerical method for solving two classical stability problems for a polynomial $p(x)$ of degree $n$: the Routh-Hurwitz and the Schur-Cohn problems. This new method relies on the construction of a polynomial sequence $\{ p^{(k)}(x) \}_{k\in {\bf N}}$, $p^{(0)}(x)=p(x)$, such that $p^{(k)}(x)$ quadratically converges to $(x-1)^p (x+1)^{n-p}$ whenever the starting polynomial $p(x)$ has $p$ zeros with positive real parts and $n-p$ zeros with negative real parts. By combining some new results on structured matrices with fast polynomial arithmetic, we prove that the coefficients of $p^{(k)}(x)$ can be computed starting from the coefficients of $p^{(k-1)}(x)$ at the computational cost of $O(n\log^2 n)$ arithmetical operations. Moreover, by means of numerical experiments, we show that $O(n\log n)$ bit precision of computations suffices to support the stated computational properties. In this way, apart from a logarithmic factor, we arrive at the current best upper bound of $O(n^3\log^4 n)$ for the bit complexity of the mentioned stability problems.

Ambriola, Vincenzo and Telmon, Claudio
This paper describes the design of the protection system for the computing infrastructure of the Dipartimento di Informatica. The design takes into account the peculiarities of this kind of research organization: traditionally open organizations whose main concern is the availability of resources rather than the confidentiality of the data they handle. The main goal of the design is therefore the protection of a few essential resources from which a correct state can be restored on the Department's main systems. The administrative subnetwork is an exception, since for it the confidentiality and integrity of data must also be guaranteed. The design, completed in April 1996, is at an advanced stage of implementation.

Viry, Patrick
We introduce a rewriting implementation of the reduction relation of $\pi$-calculus and prove its correctness. The implementation is based on terms with De Bruijn indices and an explicit substitution operator. The resulting rewrite rules need to be applied modulo a large and complex equational theory, and are only of theoretical interest. Applying the coherence techniques introduced in a previous paper, we transform this specification into an equivalent one that only requires rewriting modulo associativity and commutativity. This latter rewrite system can then be straightforwardly encoded in currently available rewriting-based programming languages. Finally, we sketch a possible application of this implementation as the basis for adding input-output capabilities to such languages.

Degano, Pierpaolo
Atti della giornata per il venticinquennale del Corso di Laurea in Scienze dell'Informazione (Proceedings of the silver jubilee of the Computer Science curriculum).

Luccio, Fabrizio and Pagli, Linda
We make two contributions to the theory of PRAM computation, focusing on the problem of computing the Boolean OR (to which most basic problems can be reduced). First we critically discuss the ideal PRAM model generally adopted to prove parallel time bounds, and introduce the natural definition of a simple PRAM. We prove that $\log_2 n$ steps are needed to compute the OR of $n$ bits in the CREW variant of this model, and that this bound is tight. This implies that some acclaimed "surprising" results in PRAM theory depend strictly on the properties of the model, and not on the computed function. As a second contribution we show how to simplify the most advanced known lower bound proof for computing the OR. The new proof scheme does not introduce new results, but is aimed at enhancing the comprehension of this difficult subject. We also justify a "full information" mode of functioning, generally adopted without discussion.
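The $\log_2 n$ bound counts synchronous steps of the usual tree reduction, which can be simulated sequentially. A minimal sketch of that reduction (a simulation of the step count, not a PRAM and not the paper's lower bound argument):

```python
def pram_or(bits):
    """Simulate the tree reduction by which a PRAM ORs n bits:
    in each synchronous step, cell i absorbs cell i + stride,
    so after ceil(log2 n) steps cell 0 holds the OR of all bits."""
    cells = list(bits)
    n = len(cells)
    stride, steps = 1, 0
    while stride < n:
        # one synchronous step: all these combinations happen in parallel
        for i in range(0, n - stride, 2 * stride):
            cells[i] = cells[i] | cells[i + stride]
        stride *= 2
        steps += 1
    return cells[0], steps

result, steps = pram_or([0, 0, 1, 0, 0, 0, 0, 0])  # 8 bits -> 3 steps
```

The paper's point is that in the ideal CREW model this step count can be beaten by exploiting concurrent reads, while in the simple PRAM it cannot.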

Luccio, Fabrizio and Pagli, Linda
We work in $B^n$ and consider sets of $2^m$ points, $m \leq n$, represented as binary {\em balanced} matrices, whose columns contain half 0's and half 1's, this property repeating recursively in proper submatrices. We introduce the concept of a {\em pseudo-cube} of order $m$, that is, a subset of $2^m$ points of $B^n$ whose matrix is balanced. A subcube $B^m \subseteq B^n$ is a special case of pseudo-cube and shares most of its properties. For a given pseudo-cube $P$ we define the class ${\cal P}(P)$ of the pseudo-cubes obtained from $P$ by complementing any subset of variables, and show that the elements of ${\cal P}(P)$ are disjoint and tessellate $B^n$. Furthermore, the union of any two pseudo-cubes of the same class ${\cal P}$ is a pseudo-cube, and the intersection of two arbitrary pseudo-cubes is either empty or is a pseudo-cube. We then introduce {\em pseudo-products} as Boolean functions that have value 1 in the points of a pseudo-cube $P$. These functions inherit all the properties of pseudo-cubes, and have a compact expression EXP($P$). Given two pseudo-products $P_1$, $P_2$ belonging to the same class $\cal P$, we give an algorithm to construct in linear time the expression of the union EXP($P_1 \cup P_2$) from EXP($P_1$) and EXP($P_2$). Finally we show how a standard procedure to generate a minimal "sum of products" form for a Boolean function $f$ can be extended to generate a minimal "sum of pseudo-products" for $f$. This latter form, based on the representation EXP, is generally much shorter than the corresponding Boolean form.

Guerrini, S. and Masini, A.
We propose a new formulation for full (weakening and constants included) multiplicative and exponential (MELL) proof nets, allowing a complete set of rewriting rules to parse them. The recognizing grammar defined by such a rewriting system (confluent and strongly normalizing on the new proof nets) gives a correctness criterion that we show equivalent to the Danos-Regnier one.

Gallo, Giorgio and Scutellà, Maria Grazia
We consider the minimum cost hyperflow problem, a generalization of the minimum cost flow problem on standard and on generalized graphs. In order to solve this problem, a specialization of the primal simplex algorithm, named the Hypergraph Simplex Algorithm, has recently been proposed. Here, the results of a wide experimentation on random hypergraphs and on some kinds of structured hypergraphs are reported. In the experimentation, a C implementation of the Hypergraph Simplex Algorithm has been compared with the primal version of the CPLEX code, obtaining rather interesting results. In fact, when the hypergraph size increases, our code runs faster than CPLEX; in particular, its solution time is a rather slowly increasing function of the hypergraph size. The problem under consideration has very interesting applications, which can be modelled and solved in terms of hypergraph flows. These include multicommodity flows and applications in the area of flexible manufacturing systems, like the One-Dimensional Cutting Stock Problem and Assembly Problems. In particular, it is shown how to model Assembly Problems with a given degree, say $k$, of parallelism (i.e. when $k$ parallel identical machines are available for the execution of the operations) in terms of hyperflows. Finally, some economic applications are formulated and interpreted in terms of minimum cost hyperflow problems.

Degano, Pierpaolo and Priami, Corrado
A major drawback of representing concurrent systems as transition systems is the exponential size of their semantic representations. We present a way of obtaining compact transition systems that are mildly exponential on average. The idea is to have (at least) one computation represent all those differing only in the temporal ordering in which concurrent transitions occur. We characterise through axioms the transition systems which may be reduced in size while still preserving non-interleaving bisimulation equivalences. We also exemplify our reduction technique on a process calculus whose compact transition system is generated in SOS style.

Gadducci, Fabio and Montanari, Ugo
In this paper we introduce a model for a wide class of computational systems, whose behaviour can be described by certain rewriting rules. We gathered our inspiration both from the world of term rewriting, in particular from the {\em rewriting logic} framework \cite{Mes92}, and from that of concurrency theory: among others, the {\em structured operational semantics} \cite{Plo81}, the {\em context systems} \cite{LX90} and the {\em structured transition systems} \cite{CM92} approaches. Our model retains many properties of these sources: first, it provides a compositional way to describe both the states and the sequences of transitions performed by a given system, stressing their distributed nature. Second, a suitable notion of typed proof allows one to take into account also those formalisms relying on the notions of {\em synchronization} and {\em side-effects} to determine the actual behaviour of a system. Finally, an equivalence relation over sequences of transitions is defined, equipping the system under analysis with a concurrent semantics, where each equivalence class denotes a family of ``computationally equivalent'' behaviours, intended to correspond to the execution of the same set of (causally unrelated) events. As a further abstraction step, our model is conveniently represented using double-categories: its operational semantics is recovered with a free construction, by means of a suitable adjunction.

Corradini, Andrea and Montanari, Ugo and Rossi, Francesca and Ehrig, H. and Heckel, Reiko and Loewe, M.
The algebraic approaches to graph transformation are based on the concept of gluing of graphs, modelled by pushouts in suitable categories of graphs and graph morphisms. This allows one not only to give an explicit algebraic or set theoretical description of the constructions, but also to use concepts and results from category theory in order to build up a rich theory and to give elegant proofs even in complex situations. In this chapter we start with an overview of the basic notions common to the two algebraic approaches, the "double-pushout (DPO) approach" and the "single-pushout (SPO) approach"; next we present the classical theory and some recent developments of the double-pushout approach. The next chapter is devoted instead to the single-pushout approach, and it is closed by a comparison between the two approaches. -- This document will appear as a chapter of "The Handbook of Graph Grammars. Volume I: Foundations", G. Rozenberg (Ed.), World Scientific.

Priami, Corrado
The following are the abstracts of the demonstrations at the CONCUR conference held in Pisa in August 1996. The demonstrations cover different aspects of the theory of concurrency. They range from process algebra verifiers to model checkers, as well as Petri net support tools. There are also demonstrations on checking security properties and on performance analysis based on process algebra descriptions. Finally, a demonstration of game aspects with the CWB is presented.

Pedreschi, Dino and Ruggieri, Salvatore
Recently, a new approach to the verification of logic and Prolog programs has been proposed, whose main advantage is the possibility of reasoning on different properties in a unified framework. In this paper, we show an equivalent formulation of that proof method which is well suited for modular program verification. The notion of modularity taken into account is based on stratification. We show that the proofs of correctness of a program P and a program Q can be re-used in the proof of correctness of $P \cup Q$ with small additional effort. The main advantage consists in the fact that the proofs for P and Q are developed in a fully independent way.

Bagnara, Roberto
Many interesting analyses for constraint logic-based languages are aimed at the detection of \emph{monotonic} properties, namely properties which are preserved as the computation progresses. Our basic claim is that most, if not all, of these analyses can be described within a unified notion of constraint domains. We present a class of constraint systems which allows for a smooth integration within an appropriate framework for the definition of non-standard semantics of constraint logic-based languages. Such a framework is also presented and motivated. We then show how such domains can be built, as well as construction techniques which induce a hierarchy of domains. In particular, we propose a general methodology for domain combination with asynchronous interaction (i.e.~the interaction is not necessarily synchronized with the domains' operations). Following this methodology, interesting combinations of domains can be expressed with all the semantic elegance of concurrent constraint programming languages.

Pallottino, Stefano and Schettino, Alberta
We describe the implementation of a model for assigning passengers to an interurban transport network with explicit capacity constraints on the vehicles. First, the paths that are active with respect to a threshold $\epsilon$, called minimal, are determined; the demand is then assigned to those paths. The code described computes, besides the minimal threshold, the assignment flow as well.

Montangero, Carlo and Semini, Laura
A refinement calculus provides a number of advantages to program development, besides correctness, like clarification of the goals of the program and effective documentation of the design choices. In this paper, we provide evidence that the same advantages can also be obtained in the case of those special programs that are known as enactable process models. The evidence is put forward by means of an example, a small Concurrent Engineering problem inspired by the IWSP7 problem. We assume that the enactment is done by rules in tuple spaces, and use a refinement calculus based on a temporal logic that builds on that of Unity. Finally, we show how the approach leads to seamless integration with existing process engines (the Oikos one in our case).

Falaschi, M. and Gabbrielli, M. and Marriott, K. and Palamidessi, C.
Concurrent constraint programming (ccp), like most concurrent paradigms, has a mechanism of global choice which makes computations dependent on the scheduling of processes. This is one of the main reasons why the formal semantics of ccp is more complicated than that of its deterministic and local-choice sublanguages. In this paper we study various subsets of ccp obtained by adding some restriction on the notion of choice, or by requiring confluence, i.e. independence from the scheduling strategy. We show that it is possible to define simple denotational semantics for these subsets, for various notions of observables. Finally, as an application of our results we develop a framework for the compositional analysis of full ccp. The basic idea is to approximate an arbitrary ccp program by a program in the restricted language, and then analyze the latter by applying the standard techniques of abstract interpretation to its denotational semantics.

Bagnara, Roberto
The subject of groundness analysis for (constraint) logic programs has been widely studied, and interesting domains have been proposed. \textit{Pos} has been recognized as the most suitable domain for capturing the kind of dependencies arising in groundness analysis. Its (by now standard) implementation is based on \emph{reduced ordered binary decision diagrams} (ROBDDs), a well-known symbolic representation for Boolean functions. Even though several authors have reported positive experiences using ROBDDs for groundness analysis, in the literature there is no reference to the problem of the efficient detection of those variables which are deemed to be ground in the context of a ROBDD. This is not surprising, since most currently implemented analyzers need to derive this information only \emph{at the end} of the analysis and only for presentation purposes. Things are quite different when this information is required \emph{during} the analysis. This need arises when dealing with languages which employ some sort of \emph{delay mechanism}, typically based on groundness conditions. In these cases, the \emph{na\"\i{}ve} approaches are too inefficient, since the abstract interpreter must quickly (and often) decide whether a constraint is delayed or not. Fast access to ground variables is also necessary when aliasing analysis is performed using a domain that does not keep track of ground dependencies. In this paper we introduce and study the problem, proposing two possible solutions. The second one, besides making possible the quick detection of ground variables, also has the effect of keeping the ROBDDs as small as possible, improving the efficiency of groundness analysis itself.
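The property being detected is easy to specify: under \textit{Pos}, a variable X is definitely ground iff the Boolean function entails X. The following sketch states that specification with an exhaustive truth-table check over a plain predicate standing in for the ROBDD; this exponential scan is precisely the kind of cost the paper's two solutions are designed to avoid.

```python
from itertools import product

def ground_vars(f, variables):
    """Variables entailed by f: X is deemed ground iff every model
    (satisfying assignment) of f assigns X the value True."""
    ground = set(variables)
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if f(env):
            # any model where X is False rules X out
            ground -= {x for x in variables if not env[x]}
    return ground

# x <-> y (a groundness dependency) conjoined with x: both are entailed
f = lambda e: (e['x'] == e['y']) and e['x']
entailed = ground_vars(f, ['x', 'y'])
```

With an actual ROBDD one would instead read the entailed variables off the diagram structure, which is what makes the representation choice matter.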

Brogi, Antonio and Contiero, Simone
Meta-level compositions of object logic programs are naturally implemented by means of meta-programming techniques. Meta-interpreters defining program compositions, however, suffer from a computational overhead that is due partly to the interpretation layer present in all meta-programs, and partly to the specific interpretation layer needed to deal with program compositions. We show that meta-interpreters implementing compositions of object programs can be fruitfully specialised w.r.t. meta-level queries of the form "Demo(E,G)", where "E" denotes a program expression and "G" denotes a (partially instantiated) object level query. More precisely, we describe the design and implementation of a declarative program specialiser that suitably transforms such meta-interpreters so as to sensibly reduce --- if not completely remove --- the overhead due to the handling of program compositions. In many cases the specialiser also succeeds in eliminating the overhead due to meta-interpretation.

Pedreschi, Dino and Ruggieri, Salvatore
A novel approach to the verification of meta-interpreters is introduced. We apply a general purpose verification method for logic programs, proposed by the authors, to the case study of the Vanilla and other logic meta-interpreters. We extend the standard notion of declarative correctness, and design a criterion for proving correctness of meta-interpreters in a general sense, including amalgamated and reflective meta-interpreters. The contribution of this paper can be summarized as follows: under certain natural assumptions, all interesting verification properties lift up from the object program to the meta-program, including partial correctness, termination, absence of errors, call patterns persistence, correct instances of queries, computed instances of queries. Interestingly, it is possible to establish these results on the basis of purely declarative reasoning, using the mentioned proof method. We believe that the obtained results illustrate the broad applicability of the adopted verification principles.

Frangioni, Antonio and Gallo, Giorgio
We present a cost decomposition approach to linear Multicommodity Min Cost Flow problems, based on dualizing the mutual capacity constraints: the resulting Lagrangean dual is solved by means of a new, specialized Bundle-type dual ascent algorithm. Although decomposition approaches are generally believed not to be competitive, even for the solution of large-scale network structured problems, we present evidence, based on extensive computational comparisons, that a careful implementation of a decomposition algorithm can outperform several other approaches, especially on problems where the number of commodities is "large" w.r.t. the size of the graph. Our specialized Bundle algorithm is characterised by a new heuristic for handling the trust region parameter, and embeds a custom, fast Quadratic Programming solver that permits the implementation of a Lagrangean variable generation strategy. The Lagrangean relaxation solver is capable of exploiting the structural properties of the single-commodity Min Cost Flow subproblems to avoid using a "generic" MCF solver whenever possible. The proposed approach can easily be extended to handle extra constraints or variants such as the Nonsimultaneous Multicommodity problem. In our computational experience, we also studied the impact of some problem characteristics, such as the number of commodities w.r.t. the total size of the problem, on the relative efficiency of the different approaches tested.

Pallottino, Stefano and Scutellà, Maria Grazia
We consider dual approaches for the Shortest Path Tree Problem. After a brief introduction to the problem, we review the most important dual algorithms which have been described in the literature for its solution, and propose a new family of dual ascent algorithms. In these algorithms, "local" and "global" dual updating operations are performed at the nodes in order to enlarge a partial shortest path tree, which at the beginning contains only the root node, until a shortest path tree is found. Several kinds of dual updating operations are proposed, which allow one to derive different dual algorithms from a general schema. One of them, in particular, which is based only on global operations, can be viewed as a dual interpretation of Dijkstra's classical algorithm. Due to their structure, all the proposed approaches are suitable for parallel implementations. They are also suitable for reoptimization approaches, when the computation of shortest paths from different root nodes is required.
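For reference, the primal view of the classical algorithm that one of the dual schemes reinterprets: a minimal Dijkstra sketch that grows a shortest path tree outward from the root (the standard textbook version, not the paper's dual ascent procedures).

```python
import heapq

def dijkstra(graph, root):
    """Grow a shortest path tree from `root`.
    `graph` maps each node to a list of (neighbor, nonnegative arc cost) pairs."""
    dist = {root: 0}
    pred = {root: None}  # the tree: each node's predecessor
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry; u was already settled at a lower label
        for v, c in graph.get(u, []):
            if d + c < dist.get(v, float('inf')):
                dist[v], pred[v] = d + c, u
                heapq.heappush(heap, (d + c, v))
    return dist, pred

# a -> b -> c (cost 2) beats the direct arc a -> c (cost 4)
dist, pred = dijkstra({'a': [('b', 1), ('c', 4)], 'b': [('c', 1)]}, 'a')
```

In the dual reading, the node labels play the role of dual variables that are raised until complementary slackness identifies the tree arcs.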

Pasetto, Davide and Vanneschi, Marco
Structured parallel programming is one of the possible solutions to exploit Programmability, Portability and Performance in the parallel programming world. Programming with high level parallel constructs permits the programmer to focus on the development of the parallel algorithms rather than on their low level implementation. The power of this approach lies in the possibility of modeling the performance of the high level constructs and building optimizing template-based compilers. These compilers use formulas describing the performance of language constructs over the target architectures and tune their parametric implementations to achieve high performance figures. In this paper, we describe a set of applications developed using such a programming methodology and show the results obtained.

Pasetto, Davide and Vanneschi, Marco
Structured parallel programming is one of the possible solutions to exploit Programmability, Portability and Performance in the parallel programming world. The power of this approach lies in the possibility of building an optimizing template-based compiler using low time complexity algorithms. In order to optimize the code, this compiler needs formulas that describe the performance of language constructs over the target architecture. We propose a set of parameters able to describe current parallel systems, and build deterministic analytical models for basic forms of parallelism. The analytical model describes construct performance in a parametric way. This can be done by knowing that the compiler exploits a template-based support, and by giving template implementors guidelines to follow so that actual implementations perform as predicted.

Petrini, Fabrizio and Vanneschi, Marco
Many theoretical models of parallel computation are based on overly simplistic assumptions on the performance of the interconnection network. For example they assume constant latency for any communication pattern or infinite bandwidth. This paper presents a case study based on the FFT transpose algorithm, which is mapped on two families of scalable interconnection networks, the k-ary n-trees and the k-ary n-cubes. We analyze in depth the network behavior of a minimal adaptive algorithm for the k-ary n-trees and three algorithms for the k-ary n-cubes, each offering an increasing degree of adaptivity: the deterministic routing, a minimal adaptive routing based on Duato's methodology and the Chaos routing, a non-minimal adaptive cut-through version of the hot potato routing. The simulation results collected on topologies with up to 256 processing nodes show that the k-ary n-trees can efficiently support the basic FFT algorithm by factoring the personalized broadcast in a sequence of congestion-free steps. Though k-ary n-cubes are less favored in terms of bisection bandwidth, we can narrow the performance gap between the two interconnection networks by properly interleaving communication and computation. While in the presence of bandwidth-bound patterns the communication latency becomes difficult to predict, the global accepted network bandwidth converges to a fixed value after a stabilization period, though both adaptive algorithms on the cubes suffer from post-saturation problems.

Cambini, Riccardo and Gallo, Giorgio and Scutellà, Maria Grazia
We consider the capacitated minimum cost flow problem on directed hypergraphs. We define spanning hypertrees, thus generalizing the spanning trees of standard graphs, and show that, as in the standard and in the generalized minimum cost flow problems, a correspondence exists between bases and spanning hypertrees. Then, we show that, as for the network simplex algorithms for the standard and for the generalized minimum cost flow problems, most of the computations performed at each pivot operation have direct hypergraph interpretations.

Carboni, Marilisa and Giannotti, Fosca and Manco, Giuseppe and Pedreschi, Dino
We show how a fragment of the logical data language LDL++ can be used to define an abstract machine supporting the essence of active/deductive object-oriented databases. We exhibit a compilation into the mentioned logic language of the basic features of the object model, including the schema definition language, the query language with multiple roles, the basic update operations, and a form of active rules. The target logic language, which is essentially Datalog extended with non-determinism and a form of stratified negation, has been chosen with a twofold aim. On the one hand, it should provide an abstract implementation level, where the object model is declaratively reconstructed and its semantics clarified. On the other hand, the proposed compilation should form the basis of a realistic implementation, as LDL++ can be efficiently executed and supports real side effects.

Gemignani, Luca
We present new algorithms using structured matrix methods for manipulating polynomials expressed in generalized form, that is, relative to a polynomial basis $\{p_i(x)\}$ satisfying a three-term recurrence relation. This includes computing the Euclidean remainders as well as the greatest common divisor of two polynomials $u(x)$ and $v(x)$ of degrees less than $n$. The computational schemes involve solving linear systems with nested Hankel matrices represented in the given generalized basis and they reach the optimal sequential complexity of $O(n^2)$ arithmetical operations.
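In the classical power basis, the Euclidean remainder scheme being generalized looks as follows: a plain $O(n^2)$ textbook sketch in the monomial basis, not the structured-matrix method of the paper.

```python
def poly_divmod(u, v):
    """Divide polynomial u by v in the power basis.
    Coefficients are listed low degree to high; returns (quotient, remainder)."""
    u = list(u)
    q = [0.0] * max(len(u) - len(v) + 1, 0)
    for k in range(len(q) - 1, -1, -1):
        q[k] = u[k + len(v) - 1] / v[-1]
        for j, vj in enumerate(v):
            u[k + j] -= q[k] * vj
    r = u[:len(v) - 1]
    while r and abs(r[-1]) < 1e-12:  # trim (near-)zero leading coefficients
        r.pop()
    return q, r

def poly_gcd(u, v):
    """Euclid's algorithm on the sequence of remainders; result is monic."""
    while v:
        _, r = poly_divmod(u, v)
        u, v = v, r
    return [c / u[-1] for c in u]

# (x-1)(x-2) and (x-1)(x+3) share the factor x - 1
g = poly_gcd([2.0, -3.0, 1.0], [-3.0, 2.0, 1.0])
```

The paper's contribution is reaching the same $O(n^2)$ cost while staying entirely in the generalized (orthogonal) basis, avoiding conversion to the power basis.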

Gemignani, Luca
The Lanczos method and its variants can be used to solve the rational interpolation problem efficiently. In this paper we present a suitable fast modification of a general look-ahead version of the Lanczos process in order to deal with polynomials expressed in the Chebyshev orthogonal basis. The proposed approach is particularly suited for rational interpolation at Chebyshev points, that is, at the zeros of Chebyshev polynomials. In fact, in this case it overcomes some of the numerical difficulties which limited the applicability of the look-ahead Lanczos process for determining the coefficients of both the numerators and the denominators with respect to the standard power basis.

Gemignani, Luca
Fast orthogonalization schemes for $m\times n$ Vandermonde matrices $V=(z_i^j)$, introduced by Demeure, are extended to compute both a $QR$ factorization and an inverse $QR$ factorization of Vandermonde-like matrices $P=(p_j(z_i))$, where the polynomials $p_j(z)$ satisfy a three-term recurrence relation. In this way we are able to solve least squares (LS) problems of the form: minimize $\|{\bf b}-P{\bf x}\|_2$, using only $O(mn)$ arithmetical operations and $O(m)$ storage.

Pedrotti, Alberto
In the present study, we consider a computational model similar to the classical RAM machine, except that each instruction performed by the machine may fail with probability $q < 1/(6e)$. The failures are transient and independent of each other. The definition of this model (which we call ERRAM) follows a previous work \cite{lp}, where we did not allow for addressing errors. We show how an arbitrary RAM program may be transformed into an ERRAM program working with assigned probability $1-\delta$, provided that we know the number $T$ of instructions executed by the original program, or an upper bound on it. One major tool that we use is redundancy: each operation is repeated a certain number of times, and each stored value is kept in several copies. We derive upper bounds on the amount of redundancy that is needed to achieve the desired confidence. This allows us to bound both the space complexity and the expected time complexity of the resulting ERRAM program in terms of $T$, $\delta$ and $q$.
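The redundancy idea can be illustrated by majority voting over repeated executions of one unreliable instruction (an illustrative simulation; the paper's constants and the exact failure model are not reproduced here).

```python
import random
from collections import Counter

def unreliable_add(a, b, q, rng):
    """One ERRAM-style instruction: with probability q it fails
    transiently and returns an arbitrary wrong value."""
    if rng.random() < q:
        return rng.randrange(10**6)  # transient garbage result
    return a + b

def redundant_add(a, b, q, rng, copies=9):
    """Repeat the instruction and return the majority outcome.
    With independent failures, the probability that wrong values win
    the vote decreases exponentially in the number of copies."""
    votes = Counter(unreliable_add(a, b, q, rng) for _ in range(copies))
    return votes.most_common(1)[0][0]

rng = random.Random(0)  # seeded for reproducibility
result = redundant_add(2, 3, 0.05, rng)
```

Choosing the number of copies as a function of $T$, $\delta$ and $q$ is exactly the kind of bound the paper derives.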

**NOTE.** Starting January 2015, technical reports can be inserted in the Open Access repository UnipiEprints.

Once published, they will be visible also in these pages. The importing of data from UnipiEprints is still experimental. The old TR service is no longer maintained.

Data imported from UnipiEprints