LIFO,
Bât. IIIA, Université d'Orléans
(Amphithéâtre Jacques HERBRAND)
Automated Verification = Graphs, Automata, and Logic.
Moshe Y. VARDI (Rice University, USA).
In automated verification, one uses algorithmic techniques to
establish the correctness of a design with respect to a given
property. Automated verification is based on a small number of
key algorithmic ideas, tying together graph theory, automata
theory, and logic. In this self-contained talk I will describe how
this "holy trinity" gave rise to automated-verification tools.
Proving convergence of self-stabilizing
systems using first-order rewriting and regular languages.
Joffroy BEAUQUIER, Frédéric MAGNIETTE (LRI - Orsay, CNRS), Béatrice BERARD, Laurent
FRIBOURG (LSV, ENS-Cachan). Speaker: L. Fribourg.
In the framework of self-stabilizing systems, the convergence proof is
generally done by exhibiting a measure that strictly decreases until a
legitimate configuration is reached. The discovery of such a measure is
highly problem-specific and requires a deep understanding of the studied
transition system. In contrast, we propose here a simple
method for proving convergence, which regards self-stabilizing systems
as string rewrite systems, and adapts a procedure initially designed by
Dershowitz for proving termination of string rewrite systems. In order
to make the method terminate more often, we also propose an adapted
procedure that manipulates "schemes", i.e. regular
sets of words, and incorporates a process of scheme generalization. The
interest of the method is illustrated on several nontrivial examples.
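The measure-based view of convergence can be sketched on a toy string rewrite system (an illustration only, not the adapted Dershowitz procedure; the rule set below is hypothetical):

```python
# Toy illustration: a self-stabilizing system viewed as a string
# rewrite system.  For a length-reducing rule set, string length is a
# measure that strictly decreases, so every rewrite sequence converges.

def rewrite_once(s, rules):
    """Apply the first applicable rule at the leftmost position; None if irreducible."""
    for i in range(len(s)):
        for lhs, rhs in rules:
            if s.startswith(lhs, i):
                return s[:i] + rhs + s[i + len(lhs):]
    return None

def normalize(s, rules):
    """Rewrite to a normal form, checking that the measure |s| strictly decreases."""
    while True:
        t = rewrite_once(s, rules)
        if t is None:
            return s            # legitimate (irreducible) configuration
        assert len(t) < len(s)  # the measure witnesses convergence
        s = t

# Hypothetical rule: two adjacent tokens collapse into one.
rules = [("TT", "T")]
print(normalize("TTTTT", rules))  # -> T
```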
1st-order transition logics for PA processes.
Denis LUGIEZ (LIM, Univ. Marseille-II) and Philippe SCHNOEBELEN (LSV, ENS-Cachan).
Speaker: P. Schnoebelen.
We show decidability of several first-order logics based upon the
reachability predicate in PA. The main tool for this result is the
recognizability by tree automata of the reachability relation
between PA-processes. The tree-automata approach and the
transition logics we use allow a smooth and general treatment of
parameterized model checking for PA. Our approach can be extended
so that it handles a quite general notion of costs of PA-steps. In
particular, when costs are Parikh images of traces, we show
decidability of a transition logic extended by some form of
first-order reasoning over costs.
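Recognizability by tree automata, the key tool above, can be illustrated with a minimal bottom-up tree automaton (a toy example, unrelated to the actual construction for PA reachability):

```python
# A minimal deterministic bottom-up tree automaton.  Trees are nested
# tuples ("a",) / ("b",) / ("f", left, right).  State "qa" means the
# subtree contains an 'a' leaf; "q0" means it does not.  With "qa"
# final, the automaton recognizes exactly the trees containing 'a'.

DELTA = {
    ("a",): "qa",
    ("b",): "q0",
    ("f", "q0", "q0"): "q0",
    ("f", "qa", "q0"): "qa",
    ("f", "q0", "qa"): "qa",
    ("f", "qa", "qa"): "qa",
}
FINAL = {"qa"}

def run(tree):
    """Evaluate the automaton bottom-up; return the state reached at the root."""
    if tree[0] == "f":
        return DELTA[("f", run(tree[1]), run(tree[2]))]
    return DELTA[tree]

def accepts(tree):
    return run(tree) in FINAL

t1 = ("f", ("b",), ("f", ("a",), ("b",)))
t2 = ("f", ("b",), ("b",))
print(accepts(t1), accepts(t2))  # True False
```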
Weakly Regular Relations.
Pierre RETY (LIFO - Orléans) and Helmut SEIDL (Univ. of Trier).
Speaker: P. Réty.
We present an extension of regular relations (regular tree languages over a
product alphabet) that preserves most of their known properties (except
closure under complement).
Expressing validation methods.
Translation into propositional or monadic logic.
Jean-Paul BODEVEIX, Mamoun FILALI (IRIT-Toulouse), and Daniel Le BERRE
(University of Newcastle, Australia).
Speaker: M. Filali.
We present the FMona tool, which makes it possible to express validation
methods, finite or parameterized problems, and the verification of these
problems according to one of the specified methods.
FMona's higher-order features allowed us to express various validation
techniques in different logics: propositional and monadic. An FMona
specification can therefore be translated for available tools based on
these logics (Mona for monadic logic, Sato or Hugo for propositional
logic).
The FMona tool has been used to express various validation methods
(fixpoint search, abstraction, acceleration) and to validate various
problems (mutual exclusion, termination, memory coherence).
We will focus in particular on the translation techniques and on the
convergence-acceleration methods.
Specification and verification of confidentiality in the presence of
information declassification.
John MULLINS (Dept. of Electrical and Computer Engineering, Université de
Montréal, Canada).
We will present a notion of confidentiality in the context of
admissible information flow. Little work on this subject is known
so far, whereas confidentiality viewed as the absence of information
flow has been the object of intensive research. This is probably
because confidentiality in the presence of declassification is a
"rather subtle thing" [Goguen84]; yet, from the point of view of
applications, the absence of information flow is far too restrictive.
We propose a formal definition of admissible information flow based
on trace equivalence as developed in concurrency theory, as well as
a sound and complete verification method expressed in terms of
communicating automata. Results on the compositionality of the
property with respect to the usual operators on communicating
automata are also presented.
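The trace-based comparison underlying the definition can be illustrated with a toy bounded check (the example systems and names are hypothetical; the notion of admissible flow in the talk is more refined than plain trace comparison):

```python
def traces(init, edges, k):
    """Action sequences of length <= k of a labelled transition system."""
    result = {()}
    frontier = {((), init)}
    for _ in range(k):
        nxt = set()
        for (w, q) in frontier:
            for (src, lab, dst) in edges:
                if src == q:
                    nxt.add((w + (lab,), dst))
        result |= {w for (w, _) in nxt}
        frontier = nxt
    return result

# Two hypothetical systems: sys1 may declassify 'secret' only after 'ok';
# sys2 may emit 'secret' immediately.
sys1 = [(0, "ok", 1), (1, "secret", 2)]
sys2 = [(0, "secret", 2), (0, "ok", 1)]
print(traces(0, sys1, 2) == traces(0, sys2, 2))  # False: sys2 leaks 'secret' first
```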
A Challenging Case Study from France Telecom R&D.
Jean-François MONIN (France Telecom R&D, Lannion)
The talk will present some lessons learned from the verification of a
small but non-trivial real-time telecom device using different
approaches, namely inductive proofs and model checking. Both
approaches are based on a common framework, an extension of
well-established timed automata called p-automata. We conclude
with current perspectives.
Debate: What practical impact for formal tools?
(Moderators: Vardi, Filali, Fribourg, Monin, Rusinowitch,
Schnoebelen, ...)
1. People have been designing software and hardware for fifty years
without using formal methods. Are formal methods really necessary? Are
they desirable? How can the research community convince designers to use
formal methods?
2. Formal methods are "at their best" applicable to abstract models of
real-life systems.
How best can we (really hope to) bridge the gap between the two?
3. Not all domains seem suitable for formal methods. Is it possible to
isolate the application domains that are better suited to formal methods
than others?
4. An alleged dichotomy is between deductive and algorithmic methods.
Is this dichotomy real?
Does it matter to practitioners?
From Verification to Synthesis.
Moshe Y. VARDI (Rice University, USA).
One of the most significant developments in the area of design
verification over the last decade is the development of
algorithmic methods for verifying temporal specifications
of finite-state designs. A frequent criticism against this
approach, however, is that verification is done after significant
resources have already been invested in the development of the design.
Since designs invariably contain errors, verification simply
becomes part of the debugging process. The critics argue that the
desired goal is to use the specification in the design development
process in order to guarantee the development of correct designs.
This is called design synthesis. In this talk I will review 40 years
of research on the synthesis problem and show how the
automata-theoretic approach can be used to solve it.
Updatable timed automata.
Antoine PETIT (LSV, ENS-Cachan).
In classical timed automata, as defined by Alur and Dill [AD90,AD94] and
since widely studied, the only operation allowed to modify the clocks is
the reset operation. For instance, a clock can neither be set to a non-null
constant value, nor be set to the value of another clock, nor, in a
non-deterministic way, to some value lower or higher than a given constant.
In this work we study in detail such updates, which can be very useful for
modeling purposes. We precisely characterize the frontier between decidable
and undecidable cases. Our main contributions are the following:
- We exhibit many classes of updates for which emptiness is
undecidable. A surprising result is that these classes depend on the
clock constraints that are used -- diagonal-free or not -- whereas it is
well known that these two kinds of constraints are equivalent for classical
timed automata.
- We propose a generalization of the region automaton of Alur
and Dill to handle larger classes of updates. The complexity of the
decision procedure remains PSPACE-complete.
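The classical region equivalence that the second contribution generalizes can be sketched as follows (a minimal sketch of the Alur-Dill criterion, assuming a maximal constant M; this is not the authors' generalized construction):

```python
import math

# Two clock valuations u, v (dicts clock -> nonnegative value) are
# region-equivalent w.r.t. maximal constant M iff:
#  1. each clock has the same integer part in u and v (or exceeds M in
#     both), and is integral in u iff integral in v;
#  2. the ordering of fractional parts among clocks <= M is the same.

def region_equivalent(u, v, M):
    clocks = list(u)
    for x in clocks:
        if u[x] > M and v[x] > M:
            continue                       # both beyond M: indistinguishable
        if math.floor(u[x]) != math.floor(v[x]):
            return False
        if (u[x] % 1 == 0) != (v[x] % 1 == 0):
            return False
    small = [x for x in clocks if u[x] <= M]
    for x in small:
        for y in small:
            if (u[x] % 1 <= u[y] % 1) != (v[x] % 1 <= v[y] % 1):
                return False
    return True

u = {"x": 1.3, "y": 0.7}
v = {"x": 1.4, "y": 0.9}
w = {"x": 1.7, "y": 0.3}
print(region_equivalent(u, v, M=2))  # True: same region
print(region_equivalent(u, w, M=2))  # False: fractional parts ordered differently
```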
Set Constraints and Set-based Analysis.
Jean-Marc TALBOT (MPI-Saarbrücken).
Static analysis aims to extract run-time properties from program
sources. This information can be used for compilation optimizations
or debugging. Set-based analysis is a particular static-analysis
technique that computes an approximation of the program semantics.
The main ingredient of set-based analysis is set constraints. These
constraints are first-order formulas describing relations between sets
of trees. Set-based analysis can be expressed as a two-step method:
the first step extracts some set constraints from the source; the
second step is the resolution of these constraints. This resolution
phase can be either a satisfiability test or the computation of a
particular solution.
Different classes of set constraints (according to programming
paradigms) have been proposed and studied. In this talk, we will focus
on three different classes called respectively atomic, definite and
co-definite.
Roughly speaking, atomic set constraints form the class of constraints
corresponding to imperative programs. We will first briefly show how
the satisfiability test can be performed for that class and then
discuss the entailment problem. This entailment test is of
particular interest for the extraction phase: it provides a
subsumption test that may avoid redundancies in the set of extracted
constraints.
We will then define the classes of definite and co-definite set
constraints. These classes are mainly used for set-based analysis of
logic programs. We will see that they are in fact closely related,
provided a small extension by means of a new set operator called the
membership expression. We will then discuss a satisfiability
test for both classes.
For logic programs, set-based analysis has been rephrased in [FSVY91]
in terms of syntactical relaxation of programs. This relaxation
transforms a logic program into another one but of simpler
form. These simpler logic programs are called uniform programs. In
[FSVY91], the authors wondered whether a stronger
relaxation, yielding unary uniform programs, would improve
the time complexity of set-based analysis. In other words, is
there any gain in moving from sets of trees to sets of words?
We show that, unfortunately, the problem over sets of words is as
difficult as the one over sets of trees.
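The "extract then solve by fixpoint" scheme can be illustrated on a toy solver (the constraint forms and the abstraction to top-level constructor symbols are a deliberate simplification for illustration, not the atomic/definite/co-definite classes of the talk; the constraint set is hypothetical):

```python
# Definite-style constraints  X >= f(...)  ("sym") and  X >= Y  ("sub"),
# abstracted so that a variable denotes the set of top-level constructor
# symbols of its trees.  The least solution is computed by Kleene iteration.

constraints = [
    ("sym", "X", "f"),   # X >= f(...)
    ("sym", "Y", "a"),   # Y >= a
    ("sub", "X", "Y"),   # X >= Y
    ("sub", "Z", "X"),   # Z >= X
]

def least_solution(constraints):
    sol = {}
    for c in constraints:
        sol.setdefault(c[1], set())
        if c[0] == "sub":
            sol.setdefault(c[2], set())
    changed = True
    while changed:                      # iterate up to the least fixpoint
        changed = False
        for c in constraints:
            new = {c[2]} if c[0] == "sym" else sol[c[2]]
            if not new <= sol[c[1]]:
                sol[c[1]] |= new
                changed = True
    return sol

print(sorted(least_solution(constraints)["Z"]))  # ['a', 'f']
```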
Characterization by Büchi automata of a class of modularly
verifiable LTL properties.
Pierre-Alain MASSON (LIFC - Besançon).
Model-checking verification of dynamic properties of reactive
systems runs into the combinatorial-explosion problem.
To address this problem, we propose a Modular Verification
technique. The idea is to split a reachability graph to be explored
into a set of subgraphs called modules, and to verify the properties
on each module separately and independently.
In this talk, we focus on the theoretical side of such an approach,
asking in particular which properties can be verified in a modular
way. We encode LTL properties as Büchi automata, and the study of
these automata makes it possible, for certain properties, to prove
that if they hold on each module, then they hold globally. Using
Büchi automata, we define a class of modularly verifiable LTL
properties.
Such an approach requires knowing how to produce a "good" modular
decomposition, i.e. a decomposition such that a property that holds
globally is very likely to hold at the level of each module. We
propose to use the B refinement approach for the modular
decomposition, which has the advantage of producing a semantic
decomposition of the reachability graph.
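The Büchi-automaton checks that such an approach ultimately relies on can be sketched with a minimal emptiness test (the standard textbook lasso criterion, not the modular procedure of the talk; the example automaton is hypothetical):

```python
# A Büchi automaton accepts some infinite word iff an accepting state is
# reachable from the initial state AND lies on a cycle (a "lasso").

def reachable(edges, sources):
    """All states reachable from the given source states."""
    seen, stack = set(sources), list(sources)
    while stack:
        q = stack.pop()
        for (a, b) in edges:
            if a == q and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

def buchi_nonempty(init, edges, accepting):
    for f in accepting:
        if f in reachable(edges, [init]):
            succs = [b for (a, b) in edges if a == f]
            if f in reachable(edges, succs):   # f lies on a cycle
                return True
    return False

# Hypothetical automaton for "infinitely often p": state 1 is accepting.
edges = [(0, 0), (0, 1), (1, 0), (1, 1)]
print(buchi_nonempty(0, edges, accepting={1}))     # True
print(buchi_nonempty(0, [(0, 1)], accepting={1}))  # False: no cycle through 1
```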