Invited Speakers

Keynote

Learning About the Change: An Adaptive Approach to Automata Learning

Mohammad Reza Mousavi, University of Leicester, UK

Abstract: Automata learning is a technique to learn behavioural models from black-box systems. Variability and evolution are inherent to many modern autonomous systems, and hence new sorts of automata learning techniques are needed to learn about variability-intensive and evolving systems. In this talk, we first present the basic principles of automata learning and then report on two novel techniques: one for learning variability-annotated models, and one for learning evolving systems efficiently by identifying the commonalities and differences in the learning process.

This talk is based on joint work with several people, in particular Diego Damasceno and Adenilso Simao.

Tutorial

Testing, Runtime Verification and Automata Learning

Martin Leucker, University of Lübeck, Germany

Abstract: Testing and runtime verification are both verification techniques for checking whether a system is correct. The essential artefacts for both are actual executions of the system, formally words. Such a set of words should be representative of the system's behavior.

In the field of automata learning (or grammatical inference), a formal model of a system is derived from exemplifying behavior. In other words, it addresses the question of which model fits a given set of words.
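As a minimal illustration of this question (in Python, with names that are ours rather than the tutorial's), the following sketch constructs a prefix-tree acceptor: the smallest tree-shaped automaton that fits a given set of accepted words exactly.

    # A minimal sketch of passive automata learning: build a prefix-tree
    # acceptor, i.e. the tree-shaped automaton that accepts exactly the
    # observed words. All names here are illustrative.

    def build_prefix_tree_acceptor(words):
        transitions = {}          # (state, symbol) -> successor state
        accepting = set()
        next_state = 1            # state 0 is the initial state
        for word in words:
            state = 0
            for symbol in word:
                if (state, symbol) not in transitions:
                    transitions[(state, symbol)] = next_state
                    next_state += 1
                state = transitions[(state, symbol)]
            accepting.add(state)  # the state reached at the word's end accepts
        return transitions, accepting

    # Words observed on a black-box system; the learned model fits them.
    transitions, accepting = build_prefix_tree_acceptor(["ab", "abb", "b"])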

In testing, typically, the system under test is examined on a finite set of test cases, formally words, which may be derived manually or automatically. Oracle-based testing is a form of testing in which an oracle, typically a manually developed piece of code, is attached to the system under test and employed for checking whether a given set of test cases passes or fails.
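A minimal sketch of this setup, assuming a hypothetical system under test that computes running maxima, might look as follows; the oracle is a separately written check applied to each test case.

    # A minimal sketch of oracle-based testing with hypothetical names:
    # system_under_test computes running maxima; the oracle is a
    # hand-written check attached to it.

    def system_under_test(inputs):
        outputs, maximum = [], float("-inf")
        for x in inputs:
            maximum = max(maximum, x)
            outputs.append(maximum)
        return outputs

    def oracle(inputs, outputs):
        # The i-th output must equal the maximum of the first i+1 inputs.
        return all(o == max(inputs[:i + 1]) for i, o in enumerate(outputs))

    for test_case in [[3, 1, 4], [1, 5, 9, 2], []]:
        verdict = "pass" if oracle(test_case, system_under_test(test_case)) else "fail"
        print(test_case, verdict)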

In runtime verification, typically, a formal specification of the correct behavior is given, from which a so-called monitor is synthesised and used for examining whether the behavior of the system under test, or more generally the system to monitor, adheres to that specification. In a sense, the monitor acts as a test oracle when employed in testing.
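As a sketch, a monitor for the safety property "no send may occur after close" can be realised as a small three-state automaton; both the property and the event names are assumptions chosen for this example, not part of the tutorial material.

    # A minimal sketch of a runtime monitor for the illustrative safety
    # property "no 'send' after 'close'", given as a three-state automaton.

    OK, CLOSED, VIOLATION = "ok", "closed", "violation"

    def step(state, event):
        if state == OK and event == "close":
            return CLOSED
        if state == CLOSED and event == "send":
            return VIOLATION      # irrecoverable: the property is violated
        return state

    def monitor(trace):
        state = OK
        for event in trace:
            state = step(state, event)
            if state == VIOLATION:
                return "fail"     # here the monitor acts as a test oracle
        return "pass so far"      # no violation observed on this execution

    print(monitor(["send", "close", "send"]))   # fail
    print(monitor(["send", "send", "close"]))   # pass so far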

From the discussion above we see that testing, runtime verification, and automata learning share similarities but also exhibit differences. The main artefacts used by the different methods are formal specifications, models such as automata, and especially sets of words, on which the different system descriptions are compared to eventually obtain a verdict on whether the system under test is correct.

In this tutorial we recall the basic ideas of testing, oracle-based testing, model-based testing, conformance testing, automata learning and runtime verification, and elaborate a coherent picture with the above-mentioned artefacts as ingredients. We mostly refrain from technical details and concentrate on the big picture of these verification techniques.