GI/Dagstuhl Research Seminar
January 12-15, 2004, Schloss Dagstuhl
We will give a detailed account of formal approaches to testing as
well as of their relation to other validation techniques such as model checking.
There are several schools of formal testing. We will give an
account of some of them, providing the basis for a comparison, and
show the similarities and differences between the approaches. In general, we
concentrate on so-called black-box testing. In this setting,
the system under test is not given explicitly but may only be
analyzed using interaction in terms of input and output actions.
During the last 50 years, a rich testing theory has been developed for
testing finite state machines that produce outputs on state
transitions after receiving inputs (also known as Mealy
machines). Typical problems addressed in the domain of finite state
machine testing are
- Homing Sequence: Determine the final state after the
test: Rezeq Naim (38) & Sandberg (37)
- State Identification: Identify the unknown initial state:
- State Verification: Verify that a machine is in a
particular state Bjoerklund (36)
- Machine Verification/Fault Detection/Conformance Testing: Check
whether two given machines are equivalent: Gargantini
- Testing nondeterministic systems: Supratik Mukhopadhyay
Further keywords in this area are transition tour, postman tour,
and unique input/output sequence.
An overview of testing finite state machines is given in
[LY96]. It will serve as a starting point for the chapters
dealing with testing finite state machines.
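The homing-sequence problem listed above can be made concrete with a small sketch. The three-state Mealy machine below is a hypothetical example, and the brute-force search is only an illustration of the problem statement, not one of the polynomial algorithms surveyed in [LY96]:

```python
from itertools import product

# Hypothetical three-state Mealy machine: (state, input) -> (next state, output).
DELTA = {
    ("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 0),
    ("s1", "a"): ("s2", 0), ("s1", "b"): ("s0", 0),
    ("s2", "a"): ("s0", 1), ("s2", "b"): ("s2", 0),
}
STATES = {"s0", "s1", "s2"}
INPUTS = ("a", "b")

def run(state, word):
    """Feed `word` to the machine; return (final state, produced outputs)."""
    outputs = []
    for symbol in word:
        state, out = DELTA[(state, symbol)]
        outputs.append(out)
    return state, tuple(outputs)

def is_homing(word):
    """`word` is homing if the observed outputs determine the final state,
    whatever the unknown initial state was."""
    final_for_output = {}
    for s in STATES:
        final, outputs = run(s, word)
        if final_for_output.setdefault(outputs, final) != final:
            return False
    return True

def shortest_homing_sequence(max_len=4):
    """Brute-force search for a shortest homing sequence."""
    for length in range(1, max_len + 1):
        for word in product(INPUTS, repeat=length):
            if is_homing(word):
                return word
    return None
```

For this machine no single input is homing, but the word "aa" is: the three possible output sequences (0,0), (0,1), and (1,0) each pin down a unique final state.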
The theory of testing transition systems was initiated by the
work of De Nicola and Hennessy in [MN83]. The basic goal
is to test whether one transition system is an implementation
of another one. To this end, several notions of implementation
relation were introduced and studied. A (usually infinite) set of
test cases is identified that characterizes the implementation
relation. The work was later extended towards transition systems in
which input and output actions are distinguished.
In [BT01], an annotated bibliography for this
approach is given which points out the relevant work in this area.
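The simplest implementation relation, trace inclusion, can be sketched directly. The check below pairs each reachable state of the implementation with the set of specification states reachable on the same trace (an on-the-fly subset construction); the vending-machine systems are hypothetical toy examples, and the sketch deliberately ignores the refinements (refusals, quiescence) studied in the preorder chapters:

```python
from collections import deque

# Labelled transition systems as dicts: state -> list of (action, successor).
# The specification and the two implementations are hypothetical toy systems.
SPEC = {
    "p0": [("coin", "p1")],
    "p1": [("coffee", "p0"), ("tea", "p0")],
}
IMPL_OK = {
    "q0": [("coin", "q1")],
    "q1": [("coffee", "q0")],
}
IMPL_BAD = {
    "q0": [("coin", "q1")],
    "q1": [("beer", "q0")],  # an action the specification never allows
}

def trace_included(impl, spec, impl_init, spec_init):
    """Check that every trace of `impl` is also a trace of `spec` by pairing
    each reachable impl state with the set of spec states reachable on the
    same trace."""
    start = (impl_init, frozenset({spec_init}))
    seen, todo = {start}, deque([start])
    while todo:
        q, spec_set = todo.popleft()
        for action, q_next in impl.get(q, []):
            spec_next = frozenset(t for s in spec_set
                                  for a, t in spec.get(s, []) if a == action)
            if not spec_next:  # impl performs an action spec cannot follow
                return False
            node = (q_next, spec_next)
            if node not in seen:
                seen.add(node)
                todo.append(node)
    return True
```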
Recently, extensions to cover real-time or probabilistic systems have
been studied, for example in [SVD01] for timed automata and by
Stoelinga and Vaandrager for probabilistic automata.
To turn this approach into a practical method, a finite number of test
cases has to be selected out of the infinite set describing the
implementation relation. The goal is to choose test cases that cover
the infinite set as well as possible. This selection problem is
studied, for example, by Feijs, Goga, Mauw, and Tretmans in their
work on test selection and trace distance.
This model also assumes that the tester communicates with the system
under test in a synchronous manner. In practice, however, the two
systems often interact via queues. [TV92] discusses this issue by means
of a queue model relating synchronous and asynchronous communication.
- [T2.1] Preorder relations: Bruda (44)
- [T2.2] Test generation algorithms based on preorders: Tschaen (13)
- [T2.3] I/O-automata based testing: vd Bijl (8) & Peureux (20)
- [T2.4] Testing theory for timed systems: Briones (34) & Röhl
- [T2.5] Testing theory for probabilistic systems: Wolf (29) & Tepper (39)
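The queue-based interaction between tester and system under test mentioned above can be sketched as follows. The SUT (a hypothetical component answering each input x with x + 1) and the tester communicate only through two FIFO queues; note that the tester may send several inputs before observing any output, which is impossible in the synchronous setting:

```python
import queue
import threading

# Hypothetical system under test: answers each input x with x + 1.
# Tester and SUT interact only through two FIFO queues, in the spirit of
# the queue model relating synchronous and asynchronous communication.
def sut(inq, outq):
    while True:
        x = inq.get()
        if x is None:       # shutdown signal from the tester
            return
        outq.put(x + 1)

def run_test(inputs):
    """The tester sends all inputs ahead (they pile up in the input queue)
    and only afterwards collects the outputs from the output queue."""
    inq, outq = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=sut, args=(inq, outq))
    worker.start()
    for x in inputs:
        inq.put(x)
    inq.put(None)
    worker.join()
    return [outq.get() for _ in inputs]
```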
In the setting of model-based testing, a formal model of an
implementation is given. The goal is to check whether the
implementation conforms to the model. Usually, testing is restricted
to certain interesting test cases for which conformance is
checked. Here, a test case is a sequence of input and (expected)
output values. Test cases are usually determined in terms of
test specifications that specify which part of the system is to
be tested. Often, test specifications are given in the form of
coverage criteria or logical formulas.
Typical models are extended finite state machines in which states are
enriched with data or time variables. Thus, the model is usually not
concrete in the sense that every state of the system is represented as
a single state in the model. It is abstract, so that abstract test
cases require instantiation into concrete ones.
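The instantiation step can be sketched as a substitution of concrete values for the abstract data classes of a test case. The abstract events, data classes, and binding below are all hypothetical:

```python
# A hypothetical abstract test case: a sequence of (kind, action, parameters)
# steps whose parameter values are symbolic data classes, not concrete data.
ABSTRACT_TEST = [
    ("input",  "login",    {"user": "VALID_USER"}),
    ("expect", "welcome",  {}),
    ("input",  "withdraw", {"amount": "POSITIVE"}),
    ("expect", "dispense", {"amount": "POSITIVE"}),
]

# One possible instantiation: a representative value per data class.
BINDING = {"VALID_USER": "alice", "POSITIVE": 50}

def instantiate(abstract_test, binding):
    """Replace every symbolic parameter value by its concrete representative,
    yielding a concrete, executable test case."""
    return [(kind, action,
             {k: binding.get(v, v) for k, v in params.items()})
            for kind, action, params in abstract_test]
```

Different bindings for the same abstract test case yield different concrete test cases covering the same abstract behaviour.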
There is a huge number of papers on specifying and generating test
cases. A selection of these methods, describing the basic ideas, will
be presented. The focus is on techniques rather than on which test
cases to specify. One approach, for example, is to specify coverage
criteria by means of temporal logic. Test cases are then witnesses or
counterexamples for the underlying formula and the model. Besides
model checking, test case generation technology includes symbolic
execution and deductive theorem proving.
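The witness/counterexample view can be sketched with a toy model: asking whether a state is unreachable yields, on failure, a counterexample path that serves directly as a test case covering that state. The breadth-first search below stands in for the model checker; the model and its action names are hypothetical:

```python
from collections import deque

# Hypothetical model of an ATM-like system: state -> list of (input, successor).
MODEL = {
    "idle":       [("insert_card", "card_in")],
    "card_in":    [("pin_ok", "menu"), ("pin_bad", "idle")],
    "menu":       [("withdraw", "dispensing"), ("cancel", "idle")],
    "dispensing": [("take_cash", "idle")],
}

def witness(init, goal):
    """Shortest input sequence reaching `goal` - the counterexample a model
    checker would return for the claim 'goal is never reached', reusable as
    a test case covering that state. Returns None if `goal` is unreachable."""
    todo, seen = deque([(init, [])]), {init}
    while todo:
        state, path = todo.popleft()
        if state == goal:
            return path
        for inp, nxt in MODEL.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                todo.append((nxt, path + [inp]))
    return None
```

Iterating this query over all states (or all transitions) of the model yields a test suite satisfying a state (or transition) coverage criterion.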
This yields the following chapters. By no means are the references
intended to be comprehensive, and the authors are encouraged to add
further ones.
- Technology: model checking [ABM98,HLSU02], symbolic execution,
LTS exploration [RWNH98,dZ99], theorem proving:
(6) & Samer (9) & Lucio (25)
- Real-Time and Hybrid Systems Testing
[PS97,Nie00,HPPS03,Weg01]: Hessel (2) & Berkenkoetter (22) & Kirner (33)
- Coverage-based test case generation,
as compared to functional test case specifications, random testing,
and user profiles: Gaston (18) & Seifert (17) & Mueck (15)
- Tools for test case generation. Some of the tools have
been compared in [Gog01,Har02]:
Belinfante (23) & Frantzen (28) & Schallhart (10)
- Synchronous languages: for Lustre (GaTeL [MA00],
Lurette [RWNH98], Lutess [dZ99]),
for AutoFocus [PPS+03]
- Process algebras, state machines
[Ura92,BDA96], SDL: TGV
[FJJV96], STG [CJRZ02,RdJ00], TVEDA
[CGPT96], TorX [VTB+00], AutoLink
- Case studies: chip cards, testing
of OS and exception handling [FHP02]:
Prenninger (48) & El-Ramly (14) & Horstmann
To allow different testing tools to be employed in practice, standards
have to be developed for test execution, i.e., the interaction of a
testing tool with the system under test. TTCN-3 [Wil01]
is a language for describing the test setup as well as test cases. A
compiler then produces a program that runs the given test cases.
ISO 9646 as well as IEC 1131 and related standards should be taken into
account: Zhen Ru Dai (24).
- In the domain of run-time verification, one
studies the question of how to check properties of a system while
executing it. In [HR02], for example, monitors
for temporal logic formulas are generated that check whether a
property is satisfied or violated. Obviously, there is a strong
relationship between model checking, run-time verification, and
testing, which will be elaborated in this part: Colin (21) & Mariani (31) & Omitola (26)
- Model checking is one of the key techniques in
verifying hardware as well as software systems. The formal theory of
model checking is well developed in contrast to the domain of
testing. It is, however, likely that many achievements in the domain
of model checking are valuable for formal testing as well. Recently,
several attempts have been made. Adaptive model checking
[GPY02] combines learning [Ang87] of
automata and model checking to study an underlying system: Berg (1)
& Raffelt (43)
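The learning ingredient of adaptive model checking can be hinted at with the observation-table idea underlying Angluin's algorithm [Ang87]: rows of membership-query results distinguish states of the unknown automaton. The target language below (words over {a,b} with an even number of a's) is a toy stand-in for the black-box system, and only the table-building step is shown, not the full learning loop:

```python
# Sketch of the observation-table idea behind Angluin-style learning:
# rows are indexed by access strings, columns by distinguishing suffixes,
# and entries come from membership queries to the black-box system.

def member(word):
    """Membership oracle for a toy target language: words over {a, b}
    with an even number of 'a's."""
    return word.count("a") % 2 == 0

def row(prefix, suffixes):
    """One table row: the membership results of prefix+suffix for each suffix."""
    return tuple(member(prefix + s) for s in suffixes)

def distinct_states(prefixes, suffixes):
    """Number of distinct rows - a lower bound on the number of states
    of the minimal DFA for the target language."""
    return len({row(p, suffixes) for p in prefixes})
```

For the toy language, the suffix "a" already separates the two states of the minimal DFA (even vs. odd number of a's seen so far).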
P. Ammann, P. Black, and W. Majurski.
Using model checking to generate tests from specifications.
In Proc. 2nd IEEE Intl. Conf. on Formal Engineering Methods,
pages 46-54, 1998.
D. Angluin.
Learning regular sets from queries and counterexamples.
Information and Computation, 75:87-106, 1987.
S. Burton, J. Clark, and J. McDermid.
Automatic generation of tests from Statechart specifications.
In Proc. Formal Approaches to Testing of Software.
C. Bourhfir, R. Dssouli, and E. Aboulhamid.
Automatic test generation for EFSM-based systems.
Technical Report IRO 1043, University of Montreal, August 1996.
On the coverage of partial validations.
In Maurice Nivat, Charles Rattray, Teodor Rus, and Giuseppe Scollo,
editors, Algebraic Methodology and Software Technology (AMAST '93),
Proceedings of the Third International Conference on Methodology and Software
Technology, Workshops in Computing, pages 245-252. Springer, 1994.
Ed Brinksma and Jan Tretmans.
Testing transition systems: An annotated bibliography.
Lecture Notes in Computer Science, 2067:187-??, 2001.
M. Clatin, R. Groz, M. Phalippou, and R. Thummel.
Two approaches linking test generation with verification techniques.
In A. Cavalli and S. Budkowski, editors, 8th International
Workshop on Protocol Test Systems. Chapman & Hall, 1996.
D. Clarke, T. Jéron, V. Rusu, and E. Zinovieva.
Automated Test and Oracle Generation for Smart-Card Applications.
In Proc. E-smart, pages 58-70, 2001.
D. Clarke, T. Jéron, V. Rusu, and E. Zinovieva.
STG: A Symbolic Test Generation Tool.
In Proc. TACAS'02, pages 470-475, 2002.
L. Clarke.
A System to Generate Test Data and Symbolically Execute Programs.
IEEE Trans. on Software Engineering, SE-2(3):215-222, 1976.
J. Dushina, M. Benjamin, and D. Geist.
Semi-Formal Test Generation with Genevieve.
In Proc. DAC, 2001.
J. Duran and S. Ntafos.
An Evaluation of Random Testing.
IEEE Trans. on Software Engineering, SE-10(4):438-444, July 1984.
L. du Bousquet and N. Zuanon.
An overview of Lutess, a specification-based tool for testing
synchronous software.
In Proc. 14th IEEE Intl. Conf. on Automated SW Engineering, 1999.
Loe M. G. Feijs, Nicolae Goga, Sjouke Mauw, and Jan Tretmans.
Test selection, trace distance, and heuristics.
In Ina Schieferdecker, Hartmut König, and Adam Wolisz, editors,
Proceedings of the IFIP 14th International Conference on Testing
Communicating Systems - TestCom 2002, volume 210 of IFIP Conference
Proceedings. Kluwer, 2002.
E. Farchi, A. Hartman, and S. Pinter.
Using a model-based test generator to test for standard conformance.
IBM Systems Journal, 41(1):89-110, 2002.
J.-C. Fernandez, C. Jard, T. Jéron, and C. Viho.
Using on-the-fly verification techniques for the generation of test
suites.
In Proc. 8th Intl. Conf. on Computer-Aided Verification, July 1996.
L. Fournier, A. Koyfman, and M. Levinger.
Developing an Architecture Validation Suite--Application to the
PowerPC Architecture.
In Proc. 36th ACM Design Automation Conf., pages 189-194, 1999.
N. Goga.
Comparing TorX, AutoLink, TGV and UIO test algorithms.
Lecture Notes in Computer Science, 2078:379-??, 2001.
Alex Groce, Doron Peled, and Mihalis Yannakakis.
Adaptive model checking.
In Tools and Algorithms for the Construction and Analysis of
Systems (TACAS'02), volume 2280 of Lecture Notes in Computer Science,
pages 357-??, 2002.
W. Gutjahr.
Partition testing versus random testing: the influence of uncertainty.
IEEE Trans. on Software Engineering, 25(5):661-674, 1999.
A. Hartman.
Model Based Test Generation Tools.
AGEDIS project report, 2002.
H. Hong, I. Lee, O. Sokolsky, and H. Ural.
A Temporal Logic Based Theory of Test Coverage and Generation.
In Proc. TACAS'02, 2002.
S. Helke, T. Neustupny, and T. Santen.
Automating Test Case Generation from Z Specifications with Isabelle.
In Proc. 10th Intl. Conf. of Z Users, pages 52-71, 1997.
Springer LNCS 1212.
W. Howden.
Symbolic Testing and the DISSECT Symbolic Evaluation System.
IEEE Trans. on Software Engineering, SE-3(4):266-278, July 1977.
W. Howden.
An Evaluation of the Effectiveness of Symbolic Testing.
Software--Practice and Experience, 8:381-397, 1978.
G. Hahn, J. Philipps, A. Pretschner, and T. Stauner.
Prototype-based Tests for Hybrid Reactive Systems.
In Proc. RSP'03, pages 78-86, 2003.
K. Havelund and G. Rosu.
Synthesizing monitors for safety properties.
In J.-P. Katoen and P. Stevens, editors, Proc. TACAS '02,
Int. Conf. on Tools and Algorithms for the Construction and
Analysis of Systems, volume 2280 of Lecture Notes in Computer Science,
pages 324-356. Springer Verlag, 2002.
D. Hamlet and R. Taylor.
Partition Testing Does Not Inspire Confidence.
IEEE Trans. on Software Engineering, 16(12):1402-1411, 1990.
B. Koch, J. Grabowski, D. Hogrefe, and M. Schmitt.
AutoLink--A Tool for Automatic Test Generation from SDL Specifications.
In Proc. IEEE Intl. Workshop on Industrial Strength Formal
Specification Techniques, October 1998.
J. King.
Symbolic Execution and Program Testing.
Communications of the ACM, 19(7):385-394, July 1976.
B. Legeard and F. Peureux.
Génération de séquences de tests à partir d'une
spécification B en PLC ensembliste.
In Proc. Approches Formelles dans l'Assistance au
Développement de Logiciels, pages 113-130, June 2001.
D. Lee and M. Yannakakis.
Principles and methods of testing finite state machines - a survey.
Proc. IEEE, 84(8):1090-1126, 1996.
B. Marre and A. Arnould.
Test Sequence Generation from Lustre Descriptions: GATEL.
In Proc. 15th IEEE Intl. Conf. on Automated Software Engineering
(ASE'00), Grenoble, 2000.
M. Hennessy and R. De Nicola.
Testing equivalences for processes.
In Proc. ICALP '83, 1983.
B. Nielsen.
Specification and Test of Real-Time Systems.
PhD thesis, Department of Computer Science, Aalborg University, 2000.
S. Ntafos.
On Random and Partition Testing.
In Proc. Intl. Symp. on Software Testing and Analysis, 1998.
J. Philipps, A. Pretschner, O. Slotosch, E. Aiglstorfer, S. Kriebel, and
Model-based test case generation for smart cards.
Proc. Formal Methods for Industrial Critical Systems, 2003.
Compositional generation for MC/DC test suites.
In Proceedings of TACoS'03, pages 1-11, 2003.
Electronic Notes in Theoretical Computer Science 82(6).
J. Peleska and M. Siegel.
Test Automation of Safety-Critical Reactive Systems.
South African Computer Journal, 19:53-77, 1997.
V. Rusu, L. du Bousquet, and T. Jéron.
An Approach to Symbolic Test Generation.
In Proc. Integrated Formal Methods, 2000.
S. Rayadurgan and M. Heimdahl.
Coverage Based Test-Case Generation using Model Checkers.
In Proc. 8th Intl. Conf. and Workshop on the Engineering of
Computer Based Systems, pages 83-93, 2001.
C. Ramamoorthy, S. Ho, and W. Chen.
On the Automated Generation of Program Test Data.
IEEE Trans. on Software Engineering, SE-2(4):293-300, 1976.
P. Raymond, D. Weber, X. Nicollin, and N. Halbwachs.
Automatic testing of reactive systems.
In Proc. 19th IEEE Real-Time Systems Symposium, 1998.
J. Shen and J. Abraham.
An RTL Abstraction Technique for Processor Microarchitecture
Validation and Test Generation.
J. Electronic Testing: Theory&Application, 16(1-2):67-81, 2000.
Testing Cyclic Software Components of Reactive Systems on the
Basis of Formal Specifications.
PhD thesis, TU Berlin, 1998.
M.I.A. Stoelinga and F.W. Vaandrager.
A testing scenario for probabilistic automata.
In Proc. ICALP '2003, International Colloquium on
Automata, Languages, and Programming, 2003.
Jan Springintveld, Frits Vaandrager, and Pedro R. D'Argenio.
Testing timed automata.
Theoretical Computer Science, 254(1-2):225-257, March 2001.
J. Tretmans and L. Verhaard.
A queue model relating synchronous and asynchronous communication.
In Proceedings of the International Conference on
Protocol Specification, Testing and Verification. North-Holland, 1992.
H. Ural.
Formal methods for test sequence generation.
Computer Communications, 15(5):311-325, June 1992.
R. de Vries, J. Tretmans, A. Belinfante, J. Feenstra, L. Feijs, S. Mauw,
N. Goga, L. Heerink, and A. de Heer.
Côte de Resyste in Progress.
In Progress 2000 - Workshop on Embedded Systems, pages
141-148, October 2000.
J. Wegener, K. Buhr, and H. Pohlheim.
Automatic Test Data Generation for Structural Testing of Embedded
Software Systems by Evolutionary Testing.
In Proc. Genetic and Evolutionary Computation Conference, July 2002.
J. Wegener.
Evolutionärer Test des Zeitverhaltens von Realzeit-Systemen.
PhD thesis, Humboldt Universität Berlin, 2001.
A. Wiles.
ETSI testing activities and the use of TTCN-3.
Lecture Notes in Computer Science, 2078:123-??, 2001.
- Note that
the issue is to compare the power of the different validation
techniques, rather than studying the question whether one domain
can be used for the other one. Such questions, like how model
checkers can be used to generate test cases, are studied in the
appropriate testing chapters.