

SLIDE 1

Automated Test Case Generation

or: How to not write test cases

Stefan Klikovits

EN-ICE-SCD, Université de Genève

28th September, 2015

SLIDE 2

Reminder: Testing is not easy

Credit: HBO

SLIDE 3

Overview: Automated Testing

Automated ...

◮ test execution
  ◮ setup
  ◮ program execution
  ◮ capture results
◮ result checking
◮ reporting

SLIDE 4

Overview: Automated Testing

Automated ...

◮ test input generation
◮ test selection
◮ test execution
◮ results generation (oracle problem)
◮ result checking
◮ reporting

SLIDE 5

Random Testing

SLIDE 6

Random testing

◮ input domains form regions [8]
◮ each input represents the region around it
◮ maximum coverage through maximum diversity [2]
◮ but: random input is... well, random!
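The idea can be sketched in a few lines of Python; the `sut` function and its single-point defect are hypothetical, invented purely to illustrate the hit-or-miss nature of random inputs:

```python
import random

def random_test_inputs(n, low=0, high=100, seed=42):
    """Generate n random integer inputs from the domain [low, high].

    Each input stands in for the region of the domain around it;
    diversity of inputs approximates coverage of the input space.
    """
    rng = random.Random(seed)  # fixed seed for reproducible test runs
    return [rng.randint(low, high) for _ in range(n)]

# Hypothetical system under test: passes everywhere except one
# small "defect region" of the domain.
def sut(x):
    return x != 57  # bug hidden at a single point

inputs = random_test_inputs(20)
failures = [x for x in inputs if not sut(x)]
# Whether `failures` is non-empty depends entirely on luck:
# random testing may or may not hit the defect region.
```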

SLIDE 7

Adaptive Random Testing

NON-Random random testing (?!)

Credit: https://sbloom2.wordpress.com/category/evaluations/

SLIDE 8

Adaptive Random Testing

NON-Random random testing (?!)

◮ evaluate previous TCs before generating a new one
◮ choose one that is as different as possible
◮ various strategies [1]
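A minimal sketch of one common ART variant, fixed-size candidate set: generate a handful of random candidates and keep the one farthest from everything already executed. The domain bounds and candidate-set size `k` are illustrative assumptions:

```python
import random

def art_inputs(n, k=10, low=0.0, high=100.0, seed=1):
    """Fixed-size-candidate-set ART over a 1-D numeric domain.

    From k random candidates, pick the one whose distance to its
    nearest previously executed input is largest.
    """
    rng = random.Random(seed)
    executed = [rng.uniform(low, high)]  # the first input is purely random
    while len(executed) < n:
        candidates = [rng.uniform(low, high) for _ in range(k)]
        # a candidate's "novelty" = distance to the closest executed input
        best = max(candidates,
                   key=lambda c: min(abs(c - e) for e in executed))
        executed.append(best)
    return executed
```

Compared to plain random testing, the extra distance computations buy a more even spread of inputs over the domain, at a clear cost in time and memory.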

SLIDE 9

ART strategies

[Figure: four diagrams over dom(x) × dom(y) illustrating different ART candidate-selection strategies]

SLIDE 10

Criticism

◮ non-determinism
◮ input data problems (e.g. ordering in discrete domains)
◮ computationally expensive (time, memory)
◮ unrealistic evaluation scenarios [2]
  ◮ too high defect rates
  ◮ no actual SUT

SLIDE 11

Combinatorial Testing

SLIDE 12

Combinatorial Testing

◮ Idea: test all possible input combinations
◮ large number of TCs
◮ (slight) improvement: equivalence classes!
  ◮ (5 < uint a < 10) ⇒ {[0..5], [6..9], [10..maxInt]}
◮ still large TC sets
  ◮ 5 parameters, 3 ECs each ⇒ 3^5 = 243 TCs
  ◮ plus boundary values, exceptions, etc.
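The combinatorial blow-up is easy to check mechanically: pick one representative value per equivalence class and take the cross product. The representative values below are arbitrary illustrations:

```python
from itertools import product

# One representative value per equivalence class, following the uint
# example: one value from [0..5], one from [6..9], one from [10..maxInt].
classes = [3, 8, 20]       # illustrative picks
params = [classes] * 5     # 5 parameters, 3 equivalence classes each

test_cases = list(product(*params))
assert len(test_cases) == 3 ** 5  # 243 test cases, as on the slide
```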

SLIDE 13

Orthogonal/Covering Arrays

Orthogonal arrays (OA)

◮ test each pair/triple/... of parameter values
◮ restriction: every τ-tuple has to be tested equally often

Covering arrays (CA)

◮ every τ-tuple has to appear at least once
◮ logarithmic growth [3]

SLIDE 14

Working principle

Scenario:

◮ 3 parameters (OS, Browser, Printer)
◮ 2 values each ({W, L}, {FF, CH}, {A, B})

OS   Browser   Printer
W    FF        A
W    FF        B
W    CH        A
W    CH        B
L    FF        A
L    FF        B
L    CH        A
L    CH        B

Table: All 2^3 = 8 combinations; pairwise testing selects a subset that still covers every pair of values
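A small greedy sketch shows how few rows a strength-2 covering array actually needs for this scenario. Greedy selection is just one simple strategy; real covering-array generators use more sophisticated algorithms:

```python
from itertools import combinations, product

def pairs(row):
    """All (parameter-index, value) pairs covered by one test row."""
    return {((i, row[i]), (j, row[j]))
            for i, j in combinations(range(len(row)), 2)}

def greedy_pairwise(domains):
    """Greedily pick rows until every pair of parameter values appears
    at least once (a covering array of strength 2)."""
    all_rows = list(product(*domains))
    uncovered = set().union(*(pairs(r) for r in all_rows))
    suite = []
    while uncovered:
        # pick the row covering the most still-uncovered pairs
        best = max(all_rows, key=lambda r: len(pairs(r) & uncovered))
        suite.append(best)
        uncovered -= pairs(best)
    return suite

domains = [["W", "L"], ["FF", "CH"], ["A", "B"]]  # OS, Browser, Printer
suite = greedy_pairwise(domains)
# 4 rows suffice to cover all 12 value pairs, versus 8 exhaustive rows.
```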


SLIDE 16

Criticism

◮ computationally expensive
◮ finding minimal covering arrays is NP-hard [3]
◮ test case prioritisation needed

Industry measurements:

◮ ~70 % of faults found with pairwise coverage; ~90 % with three-way [6]
◮ 97 % of medical-device faults triggered by pairwise interactions [5]

SLIDE 17

Examples

Scenario 2:

◮ 4 parameters, 3 values each
◮ exhaustive tests: 3^4 = 81
◮ TCs to cover all pairs: 9

SLIDE 18

Examples

Scenario 3:

◮ 10 parameters, 4 values each
◮ exhaustive tests: 4^10 = 1,048,576
◮ TCs to cover all pairs: 29

SLIDE 19

Symbolic Execution

SLIDE 20

Symbolic execution

◮ build the execution tree
◮ use symbols as input
◮ collect path constraints (PCs) along each path
◮ use constraint solvers to generate concrete test inputs

SLIDE 21

Example Execution tree and Path constraints [7]

int x, y;
1: if (x > y) {
2:     if (y - x > 0)
3:         assert(false);
   }

Path constraints (inputs x = A, y = B):

◮ start: PC: true
◮ branch at 1 taken: PC: A > B
  ◮ branch at 2 taken: PC: A > B ∧ B − A > 0 (infeasible: the assert is unreachable)
  ◮ branch at 2 not taken: PC: A > B ∧ B − A ≤ 0
◮ branch at 1 not taken: PC: A ≤ B
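The path constraints can be checked with a toy "solver"; the brute-force search below stands in for a real constraint solver (e.g. an SMT solver) purely for illustration:

```python
# The three path constraints from the execution tree, written as
# Python predicates over the symbolic inputs A (for x) and B (for y).
paths = {
    "assert reached": lambda A, B: A > B and B - A > 0,
    "inner else":     lambda A, B: A > B and B - A <= 0,
    "outer else":     lambda A, B: A <= B,
}

def solve(pc, domain=range(-10, 11)):
    """Tiny stand-in for a constraint solver: brute-force search for
    a model of the path constraint over a small integer domain."""
    return next(((a, b) for a in domain for b in domain if pc(a, b)), None)

models = {name: solve(pc) for name, pc in paths.items()}
# No model exists for "assert reached": A > B contradicts B - A > 0,
# so symbolic execution proves the assert(false) unreachable.
```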

SLIDE 22

Difficult constraints? Concolic Execution!

Idea:

◮ use symbolic values as long as possible
◮ switch to real (concrete) values when necessary

Example:

Figure: Example of Concolic Execution [4]

SLIDE 23

Real-life application

Whitebox fuzzing [4]

1. start with well-formed inputs
2. record all the individual constraints along the execution path
3. negate the constraints one by one, solve them with a constraint solver, and execute the new paths

Properties:

◮ highly scalable
◮ focus on security vulnerabilities (e.g. buffer overflows)
◮ no need for a test oracle (check for system failures & vulnerabilities)

Reported to have found one third of all bugs discovered during Windows 7 development!
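The three steps can be sketched as a toy generational-search loop. The instrumented `sut`, the brute-force `solve`, and the small integer domain are all hypothetical simplifications of what a real whitebox fuzzer such as the one in [4] does:

```python
def sut(x, trace):
    """Toy SUT, instrumented to record each branch predicate it
    evaluates together with the outcome on this run."""
    c1 = lambda v: v > 10
    trace.append((c1, c1(x)))
    if c1(x):
        c2 = lambda v: v < 20
        trace.append((c2, c2(x)))
        if c2(x):
            return "vulnerable"  # the defect the search should reach
    return "ok"

def solve(constraints, domain=range(-50, 51)):
    """Brute-force stand-in for a constraint solver: find an input
    reproducing each (predicate, wanted-outcome) in the prefix."""
    for cand in domain:
        if all(pred(cand) == want for pred, want in constraints):
            return cand
    return None

def whitebox_fuzz(seed_input, max_runs=10):
    """Generational search: run an input, then negate each recorded
    branch condition in turn and solve for an input taking a new path."""
    queue, seen, results = [seed_input], set(), {}
    while queue and len(seen) < max_runs:
        x = queue.pop(0)
        if x in seen:
            continue
        seen.add(x)
        trace = []
        results[x] = sut(x, trace)
        for i in range(len(trace)):
            # keep the path prefix, flip the outcome of constraint i
            prefix = trace[:i] + [(trace[i][0], not trace[i][1])]
            new = solve(prefix)
            if new is not None and new not in seen:
                queue.append(new)
    return results
```

Starting from the well-formed input 0, the loop systematically discovers inputs for the other paths, including the one that reaches the defect.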

SLIDE 24

Model-based TC generation

Credit: http://formalmethods.wikia.com/wiki/Centre_for_Applied_Formal_Methods

◮ automatic/manual model generation
◮ three approaches: Axiomatic | FSM | LTS
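The FSM approach can be sketched concretely: derive input sequences from a transition table until every transition is exercised at least once. The model below and the simple transition-coverage criterion are illustrative assumptions, not a specific tool's algorithm:

```python
from collections import deque

# A hypothetical FSM model of the SUT: state -> {input: next state}.
fsm = {
    "idle":    {"start": "running"},
    "running": {"pause": "paused", "stop": "idle"},
    "paused":  {"resume": "running", "stop": "idle"},
}

def shortest_path(fsm, start, goal):
    """BFS for the shortest input sequence driving the FSM from
    start to goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, seq = queue.popleft()
        if state == goal:
            return seq
        for inp, nxt in fsm[state].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [inp]))
    return None

def transition_tour(fsm, start="idle"):
    """Generate input sequences that together exercise every FSM
    transition at least once (transition coverage)."""
    covered, tests = set(), []
    for src, edges in fsm.items():
        for inp in edges:
            if (src, inp) in covered:
                continue
            seq = shortest_path(fsm, start, src) + [inp]
            tests.append(seq)
            state = start  # replay to mark every exercised transition
            for step in seq:
                covered.add((state, step))
                state = fsm[state][step]
    return tests
```

Each generated sequence is a test case: feed the inputs to the real SUT and compare its observed states against the model's predictions.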

SLIDE 25

Model-based test case generation

TC selection:

◮ offline/online test selection

Modeling notations (textual & graphical):

◮ Scenario-, State-, Process-oriented

SLIDE 26

Criticism

◮ state space explosion
◮ complex model generation
◮ defining a “good” model is non-trivial
◮ requires knowledge of modeling

SLIDE 27

Summary

◮ (Adaptive) Random Testing (BB):
  cheap generation; non-deterministic; (hit & miss)

◮ Combinatorial Testing (BB):
  expensive; many TCs

◮ Symbolic/Concolic Execution (WB):
  problematic constraints; path explosion

◮ Model-based (WB):
  not “just” coding; need a “good” model (complex); state space explosion


SLIDE 31

References

[1] Saswat Anand, Edmund K. Burke, Tsong Yueh Chen, John Clark, Myra B. Cohen, Wolfgang Grieskamp, Mark Harman, Mary Jean Harrold, and Phil McMinn. An orchestrated survey of methodologies for automated software test case generation. J. Syst. Softw., 86(8):1978–2001, August 2013.

[2] Andrea Arcuri and Lionel C. Briand. Adaptive random testing: an illusion of effectiveness? In ISSTA, pages 265–275, 2011.

[3] Charles J. Colbourn. Combinatorial aspects of covering arrays. Le Matematiche (Catania), 58, 2004.

SLIDE 32

References (cont.)

[4] Patrice Godefroid. Test generation using symbolic execution. In Deepak D'Souza, Telikepalli Kavitha, and Jaikumar Radhakrishnan, editors, FSTTCS, volume 18 of LIPIcs, pages 24–33. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, 2012.

[5] D. R. Kuhn, D. R. Wallace, and A. M. Gallo, Jr. Software fault interactions and implications for software testing. IEEE Trans. Softw. Eng., 30(6):418–421, June 2004.

[6] D. Richard Kuhn and Michael J. Reilly. An investigation of the applicability of design of experiments to software testing. In Proceedings of the 27th Annual NASA Goddard Software Engineering Workshop (SEW-27'02), SEW '02, pages 91–, Washington, DC, USA, 2002. IEEE Computer Society.

[7] Corina S. Pasareanu and Willem Visser. A survey of new trends in symbolic execution for software testing and analysis. STTT, 11(4):339–353, 2009.

[8] L. J. White and E. I. Cohen. A domain strategy for computer program testing. IEEE Transactions on Software Engineering, 6(3):247–257, 1980.