Developing and Evaluating Objective Termination Criteria for Random Testing
Random testing is a software testing technique through which programs are tested by generating and executing random inputs. Because of its unstructured nature, it is difficult to determine when to stop a random testing process. Faults may be missed if the process is stopped prematurely, and resources may be wasted if the process is run too long. In this article, we propose two promising termination criteria, “All Equivalent” (AEQ) and “All Included in One” (AIO), applicable to random testing. These criteria stop random testing once the process has reached a code-coverage-based saturation point after which additional testing effort is unlikely to provide additional effectiveness. We model and implement them in the context of a general random testing process composed of independent random testing sessions. Thirty-six experiments involving GUI testing and unit testing of Java applications have demonstrated that the AEQ criterion is generally able to stop the process when a code coverage equal to or very near the saturation level is reached, while AIO is able to stop the process earlier in the cases in which it reaches the saturation level of coverage. In addition, the performance of the two criteria has been compared against other termination criteria adopted in the literature.
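The abstract does not spell out the decision rules behind AEQ and AIO, but their names suggest comparisons among the coverage achieved by independent testing sessions. The following is a minimal Java sketch of how such coverage-saturation checks could look, assuming that each session's coverage is recorded as a set of covered code elements and that a fixed window of recent sessions is compared; the class name, the window parameter, and the exact decision rules are illustrative assumptions, not the implementation described in the paper.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

/**
 * Minimal sketch of coverage-saturation termination checks inspired by the
 * AEQ ("All Equivalent") and AIO ("All Included in One") criteria named in
 * the abstract. The window size and the set-based decision rules below are
 * assumptions made for illustration.
 */
public class SaturationCriteria {

    /** AEQ (assumed form): stop when the last `window` sessions covered exactly the same elements. */
    static boolean allEquivalent(List<Set<String>> sessionCoverage, int window) {
        if (sessionCoverage.size() < window) return false;
        List<Set<String>> last =
                sessionCoverage.subList(sessionCoverage.size() - window, sessionCoverage.size());
        Set<String> first = last.get(0);
        return last.stream().allMatch(c -> c.equals(first));
    }

    /** AIO (assumed form): stop when one of the last `window` sessions covers everything covered by the others. */
    static boolean allIncludedInOne(List<Set<String>> sessionCoverage, int window) {
        if (sessionCoverage.size() < window) return false;
        List<Set<String>> last =
                sessionCoverage.subList(sessionCoverage.size() - window, sessionCoverage.size());
        for (Set<String> candidate : last) {
            if (last.stream().allMatch(candidate::containsAll)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        // Hypothetical coverage sets collected from three independent random testing sessions.
        List<Set<String>> coverage = List.of(
                new HashSet<>(List.of("A", "B")),
                new HashSet<>(List.of("A", "B", "C")),
                new HashSet<>(List.of("A", "C")));
        System.out.println("AEQ says stop: " + allEquivalent(coverage, 3));    // false: the sets differ
        System.out.println("AIO says stop: " + allIncludedInOne(coverage, 3)); // true: the second set includes the others
    }
}
```

Under these assumed rules, AIO can trigger earlier than AEQ because it only requires one session to subsume the others, whereas AEQ requires all recent sessions to converge on the same coverage set, which is consistent with the behaviour reported in the abstract.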
Fri 13 Nov (displayed time zone: UTC, Coordinated Universal Time)
08:00 - 08:30
Time | Duration | Type | Title | Track | Authors
08:00 | 2m | Talk | Baital: An Adaptive Weighted Sampling Approach for Improved t-wise Coverage | Research Papers | Eduard Baranov (Université Catholique de Louvain, Belgium), Axel Legay (Université Catholique de Louvain, Belgium), Kuldeep S. Meel (National University of Singapore, Singapore)
08:03 | 1m | Research paper | Cost Measures Matter for Mutation Testing Study Validity | Research Papers | Giovani Guizzo (University College London, UK), Federica Sarro (University College London, UK), Mark Harman (University College London, UK)
08:05 | 1m | Talk | Developing and Evaluating Objective Termination Criteria for Random Testing | Journal First | Porfirio Tramontana (Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Italy), Domenico Amalfitano (University of Naples Federico II), Nicola Amatucci (Department of Civil, Architectural and Environmental Engineering, University of Naples Federico II, Italy), Atif Memon (Apple Inc.), Anna Rita Fasolino (Federico II University of Naples)
08:07 | 1m | Talk | Efficient Binary-Level Coverage Analysis | Research Papers | M. Ammar Ben Khadra (TU Kaiserslautern, Germany), Dominik Stoffel (TU Kaiserslautern, Germany), Wolfgang Kunz (TU Kaiserslautern, Germany)
08:09 | 1m | Talk | Efficiently Finding Higher-Order Mutants | Research Papers | Chu-Pan Wong (Carnegie Mellon University, USA), Jens Meinicke (Carnegie Mellon University, USA), Leo Chen (Carnegie Mellon University, USA), João Paulo Diniz (Federal University of Minas Gerais, Brazil), Christian Kästner (Carnegie Mellon University, USA), Eduardo Figueiredo (Federal University of Minas Gerais, Brazil)
08:11 | 1m | Talk | Selecting Fault Revealing Mutants | Journal First | Thierry Titcheu Chekam (University of Luxembourg (SnT)), Mike Papadakis (University of Luxembourg, Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg, Luxembourg), Yves Le Traon (University of Luxembourg, Luxembourg), Koushik Sen (University of California at Berkeley)
08:13 | 17m | Talk | Conversations on Testing 3 | Paper Presentations | Chu-Pan Wong (Carnegie Mellon University, USA), Eduard Baranov (Université Catholique de Louvain, Belgium), Giovani Guizzo (University College London, UK), M. Ammar Ben Khadra (TU Kaiserslautern, Germany), Porfirio Tramontana (Department of Electrical Engineering and Information Technologies, University of Naples Federico II, Italy), Thierry Titcheu Chekam (University of Luxembourg (SnT)); Moderator: Marcel Böhme (Monash University, Australia)