Dynamically Reconfiguring Software Microbenchmarks: Reducing Execution Time without Sacrificing Result Quality
Executing software microbenchmarks, a form of small-scale performance test predominantly used for libraries and frameworks, is a costly endeavor.
Full benchmark suites can take multiple hours or days to execute, rendering frequent checks, e.g., as part of continuous integration (CI), infeasible.
However, altering benchmark configurations to reduce execution time without considering the impact on result quality can lead to benchmark results that are not representative of the software's true performance.
We propose the first technique to dynamically stop software microbenchmark executions when their results are sufficiently stable.
Our approach implements three statistical stoppage criteria and is capable of reducing Java Microbenchmark Harness (JMH) suite execution times by 48.4% to 86.0%.
At the same time, it retains the same result quality for 78.8% to 87.6% of the benchmarks, compared to executing the suite for the default duration.
The proposed approach does not require developers to manually craft custom benchmark configurations;
instead, it provides automated mechanisms for dynamic reconfiguration.
This makes dynamic reconfiguration highly effective and efficient, potentially paving the way for including JMH microbenchmarks in CI.
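To make the idea concrete, the following minimal Java sketch shows one way such a dynamic stoppage check could look. It is an illustrative assumption, not the authors' implementation or a JMH API: the class and method names are hypothetical, and a sliding-window coefficient of variation (CV) stands in for the paper's three statistical criteria. After each measurement iteration, the check recomputes the CV over the most recent iteration means and signals that the benchmark can stop once variability falls below a threshold.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Illustrative sketch of a dynamic stoppage criterion (hypothetical,
 * not the paper's implementation): stop a benchmark once the
 * coefficient of variation over a window of recent iteration means
 * drops below a threshold.
 */
public class StableStoppageCriterion {

    private final int windowSize;       // number of recent iterations considered
    private final double cvThreshold;   // e.g., 0.01 == 1% relative variability
    private final Deque<Double> window = new ArrayDeque<>();

    public StableStoppageCriterion(int windowSize, double cvThreshold) {
        this.windowSize = windowSize;
        this.cvThreshold = cvThreshold;
    }

    /** Feed the mean of the latest measurement iteration; returns true if execution can stop. */
    public boolean addIterationAndCheck(double iterationMean) {
        window.addLast(iterationMean);
        if (window.size() > windowSize) {
            window.removeFirst();
        }
        if (window.size() < windowSize) {
            return false; // not enough data yet to judge stability
        }
        return coefficientOfVariation() < cvThreshold;
    }

    private double coefficientOfVariation() {
        double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double variance = window.stream()
                .mapToDouble(v -> (v - mean) * (v - mean))
                .average().orElse(0.0);
        return mean == 0.0 ? Double.POSITIVE_INFINITY : Math.sqrt(variance) / mean;
    }

    // Example: stop once the last 5 iteration means vary by less than 1%.
    public static void main(String[] args) {
        StableStoppageCriterion criterion = new StableStoppageCriterion(5, 0.01);
        double[] iterationMeans = {105.0, 101.5, 100.8, 100.6, 100.5, 100.4, 100.4};
        for (int i = 0; i < iterationMeans.length; i++) {
            if (criterion.addIterationAndCheck(iterationMeans[i])) {
                System.out.println("Stable after iteration " + (i + 1));
                break;
            }
        }
    }
}
```

In this toy run, the criterion keeps the warm, noisy early iterations from triggering a premature stop and only fires once the window of recent means has settled; the paper's approach applies the same intuition with more robust statistics across whole JMH suites.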
Tue 10 Nov, 17:30 - 18:00 (times in UTC)
17:30 | Talk (2m) | Automatically Identifying Performance Issue Reports with Heuristic Linguistic Patterns | Research Papers | Yutong Zhao, Lu Xiao, Pouria Babvey, Lei Sun (Stevens Institute of Technology, USA), Sunny Wong, Angel A. Martinez (Analytical Graphics, USA), Xiao Wang (Stevens Institute of Technology, USA) | DOI
17:33 | Talk (1m) | Calm Energy Accounting for Multithreaded Java Applications | Research Papers | Timur Babakol (SUNY Binghamton, USA), Anthony Canino (University of Pennsylvania, USA), Khaled Mahmoud, Rachit Saxena, Yu David Liu (SUNY Binghamton, USA) | DOI
17:35 | Talk (1m) | Dynamically Reconfiguring Software Microbenchmarks: Reducing Execution Time without Sacrificing Result Quality | Research Papers | Christoph Laaber, Stefan Würsten, Harald Gall (University of Zurich, Switzerland), Philipp Leitner (Chalmers University of Technology, Sweden / University of Gothenburg, Sweden) | DOI, Pre-print, Media Attached
17:37 | Talk (1m) | Investigating types and survivability of performance bugs in mobile apps | Journal First | Alejandro Mazuera-Rozo (Università della Svizzera italiana & Universidad de los Andes), Catia Trubiani (Gran Sasso Science Institute), Mario Linares-Vásquez (Universidad de los Andes), Gabriele Bavota (USI Lugano, Switzerland)
17:39 | Talk (1m) | Testing Self-Adaptive Software with Probabilistic Guarantees on Performance Metrics (ACM SIGSOFT Distinguished Paper Award) | Research Papers | Claudio Mandrioli (Lund University, Sweden), Martina Maggio (Saarland University, Germany / Lund University, Sweden) | DOI, Pre-print
17:41 | Talk (19m) | Conversations on Performance / QoS | Paper Presentations | Alejandro Mazuera-Rozo (Università della Svizzera italiana & Universidad de los Andes), Christoph Laaber (University of Zurich, Switzerland), Claudio Mandrioli (Lund University, Sweden), Timur Babakol (SUNY Binghamton, USA); Moderator: Mei Nagappan (University of Waterloo)