Dynamically Reconfiguring Software Microbenchmarks: Reducing Execution Time without Sacrificing Result Quality
Executing software microbenchmarks, a form of small-scale performance tests predominantly used for libraries and frameworks, is a costly endeavor.
Full benchmark suites can take hours or even days to execute, rendering frequent checks, e.g., as part of continuous integration (CI), infeasible.
However, altering benchmark configurations to reduce execution time without considering the impact on result quality can lead to benchmark results that are not representative of the software's true performance.
We propose the first technique to dynamically stop software microbenchmark executions when their results are sufficiently stable.
Our approach implements three statistical stoppage criteria and is capable of reducing Java Microbenchmark Harness (JMH) suite execution times by 48.4% to 86.0%.
At the same time, it retains the same result quality for 78.8% to 87.6% of the benchmarks, compared with executing the suite for the default duration.
The proposed approach does not require developers to manually craft custom benchmark configurations;
instead, it provides automated mechanisms for dynamic reconfiguration.
This makes dynamic reconfiguration highly effective and efficient, potentially paving the way for the inclusion of JMH microbenchmarks in CI.
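As a minimal sketch of what one statistical stoppage criterion could look like, the class below stops a benchmark once the coefficient of variation (CV) over a sliding window of iteration measurements falls below a threshold. This is an illustrative assumption, not the paper's exact criteria: the class name `StabilityStopper`, the window size, and the threshold are all hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of a dynamic stoppage criterion: declare a
// benchmark "stable" when the coefficient of variation (stddev/mean)
// over the last `windowSize` iteration times drops below a threshold.
public class StabilityStopper {
    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double cvThreshold;

    public StabilityStopper(int windowSize, double cvThreshold) {
        this.windowSize = windowSize;
        this.cvThreshold = cvThreshold;
    }

    /** Record one iteration measurement; return true once results look stable. */
    public boolean recordAndCheck(double measurement) {
        window.addLast(measurement);
        if (window.size() > windowSize) {
            window.removeFirst(); // keep only the most recent measurements
        }
        if (window.size() < windowSize) {
            return false; // not enough data yet
        }
        double mean = window.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0.0);
        double variance = window.stream()
                .mapToDouble(m -> (m - mean) * (m - mean))
                .sum() / (windowSize - 1); // sample variance
        double cv = Math.sqrt(variance) / mean;
        return cv < cvThreshold;
    }
}
```

A harness would call `recordAndCheck` after every measurement iteration and cut the run short as soon as it returns true, instead of always executing the full default iteration count.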
Tue 10 Nov (times in UTC)
|17:30 - 17:32|
|Automatically Identifying Performance Issue Reports with Heuristic Linguistic Patterns|
Yutong Zhao (Stevens Institute of Technology, USA), Lu Xiao (Stevens Institute of Technology, USA), Pouria Babvey (Stevens Institute of Technology, USA), Lei Sun (Stevens Institute of Technology, USA), Sunny Wong (Analytical Graphics, USA), Angel A. Martinez (Analytical Graphics, USA), Xiao Wang (Stevens Institute of Technology, USA)
|17:33 - 17:34|
|Calm Energy Accounting for Multithreaded Java Applications|
Timur Babakol (SUNY Binghamton, USA), Anthony Canino (University of Pennsylvania, USA), Khaled Mahmoud (SUNY Binghamton, USA), Rachit Saxena (SUNY Binghamton, USA), Yu David Liu (SUNY Binghamton, USA)
|17:35 - 17:36|
|Dynamically Reconfiguring Software Microbenchmarks: Reducing Execution Time without Sacrificing Result Quality|
Christoph Laaber (University of Zurich, Switzerland), Stefan Würsten (University of Zurich, Switzerland), Harald Gall (University of Zurich, Switzerland), Philipp Leitner (Chalmers University of Technology, Sweden / University of Gothenburg, Sweden)
|17:37 - 17:38|
|Investigating types and survivability of performance bugs in mobile apps|
|17:39 - 17:40|
|Testing Self-Adaptive Software with Probabilistic Guarantees on Performance Metrics (ACM SIGSOFT Distinguished Paper Award)|
Claudio Mandrioli (Lund University, Sweden), Martina Maggio (Saarland University, Germany / Lund University, Sweden)
|17:41 - 18:00|
|Conversations on Performance / QoS|