
Dynamic Test Case Prioritization in Industrial Test Result Datasets by Alina Torbunova, Per Erik Strandberg, and Ivan Porres has just been accepted and will be presented at AST 2024 [1].

Pre-print (PDF) at arXiv: [2]

This paper uses The Westermo Test Results Dataset and explores the idea of, in addition to prioritizing test cases before running the suite (as we wrote about in Experience Report Suite Builder), also re-prioritizing tests based on what we learn while the suite is running.


Regression testing is an important approach in software development because it checks that new features do not break already existing ones. Developers should be notified about possible faults quickly, which motivates Test Case Prioritization (TCP), the focus of our research. We assume a static prioritization algorithm that can be of any type, and we add a dynamic prioritization algorithm that rearranges tests during the execution of a test cycle, using the verdicts of the tests executed so far. We propose a conditional probability dynamic algorithm for this. We evaluate our solution on three industrial datasets using the Average Percentage of Fault Detection (APFD). The main findings are that our dynamic prioritization algorithm: a) can be applied with any static algorithm that assigns a priority score to each test case; b) can improve the performance of the static algorithm if there are correlations between test cases; and c) can also reduce the performance of the static algorithm, but only when the static scheduling is already performed at a near optimal level.
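To make the idea concrete, here is a minimal sketch of verdict-driven re-prioritization of the kind described above: a static score orders the suite up front, and whenever a test fails, the remaining tests are re-ranked by boosting their scores with a conditional co-failure probability. All names, scores, and probabilities below are invented for illustration; this is not the paper's actual algorithm or data.

```python
def reprioritize(static_scores, co_fail, failed, remaining):
    """Re-rank remaining tests: add to each test's static score the largest
    conditional probability that it fails given an already-observed failure.
    co_fail maps (failed_test, other_test) -> P(other fails | failed failed)."""
    def score(t):
        boost = max((co_fail.get((f, t), 0.0) for f in failed), default=0.0)
        return static_scores[t] + boost
    return sorted(remaining, key=score, reverse=True)

def run_suite(order, static_scores, co_fail, fails):
    """Execute tests in the given order, re-prioritizing after each failure.
    'fails' is the (toy) ground-truth set of failing tests."""
    executed, failed = [], set()
    remaining = list(order)
    while remaining:
        t = remaining.pop(0)
        executed.append(t)
        if t in fails:
            failed.add(t)
            remaining = reprioritize(static_scores, co_fail, failed, remaining)
    return executed

def apfd(order, fails):
    """Average Percentage of Fault Detection for an execution order,
    treating each failing test as revealing one distinct fault."""
    n, m = len(order), len(fails)
    positions = [order.index(f) + 1 for f in fails]
    return 1 - sum(positions) / (n * m) + 1 / (2 * n)

# Toy scenario: t2 and t3 are correlated failures, but the static
# algorithm ranks t3 last.
static_scores = {'t1': 0.9, 't2': 0.8, 't3': 0.1, 't4': 0.5}
co_fail = {('t2', 't3'): 0.9}   # made-up: t3 very likely fails when t2 fails
fails = {'t2', 't3'}
static_order = sorted(static_scores, key=static_scores.get, reverse=True)
dynamic_order = run_suite(static_order, static_scores, co_fail, fails)
```

In this toy run, the dynamic re-ordering pulls the correlated test t3 forward as soon as t2 fails, which raises the APFD relative to the purely static schedule.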

Keywords: regression testing; Test Case Prioritization (TCP); dynamic prioritization

This page belongs in Kategori Publikationer.