4th Workshop on
Validation, Analysis and Evolution of Software Tests

March 9, 2021 | co-located with SANER 2021, Virtual

Call for Papers

Aims, scope and topics of interest.

Software projects accumulate large sets of test cases, encoding valuable expert knowledge about the software under test that represents many person-years of effort. Over time, the reliability of the tests decreases, and they become difficult to understand and maintain. Extra effort is required to repair broken tests and to adapt test suites and models to evolving software systems.

The International Workshop on Validation, Analysis and Evolution of Software Tests (VST) is a unique event bringing together academics, industrial researchers, and practitioners to exchange experiences, solutions, and new ideas in applying methods, techniques, and tools from software analysis, evolution, and re-engineering to advance the state of the art in test development and maintenance.

The workshop invites high-quality submissions on topics related, but not limited, to:

 ●  Test minimization and simplification

 ●  Fault localization and automated repair

 ●  Change analysis for software tests

 ●  Test visualization

 ●  Test validation

 ●  Documentation analysis

 ●  Bug report analysis

 ●  Test evolution

 ●  Test case generation

 ●  Model-based testing

 ●  Combinations of the topics above



Important Dates

All deadlines are Anywhere on Earth (AoE).

Paper submission deadline (extended): December 21, 2020 AoE

Notifications: January 5, 2021

Camera ready (extended): January 15, 2021

Submission

Instructions and submission site.

We encourage submissions on the topics mentioned above, with a limit of eight pages in IEEE format. In addition, we welcome position papers and tool demo papers of two to four pages.

Papers will be reviewed by at least three program committee members in a full double-blind review process. Paper selection is based on scientific originality, novelty, and the potential to generate interesting discussions. Accepted papers will be published in the IEEE Digital Library along with the SANER proceedings.

Submission Instructions

  • Papers must not exceed the page limit of 8 pages (including all text, references, appendices, and figures); position papers and tool demos are limited to 2-4 pages

  • Papers must conform to the IEEE formatting guidelines for conference proceedings

  • Submissions should be prepared for a full double-blind review process (author names and affiliations should be omitted, and references to the authors' own work should be in the third person)

  • Papers must be original work that has neither been published elsewhere nor is under review for another publication

  • Papers must be submitted in PDF format via EasyChair at https://easychair.org/conferences/?conf=vst2021


Program

Location and schedule.

ZOOM - 16:00-20:00 UTC

Vancouver - 08:00-12:00 PST (UTC-8)
Toronto - 11:00-15:00 EST (UTC-5)
London - 16:00-20:00 GMT (UTC+0)
Vienna - 17:00-21:00 CET (UTC+1)
Helsinki - 18:00-22:00 EET (UTC+2)

16:00-16:10 UTC - Welcome    (8:00 PST, 11:00 EST, 16:00 GMT, 17:00 CET, 18:00 EET)
 
16:10-17:10 UTC
Keynote: Fine-Grained Test Analysis
Ali Mesbah



Abstract - Test cases are meant to form the first line of defence against the introduction of faults in a software product, especially when re-testing modified code. Therefore, assessing the quality of a test suite is crucial to ensure that the system under test is correctly implemented, which can help to decrease the overall development and maintenance costs. Test quality can be considered from different perspectives, such as the adequacy, correctness, and runtime performance of the test suite. Currently, most test analysis techniques consider test methods as the smallest unit of interest. However, in order to effectively analyze tests, we need to look inside tests. In this talk, we will explore "fine-grained test analysis", which dissects test methods and examines them at the test statement level. This also enables us to focus on test assertions, which are the heart of test methods since they perform the actual checking. We will discuss how fine-grained analysis can facilitate various test analysis tasks such as test effectiveness assessment, test minimization, and test repair.

Bio - Ali Mesbah is an Associate Professor at the University of British Columbia (UBC) in Vancouver, Canada. During 2017-2018, he was a Visiting Research Scientist at Google. His main area of research is software engineering, with emphasis on software analysis and testing, software evolution, and fault localization and repair. He is the recipient of numerous awards including the Killam Faculty Research Prize at UBC, a Discovery Accelerator award from NSERC, two ACM Distinguished Paper Awards at ICSE, an IEEE Distinguished Paper Award at ICST, a Best Paper Award at ESEM, and a Best Paper Award at ICWE. He is an Associate Editor of the IEEE Transactions on Software Engineering and regularly serves on program committees of top software engineering conferences such as ICSE, FSE, and ISSTA.
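To make the statement-level granularity concrete, here is a minimal sketch (our illustration, not the speaker's tooling; the test body and the Account class are hypothetical) that uses Python's ast module to dissect a test method into individual statements and flag the assertions among them:

    import ast
    import textwrap

    TEST_SRC = textwrap.dedent('''
        def test_withdraw():
            account = Account(balance=100)   # Account is a hypothetical class under test
            account.withdraw(30)
            assert account.balance == 70
    ''')

    # Parse the test method and walk its statements one by one (Python 3.9+ for ast.unparse).
    test_def = ast.parse(TEST_SRC).body[0]
    for i, stmt in enumerate(test_def.body, 1):
        kind = "assertion" if isinstance(stmt, ast.Assert) else "setup/action"
        print(f"stmt {i} [{kind}]: {ast.unparse(stmt)}")

Working at this granularity lets an analysis reason about individual setup actions and assertions, for example which assertions a minimization pass may safely drop or where a repair should patch a broken check.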

17:10-17:20 UTC - Chat & Break
 
17:20-17:40 UTC
Regression Test Selection Tool for Python in Continuous Integration Process
Eero Kauhanen, Jukka Nurminen, Matvey Pashkovskiy and Tommi Mikkonen


Abstract - In this paper, we present a coverage-based regression test selection (RTS) approach and a tool developed for Python. The tool can be used either on a developer's machine or on build servers. A special characteristic of the tool is its attention to easy integration into continuous integration and deployment. To evaluate the performance of the proposed approach, several small industry projects are used to evaluate time savings in the DevOps/MLOps process. Furthermore, to study precision, mutation testing is applied to three open-source projects, and the results of executing full test suites are compared to executing the set of tests selected by the tool. It is shown that the missed fault rate of the test selection varies between 0-2 % at file-level granularity and 16-24 % at line-level granularity. For internal projects, in the best cases, testing time decreased by a factor of four. The high missed fault rate at the line-level granularity is related to the basic mutation approach selected, and the result could be improved with advanced mutation techniques. Depending on the target optimization metric (time or precision) in the DevOps/MLOps process, the error rate may be acceptable or can be further improved by using file-level granularity based test selection.

Eero Kauhanen and Jukka Nurminen (University of Helsinki, Finland), Matvey Pashkovskiy (F-Secure, Finland), and Tommi Mikkonen (University of Helsinki, Finland)
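As a rough illustration of the coverage-based selection idea (a minimal sketch under assumed inputs, not the authors' tool), the following Python snippet selects exactly those tests whose recorded per-line coverage intersects the lines changed by a commit:

    # test -> set of (file, line) pairs it covers, e.g. as recorded by coverage.py
    coverage_map = {
        "test_parse":  {("parser.py", 10), ("parser.py", 11)},
        "test_render": {("render.py", 40), ("render.py", 41)},
    }
    # lines touched by the latest change, e.g. extracted from `git diff`
    changed_lines = {("parser.py", 11)}

    selected = [test for test, covered in coverage_map.items()
                if covered & changed_lines]   # non-empty intersection -> re-run
    print(selected)                           # -> ['test_parse']

File-level granularity works the same way with file names as keys; it selects more tests but, as the reported missed fault rates suggest, misses fewer faults.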

 
17:40-18:00 UTC
An Analysis of Testing Scenarios for Automated Driving Systems
Siyuan Liu and Luiz Fernando Capretz



Abstract - An Automated Driving System refers to a vehicle system whose hardware and software are collectively capable of on-road operational and tactical functions, involving the detection, recognition, and classification of objects and the response to events. Many automotive companies are incorporating automated driving into their current R&D and are transforming their business models. To both conventional and disruptive manufacturers, safety is always one of the top priorities. Appropriate verification and validation procedures are needed and should be followed to mitigate unreliability and hazards. Sufficient testing scenarios should be considered and planned to simulate and cover functional and non-functional requirements. The disengagement ratio serves as an indicator during performance evaluations, because analyzing the root causes of both technical and non-technical disengagements is pivotal, especially during testing strategy planning. Autonomous Vehicle Disengagement Reports and Autonomous Vehicle Collision Reports from the Department of Motor Vehicles (DMV), California, USA are used collectively for the purpose of this research. The analytical results show no clear relationship between mileage and disengagements. Influencing factors are generated and consolidated from the mentioned reports and are proposed in addition to a Society of Automotive Engineers International (SAE International) standard. Stakeholders will benefit from the presented rationales and can consider the suggested parameters throughout their development and testing activities. This paper further recommends testing management practices for automated driving systems, especially test driver management and test route planning. The recommendations are in accordance with the analytical results and with feedback from the KPMG Global Automotive Executive Surveys.


Siyuan Liu and Luiz Fernando Capretz (Western University, Canada)

 
18:00-18:20 UTC
AI-Based Enhancement of Test Models in an Industrial Model-Based Testing Tool
Stefan Mohacsi and Michael Felderer


Abstract - This paper presents an envisioned approach to AI-based enhancements of test models in the industrial model-based testing tool TEMPPO Designer. Based on an overview of the tool, we present the required data collector, the AI-based data analyzer and ways to integrate the results into TEMPPO Designer.

Stefan Mohacsi (AtoS, Austria) and Michael Felderer (University of Innsbruck, Austria)

18:20-18:30 UTC - Chat & Break
 
18:30-18:50 UTC
A Case Study on the Evolution of Configuration Options of a Highly-Configurable Software System
Stefan Fischer


Abstract - Evolving configurable software poses unique challenges due to dependencies between the configuration knowledge and the assets that implement the system. Despite a lot of research in the area of configurable software evolution, many open challenges persist, which hampers industrial adoption of the different techniques. One such challenge is that the configuration space is often not clearly defined, and reasoning over its evolution and its implications is difficult. In this paper, we study the configuration space of the web-based bug tracker Bugzilla, a highly configurable system, replicating the analysis of previous research. In addition, we analyze the evolution of the configuration space over several versions of the system to investigate the changes made across revisions spanning several years of development, their implications, and the open challenges they raise.

Stefan Fischer (Software Competence Center Hagenberg, Austria)

 
18:50-19:10 UTC
Symflower: Generating Unit Tests with Automated Source Code Analysis
Evelyn Haslinger and Markus Zimmermann



Abstract - Automated test case generation has been one of the most active research topics in software testing for several decades, and many different approaches and tools have been proposed. In this talk, we present Symflower, a tool under development for software test case generation. Symflower employs advanced language-independent symbolic execution to generate test data and analyse programs for potential problems. It can be used locally or as a cloud solution: all that is required is the location of the source code repository, and Symflower can fully automatically test the software. The goal of Symflower is to generate a small, comprehensible test suite with high code coverage.


Evelyn Haslinger and Markus Zimmermann (Symflower GmbH, Austria)
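For readers unfamiliar with the underlying technique, the toy sketch below (our illustration using the Z3 solver, not Symflower's engine) shows how symbolic execution derives test data: each feasible path condition of a function is handed to a constraint solver, which returns concrete inputs that exercise the corresponding path.

    # Requires the z3-solver package (pip install z3-solver).
    from z3 import Int, Solver, sat

    x = Int("x")
    # Path conditions of a toy function: def f(x): return "big" if x > 10 else "small"
    path_conditions = {"big": x > 10, "small": x <= 10}

    for path, condition in path_conditions.items():
        solver = Solver()
        solver.add(condition)        # constrain the symbolic input to this path
        if solver.check() == sat:    # the path is feasible
            print(f"path '{path}': generated test input x = {solver.model()[x]}")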

 
19:10-19:30 UTC
Reproducible Crashes: Fuzzing Pharo by Mutating the Test Methods
Serge Demeyer



Abstract - Fuzzing (or fuzz testing) is a technique to verify the robustness of a program under test: valid input is replaced by random values with the goal of forcing the program under test into unresponsive states. In this position paper, we propose a white-box fuzzing approach that transforms (mutates) existing test methods. We adopt the mechanisms used for test amplification to generate crash-inducing tests, which developers can later reproduce. We provide anecdotal evidence that our approach to fuzzing reveals crashing issues in the Pharo environment.


Serge Demeyer (University of Antwerp, Belgium)
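The mechanism can be sketched in a few lines (shown here in Python for brevity, although the paper targets Pharo's Smalltalk environment; the test body and the chunk function are hypothetical): an AST transformer replaces literal inputs of an existing test with random, boundary-flavoured values, yielding a mutated test variant that can be re-run, and hence reproduced, whenever it crashes.

    import ast
    import random
    import textwrap

    TEST_SRC = textwrap.dedent('''
        def test_chunk():
            data = list(range(8))
            assert chunk(data, 2)   # chunk is a hypothetical function under test
    ''')

    class LiteralFuzzer(ast.NodeTransformer):
        """Replace each integer literal with a random, possibly hostile value."""
        def visit_Constant(self, node):
            if isinstance(node.value, int) and not isinstance(node.value, bool):
                new = ast.Constant(random.choice([0, -1, 2**31, node.value * 997]))
                return ast.copy_location(new, node)
            return node

    random.seed(42)                  # a fixed seed keeps the generated crash reproducible
    tree = LiteralFuzzer().visit(ast.parse(TEST_SRC))
    ast.fix_missing_locations(tree)
    print(ast.unparse(tree))         # the mutated test variant, ready to execute (Python 3.9+)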

19:30-20:00 UTC - Wrap Up

Organization

Chairs and program committee.

Program Committee

Pekka Aho, Open University, The Netherlands

Emil Alégroth, Blekinge Institute of Technology, Sweden

Cyrille Artho, KTH Royal Institute of Technology, Sweden

Árpád Beszédes, University of Szeged, Hungary

Georg Buchgeher, Software Competence Center Hagenberg, Austria

Serge Demeyer, University of Antwerp, Belgium

Michael Felderer, University of Innsbruck, Austria

Angelo Gargantini, University of Bergamo, Italy

Vahid Garousi, Queen's University Belfast, United Kingdom

Mika Mäntylä, University of Oulu, Finland

Dietmar Pfahl, University of Tartu, Estonia

Fiorella Zampetti, University of Sannio, Italy

Contact

Get in touch.

Email us