
TU München - Fakultät für Informatik
Software- and Systems Engineering Research Group


Agenda

Students will present their completed diploma theses and system development projects.

On Wednesday, 13.12.17, starting at 14:00, in room Neumann (00.11.038):

Time: 14:00 - 14:25
Speaker: Florian Gröger
Type: MA
Advisors: Elmar Jürgens, Martin Feilkas, Andi Scharfstein, Rainer Niedermayr
Title: Identifizierung von automatisierten Softwaretests mit hohem Wartungsaufwand und geringem Mehrwert

Identifizierung von automatisierten Softwaretests mit hohem Wartungsaufwand und geringem Mehrwert (Identification of Automated Software Tests with High Maintenance Effort and Low Added Value)

Software applications are subject to continuous change as, for example, features are added and existing code is maintained. This holds true for agile software development in particular. To allow for complete coverage and thorough testing, a test suite has to adapt to those changes. While the evolution of a test suite is crucial to retain its usefulness, it should not put a burden on software evolution itself. It is therefore essential to identify tests that cause high maintenance effort but generate little added value; tests that do not meet quality standards should be refactored or deleted.

The task of identifying tests with high maintenance effort and low added value is addressed in the thesis at hand. The thesis presents different approaches to detecting the added value and maintenance effort of tests, using the example of Teamscale, a software solution for the continuous quality analysis of source code. The aim was to establish several metrics that measure the maintenance effort and added value of tests and to evaluate the results both on the overall test level and broken down by test type. Furthermore, the issue of flickering tests and tests with high maintenance effort is investigated and common properties are explored.

To measure these test qualities, the following metrics are proposed: (1) for maintenance effort: execution time, number of added lines, and number of false positives; (2) for added value: unique coverage and number of true positives. These metrics proved reasonably effective at expressing the target measures, although it transpired that a combined metric would not yield valid results for either added value or maintenance effort.

Our analysis showed that reviewing all tests together is inconclusive and that test types indeed matter. With regard to maintenance effort, a clear ranking was apparent: UI tests produced the most maintenance effort, followed by system tests, integration tests, and unit tests. For added value, no such order could be identified. Our investigation into the properties of flickering tests discovered an above-average execution time across all test types; flickering UI and system tests additionally showed an increased number of added lines and test cases. For tests with high maintenance effort, no shared properties could be detected.
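To make the proposed metrics concrete, the following minimal Python sketch shows how unique coverage (an added-value indicator) and execution time, churn, and false positives (maintenance-effort indicators) could be computed and used to flag candidate tests. The data model, names, and thresholds are illustrative assumptions, not the thesis's or Teamscale's actual implementation; in line with the thesis's finding that a single combined metric is not valid, the indicators are checked separately rather than folded into one score.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class TestRecord:
    name: str
    covered_lines: set[str]   # e.g. {"Login.java:10", ...} (hypothetical format)
    execution_time_s: float   # wall-clock time per run
    added_lines: int          # churn within the test's own code
    false_positives: int      # failures not caused by real defects
    true_positives: int       # failures that revealed real defects

def unique_coverage(tests: list[TestRecord]) -> dict[str, set[str]]:
    """Lines covered by exactly one test; a test with no unique lines
    contributes little coverage on its own."""
    line_owners: dict[str, list[str]] = {}
    for t in tests:
        for line in t.covered_lines:
            line_owners.setdefault(line, []).append(t.name)
    return {
        t.name: {l for l in t.covered_lines if len(line_owners[l]) == 1}
        for t in tests
    }

def flag_candidates(tests: list[TestRecord],
                    max_time_s: float = 30.0,
                    max_added_lines: int = 200) -> list[str]:
    """Flag tests whose maintenance indicators are high while their
    added-value indicators are low (thresholds are arbitrary examples)."""
    unique = unique_coverage(tests)
    flagged = []
    for t in tests:
        high_maintenance = (t.execution_time_s > max_time_s
                            or t.added_lines > max_added_lines
                            or t.false_positives > t.true_positives)
        low_value = not unique[t.name] and t.true_positives == 0
        if high_maintenance and low_value:
            flagged.append(t.name)
    return flagged

if __name__ == "__main__":
    tests = [
        # Slow, churn-heavy UI test whose coverage is fully duplicated elsewhere:
        TestRecord("LoginUITest", {"Login.java:10"}, 45.0, 350, 4, 0),
        # Fast unit test with unique coverage and real defect detections:
        TestRecord("LoginUnitTest", {"Login.java:10", "Login.java:12"}, 0.2, 20, 0, 2),
    ]
    print(flag_candidates(tests))  # -> ['LoginUITest']

Reviewing indicators side by side, rather than collapsing them into one number, mirrors the abstract's observation that results should also be broken down by test type before deciding whether to refactor or delete a test.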

© Software & Systems Engineering Research Group
Last modified: 2017-12-07 17:44:23