TU München - Fakultät für Informatik
Software- and Systems Engineering Research Group

Agenda

Students will talk about their completed theses and system development projects.

On Wednesday, 03.04.19, starting at 14:00, in room "Neumann" 00.11.038:

Time          | Speaker                                                      | Title
14:00 - 14:25 | Jannik Fischbach (MA, Maximilian Junker)                     | Extraction of Test-Models from Documents with Semi-formal Business Rules
14:25 - 14:50 | Tobias Springer (BA, Maximilian Junker)                      | Automatic test case generation from cause-effect-graphs using SMT solver
14:50 - 15:15 | Maria Dorofeeva (MA, Elmar Jürgens and Christian Pfaller)    | Incremental Identification of Architectural Components from Code
15:15 - 15:40 | Alexander Kaserbacher (MA, Elmar Jürgens and Florian Dreier) | Empirical Study of the Prioritization of Automated Tests for Business Information Systems Based on Recently Performed Code Changes

Extraction of Test-Models from Documents with Semi-formal Business Rules

Owing to the dynamic forces of change and the increasing complexity of IT systems in today's business world, software testing has become crucial. At the heart of testing are test cases, which are used to check the conformity of the software with its requirements. Creating such test cases is very time-consuming and error-prone, which is why researchers and practitioners increasingly strive to automate this process. For this purpose, Specmate has been developed; its functionality can be divided into three steps: creating a Cause-Effect-Graph (CEG), defining a test specification, and finally creating a test procedure. Whereas first methods for automating the last two steps exist, an approach for automatically translating requirements into a CEG is missing. This thesis addresses this problem, focusing on semi-formal requirements, i.e. pseudo code. It makes three essential contributions: (1) an algorithm for the automatic recognition of semi-formal requirements in documents, (2) an algorithm for the automatic translation of the identified requirements into a CEG, and (3) a study evaluating both algorithms in practice.

(1) We present an algorithm that splits a requirements document into individual lines and uses machine learning to predict for each line whether it is natural language or pseudo code. We combine a Random Forest with Random Under-Sampling and achieve a recall of 90% across seven tested requirements documents containing a total of 381 pseudo-code lines. The algorithm is thus able to recognize almost all semi-formal requirements. A weak point, however, is its low precision of 62% and the resulting high number of false positives.

(2) We introduce an algorithm that converts pseudo code into a CEG in two steps. In the first step, the syntactic analysis, a parser converts the pseudo code into a syntax tree and decomposes it into causes and effects. The tree is then traversed and transformed into a CEG (semantic analysis): for each cause and effect, a corresponding node is created and connected in the CEG. This enables the algorithm to transform arbitrarily nested pseudo code into a CEG. In addition, thanks to the use of a context-free grammar, it can be flexibly applied to any type of pseudo code. However, the outcome of the algorithm depends strongly on the quality of the pseudo code and is insufficient if the pseudo code contains grammatical errors.

(3) Our study demonstrates that applying both algorithms leads to a substantial time saving of about 80% compared to manual CEG creation. This positive effect is especially pronounced for complex requirements documents containing multiple nested pseudo-code sections. While the translation algorithm was perceived exclusively positively, criticism was expressed towards the detection algorithm: given its recall and precision, a small amount of manual work is still required to check whether the algorithm has detected all pseudo-code lines, which in turn introduces a potential for errors.
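
As a rough illustration of the detection step (1), the following sketch shows how classifying lines with a Random Forest and Random Under-Sampling could look in Python using scikit-learn and imbalanced-learn. The character-n-gram features and the toy data are assumptions for illustration, not details taken from the thesis.

```python
# Sketch of the detection step: predict per line whether it is pseudo code.
from imblearn.pipeline import make_pipeline
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer

lines = [
    "The system shall notify the user by e-mail.",
    "Payments are handled by an external provider.",
    "The report is generated every night.",
    "Invoices older than 30 days are archived.",
    "IF customer.age >= 18 THEN accept order",
    "IF cart.total > 100 THEN grant discount",
]
labels = [0, 0, 0, 0, 1, 1]  # 0 = natural language, 1 = pseudo code

clf = make_pipeline(
    # Character n-grams pick up keywords and operators typical of pseudo code.
    TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
    RandomUnderSampler(random_state=42),  # balance the imbalanced classes
    RandomForestClassifier(n_estimators=100, random_state=42),
)
clf.fit(lines, labels)
print(clf.predict(["WHILE order.status != 'shipped' DO retry"]))  # classify a new line
```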
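
The translation step (2) can be pictured with a small sketch as well. The thesis uses a parser based on a context-free grammar; the regex-based fragment below merely illustrates how a rule is decomposed into cause nodes and an effect node that are then connected in the CEG. All names are illustrative.

```python
# Sketch: decompose one rule into causes and an effect, then connect them.
import re
from dataclasses import dataclass, field

@dataclass
class CEGNode:
    label: str
    kind: str                                    # "cause" or "effect"
    inputs: list = field(default_factory=list)   # (node, connective) pairs

def parse_rule(line: str) -> CEGNode:
    """Turn 'IF <c1> AND|OR <c2> ... THEN <effect>' into a CEG fragment."""
    m = re.match(r"IF (.+) THEN (.+)", line.strip())
    if m is None:
        raise ValueError("not a recognized rule: " + line)
    condition, effect_text = m.groups()
    connective = "AND" if " AND " in condition else "OR"
    causes = [CEGNode(c.strip(), "cause")
              for c in re.split(r" AND | OR ", condition)]
    effect = CEGNode(effect_text.strip(), "effect")
    effect.inputs = [(cause, connective) for cause in causes]
    return effect

rule = parse_rule("IF customer.age >= 18 AND cart.total > 0 THEN order is accepted")
for cause, connective in rule.inputs:
    print(f"{cause.label} --{connective}--> {rule.label}")
```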

Automatic test case generation from cause-effect-graphs using SMT solver

When creating tests to improve the quality of a software product, test designers face multiple challenges. It is not trivial to find the right number of test cases such that most of the specified functionality is covered. With the help of automated test-design tools, engineering test cases can be partly automated and simplified. The tool Specmate supports the modelling of cause-effect-graphs (CEGs) based on textual requirements; these graphs are then used to automatically generate tests. A problem with generating tests from CEGs is that, due to the use of propositional logic, the requirements cannot be reproduced accurately and unrealisable tests can be generated.

In this thesis, we present a method that generates tests with the help of predicate logic in order to eliminate unrealisable tests. To tackle these issues, we analyse and classify each node of the cause-effect-graph before translating it into constructs of predicate logic. With the help of a Satisfiability Modulo Theories (SMT) solver, a prototype is implemented in the Specmate environment. To verify the elimination of unrealisable tests, we evaluate our prototype against the current test-generation approach used in Specmate. The results of this evaluation prove that our prototype improves the generation of feasible test cases compared to the current Specmate implementation. Especially when several nodes restrict the same variable, the new approach can handle this and take it into account when generating test cases. Our prototype demonstrates the potential of automatic test case generation with an SMT solver in the context of requirements-based testing.
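
To make the core idea concrete, here is a minimal sketch using the Z3 SMT solver's Python bindings (the thesis does not necessarily use Z3, and the node labels are illustrative). In a propositional encoding, the causes "x > 10" and "x < 5" are opaque Boolean variables, so a generator may emit a test case that activates both; an SMT solver sees the arithmetic and rejects that combination.

```python
# Sketch: feasibility check of one candidate test case (pip install z3-solver).
from z3 import Int, Solver, sat

x = Int("x")
cause1 = x > 10   # CEG node "x > 10"
cause2 = x < 5    # CEG node "x < 5" -- restricts the same variable

s = Solver()
s.add(cause1, cause2)            # candidate test: both causes active
if s.check() == sat:
    print("feasible, concrete input:", s.model()[x])
else:
    print("unrealisable test case -- skip it")   # this branch is taken here
```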

Empirical Study of the Prioritization of Automated Tests for Business Information Systems Based on Recently Performed Code Changes

Abstract for the talk: Automated tests have become an indispensable instrument in software development. They give developers and stakeholders fast feedback on the potential absence of defects in a software system. However, with the growing complexity of these systems, the size and runtime of the test suite also grow, which slows down the feedback cycle. Test impact analysis attempts to shorten this feedback cycle through test selection and prioritized test execution. In this talk, we evaluate the test-impact-analysis approach in a practice-oriented case study and investigate to what extent its theories and concepts can be applied in practice. The study object is a large and complex business information system. The focus is on comparing the runtime of the test suite for real defects from the past with the runtime when test impact analysis is applied.
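
As a rough sketch of the idea being evaluated: given per-test coverage data and the set of recently changed methods, test impact analysis selects only the tests that touch changed code and runs the most affected ones first. The data and names below are illustrative, not taken from the study.

```python
# Sketch: select and prioritize tests by overlap with changed methods.
changed_methods = {"OrderService.place", "Cart.total"}

coverage = {  # test -> methods it executes (e.g., recorded in a previous run)
    "test_checkout":   {"OrderService.place", "Cart.total", "Cart.add"},
    "test_cart_add":   {"Cart.add"},
    "test_cart_total": {"Cart.total"},
}

# Keep only tests touching changed code; run the most affected ones first.
impacted = {t: cov & changed_methods for t, cov in coverage.items()}
ordered = sorted((t for t, hit in impacted.items() if hit),
                 key=lambda t: len(impacted[t]), reverse=True)
print(ordered)  # ['test_checkout', 'test_cart_total']
```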

Incremental Identification of Architectural Components from Code

Abstract for the talk: Software architects use continuous architecture conformance analysis to avoid architectural decay of a system during its lifetime. Conformance analysis reveals deviations between the specified intended architecture and the current implementation of the system. However, defining the intended architecture requires a lot of time and effort from the architect. This thesis presents a study of algorithms that can help architects reconstruct components of the system's intended architecture from dependencies extracted from the source code. We consider three known clustering algorithms: Adaptive K-Nearest Neighbour, Software Architecture Finder, and Fast Multi-objective Hyper-heuristic Genetic Algorithm. We also introduce incremental and iterative hybrid approaches, which combine a known clustering approach with user input for cluster refinement, and examine their performance. When defined clusters can additionally be locked against receiving new entities in the algorithm's next run, the iterative hybrid approach based on Software Architecture Finder offers higher usability than the incremental approach with the similarity measure applied in Adaptive K-Nearest Neighbour. The iterative hybrid algorithm requires fewer iterations and refinements, especially when the user's knowledge of groups of closely related entities is applied before the first run; moreover, all implementation entities with existing dependencies are guaranteed to be processed by the iterative approach.
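
The iterative hybrid loop can be sketched as follows, under the assumption that entities and their dependencies have already been extracted from the code. The connected-component grouping is only a stand-in for the three clustering algorithms studied in the thesis, and all names are made up.

```python
# Sketch: one iteration of clustering with locked clusters. Locked clusters
# keep their members and accept no new entities in the next run.
deps = {  # entity -> entities it depends on (extracted from source code)
    "OrderCtrl": {"OrderSvc"}, "OrderSvc": {"OrderRepo"}, "OrderRepo": set(),
    "UserCtrl": {"UserSvc"}, "UserSvc": set(),
}
locked = {"orders": {"OrderCtrl", "OrderSvc", "OrderRepo"}}  # confirmed by the architect

def next_iteration(deps, locked):
    taken = set().union(*locked.values()) if locked else set()
    clusters = {name: set(members) for name, members in locked.items()}
    todo = [e for e in deps if e not in taken]
    n = 0
    while todo:
        comp, stack = set(), [todo.pop()]   # grow one candidate component
        while stack:
            e = stack.pop()
            comp.add(e)
            neighbours = deps[e] | {x for x in deps if e in deps[x]}
            stack += [x for x in neighbours if x in todo]
            todo = [x for x in todo if x not in neighbours]
        clusters[f"candidate{n}"] = comp   # proposed to the architect for review
        n += 1
    return clusters

print(next_iteration(deps, locked))
# {'orders': {...}, 'candidate0': {'UserCtrl', 'UserSvc'}}
```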

© Software & Systems Engineering Research Group
Last modified: 2019-03-26 16:00:44