TU München, Department of Informatics
Software and Systems Engineering Research Group

Agenda

Students present their completed theses and system development projects.

On Thursday, 2019-05-09, from 10:00, in room 02.13.008 (at the gallery of the Magistrale):

Time          | Speaker                                                     | Title
10:00 - 10:25 | Thomas Dornberger (MA, Rainer Niedermayr)                   | Clustering test failures to reduce the effort of analyzing failure causes
10:25 - 10:50 | Axel Müller (MA, Sebastian Eder)                            | Semantic Similarities in Natural Language Requirements
10:50 - 11:15 | Phi Long Trinh (BA, Diego Marmsoler and Habtom Kahsay Gidey)| A Graphical Modelling Language for Dynamic Architectures
11:15 - 11:40 | Christian Diemers (BA, Elmar Jürgens and Benjamin Hummel)   | Semi-automated construction of knowledge matrices for software development

Clustering test failures to reduce the effort of analyzing failure causes

Long-running automated test suites test several changes to a software system before a release. In the process, a small number of faults in the system causes a large number of test failures. Clustering the test failures by their underlying faults can save time in the debugging process. To this end, an existing failure-clustering approach is taken up and extended. Failure clustering outperforms simpler strategies that do not require test-case-specific coverage information: with the failure-clustering method, more faults are fixed and thus more test failures are repaired; at the same time, less effort is wasted, i.e., identical faults are less often fixed more than once. The study extends the applicability of the clustering method to system tests, which execute between 10% and 20% of all methods of a system. The resulting clusters are very pure (cluster purity > 90%), and the method forms, on average, more clusters than there are faults in the software. Initial evidence suggests a relationship between the distribution of faults and the clustering results. Finally, individual parameters of the failure-clustering approach can be optimized for specific application scenarios.
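The idea of grouping test failures by their coverage can be illustrated with a minimal sketch: each failing test is represented by the set of methods it executes, and failures with similar coverage are greedily merged into one cluster. All names and the threshold below are hypothetical; the thesis extends an existing failure-clustering approach whose details are not reproduced here.

```python
def jaccard(a, b):
    """Similarity of two coverage sets: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def cluster_failures(failures, threshold=0.5):
    """Greedily group failing tests whose method coverage is similar.

    failures: dict mapping test name -> set of covered method names.
    Returns a list of clusters, each a list of test names.
    """
    clusters = []  # each: {"tests": [...], "coverage": representative set}
    for name, coverage in failures.items():
        for cluster in clusters:
            if jaccard(coverage, cluster["coverage"]) >= threshold:
                cluster["tests"].append(name)
                break
        else:
            # No sufficiently similar cluster: open a new one.
            clusters.append({"tests": [name], "coverage": set(coverage)})
    return [c["tests"] for c in clusters]
```

In practice, a more robust clustering algorithm (e.g., hierarchical clustering over pairwise distances) would likely be preferable, since system tests covering 10-20% of all methods yield large, overlapping coverage sets.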

Semantic Similarities in Natural Language Requirements

Semantic similarity information for requirements can support requirements tracing and reveal important quality defects such as redundancies. Some semantic similarity algorithms have already been applied to requirements in the past; however, more advanced machine learning and deep learning models have not yet been evaluated in this context. In this thesis, we therefore investigate and compare different types of algorithms for estimating the semantic similarity of requirements, covering both relatively simple bag-of-words models and more advanced machine learning models. For this purpose, we assemble a dataset of requirement pairs for which we collect semantic similarity assessments from humans. The performance of the applied algorithms is then determined by comparing their predictions to the human similarity judgments. In our experiments, a model that averages trained word and character embeddings, as well as an approach based on character sequence occurrences and overlaps, achieve the best performance on our requirements dataset.
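A character-sequence-based approach like the one mentioned above can be sketched, under simplifying assumptions, as a Jaccard similarity over character n-gram sets; the function names and the choice of n = 3 are illustrative and not taken from the thesis.

```python
def char_ngrams(text, n=3):
    """Set of lowercase character n-grams occurring in the text."""
    text = text.lower()
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def ngram_similarity(req_a, req_b, n=3):
    """Jaccard similarity between the character n-gram sets of two requirements."""
    a, b = char_ngrams(req_a, n), char_ngrams(req_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

Such a surface-level measure captures lexical overlap; embedding-based models are needed to recognize semantically similar requirements that use different wording.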

A Graphical Modelling Language for Dynamic Architectures

Dynamic architectures are becoming increasingly important, but support for the graphical specification of such architecture patterns is still rudimentary. FACTum Studio is a tool that supports the formal specification and verification of architectural design patterns. This thesis presents an extension of FACTum Studio with features that allow users to specify activation and connection annotations, i.e., constraints on the activation of components and on the connections between their ports, and to verify their correctness with the automatically generated Isabelle/HOL theory.

Semi-automated construction of knowledge matrices for software development

Today, software engineering is a complex process. The knowledge within a software company changes frequently because of changing project requirements and new technologies. Software companies have trouble managing this knowledge because it is mostly tacit. As a study has shown, tacit knowledge is, besides its employees, the main asset of a software company. This thesis therefore suggests an approach to making tacit knowledge in the context of software engineering explicit. Our approach uses data generated during the software development process, in particular Java source code and version control history, to create a two-dimensional knowledge matrix. To build this matrix, we first extract topics from the data with Latent Dirichlet Allocation (LDA). Additionally, the contributions of each employee to each topic are extracted from the version control history. Furthermore, we propose an expert metric that outputs each topic's set of experts. The whole approach was implemented and applied to the software project "Teamscale". Afterward, we conducted a user study with the developers of that software. On the one hand, the user study showed very good results for both the topic modeling and the expert suggestions; on the other hand, many entry points for future research remain to improve the results further.
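A minimal sketch of the knowledge-matrix construction, assuming topics have already been assigned to files (e.g., by LDA) and the version control history is reduced to (author, file) pairs; all names below are illustrative, not the thesis's actual implementation.

```python
from collections import defaultdict

def build_knowledge_matrix(commits, file_topics):
    """Count each author's contributions per topic.

    commits: iterable of (author, changed_file) pairs from version control.
    file_topics: dict mapping file name -> list of topic labels (e.g., from LDA).
    Returns a nested dict: matrix[author][topic] -> contribution count.
    """
    matrix = defaultdict(lambda: defaultdict(int))
    for author, changed_file in commits:
        for topic in file_topics.get(changed_file, []):
            matrix[author][topic] += 1
    return {author: dict(topics) for author, topics in matrix.items()}

def topic_experts(matrix, topic, k=3):
    """Simple expert metric: the k authors with the most contributions to a topic."""
    ranked = sorted(matrix, key=lambda a: (-matrix[a].get(topic, 0), a))
    return [a for a in ranked if matrix[a].get(topic, 0) > 0][:k]
```

The resulting author-by-topic matrix is the 2D knowledge matrix; a real expert metric would likely weight contributions by recency and change size rather than raw commit counts.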

© Software & Systems Engineering Research Group
Last modified: 2019-05-06 13:51:01