TU München - Fakultät für Informatik
Software- and Systems Engineering Research Group

Agenda

Students will talk about their completed theses and system development projects.

On Wednesday, 19 February 2020, starting at 13:00, in room Alonzo Church (01.09.014):

Time          | Speaker                                  | Title
13:00 - 13:25 | Nam Le Duc (MA, Georgios Pipelidis)      | Visualizing precise spatial information
13:30 - 13:55 | Jason Bouroutis (MA, Georgios Pipelidis) | Optimization of localization data acquisition for an indoor positioning system
14:00 - 14:25 | Stefan Knilling (BA, Roman Haas)         | Identification of redundant test cases using clone detection and test case specific coverage
14:30 - 14:55 | Kevin Huang (BA, Roman Haas)             | Automated classification of test cases into testing levels

Visualizing precise spatial information

In the current era of digital transformation, the Internet of Things (IoT) is a key technology. Because IoT accelerates the pace of data creation in both variety and volume, visualization across versatile platforms is needed to provide insight into the data and to support data-driven decision making. This thesis develops visualizations for spatial data based on maps, as a map conveys spatial data effectively at first glance. Map visualization is not limited to displaying a geographical location: it can also show markers, heat maps, or polylines connecting multiple points. Within the scope of this thesis, we use two JavaScript mapping libraries, Leaflet and OpenLayers, to present geospatial data and to enable interaction with the map. In addition, we implement a map plugin for Grafana, a popular visualization tool that supports a wide range of databases. Beyond maps, further visualizations are useful for time series data, whether spatial data or related metrics such as totals or distinct values per time interval, for example passenger locations together with total passenger counts. Such metrics, combined with the spatial data, provide meaningful information about the targets of interest. We therefore also include visualizations of metrics as numbers, tables, bar charts, line charts, transition flows, and so on. Moreover, we focus mostly on real-time data, which requires fast responses to the aggregation queries sent to the database. Along the way, we attempt to improve query performance by adopting a database that is better suited to time series data and enables high query performance.
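The abstract mentions markers and polylines in Leaflet without showing code; the following TypeScript sketch illustrates what such a visualization might look like. It is a minimal example, not the thesis implementation: the coordinates, the tile URL, and the 'map' container id are illustrative assumptions.

```typescript
// Minimal sketch, not the thesis implementation: plotting positions with Leaflet.
// Assumes an HTML page with <div id="map"> and the leaflet package installed.
import * as L from 'leaflet';

// Hypothetical sequence of positions (e.g. a tracked passenger).
const track: [number, number][] = [
  [48.2625, 11.6679], // illustrative coordinates near the TUM Garching campus
  [48.2628, 11.6684],
  [48.2632, 11.6690],
];

// Initialise the map and add a public OpenStreetMap tile layer.
const map = L.map('map').setView(track[0], 17);
L.tileLayer('https://tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors',
}).addTo(map);

// One marker per position, plus a polyline connecting them.
track.forEach((pos) => L.marker(pos).addTo(map));
L.polyline(track, { color: 'blue' }).addTo(map);
```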

Optimization of localization data acquisition for an indoor positioning system

This thesis investigates the MAC-layer data of a WiFi-based indoor positioning system developed at the chair of software engineering under the supervision of Prof. Dr. Manfred Broy. The system comprises several IEEE 802.11 WiFi transceivers that capture probe requests emitted within a given area. Probe requests are packets transmitted by WiFi-enabled devices when they actively scan for nearby networks. As these requests are usually captured by more than one transceiver, the location of the transmitting device can be inferred with a certain level of accuracy from the differences in the measured signal strengths. In this thesis we investigate and evaluate the frequency with which these requests are transmitted under different conditions for a multitude of mobile devices running a range of firmware versions. In addition, we examine methods by which devices within range could be prompted to increase their emission rate of probe requests and other useful frames. In the second part, we examine and evaluate cell phone signaling as an alternative data source for crowd analytics. To this end, measurements in the GSM frequency domain were performed under varying circumstances. Many users may have WiFi switched off when not at home and are thus untraceable by the aforementioned system. To cover such cases, software-defined radio (SDR) antennas tuned to the downlink frequencies of nearby GSM base stations were put into use. GSM was chosen over UMTS and LTE because occasional lack of indoor coverage makes cellphones fall back to GSM; apart from that, GSM is better documented, unencrypted in the signaling part of interest, and can be captured and analyzed with inexpensive off-the-shelf SDR equipment.
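The abstract does not name a concrete estimator; one common, simple way to turn signal strengths measured at several receivers into a position is a weighted centroid, sketched below. The transceiver coordinates, the dBm-to-weight conversion, and the example readings are illustrative assumptions, not details from the thesis.

```typescript
// Minimal sketch, not the thesis algorithm: weighted-centroid localization
// from the RSSI of a probe request captured by several fixed transceivers.

interface Reading {
  x: number;    // transceiver position in metres (assumed local coordinates)
  y: number;
  rssi: number; // received signal strength in dBm (higher = closer)
}

// Estimate the transmitter position as the centroid of the transceiver
// positions, weighted by linearised signal strength (mW instead of dBm),
// so that nearer transceivers dominate the estimate.
function weightedCentroid(readings: Reading[]): { x: number; y: number } {
  let sumW = 0, sumX = 0, sumY = 0;
  for (const r of readings) {
    const w = Math.pow(10, r.rssi / 10); // dBm -> mW as a simple weight
    sumW += w;
    sumX += w * r.x;
    sumY += w * r.y;
  }
  return { x: sumX / sumW, y: sumY / sumW };
}

// Hypothetical probe request heard by three transceivers.
const estimate = weightedCentroid([
  { x: 0,  y: 0, rssi: -48 },
  { x: 10, y: 0, rssi: -60 },
  { x: 5,  y: 8, rssi: -55 },
]);
console.log(`estimated position: (${estimate.x.toFixed(1)}, ${estimate.y.toFixed(1)}) m`);
```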

Identification of redundant test cases using clone detection and test case specific coverage

Testing is a major part of the overall software development effort. Over the course of the development of a software system, the size and complexity of its test suite increase, which often results in more cloned, syntactically redundant test cases. Because code clones increase maintenance effort and lead to faults due to inconsistent changes, detecting and refactoring them is important for keeping code quality high. In practice, however, clones in test code appear less relevant to developers than clones in production code and are therefore often ignored. In this work, we propose an approach to determine the relevance of cloned test code before involving developers. By additionally considering method coverage, we evaluate the degree to which test cases execute the same methods. We aim to filter clone findings based on the assumption that pairs of cloned tests that also execute a high number of identical methods are more relevant for refactoring. To evaluate our approach, we conducted a case study on the test suites of nine open source software systems. In a survey, developers were asked to rate the relevance of cloned test cases. We then examined the relation between the developer-rated relevance and the number of identical methods executed by cloned pairs of tests and found a moderate correlation. Even though the relevance predicted from the number of identical executed methods was mostly in accordance with the developers' estimation, there were false negative predictions that have to be addressed before clone findings in test code can be filtered reliably.
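As a rough illustration of the proposed filtering, the following sketch scores each clone pair by the number of methods both tests execute and keeps only the pairs above a threshold. The data layout, the threshold, and the example coverage sets are illustrative assumptions rather than the actual tooling.

```typescript
// Minimal sketch, not the thesis tooling: rank clone pairs of test cases by
// the number of identical methods they execute, then filter by a threshold.

type Coverage = Map<string, Set<string>>; // test case id -> executed methods

function sharedMethods(coverage: Coverage, testA: string, testB: string): number {
  const a = coverage.get(testA) ?? new Set<string>();
  const b = coverage.get(testB) ?? new Set<string>();
  let shared = 0;
  for (const m of a) if (b.has(m)) shared++;
  return shared;
}

// Keep only clone pairs whose tests execute at least `minShared` identical
// methods; these are assumed to be the more relevant refactoring candidates.
function filterClonePairs(
  pairs: [string, string][],
  coverage: Coverage,
  minShared: number,
): [string, string][] {
  return pairs.filter(([a, b]) => sharedMethods(coverage, a, b) >= minShared);
}

// Hypothetical clone findings and per-test method coverage.
const coverage: Coverage = new Map([
  ['testLoginOk',   new Set(['Auth.login', 'Session.create', 'Audit.log'])],
  ['testLoginFail', new Set(['Auth.login', 'Audit.log'])],
  ['testExport',    new Set(['Report.render'])],
]);
const clonePairs: [string, string][] = [
  ['testLoginOk', 'testLoginFail'],
  ['testLoginOk', 'testExport'],
];
console.log(filterClonePairs(clonePairs, coverage, 2)); // -> [['testLoginOk', 'testLoginFail']]
```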

Automated classification of test cases into testing levels

Developers have to find a good balance between short feedback cycles and extensive tests when maintaining a test suite for a software system. Testing patterns such as the Test Automation Pyramid recommend different amounts of tests per testing level (i.e. unit tests, integration tests, system tests, et cetera) to test efficiently on different abstraction levels and to keep the test suite maintainable. However, without insight into the actual composition of a test suite, that is, the sizes of the testing levels and their characteristics, it is infeasible to preserve or apply the suggested testing patterns, and analyzing large test suites manually is not a viable option. We present an automated approach that classifies the test cases of a test suite into testing levels, ultimately determining the size of each level and thus the testing pattern. We conduct a case study to apply and validate our approach. In our review of the study results, we find that the approach proposes different testing levels for a test suite than a manual classification by developers. Yet, given the metrics our approach uses to determine the testing level (the number of classes executed by a test and the test duration) and with regard to the conclusions of the discussion, we also consider the classification results for the study objects realistic and solid. The results further allow us to discuss and describe the characteristics of a testing level, which we confirm to differ between testing levels. We also observe that the execution time of a test case is not as important for the classification as the test's coverage of the system under test.
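To make the two classification metrics concrete, the sketch below assigns testing levels via simple thresholds on the number of executed classes and the test duration. The threshold values and the example suite are illustrative assumptions, not values taken from the thesis.

```typescript
// Minimal sketch, not the thesis classifier: assigning a testing level from
// the two metrics named in the abstract, with purely illustrative thresholds.

type TestingLevel = 'unit' | 'integration' | 'system';

interface TestCase {
  name: string;
  executedClasses: number; // classes of the system under test the test touches
  durationMs: number;      // measured execution time
}

function classify(t: TestCase): TestingLevel {
  // Assumed rule of thumb: few classes and fast -> unit test; many classes
  // or a long runtime -> system test; everything in between -> integration.
  if (t.executedClasses <= 3 && t.durationMs < 100) return 'unit';
  if (t.executedClasses > 20 || t.durationMs > 5000) return 'system';
  return 'integration';
}

// Hypothetical test suite; counting the levels yields the suite's shape
// (e.g. whether it resembles a Test Automation Pyramid).
const suite: TestCase[] = [
  { name: 'parserHandlesEmptyInput', executedClasses: 2,  durationMs: 8 },
  { name: 'orderWorkflowPersists',   executedClasses: 12, durationMs: 900 },
  { name: 'fullCheckoutViaUi',       executedClasses: 35, durationMs: 12000 },
];
const sizes = new Map<TestingLevel, number>();
for (const t of suite) {
  const level = classify(t);
  sizes.set(level, (sizes.get(level) ?? 0) + 1);
}
console.log(sizes); // Map { 'unit' => 1, 'integration' => 1, 'system' => 1 }
```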
