Software developers in large projects work in complex information landscapes, and staying on top of all relevant software artifacts is an acknowledged challenge. As software systems often evolve over many years, a large number of issue reports is typically managed during the lifetime of a system, representing the units of work needed for its improvement, e.g., defects to fix, requested features, or missing documentation. Efficient management of incoming issue reports requires successful navigation of the information landscape of a project. In this thesis, we address two tasks involved in issue management: Issue Assignment (IA) and Change Impact Analysis (CIA). IA is the early task of allocating an issue report to a development team, and CIA is the subsequent activity of identifying how source code changes affect the existing software artifacts. While IA is fundamental in all large software projects, CIA is particularly important in safety-critical development. Our solution approach, grounded in surveys of industry practice as well as the scientific literature, is to support navigation by combining information retrieval and machine learning into Recommendation Systems for Software Engineering (RSSEs). While the sheer number of incoming issue reports might challenge the overview of a human developer, our techniques instead benefit from the availability of ever-growing training data: we leverage the volume of issue reports to develop accurate decision support for software evolution. We evaluate our proposals both by deploying an RSSE in two development teams and by simulation scenarios, i.e., by assessing the correctness of the RSSEs' output when replaying the historical inflow of issue reports. In total, more than 60,000 historical issue reports are involved in our studies, originating from the evolution of five proprietary systems at two companies. Our results show that RSSEs for both IA and CIA can help developers navigate large software projects, in terms of locating development teams and software artifacts. Finally, we discuss how to support the transfer of our results to industry, focusing on addressing the context dependency of our tool support by systematically tuning parameters to a specific operational setting.

Recent findings suggest that Information Retrieval (IR)-based bug localization techniques do not perform well if the bug report lacks rich structured information (e.g., relevant program entity names). Conversely, excessive structured information (e.g., stack traces) in the bug report might not always help the automated localization either. In this paper, we propose a novel technique, BLIZZARD, that automatically localizes buggy entities from project source using appropriate query reformulation and effective information retrieval. In particular, our technique determines whether there are excessive program entities or not in a bug report (query), and then applies appropriate reformulations to the query for bug localization. Experiments using 5,139 bug reports show that our technique can localize the buggy source documents with 7%–56%, 6%–62%, and 6%–62% higher scores than the baseline technique on three retrieval metrics. Comparison with the state-of-the-art techniques and their variants shows that our technique can improve over the state of the art by 19% and 20% on two of these metrics, and can improve 59% of the noisy queries and 39% of the poor queries.

Bug localisation is a core program comprehension task in software maintenance: given the observation of a bug, e.g., via a bug report, where is it located in the source code? Information retrieval (IR) approaches see the bug report as the query and the source code files as the documents to be retrieved, ranked by relevance. Such approaches have the advantage of not requiring expensive static or dynamic analysis of the code. However, current state-of-the-art IR approaches rely on project history, in particular previously fixed bugs or previous versions of the source code. We present a novel approach that directly scores each current file against the given report, thus not requiring past code and reports. The scoring method is based on heuristics identified through manual inspection of a small sample of bug reports. We compare our approach to eight others, using their own five metrics on their own six open source projects. Out of 30 performance indicators, we improve 27 and equal 2. Over the projects analysed, on average we find one or more affected files in the top 10 ranked files for 76% of the bug reports. These results show the applicability of our approach to software projects without history.
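The IR formulation above, with the bug report as the query and the source files as the documents to be ranked by relevance, can be sketched with plain TF-IDF weighting and cosine similarity. This is a generic illustration of the retrieval step, not the heuristic scoring of any of the papers discussed; the tokenizer and all file names are invented for the example.

```python
import math
import re
from collections import Counter

def tokenize(text):
    """Split text into lower-case terms, breaking up camelCase identifiers."""
    terms = []
    for word in re.findall(r"[A-Za-z]+", text):
        parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", word)
        terms.extend(p.lower() for p in parts)
    return terms

def rank_files(bug_report, files):
    """Rank source files against a bug report by TF-IDF cosine similarity.

    `files` maps file names to their text. Returns (name, score) pairs,
    best match first.
    """
    docs = {name: Counter(tokenize(text)) for name, text in files.items()}

    # Inverse document frequency over the corpus of source files.
    df = Counter()
    for tf in docs.values():
        df.update(tf.keys())
    idf = {t: math.log(len(docs) / df[t]) for t in df}

    # Weight the query; terms unseen in the corpus get zero weight.
    qvec = {t: c * idf.get(t, 0.0)
            for t, c in Counter(tokenize(bug_report)).items()}
    qnorm = math.sqrt(sum(w * w for w in qvec.values()))

    def cosine(dvec):
        dot = sum(qvec.get(t, 0.0) * w for t, w in dvec.items())
        dnorm = math.sqrt(sum(w * w for w in dvec.values()))
        return dot / (qnorm * dnorm) if qnorm and dnorm else 0.0

    scores = {name: cosine({t: c * idf[t] for t, c in tf.items()})
              for name, tf in docs.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A query such as `rank_files("crash when parsing an XML attribute", files)` would then rank files sharing discriminative terms like `xml` and `attribute` at the top; real localisation tools add stemming, stop-word removal, and project-specific weighting on top of this baseline.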
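The reformulation decision described in the BLIZZARD abstract above, i.e., checking whether a report carries too much, enough, or too little structured information and adapting the query accordingly, can be caricatured in a few lines. The regexes, the frame limit, and the three strategy labels are invented for this sketch; the actual technique is considerably more sophisticated.

```python
import re

# Hypothetical patterns for this sketch only.
TRACE_FRAME = re.compile(r"^\s*at\s+([\w.$]+)\(", re.MULTILINE)    # stack-trace frames
CODE_ENTITY = re.compile(r"[A-Za-z_]\w*(?:\.\w+)+|[a-z]+[A-Z]\w*")  # dotted or camelCase names

def reformulate(report_text, max_frames=5):
    """Pick a query-reformulation strategy from a bug report's structure.

    Returns a (strategy, query) pair:
      - "trace": the report is dominated by a stack trace, so keep only the
        class names of the top frames and drop the noisy remainder;
      - "rich":  the report already names program entities, keep it as is;
      - "poor":  no entities at all; such a query would need expansion
        (e.g., with terms from top-ranked files), which is omitted here.
    """
    frames = TRACE_FRAME.findall(report_text)
    if frames:
        classes = [f.split(".")[-2] for f in frames[:max_frames] if "." in f]
        return "trace", " ".join(dict.fromkeys(classes))
    if CODE_ENTITY.search(report_text):
        return "rich", report_text
    return "poor", report_text
```

For a report consisting mainly of a trace such as `at com.app.xml.XmlParser.parse(XmlParser.java:42)`, this returns the strategy `"trace"` with the condensed query `"XmlParser"`, which can then be fed to a ranking function like the TF-IDF baseline sketched earlier.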