Please use this identifier to cite or link to this item:
http://localhost:8081/xmlui/handle/123456789/15316
Title: | SOFTWARE FAULT PREDICTION USING MIXTURE OF EXPERTS |
Authors: | Omer, Aman |
Keywords: | Learning Algorithms;Mixture of Experts (ME) Ensemble;Software Life Cycle;NASA PROMISE |
Issue Date: | May-2019 |
Publisher: | IIT Roorkee |
Abstract: | With the increasing application of software, quality assurance has become an important phase of the software life cycle, making software fault prediction an essential research topic. Software fault prediction uses existing software metrics, together with faulty and non-faulty data, to predict fault-prone modules. The learning algorithm used to classify software modules plays a vital role, which also makes the process dependent on, and vulnerable to, a single algorithm. To overcome this, more than one learning algorithm is used; such a collection of models is called an ensemble. In recent years, many studies have explored different ensemble methods for software fault prediction, yielding significant improvements over individual models. However, the input-space division in these ensemble techniques is data-independent, which affects the model because spatial information can be lost; a trained model would perform better if the data were partitioned according to the input space. The Mixture of Experts (ME) ensemble is a technique that uses soft splitting of the data to train base learners, and it has been applied in fields such as speech recognition and object detection. The objective of this study is to evaluate the performance of ME with different base learners for software fault prediction. Forty-one publicly available software project datasets from the NASA PROMISE and MDP repositories, along with Eclipse project data, are used for simulation. ME with decision trees and with multi-layer perceptrons as base learners is evaluated, as well as a variant using a Gaussian Mixture Model, an unsupervised technique, as the gating function. Performance is measured in terms of accuracy, F1-score, precision, and recall. Wilcoxon's statistical test is performed to assess whether ME differs significantly from the other models. For comparison, bagging is implemented, and results are also compared against the individual base models. 
Results show that with decision trees as base learners, ME improves performance and performs as well as bagging. When a multi-layer perceptron is used as the base learner in ME, it shows, on average, a 7% and 6% improvement in accuracy over the individual model and the bagging model, respectively. Wilcoxon's statistical test indicates a significant difference between ME and the bagging model for both base learning algorithms. |
URI: | http://localhost:8081/xmlui/handle/123456789/15316 |
Type: | Other |
Appears in Collections: | MASTERS' THESES (CSE) |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
G29150.pdf | | 2.34 MB | Adobe PDF | View/Open |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
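The approach the abstract describes (a GMM soft-partitioning the input space as a gating function, with one decision-tree expert trained per component) can be sketched as follows. This is a minimal illustration on synthetic data, not the thesis code: the dataset, expert count, and tree depth are assumptions, and scikit-learn stands in for whatever tooling the thesis actually used.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a software-metrics dataset (not NASA PROMISE data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

n_experts = 3  # assumed; the thesis does not state this here

# Unsupervised gate: a GMM soft-splits the input space into components.
gate = GaussianMixture(n_components=n_experts, random_state=0).fit(X_tr)
resp = gate.predict_proba(X_tr)  # soft responsibilities, shape (n, n_experts)

# One decision-tree expert per component, trained with sample weights
# equal to the gate's responsibilities (the "soft splitting" of the data).
experts = []
for k in range(n_experts):
    tree = DecisionTreeClassifier(max_depth=5, random_state=0)
    tree.fit(X_tr, y_tr, sample_weight=resp[:, k])
    experts.append(tree)

# Prediction: gate-weighted average of the experts' class probabilities.
gate_te = gate.predict_proba(X_te)
proba = sum(gate_te[:, [k]] * experts[k].predict_proba(X_te)
            for k in range(n_experts))
y_pred = proba.argmax(axis=1)
print(f"ME accuracy: {accuracy_score(y_te, y_pred):.3f}")
```

Swapping `DecisionTreeClassifier` for `MLPClassifier` gives the multi-layer-perceptron variant the abstract also evaluates; bagging would instead train each expert on a bootstrap resample, ignoring the input-space structure the gate provides.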