DSpace Repository

SOFTWARE FAULT PREDICTION USING MIXTURE OF EXPERTS


dc.contributor.author Omer, Aman
dc.date.accessioned 2022-02-07T07:00:14Z
dc.date.available 2022-02-07T07:00:14Z
dc.date.issued 2019-05
dc.identifier.uri http://localhost:8081/xmlui/handle/123456789/15316
dc.description.abstract With the increasing use of software, quality assurance has become an important phase of the software life cycle, making software fault prediction an essential research topic. Software fault prediction uses existing software metrics together with faulty and non-faulty data to predict fault-prone modules. The learning algorithm used to classify software modules plays a vital role, which makes the process dependent on, and vulnerable to, a single algorithm. To overcome this, more than one learning algorithm can be combined; such a collection of models is called an ensemble. In recent years, many studies have explored different ensemble methods for software fault prediction, reporting significant improvements over individual models. However, the input-space division algorithms of these ensemble techniques are data-independent, which can degrade the model because spatial information may be lost. A trained model would perform better if the data were partitioned according to the input space. Mixture of Experts (ME) is an ensemble technique that softly splits the data to train its base learners; it has been used in fields such as speech recognition and object detection. The objective of this study is to evaluate the performance of ME with different base learners for software fault prediction. 41 publicly available software project datasets from the NASA PROMISE and MDP repositories, along with Eclipse project data, are used for simulation. ME with decision trees and multi-layer perceptrons as base learners is evaluated, using a Gaussian Mixture Model, an unsupervised technique, as the gating function. Performance is measured in terms of accuracy, F1-score, precision, and recall. Wilcoxon's statistical test is also performed to evaluate whether ME differs significantly. For comparison, bagging is implemented, and results are also compared with the individual base models.
Results show that, with decision trees as base learners, ME improves performance and performs as well as bagging. When a multi-layer perceptron is used as the base learner in ME, it shows, on average, 7% and 6% improvements in accuracy over the individual and bagging models, respectively. The Wilcoxon statistical test indicates a significant difference between ME and the bagging model for both base learning algorithms. en_US
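The gating-plus-experts scheme the abstract describes can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the class name, expert count, tree depth, and synthetic dataset are all assumptions. A Gaussian Mixture Model softly splits the input space (the gating function), and each decision-tree expert is trained on the full data weighted by its gate responsibility.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

class GMMGatedMixtureOfExperts:
    """Sketch of a Mixture of Experts with an unsupervised GMM gate
    and decision trees as base learners (names/parameters assumed)."""

    def __init__(self, n_experts=3, random_state=0):
        self.gate = GaussianMixture(n_components=n_experts,
                                    random_state=random_state)
        self.experts = [DecisionTreeClassifier(max_depth=5,
                                               random_state=random_state)
                        for _ in range(n_experts)]

    def fit(self, X, y):
        self.gate.fit(X)                   # unsupervised soft split of input space
        resp = self.gate.predict_proba(X)  # responsibilities: (n_samples, n_experts)
        for k, expert in enumerate(self.experts):
            # each expert sees all samples, weighted by its gate responsibility
            # (small floor keeps every class represented for every expert)
            expert.fit(X, y, sample_weight=resp[:, k] + 1e-6)
        return self

    def predict(self, X):
        resp = self.gate.predict_proba(X)
        # combine expert class-probabilities, weighted by the gate
        probs = sum(resp[:, [k]] * e.predict_proba(X)
                    for k, e in enumerate(self.experts))
        return probs.argmax(axis=1)

# toy usage on synthetic data (stand-in for the PROMISE/MDP datasets)
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
me = GMMGatedMixtureOfExperts(n_experts=3).fit(X, y)
acc = (me.predict(X) == y).mean()
```

The soft split is the key contrast with bagging: instead of data-independent random resampling, the partition depends on where each sample falls in the input space, so each expert specializes in a region while still influencing predictions everywhere via the gate's weights.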
dc.description.sponsorship INDIAN INSTITUTE OF TECHNOLOGY ROORKEE en_US
dc.language.iso en en_US
dc.publisher IIT ROORKEE en_US
dc.subject Learning Algorithms en_US
dc.subject Mixture of Experts (ME) Ensemble en_US
dc.subject Software Life Cycle en_US
dc.subject NASA PROMISE en_US
dc.title SOFTWARE FAULT PREDICTION USING MIXTURE OF EXPERTS en_US
dc.type Other en_US

