Please use this identifier to cite or link to this item:
http://localhost:8081/jspui/handle/123456789/18866

Full metadata record
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Singh, Darpan | - |
| dc.date.accessioned | 2026-02-05T10:48:29Z | - |
| dc.date.available | 2026-02-05T10:48:29Z | - |
| dc.date.issued | 2024-06 | - |
| dc.identifier.uri | http://localhost:8081/jspui/handle/123456789/18866 | - |
| dc.guide | Pant, Millie | en_US |
| dc.description.abstract | Explainable AI (XAI) is becoming increasingly important in healthcare because of its potential to improve the transparency, trust, and effectiveness of AI systems. XAI helps healthcare providers understand how AI systems arrive at their conclusions, making these technologies easier to trust and adopt. Many healthcare regulations, such as the GDPR in Europe, require explanations for automated decisions, especially those affecting patient outcomes. XAI can expose the reasoning behind diagnostic suggestions, allowing clinicians to validate them and potentially discover new insights, and a clear rationale for treatment recommendations helps clinicians tailor interventions to individual patient needs. Explainable models also help clinicians communicate AI-driven insights to patients in an understandable way, enhancing patient engagement and adherence to treatment plans. XAI can highlight biases in AI models, enabling healthcare professionals to address and correct them, leading to fairer outcomes across patient groups. By revealing the decision-making process, XAI can help researchers identify new patterns and correlations in medical data, potentially leading to breakthroughs in understanding diseases and conditions, and these insights can guide the refinement of models to improve performance and reliability. This dissertation explores explainable AI in medical imaging, focusing on its application to brain tumor detection and classification. Applying Grad-CAM, SHAP, and LIME to specialized datasets, the study examines the relationship between model explainability and accuracy. Beginning with brain tumor detection, the research extends to classification tasks, assessing the impact of explainable AI methods on model precision. | en_US |
| dc.language.iso | en | en_US |
| dc.publisher | IIT, Roorkee | en_US |
| dc.title | EXPLAINABLE AI IN MEDICAL IMAGING | en_US |
| dc.type | Dissertations | en_US |
| Appears in Collections: | MASTERS' THESES (MFSDS & AI) | |
Files in This Item:
| File | Description | Size | Format |
|---|---|---|---|
| 22566008_DARPAN SINGH.pdf | - | 19.24 MB | Adobe PDF |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.
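The abstract above names Grad-CAM, SHAP, and LIME as the explanation techniques studied. As a rough illustration of the perturbation-based family those last two belong to, the sketch below computes an occlusion-sensitivity map over a toy "scan": each image patch is blanked out in turn, and the drop in the model's score marks how much the model relied on that region. Everything here (the `toy_model` scoring rule, the 8×8 image) is an illustrative assumption, not code or data from the dissertation.

```python
def toy_model(image):
    """Stand-in 'classifier': scores an image by the brightness of its
    centre region, mimicking a model attending to a tumour-like blob."""
    n = len(image)
    lo, hi = n // 4, 3 * n // 4
    return sum(image[r][c] for r in range(lo, hi) for c in range(lo, hi))

def occlusion_map(image, model, patch=2):
    """For each patch-sized region, zero it out and record the drop in
    the model's score. Large drops flag regions the model relies on,
    giving a simple saliency-style explanation."""
    n = len(image)
    base = model(image)
    heat = [[0.0] * n for _ in range(n)]
    for r0 in range(0, n, patch):
        for c0 in range(0, n, patch):
            occluded = [row[:] for row in image]          # copy the image
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    occluded[r][c] = 0.0                  # blank the patch
            drop = base - model(occluded)                 # score difference
            for r in range(r0, min(r0 + patch, n)):
                for c in range(c0, min(c0 + patch, n)):
                    heat[r][c] = drop
    return heat

# Toy 8x8 'scan': a bright central blob on a dark background.
img = [[1.0 if 2 <= r < 6 and 2 <= c < 6 else 0.0 for c in range(8)]
       for r in range(8)]
heat = occlusion_map(img, toy_model)
# Patches overlapping the bright centre produce a positive score drop;
# background patches produce no drop at all.
```

Grad-CAM, by contrast, uses the gradients flowing into a convolutional layer rather than input perturbations, so it needs a differentiable model; this gradient-free sketch only conveys the shared intuition of attributing a prediction to image regions.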
