AUN Digital Repository

An Adaptive and Scalable Ontology for Explainable Deep Classifier in Disease Surveillance


dc.contributor.author Jillahi, Kamal Bakari
dc.contributor.author Iorliam, Aamo
dc.contributor.author Mwajim, Gabriel Mshelia.
dc.contributor.author Anas, Shuaibu.
dc.date.accessioned 2024-10-11T11:04:49Z
dc.date.available 2024-10-11T11:04:49Z
dc.date.issued 2024-11-06
dc.identifier.issn 3027-0650
dc.identifier.uri http://hdl.handle.net/123456789/687
dc.description In the context of Artificial Intelligence (AI), explainability refers to the ability of a model to provide details or reasons that make clear how and why it made a specific decision or prediction. Explainability boosts trust, transparency, and accountability by making AI systems more understandable to users, decision-makers, and regulators. It helps ensure fairness, detect biases, and improve model reliability. In fields such as healthcare, security, finance, and law, explainability is crucial for validating the safety and ethical use of AI. en_US
dc.description.abstract This research aims to improve the explainability of predictions in disease surveillance by leveraging an ontology-based model. A Markov Decision Process (MDP) and a Q-Learning algorithm are proposed to update two public ontologies, making them both dynamic and scalable, in order to enhance the quality of explanations generated for the output of a deep learning classifier used for morbidity/mortality prediction of malaria (a generic form of the Q-learning update is sketched after this record). The study uses the Atlas Malaria dataset, the OBO Malaria Ontology, the SWEET Ontology, and a Recurrent Neural Network, thus integrating domain-specific knowledge and data. The proposed model is compared with a static model on fidelity, interpretability, relevance, ROC, and AUC metrics. The proposed model achieves a fidelity score of 0.92, compared to 0.75 for the static model, along with a higher interpretability score of 4.7/5 versus 3.9/5 for the static approach. Additionally, the relevance score for the dynamic ontology is 0.88, outperforming the static model's 0.72. The dynamic ontology also exhibits superior classification performance, with an AUC of 0.9532, significantly higher than the static model's AUC of 0.7968. These results demonstrate the dynamic ontology's effectiveness in improving both model performance and explanation quality in the case studied. en_US
dc.language.iso en en_US
dc.publisher [American University of Nigeria] en_US
dc.relation.ispartofseries American University of Nigeria, 2nd International Conference Proceeding;
dc.title An Adaptive and Scalable Ontology for Explainable Deep Classifier in Disease Surveillance en_US
dc.type Article en_US
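
The abstract describes a Q-Learning agent that incrementally updates the ontologies used for explanation. As a rough illustration of that mechanism only, the Python sketch below implements a generic tabular Q-learning update over hypothetical ontology-edit actions; the state labels, action set, and fidelity-based reward are assumptions made for illustration and do not reflect the authors' actual implementation.

    import random
    from collections import defaultdict

    # Minimal sketch of a tabular Q-learning update applied to choosing
    # ontology edits (add/merge/prune a concept). State encoding, actions,
    # and the reward signal are hypothetical illustrations.
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2   # learning rate, discount, exploration
    ACTIONS = ["add_concept", "merge_concepts", "prune_concept", "no_change"]
    q_table = defaultdict(float)             # (state, action) -> Q value

    def choose_action(state):
        """Epsilon-greedy selection over candidate ontology edits."""
        if random.random() < EPSILON:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: q_table[(state, a)])

    def update(state, action, reward, next_state):
        """Q-learning update: Q <- Q + alpha * (r + gamma * max_a' Q(s', a') - Q)."""
        best_next = max(q_table[(next_state, a)] for a in ACTIONS)
        q_table[(state, action)] += ALPHA * (reward + GAMMA * best_next - q_table[(state, action)])

    # One illustrative step: the reward could be the change in explanation
    # fidelity after the chosen edit is applied (hypothetical reward signal).
    state = "low_coverage"
    action = choose_action(state)
    update(state, action, reward=0.05, next_state="improved_coverage")
    print(action, q_table[(state, action)])

In such a setup, the reward would typically be derived from the change in an explanation-quality metric (for example, fidelity) after the chosen edit is applied, which is what makes the ontology adaptive over time.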

