HEAL DSpace

Alzheimer’s disease diagnosis using a multimodal approach with 3D MRI and PET



dc.contributor.author Βοζινάκη, Ανθή-Μαρία el
dc.contributor.author Vozinaki, Anthi-Maria en
dc.date.accessioned 2025-07-30T07:28:40Z
dc.date.available 2025-07-30T07:28:40Z
dc.identifier.uri https://dspace.lib.ntua.gr/xmlui/handle/123456789/62223
dc.identifier.uri http://dx.doi.org/10.26240/heal.ntua.29919
dc.rights Default License
dc.subject Alzheimer’s Disease en
dc.subject Multimodal en
dc.subject Neuroimaging data en
dc.subject Convolutional Neural Networks en
dc.subject Mixture of Experts en
dc.title Alzheimer’s disease diagnosis using a multimodal approach with 3D MRI and PET en
heal.type bachelorThesis
heal.classification Machine Learning el
heal.language en
heal.access free
heal.recordProvider ntua el
heal.publicationDate 2025-02-24
heal.abstract Alzheimer’s disease (AD) is an irreversible brain disease that severely impairs human thinking and is the seventh leading cause of death worldwide. Early diagnosis plays an important role, especially at the Mild Cognitive Impairment (MCI) stage, where timely intervention can help slow progression before it advances to AD. Neuroimaging data, such as MRI and PET scans, can help detect the disease early by capturing the structural and functional brain changes it causes. However, despite the availability of multiple imaging modalities for the same patient, the development of multimodal models that leverage them remains underexplored. This thesis aims to address this gap by proposing and evaluating classification models that use 3D MRI and amyloid PET scans in a multimodal framework. We first employ a 3D Convolutional Neural Network, followed by three fusion techniques: feature concatenation, a Gated Multimodal Unit (GMU), and Gated Self-Attention. To further improve classification performance and computational efficiency, we integrate a Mixture of Experts model, which dynamically selects the most relevant subnetworks for each prediction (see the sketch after this record). Finally, we use Grad-CAM to visualize disease-related regions, ensuring model interpretability. The results show that the GMU-based model achieves 95.47% accuracy and 96.73% specificity in the NC vs. AD classification task, outperforming state-of-the-art approaches. Grad-CAM analysis further shows that the model successfully locates disease-related regions in both MRI and PET scans, with distinct activation patterns in each modality. This supports the effectiveness of a multimodal strategy for AD diagnosis by confirming the complementary nature of MRI and PET. en
heal.advisorName Askounis, Dimitrios en
heal.committeeMemberName Marinakis, Vangelis en
heal.committeeMemberName Psarras, John en
heal.academicPublisher National Technical University of Athens. School of Electrical and Computer Engineering. Division of Industrial Electrical Devices and Decision Systems el
heal.academicPublisherID ntua
heal.numberOfPages 96 p. el
heal.fullTextAvailability false
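
The abstract names two architectural components that benefit from a concrete illustration: the Gated Multimodal Unit that fuses the MRI and PET feature vectors, and the Mixture of Experts head that routes each prediction to the most relevant subnetworks. Below is a minimal PyTorch sketch of these standard techniques under assumed settings; the feature dimensions, expert count, top-k soft routing, and all names (GatedMultimodalUnit, MoEHead) are illustrative assumptions, not the thesis’s actual implementation.

```python
import torch
import torch.nn as nn


class GatedMultimodalUnit(nn.Module):
    """Gated fusion of an MRI and a PET feature vector (GMU, Arevalo et al., 2017):
    h = z * tanh(W_m x_m) + (1 - z) * tanh(W_p x_p), z = sigmoid(W_z [x_m; x_p])."""

    def __init__(self, in_dim: int, fused_dim: int):
        super().__init__()
        self.mri_proj = nn.Linear(in_dim, fused_dim)   # modality-specific projection
        self.pet_proj = nn.Linear(in_dim, fused_dim)
        self.gate = nn.Linear(2 * in_dim, fused_dim)   # gate sees both raw features

    def forward(self, x_mri: torch.Tensor, x_pet: torch.Tensor) -> torch.Tensor:
        h_mri = torch.tanh(self.mri_proj(x_mri))
        h_pet = torch.tanh(self.pet_proj(x_pet))
        z = torch.sigmoid(self.gate(torch.cat([x_mri, x_pet], dim=-1)))
        return z * h_mri + (1 - z) * h_pet             # per-feature modality weighting


class MoEHead(nn.Module):
    """Mixture-of-Experts classifier head: a router scores the experts and only
    the top-k most relevant expert MLPs contribute to each prediction."""

    def __init__(self, fused_dim: int, n_experts: int = 4, n_classes: int = 2, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(fused_dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(fused_dim, 64), nn.ReLU(), nn.Linear(64, n_classes))
            for _ in range(n_experts)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        scores = self.router(h)                                     # (B, E)
        top_vals, top_idx = scores.topk(self.k, dim=-1)             # k most relevant experts
        weights = torch.softmax(top_vals, dim=-1)                   # (B, k), renormalized
        # For clarity every expert is evaluated; an efficient version would only
        # run the selected ones.
        all_out = torch.stack([e(h) for e in self.experts], dim=1)  # (B, E, C)
        idx = top_idx.unsqueeze(-1).expand(-1, -1, all_out.size(-1))
        chosen = all_out.gather(1, idx)                             # (B, k, C)
        return (weights.unsqueeze(-1) * chosen).sum(dim=1)          # (B, C) logits


# Example: fuse 512-d MRI/PET embeddings (e.g., from 3D CNN backbones) and
# classify NC vs. AD for a batch of 8 subjects.
gmu = GatedMultimodalUnit(in_dim=512, fused_dim=256)
head = MoEHead(fused_dim=256)
logits = head(gmu(torch.randn(8, 512), torch.randn(8, 512)))        # -> shape (8, 2)
```

The GMU gate lets the network weight each fused feature between the two modalities on a per-sample basis, which is consistent with the abstract’s finding that MRI and PET carry complementary, modality-specific signals.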

