dc.contributor.author | Δεληγιαννάκη, Φωτεινή | el |
dc.contributor.author | Deligiannaki, Foteini | en |
dc.date.accessioned | 2022-07-29T11:21:29Z | |
dc.date.available | 2022-07-29T11:21:29Z | |
dc.identifier.uri | https://dspace.lib.ntua.gr/xmlui/handle/123456789/55546 | |
dc.identifier.uri | http://dx.doi.org/10.26240/heal.ntua.23244 | |
dc.rights | Attribution-NonCommercial-NoDerivs 3.0 Greece | * |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/3.0/gr/ | * |
dc.subject | Robustness | en |
dc.subject | Adversarial machine learning | en |
dc.subject | Fourier transform | en |
dc.subject | Image classification | en |
dc.subject | Convolutional neural networks | en |
dc.subject | Fourier transform | el |
dc.subject | Image classification | el |
dc.subject | Convolutional neural networks | el |
dc.subject | Adversarial machine learning | el |
dc.subject | Robust neural networks | el |
dc.title | Vulnerabilities and robustness of Convolutional Neural Networks against adversarial attacks in the spatial and spectral domains | el |
dc.contributor.department | Artificial Intelligence and Learning Systems | el |
heal.type | bachelorThesis | |
heal.classification | Computer Science | el |
heal.language | en | |
heal.access | free | |
heal.recordProvider | ntua | el |
heal.publicationDate | 2022-02-24 | |
heal.abstract | The constant rise in the capabilities of Artificial Intelligence has led to its application in numerous domains, even where safety is critical. In the area of computer vision, Convolutional Neural Networks (CNNs) achieve impressive results in image classification, segmentation and object detection. It has been shown, however, that CNNs are easily manipulated and fooled by very small, carefully crafted corruptions that are imperceptible to the human eye. These corruptions, known as adversarial attacks, have raised the question of the robustness of modern CNNs to images deviating from the training data distribution, and they pose a serious threat to model reliability. A variety of attack, defence and detection methods have been proposed, but to date models remain vulnerable. The purpose of this thesis is to examine the success rate of common adversarial attack algorithms, as well as the defence method of adversarial training, in image classification tasks. Specifically, we start by using common CNN architectures trained on the CIFAR-10 and 350 Bird Species datasets as victim models. We implement two white-box attacks, the C&W and PGD methods, and manage to fool our models into misclassifying perturbed images with a success rate of up to 100%. To investigate ways of defending our models, we then apply adversarial training with the TRADES algorithm, which significantly lowers attack success rates but also exposes the existing trade-off between accuracy and robustness. Lastly, since current detection methods rely on a strong distinction between the spectral representations of adversarial examples and benign images, we explore the characteristics of adversarial attacks, as well as training methods, in the Fourier domain. Through this analysis we observe that perturbations are influenced by a number of factors related to the dataset, the training algorithm and the model architecture, and we aim to bring forward the Fourier-domain properties that differentiate robust from non-robust models and their vulnerabilities. | en |
heal.advisorName | Σταφυλοπάτης, Ανδρέας-Γεώργιος | el |
heal.advisorName | Σιόλας, Γεώργιος | el |
heal.committeeMemberName | Κόλλιας, Στέφανος | el |
heal.committeeMemberName | Στάμου, Γιώργος | el |
heal.committeeMemberName | Σταφυλοπάτης, Ανδρέας-Γεώργιος | el |
heal.academicPublisher | National Technical University of Athens. School of Electrical and Computer Engineering | el |
heal.academicPublisherID | ntua | |
heal.numberOfPages | 102 p. | el |
heal.fullTextAvailability | false | |