Adversarial Regularisation and Knowledge Distillation for Deep Learning Tasks.
Thesis posted on 2024-01-08, 03:15, authored by Thanh Duc Van Nguyen
Deep learning has made remarkable progress across a wide range of applications. This thesis aims to advance robust machine learning, deep semi-supervised learning, and model compression by developing adversarial regularisation and knowledge distillation techniques. The first objective is to gain a deeper understanding of adversarial attacks and knowledge distillation; through this exploration, we provide interpretations of and insights into both the adversarial attack mechanism and the knowledge distillation process. The second objective is to develop and enhance adversarial regularisation and knowledge distillation techniques, investigating their underlying principles to improve robust deep learning, deep semi-supervised learning, and model compression tasks.
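As background for the abstract above: knowledge distillation is commonly formulated as matching a student's temperature-softened predictions to a teacher's (Hinton et al.'s formulation). The sketch below is illustrative only, not the thesis's specific method; the function names and the choice of temperature are assumptions for the example.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; a higher T gives a softer distribution."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the softened teacher distribution to the
    softened student distribution, scaled by T^2 so that gradient
    magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, T)  # teacher "soft targets"
    q = softmax(student_logits, T)  # student predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

The loss is zero when the student's logits match the teacher's and grows as the softened distributions diverge; in training it is typically mixed with the usual cross-entropy on hard labels.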
Principal supervisor: Dinh Phung
Additional supervisor 1: Jianfei Cai
Additional supervisor 2: He Zhao
Year of award: 2024
Department, School or Centre: Data Science & Artificial Intelligence
Course: Doctor of Philosophy
Faculty: Faculty of Information Technology