Posted on 2024-01-08, 03:15. Authored by Thanh Duc Van Nguyen.
Deep learning has made remarkable progress across a wide range of applications. In this thesis, we aim to advance robust machine learning, deep semi-supervised learning, and model compression by developing adversarial regularisation and knowledge distillation techniques. The first objective is to gain a deeper understanding of adversarial attacks and knowledge distillation; through this exploration, we provide interpretations of and insights into both the adversarial attack mechanism and the knowledge distillation process. The second objective is to develop and enhance adversarial regularisation and knowledge distillation techniques, investigating their underlying principles to improve robust deep learning, deep semi-supervised learning, and model compression.
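The abstract does not specify the thesis's exact formulation, so as background only, here is a minimal sketch of the standard knowledge distillation objective (temperature-softened teacher targets matched by the student via KL divergence, following Hinton et al., 2015); the temperature value and the pure-NumPy implementation are illustrative assumptions, not the thesis's method.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T yields a softer distribution.
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's softened outputs (soft targets)
    # and the student's softened predictions, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    return float(T * T * np.sum(p * np.log(p / q)))
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch yields a positive penalty that the student minimises during training, typically alongside a standard cross-entropy term on the true labels.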