Monash University
Monash_PhD_thesis.pdf (66.12 MB)

Adversarial Regularisation and Knowledge Distillation for Deep Learning Tasks.

Posted on 2024-01-08, 03:15, authored by Thanh Duc Van Nguyen
Deep learning has made remarkable progress across a wide range of applications. This thesis aims to advance robust machine learning, deep semi-supervised learning, and model compression by developing adversarial regularisation and knowledge distillation methods. The first objective is to understand adversarial attacks and knowledge distillation; through this exploration, we provide interpretations of, and insights into, both the adversarial attack mechanism and the knowledge distillation process. The second objective is to develop and enhance adversarial regularisation and knowledge distillation techniques, investigating their principles to improve robust deep learning, deep semi-supervised learning, and model compression.
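To make the central technique concrete, the following is a minimal sketch of a standard knowledge distillation objective (temperature-scaled soft targets from a teacher combined with the usual cross-entropy on hard labels). This is an illustrative NumPy implementation of the generic formulation, not code from the thesis; the function names, temperature `T`, and mixing weight `alpha` are assumptions for the example.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled, numerically stable softmax.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft-target term: KL divergence between teacher and student
    # distributions at temperature T (scaled by T^2, as is conventional).
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kd = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(axis=-1).mean() * T * T
    # Hard-target term: cross-entropy with the true labels at T = 1.
    p = softmax(student_logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * kd + (1 - alpha) * ce
```

When the student's logits match the teacher's, the KL term vanishes and only the cross-entropy with the true labels remains, which is one way to check the implementation.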


Campus location


Principal supervisor

Dinh Phung

Additional supervisor 1

Jianfei Cai

Additional supervisor 2

He Zhao

Year of Award


Department, School or Centre

Data Science & Artificial Intelligence


Degree Type

Doctor of Philosophy


Faculty of Information Technology