Monash University

Explaining the Outcomes of Deep Learning Models

Thesis
posted on 2022-10-10, 22:57, authored by Xuelin Situ
When users do not understand how machine learning models arrive at their outcomes, these models are opaque to them. Providing explanations for the outcomes of these models is deemed an important step in making them transparent to users. In this thesis, we treat the explanation process as a two-step problem: the first step is to determine the feature attributions for a model outcome, and the second step is to combine these features with general principles of human interpretability to form a comprehensible explanation. This thesis first aims to improve the stability of feature attributions and the efficiency of attribution algorithms, while maintaining faithfulness to the underlying machine learning models; it then draws upon research in philosophy, psychology and cognitive science to generate human-interpretable explanations.
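
As a concrete illustration of the first step, the sketch below computes a gradient-times-input feature attribution for a toy PyTorch classifier. This is only one common attribution method, shown here with a hypothetical model and input; the thesis does not prescribe this particular implementation.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for any deep learning model.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

# One input with 4 features; requires_grad so we can attribute to it.
x = torch.randn(1, 4, requires_grad=True)

# Explain the model's predicted class for this input.
target_class = model(x).argmax(dim=1).item()

# Gradient of the target logit with respect to the input features.
logit = model(x)[0, target_class]
logit.backward()

# Gradient-times-input attribution: one relevance score per feature.
attribution = (x.grad * x).detach().squeeze()
print(attribution)  # higher magnitude = more influence on the outcome
```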

History

Campus location

Australia

Principal supervisor

Gholamreza Haffari

Additional supervisor 1

Ingrid Zukerman

Additional supervisor 2

Sameen Maruf

Additional supervisor 3

Cecile Paris

Year of Award

2022

Department, School or Centre

Data Science & Artificial Intelligence

Course

Doctor of Philosophy

Degree Type

DOCTORATE

Faculty

Faculty of Information Technology
