Improving the interpretability of machine learning approaches with user-generated data

Serra, Giuseppe (2024). Improving the interpretability of machine learning approaches with user-generated data. University of Birmingham. Ph.D.

Serra2024PhD.pdf (Text, Accepted Version, 4MB)
Available under License: All rights reserved.

Abstract

Given the increasing deployment of automatic decision systems in many critical scenarios, understanding and explaining machine-based decisions has become an important problem. For this reason, there has recently been growing interest in what is commonly called Explainable AI (XAI). In this thesis, starting from an overview of the inner mechanisms of deep neural networks, we tackle the problem of interpretability from two different perspectives.

First, considering that neural networks usually encode input features as numerical vector representations that are hard for humans to understand, we propose new approaches for learning interpretable numerical vectors. Given the availability of large collections of textual data in many scenarios, we exploit natural language information to generate vectors with intrinsic interpretability. In this way, the new numerical vectors can be used effectively by neural algorithms while also providing human-understandable information. The proposed methodologies are evaluated on e-commerce data with textual reviews. In this context, and given the so-called neural hype, we also critically analyze whether the use of complex deep architectures is fully justified in recommender-system scenarios.
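The abstract does not detail the specific model, but the general idea of tying vector dimensions to natural language can be illustrated with a minimal sketch. In the toy example below (the data, names, and TF-IDF weighting are illustrative assumptions, not the thesis method), each coordinate of an item vector corresponds to a vocabulary term extracted from its reviews, so the largest coordinates directly name the aspects that characterize the item.

# Hypothetical sketch: item vectors whose dimensions are tied to review
# vocabulary terms, so each coordinate has a human-readable meaning.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

reviews_per_item = {
    "item_1": "great battery life but the screen scratches easily",
    "item_2": "camera quality is excellent, battery drains fast",
}

vectorizer = TfidfVectorizer(max_features=1000, stop_words="english")
item_ids = list(reviews_per_item)
# rows: items, columns: vocabulary terms extracted from the reviews
X = vectorizer.fit_transform([reviews_per_item[i] for i in item_ids])

vocab = vectorizer.get_feature_names_out()
item_vectors = {i: X[k].toarray().ravel() for k, i in enumerate(item_ids)}

# The largest coordinates of an item vector name the aspects that characterize it.
v = item_vectors["item_1"]
top = np.argsort(v)[::-1][:3]
print([(vocab[j], round(float(v[j]), 3)) for j in top])

A vector built this way can be consumed by a downstream neural model like any other embedding, while each dimension remains directly readable by a human.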

Second, given the inscrutability of the inner reasoning of neural architectures, we develop a new approach that highlights the portion of the input actually used by the model for its predictions. The methodology is based on the so-called learning-to-explain paradigm and is applied to graph-based models. The proposed method learns to select subgraphs that are used in all computational operations up to the prediction. The inherent interpretability of the model overcomes the limitations of common post-hoc explanation techniques. Furthermore, since the resulting explanations are faithful to the model's reasoning, they can also be used for model debugging and hyperparameter tuning.
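As a rough illustration of the learning-to-explain idea described above (a hypothetical sketch, not the thesis architecture), the model below jointly learns a soft node-selection mask and a predictor; the predictor only sees the masked graph, so the selected subgraph is the explanation by construction rather than a post-hoc estimate.

# Hypothetical sketch of select-then-predict on a graph. All module and
# variable names are illustrative assumptions, not the thesis implementation.
import torch
import torch.nn as nn

class SelectThenPredict(nn.Module):
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.selector = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU(),
                                      nn.Linear(hid_dim, 1))
        self.encoder = nn.Linear(in_dim, hid_dim)
        self.classifier = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # x: (n_nodes, in_dim) node features, adj: (n_nodes, n_nodes) adjacency
        mask = torch.sigmoid(self.selector(x))           # soft node-selection mask
        x_sel = x * mask                                  # only selected nodes pass information
        h = torch.relu(self.encoder(adj @ x_sel))         # one round of masked message passing
        graph_repr = (h * mask).sum(dim=0) / mask.sum()   # pooled over the selected subgraph
        return self.classifier(graph_repr), mask

model = SelectThenPredict(in_dim=8, hid_dim=16, n_classes=2)
x, adj = torch.randn(5, 8), torch.eye(5)
logits, mask = model(x, adj)
# Training would typically add a sparsity penalty, e.g.
#   loss = task_loss + lam * mask.mean(),
# so that only a small, faithful subgraph is retained for every prediction.
print(logits.shape, mask.squeeze(-1))

Because the mask participates in every computational step up to the prediction, inspecting it after training gives an explanation that is faithful by design, which is what makes it usable for model debugging and hyperparameter tuning.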

Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s): Tino, Peter; Yao, Xin
Licence: All rights reserved
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Computer Science
Funders: European Commission
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
URI: http://etheses.bham.ac.uk/id/eprint/14933
