How does bias affect users of artificial intelligence systems?

Bubakr, Hebah Abdullah (2023). How does bias affect users of artificial intelligence systems? University of Birmingham. Ph.D.

Bubakr2023PhD.pdf
Text - Accepted Version
Restricted to Repository staff only until 31 December 2025.
Available under licence: All rights reserved.


Abstract

In large companies, artificial intelligence (AI) is being used to optimise workflows and improve efficiency. A common assumption is that an AI system remains unaffected by bias or prejudice and therefore contributes to fairer outcomes. In recruitment, for example, AI is expected to ensure that each applicant is judged strictly against the criteria in the job description. Our results suggest otherwise; we therefore asked whether the problem of bias extends from the training data (which replicates existing inequalities in organisations) to the design of the AI systems themselves. These learning systems depend on knowledge elicited from human experts. However, if a system is trained to perform and reason in the same way as a human, many of the resulting tools would apply unacceptable criteria, because people weigh personal factors that a machine should not use. The question remains whether the potential impact of bias is considered in the design of an AI system.

In this thesis, several experiments are conducted to study unconscious bias in the application of AI, with the aid of two qualitative frameworks and two quantitative questionnaires. We first explore unconscious bias in user interface designs, then examine programmers' understanding of bias when they are asked to create a purposely biased machine using medical databases. A third study addresses the effect of AI recommendations on decision-making, and finally we explore whether user acceptance depends on the type of AI recommendation, testing various kinds of suggestion.

This project raises awareness of how developers of AI and machine learning systems may hold a narrow view of 'bias' as a statistical problem rather than a social or ethical one. This limitation arises not because they are unaware of these wider concerns, but because the demands of managing data and implementing algorithms can restrict their focus to technical challenges. Consequently, biased outcomes can be produced unconsciously, simply because developers are not attending to these broader concerns. Creating accurate and effective models is important, but so is ensuring that all races, ethnicities and socioeconomic levels are adequately represented in the data used to build the model (O'Neil, 2016).
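To make the representation point concrete, the sketch below is a minimal Python example of the kind of audit a developer could run on training data before fitting a model. It is not taken from the thesis; the group labels, reference shares and the 80% threshold are illustrative assumptions.

```python
from collections import Counter

# Hypothetical records; 'group' stands in for any protected
# attribute (race, ethnicity, socioeconomic band, ...).
training_records = [
    {"id": 1, "group": "A"}, {"id": 2, "group": "A"},
    {"id": 3, "group": "A"}, {"id": 4, "group": "B"},
    {"id": 5, "group": "B"}, {"id": 6, "group": "C"},
]

# Assumed reference shares, e.g. from census or applicant-pool data.
reference_shares = {"A": 0.40, "B": 0.35, "C": 0.25}

def representation_audit(records, reference, threshold=0.8):
    """Flag groups whose share of the training data falls below
    `threshold` times their reference share (an illustrative cut-off)."""
    counts = Counter(r["group"] for r in records)
    total = len(records)
    flagged = {}
    for group, ref_share in reference.items():
        share = counts.get(group, 0) / total
        if share < threshold * ref_share:
            flagged[group] = (share, ref_share)
    return flagged

for group, (share, ref) in representation_audit(
        training_records, reference_shares).items():
    print(f"Group {group}: {share:.0%} of data vs {ref:.0%} reference")
```

On the toy data above, group C holds 17% of the records against a 25% reference share and would be flagged; such a check addresses only statistical representation, which is precisely the narrow framing the thesis argues is insufficient on its own.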

Type of Work: Thesis (Doctorates > Ph.D.)
Award Type: Doctorates > Ph.D.
Supervisor(s): Baber, Christopher
Licence: All rights reserved
College/Faculty: Colleges (2008 onwards) > College of Engineering & Physical Sciences
School or Department: School of Engineering, Department of Electronic, Electrical and Systems Engineering
Funders: Other
Other Funders: Royal Embassy of Saudi Arabia Cultural Bureau
Subjects: B Philosophy. Psychology. Religion > B Philosophy (General)
T Technology > T Technology (General)
T Technology > TA Engineering (General). Civil engineering (General)
URI: http://etheses.bham.ac.uk/id/eprint/13910
