Session 3: Ethics in AI

Session Overview

In the third session, we focus on ethics in AI for accessibility. We’ll give you information and examples that you can read or listen to. You can also complete quizzes and questions to think more deeply about what you have learned. Let’s get started!

Why do ethics matter?

What do we mean by ethics? Ethics is concerned with developing AI in the right way: making sure that AI systems are reliable and trustworthy, that they are fair and don’t disadvantage certain people, and that they respect privacy and confidentiality. Often, we say that AI systems need to be fair, accountable and transparent.

As mentioned in the first session, AI technologies offer the possibility of helping people who are blind or low vision with exploring their surroundings, among many other useful tasks. So why do we need to worry about ethics? We will cover some ethical challenges in this session.

Is the system fair or is it biased?

Imagine that an AI system makes decisions about which applicants to shortlist for a job interview. What if the AI system starts to recommend non-disabled people over disabled people for the job, reinforcing discriminatory bias? Just as AI technologies can amplify existing gender and racial biases in our society, they can also increase disability-based discrimination.

Bias can enter an AI system in various ways. If the data used to train the system contains human decisions that are biased, the bias is passed on to the AI system when it is used in the real world. For example, if college recruiters systematically reject applications from students with disabilities, a system trained on their decisions will replicate the same behaviour.

The AI system might not even explicitly look at disability status disclosed in an application form. Bias can also arise through other data that acts as a proxy. For example, imagine completing an online test when applying for a job. If the test is not fully accessible, and the time taken to complete it or the completion score is interpreted as reflecting the test taker’s level of skill, people who use assistive technologies to access the test will be put at a disadvantage.
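To make this concrete, here is a minimal sketch in Python, using the scikit-learn library and entirely invented data, of how a proxy feature like completion time can carry past human bias into a trained model. The numbers and the shortlisting scenario are made up for illustration.

```python
# A toy sketch of proxy bias, with invented data.
# Past recruiters penalised long completion times, even though a slow
# completion often just reflects an inaccessible test being taken with
# assistive technology, not a lack of skill.
from sklearn.tree import DecisionTreeClassifier

# Each row: [test_score, completion_time_minutes]
# Each label: 1 = shortlisted by past recruiters, 0 = rejected.
X_train = [
    [90, 20], [85, 25], [88, 22],   # fast completions -> shortlisted
    [90, 60], [85, 70], [88, 65],   # slow completions -> rejected
]
y_train = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# A strong candidate who scored highly but took longer because the
# test was not accessible:
print(model.predict([[92, 75]]))  # prints [0]: the candidate is rejected
```

Notice that disability status never appears in the data. The model simply learns that long completion times lead to rejection, reproducing the disadvantage described above.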

Who is included or excluded?

Inclusion in AI for accessibility can mean two things: who provides the data to train the system, and who gets to use the system. Let’s look first at who provides the training data.

Are people with disabilities included in the training data, and how? Let’s take speech recognition systems, which are becoming very popular (we’ll look at how they work in more detail in session 6). A lot of work has been done to make these AI systems perform well at recognising how people say things, despite individual differences in pronunciation, voice and accent. However, most existing systems do not work well for people with speech impairments, for example a stutter, because training data is not usually provided by people from these communities. A similar example is object recognition apps for people who are blind, like SeeingAI or TapTapSee. Because these apps are trained on data collected from sighted people, they are not good at recognising pictures taken by blind users, and they can’t recognise objects that blind people regularly use, like white canes.

Let’s now look at who is included in or excluded from using AI systems. For example, many apps that use AI require a high-end smartphone with a large storage capacity. These types of mobile phones are expensive and unaffordable for many people with disabilities. It is also well-known that people in the Global South rarely have access to the latest models of smartphones.

These issues can exclude people with disabilities from interacting with emerging AI technologies. We’ll look more into how to involve people who are blind and low vision in data collection in session 4.

What is the AI system collecting about me and how is it used?

An AI system could collect your data explicitly, where you have given your permission, or implicitly, where you have not.

Data can be collected implicitly and then used to infer things about the user. For example, AI systems can learn about someone’s disability status by analysing their online data. One research study showed that AI systems can infer whether people are blind by analysing their Twitter profiles and activity; another showed that a model could predict whether someone had Parkinson’s disease from their mouse movements. This raises serious concerns: you might never have given permission for your data to be used in this way, and the AI system might use this information in ways you don’t agree with.

Other data, especially data used for training AI systems, is usually collected explicitly. However, even if you have given your permission for your data to be collected, your data should still be anonymised, meaning that it cannot be traced back to you. To do this, human validators need to remove anything that might identify you, such as any reference to your name or address. For example, let’s say that you are contributing photos to be used for an object recognition app like TapTapSee or SeeingAI. Each photo will need to be checked in case it contains any information that could reveal your identity. If you are taking a picture of your wallet and your credit card is visible in it, we need to make sure that your name and credit card details are removed.
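As an illustration, here is a minimal sketch in Python of what one such redaction step might look like, using the Pillow imaging library. The file name and the coordinates of the sensitive region are invented; in practice a human validator would identify the region to remove.

```python
# A toy sketch of redacting identifying details from a contributed photo.
from PIL import Image, ImageDraw

# Hypothetical photo submitted for the training set.
photo = Image.open("wallet_photo.jpg")

# Suppose a human validator marked the area where the credit card
# number is visible, as a box: (left, top, right, bottom).
card_region = (120, 300, 480, 420)

# Cover that area with an opaque rectangle so the name and card
# number cannot be recovered from the saved image.
draw = ImageDraw.Draw(photo)
draw.rectangle(card_region, fill="black")

photo.save("wallet_photo_anonymised.jpg")
```

Blurring is sometimes used instead, but an opaque rectangle is safer for text like card numbers, which can often still be read through a light blur.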

Many apps and AI systems have privacy policies that spell out what data is collected, how it is kept private and confidential, and how that data is used. The policy should also tell you who owns the data after you have given it to an organisation.

Quiz

Let’s now discuss TapTapSee – what happens with your pictures? Take this quiz to find out! You can also read their privacy policy.

How reliable is the AI system? Can I trust it?

An AI system is rarely 100% correct. Here’s an excerpt from the diary of a person called Laura, published as an opinion paper in the journal AI and Ethics. Laura has a lot of trouble recognising a cereal box:

7.15 AM. I’m making breakfast and want to find a particular cereal from the many boxes on the shelf. I use an app which, after taking a photograph, can identify and describe an object. I select a box, take a photo, and wait. It describes the colour of the packaging. I rotate the box to its perpendicular side. This time, the app lets me know I’m holding a box. I try a few more attempts, turning the box, moving the phone up and down to focus on different areas of the cereal packet. After around six tries, the phone finally lets me know it is not the box I’m looking for. I select another packet. Three attempts in and I decide to go and ask my partner instead.

Many people with disabilities need to trust and rely on the output of an AI system without being able to verify that it is correct. For example, someone who is blind or low vision needs to be able to trust that the output of TapTapSee is correct. This sometimes has other effects too. A recent study showed that people who are blind over-trusted an AI image captioning system, even when the output made little sense. Research has also shown that incorrect predictions from a computer vision system prevented people who were blind from figuring out how it worked.

That is why it’s really important to understand how AI systems work, what they are capable of, but also when they make mistakes or don’t work so well. Our sessions are one step towards this, as hopefully they are showing you how these AI systems are put together.

Question

We would love to know what you think about AI systems and ethics, even if we haven’t mentioned them here. What are your concerns about AI systems?

Click here to add your answer

What’s in the next session?

In the next session, we will talk more about models, algorithms and predictions, going into more detail than we covered in session 2. 

Additional resources – list of links

AI and Accessibility: A Discussion of Ethical Considerations

Artificial intelligence and disability: too much promise, yet too little substance?