Phase 2 Data Collection is now closed!
Participate in Phase 2 Data Collection!
We are now looking for more participants to help us collect data until 29 November. Read about how to participate in collecting data for Phase 2: https://orbit.city.ac.uk/phase-2-data-collection/
Want to help us collect data but don’t know how to start? Join us online and we’ll show you how! More info at: https://orbit.city.ac.uk/phase-2-data-collection/#trainingsessions
What is the ORBIT project about?
Novel smartphone apps using Artificial Intelligence (A.I.) are very useful for making visual information accessible to people who are blind or low vision. For instance, Seeing A.I. and TapTapSee let you take a picture of your surroundings and then tell you what they recognise, for example, “a person sitting on a sofa”. While A.I. can recognise common objects in a scene, at the moment these apps can’t tell you which of the things they recognise is yours, and they don’t know about things that are particularly important to users who are blind or low vision.
While A.I. techniques in computer vision have made great strides in object recognition, they do not work so well for personalised object recognition. Previous research has made some advances towards solving this problem by looking at how people who are blind or low vision take pictures, which algorithms could be used to personalise object recognition, and what kinds of data are best suited to enabling it. However, research is currently held back by the lack of available data, particularly from people who are blind or low vision, for training and then evaluating A.I. algorithms for personalised object recognition.
This project, funded by Microsoft A.I. for Accessibility, aims to construct a large dataset by involving blind people. Unlike previous research efforts, our team will collect videos, since they provide richer information than images. The dataset will be made publicly available for download in two phases: Phase 1 will include about 100 users and thousands of videos, while Phase 2 will gather data from about 1,000 users and contain more than 10,000 videos. We anticipate that our dataset will be useful to researchers and developers implementing new algorithms in existing apps or novel wearable systems.
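For developers curious what “personalised object recognition” could look like in practice, here is a minimal illustrative sketch. This is not the ORBIT project’s actual algorithm; it assumes some existing model has already turned each of a user’s example videos into an embedding vector. A user’s few examples per object are averaged into per-object “prototypes”, and a new frame is matched to the nearest prototype:

```python
import math

def build_prototypes(examples):
    """examples: {object_name: [embedding, ...]} -- average each personal
    object's example embeddings into a single 'prototype' vector."""
    prototypes = {}
    for name, vectors in examples.items():
        dim = len(vectors[0])
        prototypes[name] = [
            sum(v[i] for v in vectors) / len(vectors) for i in range(dim)
        ]
    return prototypes

def recognise(query, prototypes):
    """Return the personal object whose prototype lies nearest the query
    embedding (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(prototypes, key=lambda name: dist(query, prototypes[name]))

# Hypothetical usage: toy 2-D embeddings standing in for real video features.
examples = {
    "my keys": [[1.0, 0.0], [0.9, 0.1]],
    "my mug":  [[0.0, 1.0], [0.1, 0.9]],
}
prototypes = build_prototypes(examples)
print(recognise([0.95, 0.05], prototypes))  # nearest prototype is "my keys"
```

The key point is that nothing about “my keys” is built into the model: the user’s own few examples define the classes, which is exactly why a large dataset recorded by blind and low-vision users is needed to train and evaluate such techniques.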
In addition to the dataset, we will also develop a curriculum that teaches people who are blind or low vision how A.I. works, why data matters, and how to get involved in developing A.I. themselves.