OIPO Disability Abstracts: Artificial Intelligence, Machine Learning, and Virtual Reality

Updated 11/27/2024

Artificial Intelligence and Machine Learning

Binns, R., & Kirkham, R. (2021). How could equality and data protection law shape AI fairness for people with disabilities? arXiv:2107.05704 [cs.CY]. DOI: https://doi.org/10.48550/arXiv.2107.05704.

This article examines the concept of ‘AI fairness’ for people with disabilities from the perspective of data protection and equality law. This examination demonstrates the need for a distinctive approach to AI fairness that is fundamentally different to that used for other protected characteristics, due to the different ways in which discrimination and data protection law apply in respect of disability. We articulate this new agenda for AI fairness for people with disabilities, explaining how combining data protection and equality law creates new opportunities for disabled people’s organisations and assistive technology researchers alike to shape the use of AI, as well as to challenge potentially harmful uses.

Goggin, G., Prahl, A., & Zhuang, K. V. (2023). Communicating AI and Disability. In M. S. Jeffress, J. M. Cypher, J. Ferris, & J. Scott-Pollock (Eds.), The Palgrave Handbook of Disability and Communication (pp. 205-220). Cham: Palgrave Macmillan. DOI: https://doi.org/10.1007/978-3-031-14447-9_13.

This chapter looks at a relatively new area of disability and communication: AI. It contends that discourses, language, and representation of disability in relation to AI need to be understood against the backdrop of evolving ideas of disability and technology. It critiques the dominant social imaginaries of AI and disability, which obscure the flaws in the mainstream ways that autonomous intelligent systems such as AI are developed. The chapter concludes that AI and its dominant social imaginaries are in the throes of a severe crisis of legitimacy. Accordingly, alternative imaginaries are discussed as ways to reimagine and remake AI, machine learning, intelligent systems, and other technologies as sustainable, just, and conducive to the goals of extending accessibility, inclusion, participation, and rights for people with disabilities.

Jafry, A., & Vorstermans, J. (2024). Evolving intersections: AI, disability, and academic integrity in higher education. New Directions for Teaching and Learning, Early View. DOI: https://doi.org/10.1002/tl.20629.

In this article, we investigate the critical intersections of AI, academic integrity, and disability in the context of a large undergraduate course. Our aim was to adapt the course to respond to generative AI (GenAI) to avoid entrenching barriers for students, and instead teach them how to use GenAI tools in ways that deepen their learning and uphold academic honesty. Grounded in disability justice and access pedagogies, we outline five design goals centered on guidelines for AI usage, education on responsible AI use, revised assessments, support for teaching assistants (TAs), and accessible materials. These activities are detailed in our methodology. In our findings, we provide a critical reflection of the course adaptation, taking up issues such as varying levels of familiarity with GenAI, students’ capacity to engage with course changes, resistance to GenAI, instructors’ relational shifts to AI, and feelings of demoralization among the teaching team. We conclude by offering practical recommendations for educators, calling for learning communities to view this disruption as an invitation to listen to disabled students.

Lillywhite, A., & Wolbring, G. (2019). Coverage of ethics within the artificial intelligence and machine learning academic literature: The case of disabled people. Assistive Technology, Online Before Print. DOI: https://doi.org/10.1080/10400435.2019.1593259.

Disabled people are often the anticipated users of scientific and technological products and processes advanced and enabled by artificial intelligence (AI) and machine learning (ML). Disabled people are also affected by the broader societal impacts of AI/ML. Many ethical issues are identified within AI/ML as fields and within individual applications of AI/ML. At the same time, problems have been identified in how ethics discourses engage with disabled people. The aim of our scoping review was to better understand to what extent and how the AI/ML-focused academic literature engaged with the ethics of AI/ML in relation to disabled people. Of the n = 1659 abstracts engaging with AI/ML and ethics downloaded from Scopus (which includes all Medline articles) and the 70 databases of EBSCO ALL, we found 54 relevant abstracts using the term “patient” and 11 relevant abstracts mentioning terms linked to “impair*”, “disab*”, and “deaf”. Our study suggests a gap in the literature that should be filled, given the many AI/ML-related ethical issues identified in the literature and their impact on disabled people.

Lillywhite, A., & Wolbring, G. (2020). Coverage of artificial intelligence and machine learning within academic literature, Canadian newspapers, and Twitter tweets: The case of disabled people. Societies, 10(1), Article 23. DOI: https://doi.org/10.3390/soc10010023.

Artificial intelligence (AI) and machine learning (ML) advancements increasingly impact society, and AI/ML ethics and governance discourses have emerged. Various countries have established AI/ML strategies. “AI for good” and “AI for social good” are just two discourses that focus on using AI/ML in a positive way. Disabled people are impacted by AI/ML in many ways, for example as potential therapeutic and non-therapeutic users of AI/ML-advanced products and processes, and through the changing societal parameters enabled by AI/ML advancements. They are impacted by AI/ML ethics and governance discussions and by discussions around the use of AI/ML for good and social good. Using identity, role, and stakeholder theories as our lenses, the aim of our scoping review is to identify and analyze to what extent, and how, AI/ML-focused academic literature, Canadian newspapers, and Twitter tweets engage with disabled people. Performing manifest coding of the presence of the terms “AI”, “artificial intelligence”, or “machine learning” in conjunction with the terms “patient”, “disabled people”, or “people with disabilities”, we found that the term “patient” was used 20 times more often than the terms “disabled people” and “people with disabilities” together to identify disabled people within the AI/ML literature covered. As to the downloaded 1540 academic abstracts, 234 full-text Canadian English-language newspaper articles, and 2879 tweets containing at least one of 58 terms used to depict disabled people (excluding the term patient) and the three AI terms, we found that health was one major focus, that the social good/for good discourse was not mentioned in relation to disabled people, that the tone of AI/ML coverage was mostly techno-optimistic, and that disabled people were mostly engaged with in their role of being therapeutic or non-therapeutic users of AI/ML-influenced products. Problems with AI/ML were mentioned in relation to the user having a bodily problem, the usability of AI/ML-influenced technologies, and problems disabled people face accessing such technologies. Problems caused for disabled people by AI/ML advancements, such as changing occupational landscapes, were not mentioned. Disabled people were not covered as knowledge producers or influencers of AI/ML discourses, including AI/ML governance and ethics discourses. Our findings suggest that AI/ML coverage must change if disabled people are to become meaningful contributors to, and beneficiaries of, discussions around AI/ML.

Morrison, R. J. (2019, Summer). Ethical depictions of neurodivergence in SF about AI. Configurations, 27(3), 387-410. DOI: https://doi.org/10.1353/con.2019.0021.

In science fiction (SF), representations of artificial intelligence (AI) run the gamut from being cognizant of the full spectrum of potential human emotion, to lacking any comparable emotional states. When a feeling/unfeeling AI—the novum of the text—interacts with human characters, the presence of strong emotional capability is shown to be positive, and any absence of emotional capability is shown to be negative, even abject. This aligns perceived emotional capability with normality, establishing that the empirical “zero world” of the text is one in which those who lack normative emotional affect lack value.

Newman-Griffis, D., Sage Rauchberg, J., Alharbi, R., Hickman, L., & Hochheiser, H. (2022). Alternative models: Critical examination of disability definitions in the development of artificial intelligence technologies. arXiv:2206.08287 [cs.AI]. DOI: https://doi.org/10.48550/arXiv.2206.08287.

Disabled people are subject to a wide variety of complex decision-making processes in diverse areas such as healthcare, employment, and government policy. These contexts, which are already often opaque to the people they affect and lack adequate representation of disabled perspectives, are rapidly adopting artificial intelligence (AI) technologies for data analytics to inform decision making, creating an increased risk of harm due to inappropriate or inequitable algorithms. This article presents a framework for critically examining AI data analytics technologies through a disability lens and investigates how the definition of disability chosen by the designers of an AI technology affects its impact on disabled subjects of analysis. We consider three conceptual models of disability: the medical model, the social model, and the relational model; and show how AI technologies designed under each of these models differ so significantly as to be incompatible with and contradictory to one another. Through a discussion of common use cases for AI analytics in healthcare and government disability benefits, we illustrate specific considerations and decision points in the technology design process that affect power dynamics and inclusion in these settings and help determine their orientation towards marginalisation or support. The framework we present can serve as a foundation for in-depth critical examination of AI technologies and the development of a design praxis for disability-related AI analytics.

Nugent, S. E., & Scott-Parker, S. (2022). Recruitment AI has a disability problem: Anticipating and mitigating unfair automated hiring decisions. In M. I. Aldinhas Ferreira & M. Osman Tokhi (Eds.), Towards Trustworthy Artificial Intelligent Systems [Intelligent Systems, Control and Automation: Science and Engineering Vol. 102] (pp. 85–96). Cham: Springer.

Artificial Intelligence (AI) technologies have the potential to dramatically impact the lives and life chances of people with disabilities seeking employment and throughout their career progression. While these systems are marketed as highly capable and objective tools for decision making, a growing body of research demonstrates a record of inaccurate results as well as inherent disadvantages for historically marginalised groups. Assessments of fairness in recruitment AI for people with disabilities have thus far received little attention or have been overlooked. This paper examines the impacts on, and concerns of, disabled employment seekers when AI systems are used for recruitment, and discusses recommendations for steps employers can take to ensure that innovation in recruitment is also fair to all users. In doing so, we further the point that making systems fairer for disabled employment seekers ensures systems are fairer for all.

Packin, N. G. (2020, November 3). Disability Discrimination Using AI Systems, Social Media and Digital Platforms: Can We Disable Digital Bias? SSRN. DOI: http://dx.doi.org/10.2139/ssrn.3724556.

Social media platforms and digital technological tools have transformed how people manage their day-to-day lives, socially as well as professionally. Big data algorithms help us improve our decision-making processes, and sophisticated social networks enable us to connect with other individuals and organizations, get exposed to information, and even learn about different opportunities. But as individuals become more and more comfortable with social networks and big data algorithms, few give much thought to how personal data gleaned from social networks and fed into algorithms affects the administration of government and the provision of private services. Algorithmic assessment of personal characteristics enables wide-scale discrimination by government and private entities, and such discrimination is particularly pernicious for persons with disabilities.

According to the social model of disability, disability is not only inherent to the individual and determined by the impairment but is also a product of the social environment. Social expectations, conventions, and technology determine which traits are outside the norm and which traits are disabling. Whether a technology perpetuates or mitigates disability depends on social norms, including norms that are embedded in law. A wheelchair might mitigate the impairment, but the disability is mitigated only if legal rules dictate a built environment in which wheelchair users and non-wheelchair users can move in a similar fashion. Similarly, digital technologies can limit the ways in which some traits are disabling only if bias and discriminatory features against individuals with disabilities are not embedded within their use. We must ensure that technological developments continue to improve the quality of life and opportunities of individuals with disabilities, and that we design systems that better accommodate the disabled, enhance their access, and help level the playing field between them and the able-bodied. We should regulate to ensure that individuals with disabilities are legally protected from discrimination. Additionally, and no less importantly, we must make sure that individuals with disabilities are not left out of innovations because of the difficulty in detecting the different types of disabilities as well as disability bias, proving it, and designing around it.

Parvin, N. (2019). Look up and smile! Seeing through Alexa’s algorithmic gaze. In K. Fritsch, A. Hamraie, M. Mills & D. Serlin (Eds.), Crip Technoscience [Special Section]. Catalyst, 5(1). DOI: https://doi.org/10.28968/cftt.v5i1.29592.

Echo Look is one of the latest products by Amazon, built on the artificial intelligence agent Alexa and designed to be a virtual fashion assistant. This paper draws on feminist theory to critically engage with the premises and promises of this new technology. More specifically, I demonstrate how the introduction of Echo Look is an occasion to think through the ethical and political issues at stake in the particular space it enters, in this case no less than what is perceived of (women’s) bodies and what fashion is and does. In addition, the specific domain helps us see this category of technology anew, illuminating its taken-for-granted assumptions. It serves as yet another reminder of what algorithms cannot do and of their oppressive potency.

Ringel Morris, M. (2020, June). AI and accessibility: A discussion of ethical considerations [Viewpoint]. Communications of the ACM, 63(6), 35-37. DOI: https://doi.org/10.1145/3356727.

“According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens; people are disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers; for example, computer vision might help people who are blind better sense the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with limited mobility. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users; however, ethical challenges such as inclusivity, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered” (p. 35).

Robertson, S., Magee, L., & Soldatić, K. (2022). Intersectional inquiry, on the ground and in the algorithm. Qualitative Inquiry, 28(7), 814–826. DOI: https://doi.org/10.1177/10778004221099560.

This article makes two key contributions to methodological debates in automation research. First, we argue for and demonstrate how methods in this field must account for intersections of social difference, such as race, class, ethnicity, culture, and disability, in more nuanced ways. Second, we consider the complexities of bringing together computational and qualitative methods in an intersectional methodological approach while also arguing that in their respective subjects (machines and human subjects) and conceptual scope they enable a specific dialogue on intersectionality and automation to be articulated. We draw on field reflections from a project that combines an analysis of intersectional bias in language models with findings from a community workshop on the frustrations and aspirations produced through engagement with everyday artificial intelligence (AI)–driven technologies in the context of care.

Shew, A. (2020, March). Ableism, Technoableism, and Future AI. IEEE Technology and Society Magazine, 39(1), 40-85. DOI: https://doi.org/10.1109/MTS.2020.2967492.

Ableism (discrimination in favor of nondisabled people and against disabled people) impacts technological imagination. Like sexism, racism, and other types of bigotry, ableism works in insidious ways: by shaping our expectations, it shapes how and what we design (given these expectations), and therefore the infrastructure all around us. And ableism shapes more than just the physical environment. It also shapes our digital and technological imaginations: notions of who will “benefit” from the development of Artificial Intelligence (AI), and the ways that those systems are designed and implemented, are a product of how we envision the “proper” functioning of bodies and minds.

Smith, P., & Smith, L. (2021). Artificial intelligence and disability: Too much promise, yet too little substance? AI and Ethics, 1, 81–86. DOI: https://doi.org/10.1007/s43681-020-00004-5.

Much has been written about the potential of artificial intelligence (AI) to support, and even transform, the lives of disabled people. It is true that many advances have been made, ranging from robotic arms and other prosthetic limbs supported by AI, to decision support tools to aid clinicians and the disabled themselves, to route planning software for those with visual impairment. Many individuals are benefiting from the use of such tools, improving our accessibility and changing lives. But what are the true limits of such tools? What are the ethics of allowing AI tools to suggest different courses of action, or aid in decision-making? And does AI offer too much promise for individuals? I have recently undergone a life-changing accident which has left me severely disabled, and together with my daughter, who is blind, we shall explore the day-to-day realities of how AI can support, and frustrate, disabled people. From this, we will draw some conclusions as to how AI software and technology might best be developed in the future.

Tilmes, N. (2022). Disability, fairness, and algorithmic bias in AI recruitment. Ethics and Information Technology, 24, Article 21. DOI: https://doi.org/10.1007/s10676-022-09633-2.

While rapid advances in artificial intelligence (AI) hiring tools promise to transform the workplace, these algorithms risk exacerbating existing biases against marginalized groups. In light of these ethical issues, AI vendors have sought to translate normative concepts such as fairness into measurable, mathematical criteria that can be optimized for. However, questions of disability and access often are omitted from these ongoing discussions about algorithmic bias. In this paper, I argue that the multiplicity of different kinds and intensities of people’s disabilities and the fluid, contextual ways in which they manifest point to the limits of algorithmic fairness initiatives. In particular, existing de-biasing measures tend to flatten variance within and among disabled people and abstract away information in ways that reinforce pathologization. While fair machine learning methods can help mitigate certain disparities, I argue that fairness alone is insufficient to secure accessible, inclusive AI. I then outline a disability justice approach, which provides a framework for centering disabled people’s experiences and attending to the structures and norms that underpin algorithmic bias.

Trewin, S., Basson, S., Muller, M., Branham, S., Treviranus, J., Gruen, D., Hebert, D., Lyckowski, N., & Manser, E. (2019, September). Considerations for AI Fairness for People with Disabilities. AI Matters, 5(3), 40-63. DOI: https://doi.org/10.1145/3362077.3362086.

In society today, people experiencing disability can face discrimination. As artificial intelligence solutions take on increasingly important roles in decision-making and interaction, they have the potential to impact fair treatment of people with disabilities in society both positively and negatively. We describe some of the opportunities and risks across four emerging AI application areas: employment, education, public safety, and healthcare, identified in a workshop with participants experiencing a range of disabilities. In many existing situations, non-AI solutions are already discriminatory, and introducing AI runs the risk of simply perpetuating and replicating these flaws. We next discuss strategies for supporting fairness in the context of disability throughout the AI development lifecycle. AI systems should be reviewed for potential impact on the user in their broader context of use. They should offer opportunities to redress errors, and for users and those impacted to raise fairness concerns. People with disabilities should be included when sourcing data to build models, and in testing, to create a more inclusive and robust system. Finally, we offer pointers into an established body of literature on human centered design processes and philosophies that may assist AI and ML engineers in innovating algorithms that reduce harm and ultimately enhance the lives of people with disabilities.

White, J. J. G. (2022). Artificial intelligence and people with disabilities: A reflection on human–AI partnerships. In F. Chen & J. Zhou (Eds.), Humanity driven AI: Productivity, well-being, sustainability and partnership (pp. 279–310). Cham: Springer. DOI: https://doi.org/10.1007/978-3-030-72188-6_14.

Artificial intelligence (AI) has much potential to enhance opportunities and independence for people with disabilities by addressing practical problems that they encounter in a variety of domains. Indeed, the partnership between AI and people with disabilities already has a history that spans several decades, through the use of assistive technologies based, for example, on speech recognition, optical character recognition, word prediction, and text-to-speech conversion. Contemporary developments in machine learning can extend and enhance the capabilities of such assistive technology applications, while opening the way to further improvements in accessibility. AI applications intended to benefit people with disabilities can also give rise to questions of values and priorities. These issues are here discussed in relation to the role of design practices and policy in shaping the solutions adopted. AI can also contribute to discrimination on grounds of disability, especially if machine learning algorithms are substituted partly or completely for human decision making. The potential for bias and strategies for overcoming it raise as yet unresolved research questions. In exploring some of these considerations, a case is developed for favoring approaches which shape the normative and social context in which AI technologies are developed and used, as well as the technical details of their design.

Virtual Reality

Brandt, M., & Messeri, L. (2019). Imagining feminist futures on the small screen: Inclusion and care in VR fictions. In C. Bruun Jensen & A. Kemiksiz (Eds.), Anthropology and Science Fiction: Experiments in Thinking Across Worlds [Feature Issue]. NatureCulture, Issue 5. https://www.natcult.net/journal/issue-5/imagining-feminist-futures-on-the-small-screen/.

Virtual reality signifies not only an immersive media technology, but also a cultural desire to allow bodies to inhabit other worlds as easily as pushing a button or putting on goggles. As the VR industry has grown, so too have popular imaginings of its potential. We draw on feminist technoscience studies to analyze and evaluate recent VR science fiction media narratives. How do they articulate VR’s role in the future, and for whom? Who are the heroes of these worlds and what makes them heroic? Steven Spielberg’s would-be blockbuster Ready Player One (2018) (RPO) offers a techno-masculine narrative in which a hero saves the world. In contrast to RPO, television and streaming small screen science fiction narratives have focused on the extent to which VR can save not worlds, but individuals. A surprisingly consistent trope has emerged in these shows: one of VR as a therapeutic tool for a woman coping with trauma. While certainly a departure from RPO’s Hollywood vision of VR, this analysis examines how episodes of Reverie, Philip K. Dick’s Electric Dreams, Kiss Me First, and Black Mirror offer visions of VR that reflect the feminist ambitions of the contemporary VR industry.

Jiang, Z., Meltzer, A., & Zhang, X. (2023). Using virtual reality to implement disability studies’ advocacy principles: Uncovering the perspectives of people with disability. Disability & Society. DOI: https://doi.org/10.1080/09687599.2022.2150601.

One central aim of disability studies is to shift understandings of disability, such that disability comes to be understood in terms of the social disadvantage/oppression that people face when society does not cater to impairments of body/mind. Nevertheless, there remains a need for more practical tools for disability advocacy through which to transmit disability studies’ ideas of disability to the general community. Drawing on a qualitative study of the perspectives of 23 people with physical and sensory impairments, this paper proposes virtual reality as an advocacy tool to communicate the principles and beliefs of disability studies. The findings highlight that, due to the nature of the technology, participants feel virtual reality has clear potential as a disability advocacy tool that can facilitate empathy, perspective-taking and positive social change, with a particular focus on how it is the environmental barriers and social attitudes around people that disable them.

Redden, R. (2018, April 11). VR: An Altered Reality for Disabled Players. First Person Scholar. Waterloo, ON: The Games Institute (GI) at the University of Waterloo in collaboration with IMMERSe, The Research Network for Video Game Immersion. Retrieved from: http://www.firstpersonscholar.com/vr-altered-reality/.

“…gather(s) the experiences and ideas of accessibility advocates who are working to inform VR’s trajectory. [The author is also]… providing… [a]… perspective of the VR station and its access. By putting existing ideas and experiences together, …[the author] hope[s] to promote the work that folks with disabilities are already doing in advising (and designing) games themselves, and the role of the public VR station in advocating and creating better VR” (n.p.).

Zhang, K., Deldari, E., Lu, Z., Yao, Y., & Zhao, Y. (2022). “It’s Just Part of Me:” Understanding Avatar Diversity and Self-presentation of People with Disabilities in Social Virtual Reality. In The 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS ’22), October 23–26, 2022, Athens, Greece. New York: ACM. DOI: https://doi.org/10.1145/3517428.3544829.

In social Virtual Reality (VR), users are embodied in avatars and interact with other users in a face-to-face manner using avatars as the medium. With the advent of social VR, people with disabilities (PWD) have shown an increasing presence on this new social media. With their unique disability identity, it is not clear how PWD perceive their avatars and whether and how they prefer to disclose their disability when presenting themselves in social VR. We fill this gap by exploring PWD’s avatar perception and disability disclosure preferences in social VR. Our study involved two steps. We first conducted a systematic review of fifteen popular social VR applications to evaluate their avatar diversity and accessibility support. We then conducted an in-depth interview study with 19 participants who had different disabilities to understand their avatar experiences. Our research revealed a number of disability disclosure preferences and strategies adopted by PWD (e.g., reflect selective disabilities, present a capable self). We also identified several challenges faced by PWD during their avatar customization process. We discuss the design implications to promote avatar accessibility and diversity for future social VR platforms.