AI is everywhere you look these days. What used to belong only to science fiction books and movies is now helping write papers, draft invitations, produce fanciful images, and even streamline our Google searches. Yet while AI has shown itself to be a helpful tool, it is not without cause that many of us remain ambivalent about its ever-increasing role in modern life.
Apryl Williams is an assistant professor of communication and media. Her research has brought to light the ways algorithms negatively impact us: not in the way we sometimes imagine AI taking over the world, but in how they can replicate and perpetuate harmful biases that are (either knowingly or unknowingly) built into an AI's mathematical underpinnings. Her recently published book, "Not My Type: Automating Sexual Racism in Online Dating," highlights how these biases manifest in the algorithms of online dating.
“The people who are writing the algorithms will create them based on their background, what they learned in school, and their own cultural contexts,” Williams told LSA Magazine for “Hey Siri, Are We Cool?”. “Typically, people who are creating the algorithms are white researchers. They often don’t include women, people of color, or people with disabilities.”
Researchers at Michigan, including Williams, are working toward establishing the U-M Center for Reparative AI. The Center would focus on exploring reparative approaches to AI, emphasizing questions of power and sociohistorical context in the creation of AI algorithms and how those factors manifest in AI's outputs. The answers to those questions would be used to name, unmask, and provide redress for the harm AI algorithms can inflict on groups or individuals because of built-in biases.
The Reparative AI Proposal Development Grant team is working to further conceptualize this theoretical framework and to identify and disambiguate practices of reparative AI happening around the globe. The team will articulate how reparative approaches are not only needed but accessible and ready to implement, drawing on a catalog of case studies, potential interventions, and models of repair from humanistic disciplines to inform the theory and practice of reparative AI. With this work, they can jumpstart cutting-edge research into how the humanities and AI intersect and strengthen their bid to establish the Center for Reparative AI.
“Our project is a critical intervention in the proliferation of AI use and hopes to reorient approaches to algorithmic futures amidst rapid AI development and broad tech injustice,” said Williams. “We employ ethics of repair which include reconfiguration of power imbalances and reconciliation in communities.”