Ethics in AI Wonderland
Growing up in a tech utopia is not all hoverboards and flying cars, but it gets close. “You could see all of these autonomous vehicles being driven on the road near the Google campus,” says Arjun. “I would walk past Apple office spaces on my way to school, and I was immersed in this environment where I understood what AI was. [...] Just by my presence there I was set up to pursue this career.” It is a sunny morning in California when I meet Arjun on a Zoom call. They are a second-year Ph.D. student at UCLA, and only two days ago they released their most recent paper on intersectionality in AI ethics publications. I want to talk about their way into the field of AI, and of course, the thought of growing up in Silicon Valley fascinates me. “It was very much an education in techno-solutionism,” says Arjun. “Technology is the best thing in the world. Setting everyone up to be math and science geniuses.”
Arjun’s first encounter with a different perspective on AI happened during their undergraduate studies at UCLA. “It was in one of my film classes that I first learned about Joy Buolamwini and Timnit Gebru’s work on Gender Shades. We just never learned about that in any of my computer science classes, and I thought that it was crazy. I learned in a film class that all of these technologies work so poorly for marginalized communities.” It took Arjun a while to grapple with these disparities. Working to better integrate marginalized communities into AI development and actually developing AI seemed like incompatible, disjointed pursuits. “I’ve never actually seen examples of that besides Joy’s and Timnit’s work,” Arjun says. “But as I got exposed to that space very slowly, through Queer in AI, through spaces like Twitter, and through the handful of people at UCLA who are interested in that space, that opportunity became more realizable to me.”
Going forward, Arjun used this knowledge to be very intentional about their choice of Ph.D. supervisors. “I am supervised by Kai-Wei Chang and Yizhou Sun, who encourage me to be involved in diversity and inclusion initiatives and who prioritize community work. Kai-Wei has done a lot of work in the bias and fairness space in NLP. He is very comfortable with and encouraging of pushing the boundaries. It’s not only about ‘debiasing’ models, but about inviting and engaging participation from marginalized communities and taking more critical approaches to fairness and bias. I cannot say often enough how happy I am about the support that I receive.” Keeping a balance between qualitative and quantitative methods, between community work and technical work, is important to Arjun. “A large part of my Ph.D. will be bridging historically and socially contextualized fairness work with very hardcore theory, like programming and mathematical work. Some of my work in the fairness and graph space has been using learning theory, spectral graph theory, and other mathematical tools to expose failure modes for these models in terms of fairness. [...] I don’t want to offer a totalizing solution that takes care of all our fairness issues, but at least I can show very rigorously that there is a problem that arises in a certain case, where you have certain data distributions and the model operates in a certain way.”
All of this work ties in neatly with their most recent paper. “I am really excited to talk about this,” says Arjun. The paper came about in collaboration with Anaelia Ovalle and Vagrant Gautam, sparked by an intersectionality workshop that Anaelia attended. “So many tools that intersectionality provides can guide how fairness researchers make decisions. How much do you center justice in your work? Are you going beyond documenting it? [...] Are you grappling with the complexity of people’s identities? Are you acknowledging your context, where you as a researcher are located, and where the populations you are working with are located? Are you talking about the social and historical context? How does that inform your research? [...] These are questions that the fairness community has talked about for a long time. We’ve been somewhat negligent of intersectionality literature, but hopefully contextualizing it within AI will make it more accessible for AI fairness researchers.”
In their paper, Arjun and their co-authors surveyed 30 AI ethics papers using intersectionality principles as a guiding framework, assessing how each paper puts these principles into practice. “Are they pulling from anti-discrimination legislation, are they pulling from social justice, are they citing Black women? [...] There were so many iterations. This was the most I’ve thought critically about a paper in a very long time.” Arjun laughs. If they could wish for one thing in the future, it would be to do more work like this. But they also hope that the paper might inspire a mindset shift that goes beyond literature. “We do so many things in our research work that never make it into a paper, for example, annotation work. I want this to be a framework that guides the full research process. Stop yourself once in a while and ask yourself: Am I advancing justice right now? Am I inviting people at the margins to co-create the work that I am doing? Am I band-aiding power structures and inequalities, or am I addressing the underlying structures that give rise to the problem?”
Addressing these issues is not free from conflict, which is why Queer in AI has been a great asset for Arjun. “I would not have stayed in AI or academia without Queer in AI. I so often felt isolated in my career, not just as a queer person but also as a queer person of color. There is this additional layer: I see very few people like me. I feel like I always question every decision that I make because I can never see other people making the same decisions. It is very isolating.” Queer in AI provided them with a network that brought them research collaborations, internships, and, most importantly, mentors. “It’s so amazing that I can message people on Slack and say: I am in this place and situation, what do you think? And for them to give me the sagest wisdom ever. That’s such a blessing. And now a lot of people come to me for help with different things, too. My best advice: Don’t second-guess yourself so much.”
For the future, Arjun hopes for more international collaborations and to continue the work they have done so far. But they are also wary of what might come after the Ph.D. “I don’t think I want to be a faculty member at a university because it seems extremely toxic. At the same time, I don’t know who is gonna hire me in industry, because it depends on who considers your work valuable.” Before heading off to a hike, Arjun tells me how it wasn’t only autonomous vehicles or friends and family in the tech industry, but also their math teacher who set them on their current career path: “I wasn’t much into mathematics or computer science until I had this one teacher in 9th grade. She was my math teacher, she was so great and very sarcastic, very cynical of everything. She was one of the first people who actively believed in my ability to do math. And I remember one thing that she said and that stuck with me for a while: Any job that involves helping other people doesn’t pay you anything. It’s true. [...] You need to find your special niches, jobs where you can balance the two. Remind yourself that this is something that is intentionally set up: if you want to help people, you won’t get paid, and that pushes people not to help anybody else, in line with individualist capitalism. I have faith that despite how cynical that quote is, there will be spaces for me to balance both.”
You can find more of Arjun’s research and writing here: https://arjunsubramonian.github.io