Seven Ways to Put Intersectionality into Your Research

A humanoid robot stands at an intersection. It wears a pink knitted hat and holds a sign that says "End systemic racism". In the background there are a rainbow and two rainbow flags.

If you are interested in AI ethics, it’s quite likely that the term ‘intersectionality’ has appeared on your radar. Intersectionality seeks to examine and fight the interlocking mechanisms of oppression, and research into how oppression is created informs how to combat it. Building on the collective efforts of feminists of colour involved in the civil rights and social justice movements, Kimberlé Crenshaw coined the term in 1989, and it has since inspired a broad range of academic publications and social practice.

Whether you work in AI fairness or are in the “my model solves this task better than your model” business, intersectionality can guide your decisions. A while ago I talked to both Arjun Subramonian and Vagrant Gautam about tech optimism and the challenges of standing up to it (read the full interviews here and here). Building on the research that Arjun and their wonderful collaborators, Anaelia Ovalle, Vagrant Gautam, Gilbert Gee, and Kai-Wei Chang, have done, here are seven recommendations:

  1. Think beyond subgroup fairness - In their survey of 30 AI ethics papers, the authors show that intersectionality is often equated with subgroup fairness: the question of whether a model performs well on data that combines several underrepresented attributes. While this is one way to study how a model affects people who are marginalised along several axes, intersectionality is not centred on identities. It is a framework that deals with power and inequality. Rather than enumerating possibly meaningless combinations of protected attributes, researchers should bring into focus why specific subgroups are underrepresented and disproportionately targeted. (A minimal sketch of what such a subgroup audit looks like, and where it stops short, follows this list.)

  2. Anti-discrimination laws are not the upper bound - Conforming to anti-discrimination laws makes a model attractive to users who want to be safe from legal repercussions. But this approach turns a blind eye to the social and historical background of such legislation. Anti-discrimination laws frame discrimination as a single act that has a victim and a perpetrator. Seemingly ‘unintentional’ discrimination, like perpetuating already existing harms, is rendered invisible. Those who profit from the status quo are released from the responsibility to change it. And marginalised communities can still be harmed by systems that pass a legal audit.

  3. Read what you cite - The authors found many examples in which AI papers cite literature when referring to intersectionality, but often the cited papers do not actually mention intersectionality! Looking beyond the edges of our own discipline is hard, especially with a looming paper deadline. But if you cite intersectionality, be sure to reference not only computer science papers but also the law, sociology, and social science papers (often authored by Black women) in which the terms were first defined. A good starting point is, for example, (1).

  4. Don’t oversimplify - When you use statistical methods to infer group attributes that are missing in your data, state both the statistical and the social assumptions you make. Why is the data missing? Do your assumptions themselves create or reproduce inequalities? Exercise intellectual vigilance. (A sketch of how to write such assumptions down follows this list.)

  5. There are things that AI systems shouldn’t do (no matter how ‘fairly’ they do it) - Predicting crime or predicting a person's gender or sexual orientation are applications that are harmful in and of themselves. Moreover, they build on false assumptions, e.g. that gender and sexuality are fixed attributes that can be observed out of context or inferred from pictures of people, the texts they write, or the products they buy.

  6. Justice doesn’t stop at the lab door - Not many papers pair their contributions with social actions. Are there ways in which you can make model development more participatory or transparent? One example of participatory and justice-centred research is (2).

  7. Keep yourself in the equation - In the narrative conventions of AI papers, the Western context is positioned as the default; only deviations from it are explicitly stated. But even when this context is acknowledged, being in the global north shouldn't be stated as a blanket limitation of the model and hidden in a caveats section at the end of the paper. Social context influences the whole development process. Acknowledge the power you have: your goals and values get encoded in the system design. Make your choices explicit and be willing to change and iterate on them. (A lightweight example of such a context statement is sketched after this list.)
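
To make recommendation 1 a bit more concrete, here is a minimal sketch (mine, not the authors' method) of the kind of subgroup audit the survey describes. It reports a metric for every combination of protected attributes, which is useful as far as it goes, but it can only tell you that a subgroup is served worse, never why. All column and attribute names are hypothetical.

```python
# A minimal sketch, not the authors' method: per-subgroup accuracy over every
# observed combination of protected attributes. The column names ("gender",
# "race", "label", "prediction") are hypothetical placeholders.
import pandas as pd

def subgroup_accuracy(df: pd.DataFrame, attrs: list[str]) -> pd.DataFrame:
    """Accuracy and sample size for each combination of the given attributes."""
    rows = []
    for values, group in df.groupby(attrs, observed=True):
        values = values if isinstance(values, tuple) else (values,)
        rows.append({
            **dict(zip(attrs, values)),
            "n": len(group),  # tiny groups give very unstable estimates
            "accuracy": (group["prediction"] == group["label"]).mean(),
        })
    return pd.DataFrame(rows).sort_values("accuracy")

# Toy usage: the numbers say *that* a subgroup is served worse, not *why*.
df = pd.DataFrame({
    "gender":     ["woman", "woman", "man", "man", "woman", "man"],
    "race":       ["Black", "white", "Black", "white", "Black", "Black"],
    "label":      [1, 0, 1, 1, 0, 0],
    "prediction": [0, 0, 1, 1, 1, 0],
})
print(subgroup_accuracy(df, ["gender", "race"]))
```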
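
For recommendation 4, here is a hedged sketch of one way to keep assumptions visible when inferring a missing group attribute from a proxy. The postcode proxy and all column names are illustrative assumptions on my part, not an endorsement of doing this inference in the first place.

```python
# A sketch under stated assumptions: impute a missing 'group' attribute from a
# hypothetical postcode proxy, and write the assumptions down next to the code.
import pandas as pd

ASSUMPTIONS = {
    "statistical": "The attribute is missing at random given postcode, and the "
                   "postcode-to-group mapping matches the population the model serves.",
    "social": "Postcodes correlate with group membership partly because of historical "
              "segregation; imputing from them can re-encode that very inequality.",
    "missingness": "Ask why the attribute is missing (refusal? safety? survey design?) "
                   "before trusting any imputed value.",
}

def impute_group_from_postcode(df: pd.DataFrame,
                               postcode_to_group: dict[str, str]) -> pd.DataFrame:
    """Fill missing 'group' values from a postcode proxy, flagging every imputed row."""
    out = df.copy()
    missing = out["group"].isna()
    out.loc[missing, "group"] = out.loc[missing, "postcode"].map(postcode_to_group)
    out["group_was_imputed"] = missing  # keep the uncertainty visible downstream
    return out
```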
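
And for recommendation 7, one lightweight (and entirely illustrative) option is to ship a small structured context statement with your model instead of a one-line caveat. The fields and their contents below are made-up examples, not a standard.

```python
# An illustrative context statement (not a standard format): record where, by whom,
# and under which values a system was built, so those choices can be contested later.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ContextStatement:
    developed_where: str
    developed_by: str
    whose_values: str
    design_choices: list[str] = field(default_factory=list)
    known_limits: list[str] = field(default_factory=list)

statement = ContextStatement(
    developed_where="University lab in the global north; English-language data only.",
    developed_by="Small research team without members of the most affected communities.",
    whose_values="Annotation guidelines written by the authors; disagreements settled by majority vote.",
    design_choices=["Gender is a free-text, self-reported field and is never inferred."],
    known_limits=["Subgroup metrics are unstable for groups with fewer than 50 examples."],
)
print(json.dumps(asdict(statement), indent=2))
```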


Go here for more research and writing by Vagrant Gautam, Arjun Subramonian, and Anaelia Ovalle.


A picture of a white person wearing a blue and white patterned shirt

This post was written by Sabine Weber. Sabine is a queer person who just finished their PhD at the University of Edinburgh. They are interested in multilingual NLP, AI ethics, science communication, and art. They organized Queer in AI socials and were one of the Social Chairs at NAACL 2021. You can find them on Twitter as @multilingual_s.
