Queer in AI @ NeurIPS 2024!

Important Dates

Poster Session:

Tuesday, December 10 - 6:00 - 7:30 PM

West Ballroom D, Vancouver Convention Centre

Workshop:

Wednesday, December 11 - 10:00 AM - 5:00 PM

West - Meeting Room 202-204, Vancouver Convention Centre

Social:

Thursday, December 12 - 8:00 PM onwards

The Metropole Community Pub, 320 Abbott St, Vancouver, V6B 2K9

Please fill out this check-in form.

🌈 Mission

Queer in AI’s workshop and socials at NeurIPS 2024 aim to act as a gathering space for queer folks to build community and solidarity while enabling participants to learn about key issues and topics at the intersection of AI and queerness.

  • Submission topics

    We encourage submissions about the intersection of AI and queerness, as well as research conducted by queer individuals.

    Submission formats

    We welcome submissions of various formats, including but not limited to research papers, extended abstracts, position papers, opinion pieces, surveys, and artistic expressions.

    Submission perks

    Authors of accepted works will be invited to present their work at the Queer in AI workshop at the NeurIPS 2024 conference. We usually aim to provide complimentary registrations, but we cannot confirm whether we can do so this year until we have more in-depth discussions with NeurIPS. Stay tuned for more details and confirmation of what we can or cannot provide this year.

  • If you will need a visa to present in-person at NeurIPS 2024 (being held in Vancouver, Canada), please aim to submit by our visa-friendly submission deadline of Friday, August 30 (Anywhere on Earth).

    Unfortunately, the process of obtaining a visa can be long and arduous, so Queer in AI appreciates as much advance notice as possible so we can do our best to ensure that any presenters who need a visa have the time, resources, and support to obtain one. For questions about this, feel free to reach out to us via email (check our contact page for more details).

  • Submit your work to our workshop via OpenReview here.

    For folks who submit by our visa-friendly deadline (Fri, August 30) and require visas to present in-person, notifications of acceptance or denial will be sent out by Sat, September 7 (Anywhere on Earth).

    For folks who submit by our final deadline (Mon, October 7), notifications of acceptance or denial will be sent out by Wed, Oct 30 (Anywhere on Earth).

Tentative Schedule

10:00 AM - 10:30 AM OPENING TALK - Introduction to Queer in AI

10:30 AM - 11:00 AM SPEAKER: Designing Technology for Gender Transition - Tee Chuanromanee

Tee Chuanromanee is a researcher focusing on using technology to support gender transition and investigating the ways that normative transition narratives are embedded into technology. They received their PhD in Computer Science and Engineering at the University of Notre Dame. Deeply involved in the LGBTQ HCI community, they served as an organizer for the CHI 2021 Special Interest Group for Queer in HCI and co-organizer for the Trans/Queer in HCI Mentoring Program from 2021-2023. They are presently a Human Factors Engineer at Southwest Airlines in Dallas, Texas, working to improve procedures and safety for airline employees.

11:00 AM - 12:00 PM PANEL: Queer Creative Storytelling - Ashrita Kumar, Léa Demeule, Dr. Theresa Jean Tanenbaum

Ashrita Kumar (they/them) is an artist, activist, and lead singer of the Baltimore-based punk band Pinkshift.

Léa Demeule is a PhD student in Artificial Intelligence at Mila and University of Montreal; she is also an interdisciplinary artist. Her academic research explores fundamental questions in the interplay between discrete and continuous neural representations. She aims to develop methods that promote more careful use of computational resources and serve a socially beneficial purpose. Her art spans visual media, auditory media, web media and interactive media. Léa is especially interested in leveraging her position to curiously and critically engage with artificial intelligence and its ramifications.

Dr. Theresa Jean Tanenbaum (“Tess” – she/fae) is a songwriter, scholar, speaker, poet, performer, storyteller, game designer, artist, activist, and practicing witch. Fae recently left a tenured position as an Associate Professor in the Department of Informatics at UC Irvine, where she was a founding member of the Transformative Play Lab. Her most recent book on Playful Wearable Technologies, co-authored with Katherine Isbister, Elena Marquez-Segura, Ella Dagan, and Oguz Burak, was released by The MIT Press in early 2024. Her most recent album, Emotional Regulation, released under the artist name Moth Mother in October 2024, was written in response to the proliferation of anti-LGBTQ+ policies around the globe. Tess’s current work is informed by faer intersecting identities as a queer, Jewish, polyamorous, disabled, neurodivergent, transgender woman living in the rural Midwest, where she owns and operates Moth Mother Studios. On any given day fae might be found writing (music, poetry, games, speculative fiction, memoir, or scholarship), working with faer hands, caring for animals, throwing pots, practicing circus arts, or hanging spooky mobiles from tree branches in her woods. Having cut herself free from the institutional demands of academia, she is increasingly disinterested in boundaries, boxes, or categories when it comes to what she creates. The project currently animating faer is The Transition Diary: an autobiographical musical that fae wrote about marriage, gender transition, and self-discovery during the COVID-19 pandemic. Her forthcoming game, Alchemist's Ink, is a handcrafted boutique analog gaming experience for four players that combines ritual, gameplay, theater, and narrative. Players draw magical tattoos on themselves and each other using a black walnut ink that she brewed on the winter solstice out of nuts that she foraged from her land. Dr. Tanenbaum has been instrumental in helping create new, more inclusive policies within the academic publishing world that make it possible for people to correct their names on previously published scholarship. In 2020 she co-founded the Name Change Policy Working Group to support other transgender people in advocating for inclusive identity policies within publishing and beyond. She has worked with the Committee on Publication Ethics (COPE), the ACM, SAGE, Springer, Taylor & Francis, Elsevier, and many other publishers to develop identity practices in publishing that safeguard the privacy of transgender authors seeking to update their scholarly records to reflect their correct names. Although she is no longer at UC Irvine, she continues to write, publish, and speak on her areas of scholarship. She will be returning to the classroom this spring as an Instructional Assistant Professor at Illinois State University in the School of Creative Technologies.

12:00 PM - 1:00 PM BREAK: lunch time!

1:00 PM - 2:00 PM PANEL: Participatory AI and Data Governance - A. Feder Cooper, Anaelia Ovalle, Bernardo Fico, Irene Solaiman, Serena Oduro

A. Feder Cooper - I am a co-founder of the GenLaw Center and future professor of computer science at Yale (starting 2026). Until then, I am a postdoctoral researcher at Microsoft Research and a postdoctoral affiliate at Stanford. I work on reliable measurement and evaluation of machine learning systems. My contributions span uncertainty estimation, privacy and security of generative-AI systems, distributed training, hyperparameter optimization, and model selection. I also do work in tech policy and law, and spend a lot of time finding ways to effectively communicate the capabilities and limits of AI/ML to interdisciplinary audiences and the public.

Anaelia (Elia) Ovalle is a recent addition to the AI & Society team at Meta FAIR and CS PhD graduate from UCLA, where they studied inclusive NLP and AI ethics advised by Prof. Kai-Wei Chang. Elia's research bridges algorithmic fairness with critical social theory to address AI-driven sociotechnical harms. Through their work, they uncovered 1) mechanisms by which gender-diverse biases are systematically encoded in language models (both pretrained and preference-finetuned LLMs) and 2) technical design choices which can encode real-world social harms. They have also contributed to AI Policy for Gender Equality and Diversity with the Global Partnership on Artificial Intelligence (GPAI) and co-led three workshops at ACM FAccT, centering community participation to explore how tensions between AI development and collective practices can be navigated to preserve human connection and address community needs.

Bernardo Fico holds a Bachelor's in Law from the University of São Paulo, a Master's degree in International Human Rights Law from the Northwestern Pritzker School of Law (USA), and a specialization in Digital Law from UERJ; he is certified by the International Association of Privacy Professionals (IAPP) and is a researcher in the areas of technology and human rights. He also holds concentration and continuing-education certificates in Data Protection (FGV-SP), Digital Law Practice (FGV-SP), Media Policy (Annenberg-Oxford), International Law (OAS), Human Rights (Stanford and Luzern), and LGBTQIAP+ Rights (CLACSO). In addition to his training in technology, Bernardo has extensive international experience, particularly with human rights and diversity: he has worked at the Inter-American Court of Human Rights advising on the drafting of sentences and provisional measures, at the International Lesbian and Gay Association (ILGA), and at the Bluhm Legal Clinic defending social minorities with a focus on strategic litigation in international forums such as the United Nations.

Irene Solaiman is an AI safety and policy expert. She is Head of Global Policy at Hugging Face, where she conducts social impact research and leads public policy. Irene serves on the Partnership on AI's Policy Steering Committee, the Center for Democracy and Technology's AI Governance Lab Advisory Committee, and Aspen Digital's AI Elections Advisory Council. Irene advises responsible AI initiatives at the OECD and IEEE. Her research includes AI value alignment, responsible releases, and combating misuse and malicious use. Irene was named to MIT Technology Review's 35 Innovators Under 35 in 2023 for her research.

Serena Oduro’s work as a senior policy analyst at Data & Society is driven by her dedication to realizing an AI ecosystem that truly benefits us all. Serena leads and manages Data & Society’s involvement in NIST’s US AI Safety Institute and other projects that advance a sociotechnical and rights-forward approach to AI governance. Previously, Serena was a technology equity fellow at The Greenlining Institute, where she provided key support for Greenlining’s sponsorship of the Automated Decision Systems Accountability Act of 2021. Serena is also passionate about analyzing and reconstructing the AI ecosystem through a Black feminist lens; her work on this can be found in the AI Now Institute's A New AI Lexicon series and in the book Fake AI published by Meatspace Press.

2:00 PM - 3:00 PM Co-working Session #1: Participatory AI and Data Governance

3:00 PM - 3:30 PM Sponsor Booth Networking Session

3:30 PM - 4:30 PM Co-working Session #2: Participatory AI and Data Governance

4:30 PM - 4:45 PM SPEAKER: Queer Producing in Latin America - Be Zilberman

A presentation exploring artistic and tech projects designed by and for the queer community in Brazil and Latin America, highlighting the intersection of creativity, innovation, and activism within queer spaces.

Be Zilberman is a non-binary, neurodivergent, and disabled multi-artist and the founder of Purpurina Films and Productions, which produces films, shows, and festivals aimed at LGBTQIA+ and disabled audiences. They hold a degree in Audiovisual Arts from the University of São Paulo and recently completed a Master's in Documentary Filmmaking at Goldsmiths, University of London. Currently, Be is finishing a second degree in Computer Science, also at the University of São Paulo, with a focus on AI and diversity.

4:45 PM - 5:00 PM Closing Remarks

Accepted Submissions

  • Embracing Queer and Crip Complexity in Machine Learning: Dirty Resilience and Sweaty AI
    Gopinaath Kannabiran, Sacha Knox

  • Gender Trouble in Word Embeddings: A Critical Audit of BERT Guided by Gender Performativity Theory
    Franziska Sofia Hafner

  • The Queer Algorithm
    Guillaume Chevillon

  • OPA: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning

    Harish Karthikeyan, Antigoni Polychroniadou

  • Armadillo: Robust Secure Aggregation for Federated Learning with Input Validation
    Yiping Ma, Yue Guo, Harish Karthikeyan, Antigoni Polychroniadou

  • Community Content Moderation
    Jennifer Chien, Aaron Broukhim, Maya Mundell, Andrea Brown, Margaret Roberts

  • Hybrid Context Retrieval Augmented Generation Pipeline: LLM-Augmented Knowledge Graphs and Vector Database for Accreditation Reporting Assistance
    Candace Edwards

  • Depictions of Queer Mental Health by Grok-2
    Declan Grabb

  • The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models
    Anaelia Ovalle, Krunoslav Lehman Pavasovic, Louis Martin, Luke Zettlemoyer, Eric Michael Smith, Kai-Wei Chang, Adina Williams, Levent Sagun

  • Mitigating Bias in Queer Representation within Large Language Models: A Collaborative Agent Approach
    Tianyi Huang, Arya Somasundaram

Queer in AI @ NeurIPS 2024: Organizers

  • Jaidev Shriram (he/him) jkariyatt@ucsd.edu: Jaidev is a second-year MS student in Computer Science at the University of California, San Diego. His research interests lie at the intersection of computer vision and HCI, focusing on interdisciplinary problems. Specifically, he explores the application of modern generative 2D/3D techniques to develop novel user experiences. He has previously helped organize the Queer in AI workshop at NeurIPS 2022 and 2023.

  • Sarthak Arora (he/any) sarthakvarora@gmail.com: Sarthak is a Climate ML researcher working on problems around wildfires, river remediation, and power plant emissions using Computer Vision algorithms. At Queer in AI, Sarthak has helped organize multiple workshops and socials, while also focusing on policy research around AI harms.

  • Megan Richards (she/her) meganrichards.research@gmail.com: Megan is a Computer Science PhD student at New York University Courant Institute for Mathematical Sciences. Her work focuses on reliable machine learning, with the goal of making models more consistent, representative, and fair. 

  • Arjun Subramonian (they/them) arjunsub@cs.ucla.edu: Arjun is a PhD candidate at the University of California, Los Angeles. They research inclusive and critical approaches to graph learning and natural language processing, including fairness, justice, and ethics. They have been a Queer in AI core organizer for four years, organizing workshops and socials at various ML conferences.

  • Sharvani Jha (she/her) sharvanijha@ucla.edu: Sharvani is a software engineer at Microsoft. She got her B.S. in Computer Science from UCLA in 2021. She enjoys queer community building, from founding a queer collegiate hackathon to organizing socials and workshops for Queer in AI.

  • Iheb Belgacem (he/him): Iheb is a Research Engineer at Sony. He earned his Master’s degree in Electrical Engineering from TU Munich and CentraleSupélec. His research focuses on diffusion models and 3D human modeling.

  • Michelle Lin (she/her): Michelle is a Research Assistant and Master's student at the University of Montreal and Mila - Quebec AI Institute. Her research uses deep learning, remote sensing, and computer vision for climate change mitigation applications. At Queer in AI, she helps organize workshops and events.

  • Yanan Long (he/they): Yanan is a research data scientist at the University of Chicago. His research spans applied Bayesian statistics, geometric deep learning, natural language processing, and AI ethics, with a thematic focus on biomedicine and healthcare. At Queer in AI, he has been a core organizer for a year and contributes to organizing workshops at multiple ML/AI conferences.

  • Ruchira Ray (she/they) (website link): Ruchira is an Applied AI Researcher at rStream, where she develops AI systems for on-site waste detection and sorting. Her interests span robotics perception, especially vision and audio, and the social implications of robotics. She has previously worked on audio anti-spoofing at Samsung, controlled data generation at EPFL, and social impacts of robotic automation in households at UT Austin. Ruchira enjoys organising workshops for Queer in AI and Queer in Robotics.

  • Vishal Dey (he/him) dey.78@buckeyemail.osu.edu: Vishal is a Ph.D. candidate in Computer Science and Engineering at The Ohio State University. His research interests primarily include Transfer Learning, Ranking and AI for Science with applications in molecular machine learning and drug discovery. He is passionate about leveraging AI for social good and fostering a more socially inclusive scientific community. He was one of the organizers of the Queer in AI workshops at NeurIPS 2023 and ICML 2024.

  • Bruna Bazaluk (she/her) (bazaluk [at] ime.usp.br): Bruna is a Master's student at the University of São Paulo, in Brazil. She got her B.Sc. in Computer Science from the same institution in 2022. Her research focuses on the intersection between Causal Inference and Large Language Models.

  • Ankush Gupta (he/they) ankushg0405@gmail.com: Ankush is a final-year B.Tech student in Computer Science at IIIT-Delhi. He focuses his research on Human-Centered AI and edge computing. His research involves utilising deep learning and natural language processing to develop algorithms that improve human interaction and mitigate user biases to make AI systems more inclusive and effective.