Queer in AI @ NeurIPS 2024!

Important Dates

Poster Session:

Tuesday, December 10 - 6:00 - 7:30 PM

Ballroom AB (East) / D (West), Vancouver Convention Centre

Workshop:

Wednesday, December 11 - 10:00 AM - 5:00 PM

West - Meeting Room 202-204, Vancouver Convention Centre

Social:

Thursday, December 12 - 8:00 PM onwards

The Metropole Community Pub, 320 Abbott St, Vancouver, V6B 2K9

Please fill out this check-in form.

🌈 Mission

Queer in AI’s workshop and socials at NeurIPS 2024 aim to act as a gathering space for queer folks to build community and solidarity while enabling participants to learn about key issues and topics at the intersection of AI and queerness.

  • Submission topics

    We encourage submissions about the intersection of AI and queerness, as well as research conducted by queer individuals.

    Submission formats

    We welcome submissions of various formats, including but not limited to research papers, extended abstracts, position papers, opinion pieces, surveys, and artistic expressions.

    Submission perks

Authors of accepted works will be invited to present their work at the Queer in AI workshop at the NeurIPS 2024 conference. We usually aim to provide complimentary registrations, but we cannot confirm whether we can offer them this year until we have more in-depth discussions with NeurIPS. Stay tuned for confirmation of what we can provide.

  • If you will need a visa to present in-person at NeurIPS 2024 (being held in Vancouver, Canada), please aim to submit by our visa-friendly submission deadline of Friday, August 30 (Anywhere on Earth).

Unfortunately, the process of obtaining a visa can be long and arduous, so Queer in AI appreciates as much advance notice as possible so we can try our best to ensure any presenters who need a visa have the time, resources, and support to obtain one. If you have questions about this, feel free to reach out to us via email (check our contact page for more details).

  • Submit your work to our workshop via OpenReview here.

    For folks who submit by our visa-friendly deadline (Fri, August 30) and require visas to present in-person, notifications of acceptance or denial will be sent out by Sat, September 7 (Anywhere on Earth).

    For folks who submit by our final deadline (Mon, October 7), notifications of acceptance or denial will be sent out by Wed, Oct 30 (Anywhere on Earth).

Tentative Schedule

10:00 AM - 10:30 AM OPENING TALK - Introduction to Queer in AI

10:30 AM - 11:00 AM

SPEAKER: Designing Technology for Gender Transition - Tee Chuanromanee

Tee Chuanromanee is a researcher focusing on using technology to support gender transition and investigating the ways that normative transition narratives are embedded into technology. They received their PhD in Computer Science and Engineering from the University of Notre Dame. Deeply involved in the LGBTQ HCI community, they served as an organizer for the CHI 2021 Special Interest Group for Queer in HCI and as co-organizer for the Trans/Queer in HCI Mentoring Program from 2021 to 2023. They are presently a Human Factors Engineer at Southwest Airlines in Dallas, Texas, working to improve procedures and safety for airline employees.

11:00 AM - 12:00 PM

PANEL: Queer Creative Storytelling - Be Zilberman, Léa Demeule

Be Zilberman is a non-binary, neurodivergent, and disabled multi-artist and the founder of Purpurina Films and Productions, which produces films, shows, and festivals aimed at LGBTQIA+ and disabled audiences. They hold a degree in Audiovisual Arts from the University of São Paulo and recently completed a Master's in Documentary Filmmaking at Goldsmiths, University of London. Currently, Be is finishing a second degree in Computer Science, also at the University of São Paulo, with a focus on AI and diversity.

Léa Demeule is a PhD student in Artificial Intelligence at Mila and the University of Montreal; they are also an interdisciplinary artist. Their academic research explores fundamental questions in the interplay between discrete and continuous neural representations. They aim to develop methods that promote more careful use of computational resources and serve a socially beneficial purpose. Their art spans visual, auditory, web, and interactive media. Léa is especially interested in leveraging their position to curiously and critically engage with artificial intelligence and its ramifications. https://leademeule.com/

12:00 PM - 1:00 PM BREAK: lunch time!

1:00 PM - 2:00 PM PANEL: AI and Data Governance - A. Feder Cooper, Bernardo Fico, Serena Oduro

A. Feder Cooper - I am a co-founder of the GenLaw Center and future professor of computer science at Yale (starting 2026). Until then, I am a postdoctoral researcher at Microsoft Research and a postdoctoral affiliate at Stanford. I work on reliable measurement and evaluation of machine learning systems. My contributions span uncertainty estimation, privacy and security of generative-AI systems, distributed training, hyperparameter optimization, and model selection. I also do work in tech policy and law, and spend a lot of time finding ways to effectively communicate the capabilities and limits of AI/ML to interdisciplinary audiences and the public.

Bernardo Fico holds a Bachelor's in Law from the University of São Paulo, a Master's in International Human Rights Law from the Northwestern Pritzker School of Law (USA), and a specialization in Digital Law from UERJ; he is certified by the International Association of Privacy Professionals (IAPP) and is a researcher in the areas of technology and human rights. He has also completed concentration and improvement courses in Data Protection (FGV-SP), Digital Law Practice (FGV-SP), Media Policy (Annenberg-Oxford), International Law (OAS), Human Rights (Stanford and Luzern), and LGBTQIAP+ Rights (Clacso). In addition to his training in technology, Bernardo has extensive international experience in human rights and diversity, having worked at the Inter-American Court of Human Rights advising on the drafting of sentences and provisional measures, as well as at the International Lesbian and Gay Association (ILGA) and the Bluhm Legal Clinic, defending social minorities with a focus on strategic litigation in international forums such as the United Nations.

2:00 PM - 3:00 PM COWORKING SESSION #1: AI and Data Governance

3:00 PM - 3:30 PM BREAK: coffee / snacks!

3:30 PM - 4:30 PM COWORKING SESSION #2: AI and Data Governance

4:30 PM - 5:00 PM CLOSING TALK

Accepted Submissions

  • Embracing Queer and Crip Complexity in Machine Learning: Dirty Resilience and Sweaty AI
    Gopinaath Kannabiran, Sacha Knox

  • Gender Trouble in Word Embeddings: A Critical Audit of BERT Guided by Gender Performativity Theory
    Franziska Sofia Hafner

  • The Queer Algorithm
    Guillaume Chevillon

  • OPA: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
    Harish Karthikeyan, Antigoni Polychroniadou

  • Armadillo: Robust Secure Aggregation for Federated Learning with Input Validation
    Yiping Ma, Yue Guo, Harish Karthikeyan, Antigoni Polychroniadou

  • Community Content Moderation
    Jennifer Chien, Aaron Broukhim, Maya Mundell, Andrea Brown, Margaret Roberts

  • Hybrid Context Retrieval Augmented Generation Pipeline: LLM-Augmented Knowledge Graphs and Vector Database for Accreditation Reporting Assistance
    Candace Edwards

  • Depictions of Queer Mental Health by Grok-2
    Declan Grabb

  • The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models
    Anaelia Ovalle, Krunoslav Lehman Pavasovic, Louis Martin, Luke Zettlemoyer, Eric Michael Smith, Kai-Wei Chang, Adina Williams, Levent Sagun

  • Mitigating Bias in Queer Representation within Large Language Models: A Collaborative Agent Approach
    Tianyi Huang, Arya Somasundaram

Queer in AI @ NeurIPS 2024: Organizers

  • Jaidev Shriram (he/him) jkariyatt@ucsd.edu: Jaidev is a second-year MS student in Computer Science at the University of California, San Diego. His research interests lie in the intersection of computer vision and HCI, focusing on interdisciplinary problems. Specifically, he explores the application of modern generative 2D/3D techniques to develop novel user experiences. He has previously helped organize the Queer in AI workshop at NeurIPS 2022 and 2023.

  • Sarthak Arora (he/any) sarthakvarora@gmail.com: Sarthak is a Climate ML researcher working on problems around wildfires, river remediation, and power plant emissions using Computer Vision algorithms. At Queer in AI, Sarthak has helped organize multiple workshops and socials, while also focusing on policy research around AI harms.

  • Megan Richards (she/her) meganrichards.research@gmail.com: Megan is a Computer Science PhD student at New York University Courant Institute for Mathematical Sciences. Her work focuses on reliable machine learning, with the goal of making models more consistent, representative, and fair. 

  • Arjun Subramonian (they/them) arjunsub@cs.ucla.edu: Arjun is a PhD candidate at the University of California, Los Angeles. They research inclusive and critical approaches to graph learning and natural language processing, including fairness, justice, and ethics. They have been a Queer in AI core organizer for four years, organizing workshops and socials at various ML conferences.

  • Sharvani Jha (she/her) sharvanijha@ucla.edu: Sharvani is a software engineer at Microsoft. She got her B.S. in Computer Science from UCLA in 2021. She enjoys queer community building, from founding a queer collegiate hackathon to organizing socials and workshops for Queer in AI.

  • Iheb Belgacem (he/him): Iheb is a Research Engineer at Sony. He earned his Master’s degree in Electrical Engineering from TU Munich and CentraleSupélec. His research focuses on diffusion models and 3D human modeling.

  • Michelle Lin (she/her): Michelle is a Research Assistant and Masters student at the University of Montreal and Mila - Quebec AI Institute. Her research uses deep learning, remote sensing, and computer vision for climate change mitigation applications. At Queer in AI, she helps organize workshops and events.

  • Yanan Long (he/they): Yanan is a research data scientist at the University of Chicago. His research spans applied Bayesian statistics, geometric deep learning, natural language processing, and AI ethics, with a thematic focus on biomedicine and healthcare. At Queer in AI, he has been a core organizer for a year and helps organize workshops at multiple ML/AI conferences.

  • Ruchira Ray (she/they) (website link): Ruchira is an Applied AI Researcher at rStream, where she develops AI systems for on-site waste detection and sorting. Her interests span robotics perception, especially vision and audio, and the social implications of robotics. She has previously worked on audio anti-spoofing at Samsung, controlled data generation at EPFL, and the social impacts of robotic automation in households at UT Austin. Ruchira enjoys organizing workshops for Queer in AI and Queer in Robotics.

  • Vishal Dey (he/him) dey.78@buckeyemail.osu.edu: Vishal is a Ph.D. candidate in Computer Science and Engineering at The Ohio State University. His research interests primarily include Transfer Learning, Ranking and AI for Science with applications in molecular machine learning and drug discovery. He is passionate about leveraging AI for social good and fostering a more socially inclusive scientific community. He was one of the organizers of the Queer in AI workshops at NeurIPS 2023 and ICML 2024.

  • Bruna Bazaluk (she/her) (bazaluk [at] ime.usp.br): Bruna is a Master's student at the University of São Paulo, in Brazil. She got her B.Sc. in Computer Science from the same institution in 2022. Her research focuses on the intersection between Causal Inference and Large Language Models.

  • Ankush Gupta (he/they) ankushg0405@gmail.com: Ankush is a final-year B.Tech student in Computer Science at IIIT-Delhi. His research focuses on Human-Centered AI and edge computing, utilizing deep learning and natural language processing to develop algorithms that improve human interaction and mitigate user biases, making AI systems more inclusive and effective.