Queer in AI and {Dis}ability in AI Workshop @ ICML 2024
Queer in AI and {Dis}ability in AI are holding a joint workshop and socials at ICML 2024. We aim to serve as a gathering space for queer and/or disabled folks to build community and solidarity, while enabling participants to learn about key issues and topics at the intersection of AI, disability, and queerness.
Date and Venue
Workshop Date: Monday, July 22, 2024
Location: The in-person portion of the workshop will be held in Stolz 2 at the Messe Wien Exhibition Congress Center, Vienna, Austria. All panels and talks will also be live-streamed.
Affinity Joint Poster Session: The poster session will be held jointly across all affinity groups from 4:00 pm to 5:30 pm in the foyer outside of Stolz 1-3.
In-Person Social:
Queer in AI is hosting an in-person social at ICML 2024! To join, please fill out this check-in form.
When: Monday, July 22, 20:00 CEST
Where: Felixx, Gumpendorfer Str. 5, 1060 Wien, Austria. There is outdoor seating. Queer in AI has not exclusively reserved the space. There will not be a drink ticket system, and Queer in AI will not reimburse drinks.
Schedule
Time | Event |
---|---|
9:30 AM | Welcome Address |
9:45 AM | Panel: Challenges of AI in HCI for Queer/Disabled Communities |
10:30 AM | Invited Talk: Queerness and Semiparametric Credible Inference |
11:15 AM | Networking Q/A |
11:30 AM | Sponsorship Networking |
12:00 PM | Lunch Break |
1:15 PM | Oral Presentations - 6 papers |
2:45 PM | Coffee Break |
3:15 PM | Oral Presentations - 2 papers |
3:45 PM | Closing Address |
3:50 PM | Sponsor Networking |
4:00 PM | Affinity Joint Poster Session |
Program Lineup
Panel: Challenges of AI in HCI for Queer/Disabled Communities
Panelists:
Robin Angelini (he/him) is a Deaf HCI PreDoc Researcher at TU Wien advised by Katta Spiel. His research revolves around the convergence of deaf technology, critical access, and emerging technologies. Specifically, he's interested in the relationship between the design and development of emerging technologies and their acceptance by the deaf community. In doing so, his research is dedicated to exploring deaf-centered approaches to technologies that have previously been designed without the wants and needs of the deaf community in mind.
Katta Spiel (they/them) is an Assistant Professor for 'Critical Access in Embodied Computing' at TU Wien. They research marginalised perspectives on embodied computing through a lens of Critical Access. Their work informs design and engineering supporting the development of technologies that account for the diverse realities they operate in. In their interdisciplinary collaborations with disabled, neurodivergent and/or nonbinary peers, they conduct explorations of novel potentials for designs, methodologies and innovative technological artefacts.
Vagrant Gautam (xe/they) is a computer scientist, linguist and birder. They are currently a computer science PhD candidate at Saarland University, where they work on measuring and improving the robustness of natural language processing (NLP) systems. They are interested in both the social and technical aspects of NLP, especially how NLP systems affect the lives of brown trans people like them. In their free time, they think a lot about their special interest: birds.
Naomi Saphra-Jones (they/she) is a research fellow at the Kempner Institute at Harvard University. They are interested in NLP training dynamics: how models learn to encode linguistic patterns or other structure and how we can encode useful inductive biases into the training process. Previously, they earned a PhD from the University of Edinburgh on Training Dynamics of Neural Language Models; worked at NYU, Google and Facebook; and attended Johns Hopkins and Carnegie Mellon University. Outside of research, they play roller derby under the name Gaussian Retribution, perform standup comedy, and shepherd disabled programmers into the world of code dictation.
Invited Talk: Queerness and Semiparametric Credible Inference
In this invited talk, Nathan will explain how his queerness influenced his academic career and shaped his research. He will talk about how concepts of queer experience and expression, such as intersectionality, fluidity, self-discovery, and gender/sex galaxies, map to technical concepts emphasized in his research, such as semiparametrics, non-stationarity, learning, and robustness / credibility / partial identification.
Speaker:
Nathan Kallus is an Associate Professor at the Cornell Tech campus of Cornell University in NYC and a Research Director at Netflix. Nathan's research interests include causal inference especially when combined with machine learning, the statistics of optimization under uncertainty, sequential and dynamic decision making, and algorithmic fairness. Nathan is a proud gay dad and uses he/him pronouns.
Twitter: @nathankallus Website: nathankallus.com
Oral Presentations
We have a very exciting lineup of oral presentations from the authors of accepted papers.
1:15-1:30 pm Color is a Third Wheel in Shape-Texture Bias in Vision Transformers. Vatsala Nema
1:30-1:45 pm Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words. <Anonymous>
1:45-2:00 pm Improving Location Awareness for Vision-Disabled People using Video Question Answering and Multi-modal Machine Learning. Ankit Gupta
2:00-2:15 pm Evaluating Anti-LGBTQIA+ Medical Bias in Large Language Models. Crystal Tin-Tin Chang and Neha Srivathsa.
2:15-2:30 pm DynaGraph: Dynamic Contrastive Graph for Interpretable Multi-label Prediction using Time-Series EHR Data. <Anonymous>
2:30-2:45 pm Leveraging Intelligent Tutoring Systems to Enhance Queer Art Representation and Learning. <Anonymous>
3:15-3:30 pm Creating MEDBASSC: A Miniature Evaluation Dataset For Testing Biases Against Same-Sex Couples. <Anonymous>
3:30-3:45 pm Bird's Eye View Based Pretrained World Model for Visual Navigation. <Anonymous>
Accepted Papers
Improving Location Awareness for Vision-Disabled People using Video Question Answering and Multi-modal Machine Learning. Ankit Gupta
Color is a Third Wheel in Shape-Texture Bias in Vision Transformers. Vatsala Nema, Vineeth N. Balasubramanian
Leveraging Intelligent Tutoring Systems to Enhance Queer Art Representation and Learning
Bird's Eye View Based Pretrained World Model for Visual Navigation
Evaluating Anti-LGBTQIA+ Medical Bias in Large Language Models. Crystal Tin-Tin Chang, Neha Srivathsa, Charbel Bou-Khalil, Akshay Swaminathan, Mitchell R. Lunn, Kavita Mishra, Roxana Daneshjou, Sanmi Koyejo
Beyond Binary Gender: Evaluating Gender-Inclusive Machine Translation with Ambiguous Attitude Words
Creating MEDLGBTQ+: A Miniature Evaluation Dataset for LGBTQ+-Themed Images
DynaGraph: Dynamic Contrastive Graph for Interpretable Multi-label Prediction using Time-Series EHR Data
Call for Contributions
We are excited to announce our call for contributions for the Queer in AI and {Dis}ability in AI Workshop at ICML 2024. We are accepting research papers, extended abstracts, position papers, opinion pieces, surveys, and artistic expressions on issues of queerness and/or disability in machine learning and artificial intelligence. We also welcome contributions about general topics in artificial intelligence and machine learning authored by queer and/or disabled folks. Accepted contributions will be invited to present at the Queer in AI workshop during ICML 2024.
This workshop is non-archival, and we welcome work that has been previously published in other venues, as well as work-in-progress.
Submissions
Submission is electronic, using the OpenReview platform. All submissions should be anonymized. Please refrain from including personally identifying information in your submission.
All authors with accepted work will have FULL control over how their name appears in public listings of accepted submissions.
Formatting
You can either submit your work in research paper format or non-traditional format and media. Submissions need NOT be in English. This is to maximize the inclusivity of our call for submissions and amplify non-traditional expressions of what it means to be Queer in AI. There are no page limits.
Research paper format: If possible, please format the paper using the official ICML style files for LaTeX (available here). If using another typesetting system, this requirement is waived.
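For authors new to the template, a minimal skeleton using the ICML style files might look like the sketch below. Macro and package names here assume the official icml2024 template; please check the README bundled with the downloaded style files for the exact interface and required author/affiliation commands.

```latex
\documentclass{article}
% For the anonymized submission, load the package without options;
% camera-ready versions typically use \usepackage[accepted]{icml2024}.
\usepackage{icml2024}

% Short title shown in the running header
\icmltitlerunning{Short Running Title}

\begin{document}

\twocolumn[
\icmltitle{Full Paper Title}
% Authors are anonymized for review
\begin{icmlauthorlist}
\icmlauthor{Anonymous}{inst}
\end{icmlauthorlist}
\icmlaffiliation{inst}{Anonymous Institution}
\vskip 0.3in
]

\begin{abstract}
Your abstract here.
\end{abstract}

\section{Introduction}
Your paper here.

\end{document}
```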
Non-research format: For this format, you can submit your work in the form of art, poetry, music, microblogs, TikToks, or videos. Please upload a PDF containing a summary or abstract of your work and a link to your work.
Important Dates
All deadlines are Anywhere on Earth.
Submissions open: April 15th
Submission deadline: June 21st (Extended)
Acceptance Notification: June 25th (Rolling review)
Camera-ready submissions due from accepted authors: July 5th
Workshop: July 22nd
Code of Conduct
Please read the Queer in AI code of conduct, which will be strictly followed at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.
Please also refer to the ICML 2024 code of conduct and the Queer in AI Anti-harassment policy. Any participant who experiences harassment or hostile behavior may contact the ICML Diversity and Inclusion co-chairs or the Conference HR Liaison, or contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.
Organizers
Vishal Dey (he/him) dey.78@osu.edu: Vishal is a Ph.D. student in Computer Science and Engineering at The Ohio State University. His research interests primarily include AI for Science and transfer learning, focusing on molecular machine learning and drug discovery. He is passionate about leveraging AI for social good and fostering a more socially inclusive scientific community. He was one of the organizers of the Queer in AI workshop, NeurIPS 2023. He is actively involved in multiple DEI efforts as part of oSTEM outreach.
Maximilian Vötsch (they/them) maximilian.voetsch@univie.ac.at: Max is a Ph.D. student in Computer Science at the University of Vienna. Their research is mainly on unsupervised learning and optimization methods for ML. At Queer in AI, they help organize workshops and social events.
Michelle Lin (she/her) michelle.lin2@mail.mcgill.ca: Michelle is a Research Assistant and recent graduate of McGill University and Mila-Quebec AI Institute. Her research uses deep learning, remote sensing, and computer vision for climate change mitigation applications. At Queer in AI, she helps organize workshops and events.
Yanan Long (he/they) ylong@uchicago.edu: Yanan is a Research Scientist at the University of Chicago. Their research interests span applied Bayesian statistics, geometric deep learning, and the philosophy of AI, as well as their consequences for Healthcare. At Queer in AI, they are a core organizer and have helped organize workshops at NeurIPS, ICML, and ACL.
Contact Us
Please reach out to us with feedback and questions at queerinai@gmail.com.