Queer in AI @ NeurIPS 2023!
Queer in AI’s workshop and socials at NeurIPS 2023 aim to act as a gathering space for queer folks to build community and solidarity while enabling participants to learn about key issues and topics at the intersection of AI and queerness.
Date and Venue
Workshop Date: Monday, December 11, 2023
Location: The in-person portion of the workshop will be held in room R06-R09 at the New Orleans Ernest N. Morial Convention Center. All panels and talks will be held virtually and live-streamed.
In-person Social (Tuesday, Dec 12): Qiqi Bar, 2021 Foucher Street, New Orleans, 5:45 - 9:00 pm
Schedule
09:30 - 09:45 Welcome Address
09:45 - 10:45 Panel on Acephobia
10:45 - 11:15 Talk on ‘Queer Desires and the Indian Arts’
11:15 - 12:00 Sponsor Booth Coffee Break (In-Person)
12:00 - 13:15 Lunch Break
13:15 - 14:00 Panel on Generative AI and Biases
14:00 - 14:30 Talk on ‘Designing Trans Inclusive Medical Machine Learning Datasets and Models’
14:30 - 14:45 Feedback and Closing Remarks
14:45 - 15:30 Sponsor Networking Session (In-Person)
15:30 - Onwards Joint Affinity Poster Session (In-Person)
*All times are in Central Standard Time (GMT-6)
Speakers and Panelists
Talk on ‘Queer Desires and the Indian Arts’
Title: Icon(s) of Resistance: Queer Desires and the Indian Arts (1990-2010)
Speaker: Satyam Yadav (he/him)
Satyam Yadav (b. 1999) recently graduated with a Master's in Gender Studies from SHS, Ambedkar University Delhi. He has previously worked and curated at several art institutions in India. Academically, he is interested in questions of political economy, contemporary arts and art institutions, and visual cultures of sexuality in relation to the wider philosophical traditions of South Asia. He is happiest living and working in Delhi, India.
Talk on ‘Designing Trans Inclusive Medical Machine Learning Datasets and Models’
Title: Designing Trans Inclusive Medical Machine Learning Datasets and Models: Challenges and Opportunities
Speaker: Maggie Delano (they/them)
Maggie Delano is an Assistant Professor of Engineering at Swarthmore College. Their research focuses on the development of inclusive medical technologies, with an emphasis on wearables and machine learning. They have published multiple articles related to gender and machine learning, focusing on how the use of sex and gender-related variables in modern medicine is exclusionary of trans and nonbinary people. They served as a reviewer for the NeurIPS Datasets and Benchmarks track for its first two years. Prof. Delano received their Ph.D. in electrical engineering and computer science with a minor in women's and gender studies from MIT in 2018. Further information about Prof. Delano's work can be found on their website.
Panel on Acephobia
Panelists:
Sarah Cosgriff (she/they)
Sarah works in EDI in education and science communication in the UK. Her asexual activism and advocacy work focuses on raising the visibility of asexuality within STEM sectors. She has done this through social media campaigns, engaging talks and by co-founding Aces in STEM, an internationally reaching digital space for people on the asexual spectrum who work in STEM sectors or study STEM subjects. They also produce and present on the podcast Queer Cuz, a family run podcast which focuses on the experiences of LGBTQIA+ Filipinos.
Yasmin Benoit is a British model, asexual activist, writer, speaker, consultant and researcher at Stonewall. Described as the "unlikely face of asexuality" by Cosmopolitan Magazine, she quickly became a leading voice for the community after publicly coming out in 2017. She started the #ThisIsWhatAsexualLooksLike movement for diverse asexual visibility and representation and co-founded International Asexuality Day (April 6). She won 'Campaigner of the Year' at the Rainbow Honours Awards in 2022, and was the first asexual grand marshal at NYC Pride 2023. This year, she released the UK's first report into asexual experiences and discrimination in partnership with Stonewall, as part of their asexual rights initiative.
Dr. Pragati Singh is a health professional, researcher, social entrepreneur, and an internationally renowned sexual and reproductive health and rights changemaker. She is known for her unique initiatives in niche fields, such as Indian Aces: India’s first initiative working towards asexuality, HumansOfQueer.com: a platform for LGBT+ people’s stories, PanACEa: Asexuality Asia Conference; PLatonicity.co: Matchmaking for nonsexual alliances, and more. She has been recognized globally with numerous awards and prizes, and was also featured in the BBC's list of 100 most inspiring, innovative, and influential women from around the world in 2019. Her works have been published internationally and she is now working on her first book: Asexual Lives. See more at DrPragatiSingh.com.
Ela is Associate Professor and Graduate Director in English and core faculty in Women’s, Gender, and Sexuality Studies at Illinois State University. She is the author of Asexual Erotics: Intimate Readings of Compulsory Sexuality (2019) and Ungendering Menstruation (forthcoming), as well as an editor of On the Politics of Ugliness (2018). Ela is a founding and managing editor of the peer-reviewed, open access journal Feral Feminisms.
Umut Pajaro
Umut Pajaro Velasquez has a BA in Communications and an MA in Cultural Studies. They currently work as a researcher on issues related to digital rights and the ethics and governance of AI, focusing on finding solutions to biases around gender, race, and other forms of diversity that are often excluded or marginalized in the data that feeds these technologies. They chair the Gender Standing Group and coordinate YouthLACIGF and Youth IGF Colombia. They are also an advocate for queer rights and a poet.
Moderator: Hetvi
Hetvi (ze/they) is a PhD student at Imperial College London affiliated with the StatML CDT, and a core organizer at Queer in AI. Their research currently focuses on Bayesian models of mutations in immune cells.
Panel on Generative AI and Biases
Panelists:
Alex Hanna
Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). A sociologist by training, her work centers on the data used in new computational technologies, and the ways in which these data exacerbate racial, gender, and class inequality. She also works in the area of social movements, focusing on the dynamics of anti-racist campus protest in the US and Canada. Dr. Hanna has published widely in top-tier venues across the social sciences, including the journals Mobilization, American Behavioral Scientist, and Big Data & Society, and top-tier computer science conferences such as CSCW, FAccT, and NeurIPS. Dr. Hanna serves as a Senior Fellow at the Center for Applied Transgender Studies, and sits on the advisory board for the Human Rights Data Analysis Group and the Scholars Council for the UCLA Center for Critical Internet Inquiry. She is a recipient of the Wisconsin Alumni Association's Forward Award, has been included on FastCompany's Queer 50 and Go Magazine's Women We Love lists, and has been featured in the Cal Academy of Sciences New Science exhibit, which highlights queer and trans scientists of color. She holds a BS in Computer Science and Mathematics and a BA in Sociology from Purdue University, and an MS and a PhD in Sociology from the University of Wisconsin-Madison.
Morgan Klaus Scheuerman (he/him)
Morgan Klaus Scheuerman is a Postdoctoral Associate in Information Science at University of Colorado Boulder and a 2021 MSR Research Fellow. His research focuses on the intersection of technical infrastructure and marginalized identities. In particular, he examines how gender and race characteristics are embedded into algorithmic infrastructures and how those permeations influence the entire system. His work has received multiple best paper awards and honorable mentions at CHI and CSCW. He earned his MS degree in Human-Centered Computing from University of Maryland Baltimore County and his BA in Communication & Media Studies (Minor Gender & Sexuality Studies) from Goucher College.
Katy Felkner (she/her)
Katy Felkner is a fourth-year PhD candidate at the University of Southern California Information Sciences Institute. Her research focuses on fairness and bias in large language models, with particular emphases on benchmark development and community-engaged methods. She is supported by an NSF Graduate Research Fellowship (GRFP) and advised by Jonathan May. Before USC, Katy graduated summa cum laude from the University of Oklahoma with dual bachelor's degrees in computer science and Letters. After grad school, her goal is to be a professor of computer science, advancing research in AI ethics and fairness, advocating for equitable public policy around AI, and serving as a visible role model and mentor to women and LGBTQ+ students in computer science.
Su Lin Blodgett
Su Lin Blodgett is a researcher in the Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research Montréal. Her research examines the ethical and social implications of language technologies, focusing on the complexities of language and language technologies in their social contexts, and on supporting NLP practitioners in their ethical work. She completed her Ph.D. in computer science at the University of Massachusetts Amherst, and has been named as one of the 2022 100 Brilliant Women in AI Ethics.
Moderator:
Anaelia Ovalle (they/them)
Anaelia Ovalle is an Afro-Caribbean, queer, and non-binary PhD candidate in Computer Science at the University of California, Los Angeles. Advised by Prof. Kai-Wei Chang, Anaelia's research centers the interfacing of ML design, algorithmic fairness objectives, and the broader socio-technical milieu. With particular emphasis on historically marginalized communities, their research synergizes across both algorithmic fairness and critical social theory to mitigate AI-driven sociotechnical harms at two resolutions: (1) inclusive natural language processing and representation learning that center minority populations (e.g., what does it mean to center trans communities in the context of a language model?) and (2) expanding AI ethics research praxis (e.g., what barriers exist to strengthening the connection between AI fairness frameworks and who they are intending to serve?). In their free time, Elia enjoys riding motorcycles to serene remote locations.
Accepted Papers
VALET: Vision-And-LanguagE Testing with Reusable Components
Eric Slyman, Kushal Kafle, Scott Cohen
Fair Machine Learning for Healthcare Requires Recognizing the Intersectionality of Sociodemographic Factors, a Case Study
Alissa Andrea Valentine, Alexander W Charney, Isotta Landi
Gender as a Feature? An Industry Discussion
Luke Bovard
Using Reinforcement Learning Algorithms to Mitigate Gender Bias
Victor Ashioya
Hierarchical Relationships: A New Perspective to Enhance Scene Graph Generation
Bowen Jiang, Camillo Taylor
Exploring GPT-3.5's Counterspeech Detection Abilities Through Fine-Tuning and Few-Shot Prompting
Call for Contributions
Submission link: Openreview
Submission format
Queer in AI is excited to extend a call for contributions to our NeurIPS 2023 workshop! We encourage submissions about the intersection of AI and queerness, as well as research conducted by queer individuals. We welcome submissions of various formats, including but not limited to research papers, extended abstracts, position papers, opinion pieces, surveys, and artistic expressions. Authors of accepted works will be invited to present their work at the Queer in AI workshop at the NeurIPS 2023 conference.
Submission deadlines
All deadlines are Anywhere On Earth (AoE).
Visa-friendly submission deadline: Jul 10 2023
Visa-friendly notification deadline: Jul 20 2023
Final submission deadline: Oct 17 2023 (Deadline Extended)
Final notification deadline: Nov 10 2023
We intend to review submissions on a rolling basis to provide greater flexibility in travel planning.
Submission logistics
The workshop is non-archival - we welcome submissions that have been previously submitted to other conferences + venues, as well as works in progress. No submissions will be desk-rejected.
All submissions must follow the NeurIPS Author Guidelines.
All submissions must be anonymized; please refrain from using personally identifying information in your submission.
All authors with accepted works will have FULL control over how their name will appear in public listings of accepted submissions.
Submissions do NOT need to be in English.
Please submit a PDF of your work. If you are submitting work in a non-traditional format (e.g., art, poetry, music, TikToks), please submit a PDF with an abstract as well as a link to your work.
Note that we CANNOT guarantee full support for those who may need a visa for the conference, but we will try our best to help however we can.
Email queerinai@gmail.com with questions, comments, concerns, or anything else that comes up.
Code of Conduct
Please read the Queer in AI code of conduct, which will be strictly followed at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.
Information about the NeurIPS Safety Team will be added soon. If you need assistance before the conference with matters pertaining to the Code of Conduct or harassment, please contact the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.
Organizers
Jaidev Shriram (he/him) jkariyatt@ucsd.edu: Jaidev is a second-year MS student in Computer Science at the University of California, San Diego. His research interests lie in the intersection of computer vision and HCI, focusing on interdisciplinary problems. Specifically, he explores the application of modern generative 2D/3D techniques to develop novel user experiences. He has previously helped organize the Queer in AI workshop at NeurIPS 2022.
Sarthak Arora (he/him) sarthakvarora@gmail.com: Sarthak is a Statistics graduate from Ramjas College, University of Delhi. His interests lie primarily at the intersection of data science and its application to little-explored avenues of ethics, environment, politics, and art, creating intuitive and impactful models of automation. Currently, he is conducting research on fire risk assessment using AI/ML at UC Berkeley and working on the Climate SDG Project at the AI for Good Foundation.
Yanan Long (he/they) yanan.long439@gmail.com: Yanan is a final-year PhD student in Chemistry at the University of Chicago. Their research interests span multiple areas of AI: Bayesian statistics, geometric deep learning and natural language processing, as well as the application of these tools to problems in the natural sciences and in fairness/ethics. At Queer in AI they help with organizing workshops and socials.
Sharvani Jha (she/her) sharvanijha@ucla.edu: Sharvani is a software engineer at Microsoft. She has a B.S. in computer science from UCLA and worked to co-found QWER Hacks + the outreach initiative of ACM AI @ UCLA. She enjoys the chaotic fun of organizing workshops + coordinating publicity for Queer in AI.
Ruchira Ray (she/her) ruchiraray@utexas.edu: Ruchira is a Master's student in Computer Science at the University of Texas at Austin. Her research interests revolve around multimodal learning, particularly the fusion of vision and speech/audio, and its applications in robotics. Currently, she is researching controlled data generation at the Visual Intelligence and Learning Lab at EPFL. Ruchira enjoys organizing workshops and engaging in outreach efforts for Queer in AI.
Megan Richards (she/her) meganrichards@meta.com: Megan is currently an AI Resident at Meta AI and is based in New York City. Her research focuses on real-world reliability of machine learning systems, including fairness, out-of-distribution generalization, and robustness topics in vision/vision-language models. She also has experience building machine learning systems in healthcare, and is passionate about making AI a more representative and inclusive discipline.
Shujun Xiong (they/them) xiong.jeffrey314@gmail.com: Shujun is an undergraduate at Columbia University. Their work is primarily in theoretical neuroscience and AI, developing explainability tools for both the brain and machine learning models. They are passionate about AI justice and tools to fight black-box algorithmic governance.
Vishal Dey (he/him) dey.78@osu.edu: Vishal is a Ph.D. student in Computer Science and Engineering at The Ohio State University. His research interests primarily include transfer learning and graph representation learning, with a particular focus on molecular machine learning and cheminformatics. He aims to transform and accelerate the drug discovery process using AI. He is passionate about leveraging AI for social good and fostering a more socially inclusive scientific community.
Contact Us
Email: queerinai@gmail.com