Queer in AI @ ACL 2023

Queer in AI is hosting an affinity workshop (hybrid with virtual attendance) at ACL 2023 in Toronto, Canada.

Dates and Venue

Queer in AI Workshop: Sunday, July 9, 9am - 6pm EDT, Pier 9, The Westin Harbour Castle

Informal Social: Monday, July 10, 7pm EDT, O'Grady's on Church, 518 Church St, Toronto, ON

Lunch Social: Wednesday, July 12, 12:30pm - 2pm EDT, Dockside 3, The Westin Harbour Castle

Schedule

  • 9:00am - 9:30am: Introduction to Queer in AI and Initiatives

  • 9:30am - 10:30am: Panel: Labor Organizing and Queer Activism in Academia and Tech

  • 10:30am - 10:45am: Coffee break

  • 10:45am - 11:30am: Paper presentations (in person):

    • “ChatGPT, how do I know if I’m queer?” The Opportunities of Personalized AI Sex and Queer Educators and Supporters. Presenter: Nitay Calderon

    • Happy, #horny, and valid: A keyness analysis of bisexual discourses on Twitter. Presenter: Chloe Willis

    • Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models. Presenter: Eddie Ungless

  • 11:30am - 12:30pm: Keynote: Juan Vásquez on the state of NLP in Latin America

  • 12:30pm - 2:00pm: Lunch break

  • 2:00pm - 3:00pm: Paper presentations (virtual):

    • Queer People are People First: Deconstructing Sexual Identity Stereotypes in Large Language Models. Presenter: Harnoor Dhingra

    • Critical Technopolicy and Reflexivity For Proactive Gender-Inclusive NLP. Presenter: Alicia Boyd

    • Gender-Fair Post-Editing: A Case Study Beyond the Binary. Presenter: Manuel Lardelli

  • 3:00pm - 4:00pm: Keynote: Lex Konnelly on non-binary language, healthcare, and AI

  • 4:00pm - 4:30pm: Coffee break with sponsors

  • 4:30pm - 5:00pm: Paper presentations (virtual):

    • Platform trans-inclusion: on pronoun fields and commodification. Presenter: Cedar Brown

    • “I’m fully who I am”: Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation. Presenter: Anaelia Ovalle

  • 5:00pm - 6:00pm: Social and/or lightning talks with sponsors

Accepted papers

  • Gender-Fair Post-Editing: A Case Study Beyond the Binary. Manuel Lardelli, Dagmar Gromann

  • Queer People are People First: Deconstructing Sexual Identity Stereotypes in Large Language Models. Harnoor Dhingra, Preetiha Jayashanker, Sayali Moghe, Emma Strubell

  • Happy, #horny, and valid: A keyness analysis of bisexual discourses on Twitter. Chloe Willis, Simon Todd

  • Critical Technopolicy and Reflexivity For Proactive Gender-Inclusive NLP. Davi Liang, Anaelia Ovalle, Arjun Subramonian, Alicia Boyd

  • “I'm fully who I am”: Towards Centering Transgender and Non-Binary Voices to Measure Biases in Open Language Generation. Anaelia Ovalle, Palash Goyal, Jwala Dhamala, Zachary Jaggers, Kai-Wei Chang, Aram Galstyan, Richard Zemel, Rahul Gupta

  • Platform trans-inclusion: on pronoun fields and commodification. Cedar Elwin Brown

  • Stereotypes and Smut: The (Mis)representation of Non-cisgender Identities by Text-to-Image Models. Eddie L. Ungless, Björn Ross, Anne Lauscher

  • “ChatGPT, how do I know if I'm queer?” The Opportunities of Personalized AI Sex and Queer Educators and Supporters. Nitay Calderon, Shir Lissak, Roi Reichart

Speakers and Panelists

Keynote: Non-Binary Language, Healthcare, and AI

Lex Konnelly (they/them, il/lui)

Lex Konnelly recently completed their Ph.D. in Linguistics and Sexual Diversity Studies at the University of Toronto, where they researched linguistic innovation and advocacy within transgender and gender-diverse communities and taught undergraduate courses in discourse analysis and language, gender, and sexuality. They’re currently enjoying a summer of Simply Vibing™️ with their favourite hobbies: gardening, fermenting up a storm, and playing Nintendo.

In this talk, Lex will discuss their research on non-binary Torontonians’ linguistic strategies in gender-affirming care settings, and identify some of the key implications for AI-supported healthcare interactions from a sociolinguistic, trans linguistic, and queer linguistic perspective.

LinkedIn: @lex-konnelly
Twitter: @lexicondk

Keynote: The state of NLP in Latin America

Juan Vásquez (he/him)

Juan is a queer master’s student at UNAM, Mexico. His research focuses on computational social sciences and neuro-symbolic natural language processing that makes use of computational formal semantics.

In this talk, Juan will present a general view of the field of NLP in Latin America, and some collective efforts by Latin American activists who are taking advantage of the tools developed by NLP to improve the material conditions of historically marginalized groups.

Panel: Labor Organizing and Queer Activism in Academia and Tech

Rhiannon Willow (she/her)

Rhiannon is a PhD Candidate in Physics at the University of Michigan. Motivated by the urgent need for renewable energy sources, Rhiannon uses ultrafast laser spectroscopy to study energy and charge transfer in photosynthetic proteins. Rhiannon grew up in rural Michigan, and she feels a deep connection to nature and the land. She believes that healing our collective disconnect from nature and the environment is one of many essential steps towards mitigating the ongoing climate catastrophe.

Rhiannon is an Autistic, disabled, and transsexual woman, and she's committed to building communities where disabled people and people of minoritized genders/sexes/sexualities are not merely accepted, but are cherished for our unique life experiences and wisdom, and ultimately loved simply for our humanity. Rhiannon is a member of the bargaining team of the grad labor union (GEO 3550) at the University of Michigan, where she's currently fighting for improved trans healthcare coverage.

In her free time, Rhiannon enjoys baking bread, roller skating, dancing, working on cars, healing generational trauma, and being a dyke in general. She looks forward to being a gay grandma!

Instagram: @rainbow.lasers

Vaivab Das (They/Them)

Vaivab Das is a Senior Research Fellow at the Indian Institute of Technology (IIT) Delhi and a visiting Fulbright Nehru Doctoral Research Fellow at UCLA. They have worked towards the recognition of diverse gender and sexual minorities as protected categories, building gender-affirming infrastructures, and creating community spaces for LGBTQIA+ sensitization and awareness in various educational institutions. They are interested in the role of data cultures, law, gender, and sexuality in the making of histories and policies for LGBTQIA+ persons in India.

LinkedIn: @vaivabdas
Instagram: @being.v.a.d
Twitter: @D_Vaivab

Kait Hoehne (she/her)

Kait Hoehne is a senior software engineer on the Games team at The New York Times. She was part of the organizing committee and is now a shop steward for the New York Times Tech Guild, and is passionate about making tech a welcoming, diverse place full of different voices and perspectives.

Email: kait.hoehne@nytimes.com

Michelle Alejandra Artiles Warrick (she/her)

Michelle Alejandra Artiles Warrick is a 22-year-old girl, born and raised in Caracas, Venezuela. She is currently studying Social Communication at the Universidad Católica Andrés Bello and Social Work at the Universidad Central de Venezuela. Michelle is an activist for the human rights of trans people in Venezuela and Latin America. She channels her activism through Girl Up Venezuela, where she holds the position of president; through the Vice Rectorate of Identity, Student Development and Social Extension of the UCAB, where she serves as coordinator of social action; and through the NGO Human Kaleidoscope. She also works as a photographer for the newspapers El Estímulo and Cinco8.

Instagram: @migurtcita
Twitter: @migurtcita

About

The framing and use of common AI systems that interact with queer people are often problematic and inherently cisnormative and heteronormative. To counterbalance these risks, it is paramount to ensure that queer researchers are included in the study, development, and evaluation of these systems. However, the Queer in AI demographic survey reveals that most queer scientists in our community do not feel completely welcome at conferences or in their work environments, with the main reason being a lack of queer community and role models.

In the Queer in AI Workshop at ACL 2023, we want to bring together researchers and practitioners working at the intersection of linguistics, queerness, and natural language processing to present their work and discuss these issues. Additionally, we will provide a casual, safe and inclusive space for queer folks to network and socialize. We will have in-person and virtual components, so regardless of your physical location, we hope that you will be able to join us as we create a community space where attendees can learn and grow from connecting with each other, bonding over shared experiences, and learning from each individual’s unique insights into NLP/CL, queerness, and beyond!

At Queer in AI, we acknowledge that the voices of marginalized queer communities, especially transgender, non-binary, and queer BIPOC folks, have often been neglected in these spaces. For this reason, we are committed to featuring talks and panel discussions that are inclusive of non-Western, transgender, and non-binary identities, as well as of Black, Indigenous, and Pacific Islander queer folks in the broader NLP and Computational Linguistics community.

Call for Contributions

We are excited to announce our call for contributions for the Queer in AI Workshop at the 2023 ACL Conference. We are accepting research papers, extended abstracts, position papers, opinion pieces, surveys, and artistic expressions on queer issues in NLP and Linguistics. We also welcome contributions about general topics in NLP and Linguistics authored by queer folks. Accepted contributions will be invited to present at the Queer in AI workshop during the 2023 ACL Conference.

This workshop is non-archival, and we welcome work that has been previously published in other venues, as well as work-in-progress. No submissions will be desk-rejected. 

We invite submissions in the tracks described below.

Submissions

Submission is electronic, using the OpenReview platform. All papers must follow the ACL Author Guidelines. All submissions should be anonymized. Please refrain from including personally identifying information in your submission.

All authors with accepted work will have FULL control over how their name appears in public listings of accepted submissions.

Formatting

You can submit your work either in research paper format or in a non-traditional format or medium. Submissions need NOT be in English. This is to maximize the inclusivity of our call for submissions and amplify non-traditional expressions of what it means to be Queer in NLP/CL. There are no page limits.

Research paper format: Paper submissions must use the official ACL style templates, which are available here (LaTeX and Word). Please follow the general ACL paper formatting guidelines available here.

Non-research format: For this format, you can submit your work in the form of art, poetry, music, microblogs, TikToks, or videos. Please upload a PDF containing a summary or abstract of your work and a link to the work itself.

Mentorship

If you are writing a paper for the first time and need some help with your work, we strongly suggest joining the Queer in AI Slack and contacting the organizers there. One of us can help you and guide you on how to proceed.

If you are willing to help first-time authors, please let us know by joining the Slack or emailing us.

Important Dates

All deadlines are Anywhere on Earth.

  • Submissions open: May 1st 

  • Visa-friendly submission deadline: May 12th

  • Visa-friendly acceptance notification deadline: May 15th 

  • Final submission deadline: June 11th

  • Final notifications of acceptance: June 23rd

  • Camera-ready submissions due from accepted authors: July 7th

We will open the call on May 1st, and we will have TWO submission deadlines: a visa-friendly deadline for folks who will require a visa to attend ACL in person, and a final deadline. Note that we are currently NOT guaranteeing full support for the process of obtaining a visa, but we will work our hardest to provide as much support as we can. The visa-friendly deadline to submit to our workshop is May 12th AoE, and the final deadline is June 11th AoE.

Acceptance notifications will go out on a rolling basis, and final notifications of acceptance will go out by June 23rd.

If you need help with your submission in the form of mentoring or advice, you can get in touch with us at queer-in-nlp@googlegroups.com.

Click here to submit (Abstract is required)

Contact Us

Email: queer-in-nlp@googlegroups.com

Code of Conduct

Please read the Queer in AI code of conduct, which will be strictly followed at all times. Recording (screen recording or screenshots) is prohibited. All participants are expected to maintain the confidentiality of other participants.

ACL 2023 adheres to the ACL Code of Conduct, and Queer in AI adheres to the Queer in AI Anti-Harassment Policy. Any participant who experiences harassment or hostile behavior may contact the ACL exec team or the Queer in AI Safety Team. Please be assured that if you approach us, your concerns will be kept in strict confidence, and we will consult with you on any actions taken.

Call for organizers

Joining the organizing team is a great way to meet queer folks and mentors in our field and to learn about inclusivity issues. We need folks to help us organize our events and socials (EACL, ACL, and EMNLP 2023). You don't need any prior experience or background to join our team; everyone is welcome! Fill out our form here to join.

Organizers

Amanda Bertsch (she/her)

Amanda is a queer graduate student at Carnegie Mellon University. Her research focuses on machine learning for text generation, particularly for summarization. At Queer in AI, she helps organize events for NLP conferences. 

Arjun Subramonian (they/அவங்க)

Arjun is a brown queer PhD student at the University of California, Los Angeles. Their research focuses on inclusive and critical graph machine learning and natural language processing, as well as queer issues in machine learning. They help with Queer in AI workshops and advocacy efforts.

Connor Baumler (he/him)

Connor is a PhD student at the University of Maryland. His research focuses on fairness, interpretability, and trustworthiness in natural language processing. At Queer in AI, he helps organize events for NLP conferences.

Juan Vásquez (he/him)

Juan is a queer master’s student at UNAM, Mexico. His research focuses on computational social sciences and neuro-symbolic natural language processing that makes use of computational formal semantics.

Maria L. Pacheco (she/her)

Maria is a postdoctoral researcher at Microsoft Research NYC, and an incoming Assistant Professor at the University of Colorado Boulder, where she founded the Boulder Language and Social Technologies group. She is broadly interested in the intersection of natural language processing, social computing and computational social science. She is also an active organizer at Queer in AI and LatinX in AI. 

Maria Ryskina (she/they)

Maria is a postdoctoral researcher at MIT, where she investigates linguistic questions using tools from natural language processing, cognitive science, and neuroscience. At Queer in AI, she helps organize workshops and socials at NLP conferences and helps run the Graduate Application Aid Program, which supports queer scholars pursuing careers in STEM.

Pranav A (he/they)

Pranav is a brown queer Asian chaotic zoomer working as a research engineer at Dayta AI, Hong Kong. At Queer in AI, he organizes social events at NLP conferences and sets up DEI and safety initiatives.

Shane Storks (he/him)

Shane is a queer PhD student at the University of Michigan, where he is a member of the Situated Language and Embodied Dialog (SLED) group. His research focuses on physical reasoning in natural language understanding. At Queer in AI, he helps organize events for NLP conferences.

Yanan Long (he/they)

Yanan is a final-year PhD student at the University of Chicago. Their research spans computational Bayesian statistics, natural language processing and geometric deep learning, with a focus on biomedical applications and fairness. At Queer in AI they help with organizing workshops and socials.

Alissa Valentine (she/they)

Alissa is a PhD Candidate at the Mount Sinai School of Medicine in NYC. Her favorite research project at the moment focuses on bias detection and mitigation within the clinical notes of Mount Sinai's health care system. At Queer in AI, she loves organizing conference events and workshops.

Helena Gómez-Adorno (she/her)

Helena Gómez-Adorno is a researcher at the Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, UNAM. She completed her Ph.D. in Computer Science at the Centro de Investigación en Computación, IPN. Her research interests are in the field of natural language processing and text mining. She has worked on semantic similarity, authorship attribution, author profiling, and text classification problems. She is a current member of the Mexican National System of Researchers of CONACYT, Level 1.