Yes, San Francisco is a nexus of artificial intelligence innovation, but it’s also one of the queerest cities in America. The Mission District, where ChatGPT maker OpenAI is headquartered, butts up against the Castro, where sidewalk crossings are coated with rainbows, and older nude men are often seen milling about.
And queer people are joining the AI revolution. “So many people in this field are gay men, which is something I think few people talk about,” says Spencer Kaplan, an anthropologist and PhD student at Yale who moved to San Francisco to study the developers building generative tools. Sam Altman, the CEO of OpenAI, is gay; he married his husband last year in a private beachfront ceremony. Beyond Altman, and beyond California, more members of the LGBTQ community are now involved in AI projects and connecting through groups like Queer in AI.
Queer in AI was founded in 2017 at a leading academic conference, and a core part of its mission is to support LGBTQ researchers and scientists who have historically been silenced, particularly transgender people, nonbinary people, and people of color. “Queer in AI, honestly, is the reason I didn’t drop out,” says Anaelia Ovalle, a PhD candidate at UCLA who researches algorithmic fairness.
But there is a disconnect between the queer people drawn to work on artificial intelligence and how that same community is represented by the tools their industry is building. When I asked leading AI image and video generators to envision queer people, they universally responded with stereotypical depictions of LGBTQ culture.
Despite recent improvements in image quality, AI-generated images frequently present a simplistic, whitewashed version of queer life. I used Midjourney, another AI tool, to create portraits of LGBTQ people, and the results amplified commonly held stereotypes. Lesbian women are shown with nose rings and stern expressions. Gay men are all fashionable dressers with killer abs. Basic images of trans women are hypersexualized, with lingerie outfits and cleavage-focused camera angles.
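Midjourney exposes no public API, so a reader who wants to probe this behavior programmatically would need an open-weight stand-in. The sketch below is a minimal reproduction of that kind of portrait audit, assuming Hugging Face's diffusers library and the stabilityai/stable-diffusion-2-1 checkpoint; the prompts and sample counts are illustrative and not the ones used for this story.

```python
# A minimal sketch of reproducing this kind of portrait audit with an
# open-weight model (Midjourney itself has no public API). Model choice,
# prompts, and sample counts here are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompts = [
    "a portrait photo of a lesbian woman",
    "a portrait photo of a gay man",
    "a portrait photo of a transgender woman",
]

# Generate several samples per prompt so recurring visual tropes (identical
# physiques, sexualized framing, a single default ethnicity) show up as
# patterns rather than one-off outputs.
for prompt in prompts:
    for i in range(4):
        image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
        image.save(f"{prompt.replace(' ', '_')}_{i}.png")
```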
How image generators depict humans reflects the data used to train the underlying machine learning algorithms. This data is mostly collected by scraping text and images from the web, where depictions of queer people may already reinforce stereotypical assumptions, like gay men appearing effeminate and lesbian women appearing butch. When using AI to produce images of other minority groups, users might encounter issues that expose similar biases.
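Because those scraped associations tend to resurface in the outputs, one rough way to quantify them is to score generated images against paired text descriptions with a vision-language model such as CLIP. The probe below is a simplified sketch, not the methodology behind this article; the caption pair and model checkpoint are assumptions.

```python
# A rough bias probe: use CLIP zero-shot scoring to ask how often generated
# portraits read as sexualized rather than everyday. The caption pair is an
# illustrative assumption; a real audit would use many framings and many images.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

captions = [
    "a person wearing casual everyday clothing",  # neutral framing
    "a person posing in revealing lingerie",      # sexualized framing
]

def sexualization_score(path: str) -> float:
    """Return the probability CLIP assigns to the sexualized caption."""
    image = Image.open(path)
    inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return probs[0, 1].item()

# Averaging this score over a batch of "trans woman" portraits and comparing it
# with a control prompt gives a crude but repeatable measure of the skew.
```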
Understanding the Depiction of Queer People Through Generative AI
In recent years, generative artificial intelligence (AI) has made significant advances across fields including art, music, and literature. One area that has drawn attention is how it depicts queer people. The technology has the potential to shape how queer individuals are represented in media, but it also raises important questions about ethics, bias, and its impact on the LGBTQ+ community.
Generative AI refers to algorithms that create new content, such as images, videos, or text, based on patterns in the data they were trained on. These algorithms learn from vast amounts of existing material and generate outputs that mimic the style and characteristics of that training data. Applied to the representation of queer people, generative AI can produce images or stories that reflect the diversity and experiences of the LGBTQ+ community.
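As a tiny, concrete illustration of that learn-patterns-then-sample loop, the sketch below uses the small open GPT-2 checkpoint through Hugging Face's transformers pipeline; the prompt and sampling settings are illustrative assumptions.

```python
# A tiny illustration of "learn patterns from data, then sample new outputs."
# GPT-2 stands in for any generative model; prompt and settings are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "Queer representation in media matters because",
    max_new_tokens=40,
    do_sample=True,
    num_return_sequences=2,
)
for sample in samples:
    print(sample["generated_text"])
```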
One benefit of using generative AI to depict queer people is the potential for increased representation. Historically, queer individuals have been underrepresented or misrepresented in mainstream media. Generative AI can help address this by creating content that showcases a broader range of queer identities and experiences. Training models on diverse datasets that include queer voices and stories can help ensure the generated content more accurately reflects the LGBTQ+ community.
However, there are also concerns about biases and ethical considerations when using generative AI to depict queer people. AI algorithms learn from existing data, which means they can inherit biases present in the training data. If the training data is limited or biased, it can lead to inaccurate or harmful representations of queer individuals. For example, if the training data predominantly consists of stereotypes or negative portrayals, the generated content may perpetuate harmful narratives or reinforce stereotypes.
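Because these skews enter through the training examples themselves, a low-tech first check is to count how often identity terms co-occur with stereotype-laden descriptors in the caption corpus before any model is trained. The sketch below assumes a plain-text file of captions and a hand-picked descriptor list, both of which are illustrative.

```python
# A low-tech corpus audit: count co-occurrences of identity terms and
# stereotype-laden descriptors in training captions. The file name and the
# word lists are illustrative assumptions, not a real dataset.
from collections import Counter

IDENTITY_TERMS = {"lesbian", "gay", "transgender", "nonbinary", "queer"}
DESCRIPTORS = {"lingerie", "abs", "nose ring", "butch", "effeminate"}

cooccurrence = Counter()
with open("captions.txt", encoding="utf-8") as f:
    for line in f:
        caption = line.lower()
        matched_identities = [t for t in IDENTITY_TERMS if t in caption]
        matched_descriptors = [d for d in DESCRIPTORS if d in caption]
        for identity in matched_identities:
            for descriptor in matched_descriptors:
                cooccurrence[(identity, descriptor)] += 1

# Heavily skewed pairings flag places where the data may teach a stereotype.
for (identity, descriptor), count in cooccurrence.most_common(10):
    print(f"{identity} + {descriptor}: {count}")
```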
To mitigate these risks, it is crucial to ensure that the training data used for generative AI models is diverse, inclusive, and representative of different queer identities and experiences. This requires careful curation and validation of the training datasets to avoid perpetuating harmful biases. Additionally, involving queer individuals and communities in the development and validation process can provide valuable insights and help identify potential biases or inaccuracies.
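What that curation might look like in practice varies, but one simple intervention is to cap how heavily any single identity-descriptor pairing can dominate the data. The sketch below is a hypothetical filter over a caption list; the cap value and the word lists are assumptions for illustration.

```python
# A hypothetical curation pass: cap how many captions pair a given identity
# term with the same stereotype descriptor, so a few tropes cannot dominate
# that identity's training signal. The cap value and word lists are assumptions.
from collections import Counter

CAP_PER_PAIR = 500

def curate(captions, identity_terms, descriptors):
    kept, seen = [], Counter()
    for caption in captions:
        lowered = caption.lower()
        pairs = [
            (i, d)
            for i in identity_terms if i in lowered
            for d in descriptors if d in lowered
        ]
        if any(seen[pair] >= CAP_PER_PAIR for pair in pairs):
            continue  # this caption would further skew an already saturated pairing
        for pair in pairs:
            seen[pair] += 1
        kept.append(caption)
    return kept
```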
Another consideration is the impact of generative AI on the LGBTQ+ community itself. While increased representation can be empowering, it is essential to recognize that AI-generated content is not a substitute for real-life experiences or human perspectives. The LGBTQ+ community is diverse and complex, and relying solely on AI-generated content may overlook the nuances and lived experiences of queer individuals. It is crucial to use generative AI as a tool to complement and amplify authentic queer voices rather than replace them.
Furthermore, it is important to approach generative AI with transparency and accountability. Users should be aware that the content they are consuming or sharing has been generated by AI algorithms. This transparency allows individuals to critically evaluate the content and understand its limitations. Developers and organizations working with generative AI should also be transparent about their training data sources, methodologies, and any biases that may exist.
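One modest, concrete way to operationalize that transparency is to stamp a disclosure into the generated file itself. The sketch below writes provenance notes into a PNG's text metadata with Pillow; the field names are assumptions, and formal provenance standards such as C2PA go considerably further.

```python
# A modest transparency measure: embed a provenance note in a generated PNG's
# text metadata using Pillow. Field names here are assumptions; standards such
# as C2PA specify richer, signed provenance records.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_disclosure(image: Image.Image, path: str, model_name: str) -> None:
    info = PngInfo()
    info.add_text("generator", model_name)
    info.add_text("disclosure", "This image was generated by an AI model.")
    image.save(path, pnginfo=info)

# Reading the note back later:
# print(Image.open("portrait.png").text)
```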
In conclusion, generative AI has the potential to revolutionize the depiction of queer people in media by increasing representation and showcasing diverse experiences. However, it is essential to approach this technology with caution, considering the ethical implications, biases, and impact on the LGBTQ+ community. By ensuring diverse and inclusive training data, involving queer voices in the development process, and promoting transparency, we can leverage generative AI to create more accurate and empowering representations of queer individuals.