Examining LGBTQIA+ Bias in Generative AI

Let's explore where and how deeply LGBTQIA+ biases exist in AI tools and literature today, then look at solutions to curb further bias.

Published on Jul 03, 2023

AI experts, researchers, developers and entire organizations have devoted significant time and effort to addressing bias in AI in recent years. Much of that work has focused on reducing biases around race and gender. Tackling AI bias against the LGBTQIA+ community, which encompasses around 7% of the U.S. population, has also generated research but remains a unique challenge.

Bias intervention here is important to reduce potential harms to the queer community driven by AI outputs. Decreasing bias in these tools starts with getting members of the LGBTQIA+ community involved earlier and more frequently in the development process, as well as fine-tuning model training data with more inclusive content.

Let’s take a detailed look at some of the nuances of LGBTQIA+ biases in AI — from where they originate to what they look like in practice to what can be done to address them.

Algorithmic Fairness and the Logistics of Biases

The central idea behind the need to address any bias in AI is known as algorithmic fairness. Kevin McKee, a senior research scientist at Google DeepMind, explains this concept in an interview with Codecademy.

“Algorithmic fairness means ensuring that we do not develop AI systems that maintain or exacerbate social inequalities,” McKee said. “Auditing existing algorithms for bias, developing new systems to help ensure equitable outcomes, and talking with marginalized communities to understand their needs are all examples of work that falls under algorithmic fairness.”

McKee goes on to explain why the LGBTQIA+ community has been largely omitted from this field of study: a combination of logistical, ethical and philosophical factors.

“LGBTQ+ people are often logistically excluded from fairness work when datasets fail to include information on sexual orientation and gender identity — often because data collectors do not realize that this can be important information to record,” said McKee. “Collection of data on sexual orientation and gender identity, often considered ‘sensitive information’ in legal frameworks, can also be ethically and legally precluded when knowledge of this sort of personal information threatens an individual’s safety or wellbeing.”

Globally, individuals in this community face discrimination and violence on a daily basis. AI researchers have to be careful not to accidentally contribute to that harm in the pursuit of algorithmic fairness (if someone is “outed” because their sensitive data is used to train an AI model, for example). Another challenge is that queerness is dynamic and can change depending on the context. “How effectively can we measure a concept that often defies measurement?” McKee asked.

These logistical hurdles are some of the main reasons why it’s been difficult to measure LGBTQIA+ biases in generative AI. But as this technology grows more widespread, it’s vital to achieve algorithmic fairness to mitigate outputs that may contribute to things like gender dysphoria. It’s also necessary to curb outputs that contain homophobia, the promotion of heteronormativity and the erasure of queerness, among other damaging elements. 

Intentional Language, Unintentional Consequences

In July 2020, two researchers from the University of Maryland set out to examine gender bias in coreference resolution with their study Toward Gender-Inclusive Coreference Resolution.

Coreference resolution is the task of linking textual references, like pronouns, to the entities they refer to. For example, in "John said he was going to the store," the word "he" refers to "John." However, AI systems can make unlicensed inferences that harm individuals or groups. This is especially true when resolving gender, where linguistic cues and societal stereotypes can lead to bias and unfair discrimination against people who do not identify purely as "he" or "she."
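To make that failure mode concrete, here is a deliberately simplified, hypothetical sketch in Python of the kind of rule-based pronoun resolver that bakes in binary gender assumptions. The name-to-pronoun lookup and function are invented for illustration and are not any real system's code.

```python
# Hypothetical, simplified pronoun resolver illustrating how binary gender
# assumptions get baked into coreference systems. The gazetteer below is
# invented for illustration only.
NAME_GENDER = {"John": "he", "Mary": "she"}

def resolve_pronoun(pronoun: str, candidates: list[str]) -> str | None:
    """Link a pronoun to the most recent candidate whose assumed
    gender 'matches'. Returns None when nothing matches."""
    for name in reversed(candidates):
        if NAME_GENDER.get(name) == pronoun.lower():
            return name
    return None

# Works for the stereotypical case...
print(resolve_pronoun("he", ["John"]))    # -> "John"

# ...but silently fails for singular "they", and mislinks anyone whose
# pronouns don't match the gazetteer's assumption about their name.
print(resolve_pronoun("they", ["John"]))  # -> None (the referent is erased)
```

Modern neural coreference systems are far more sophisticated, but the same assumptions can survive in their training data and evaluation sets.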

The University of Maryland researchers looked at “ways in which folk notions of gender — namely that there are two genders, assigned at birth, immutable, and in perfect correspondence to gendered linguistic forms — lead to the development of technology that is exclusionary and harmful of binary and non-binary trans and cis people,” they wrote. 

They examined a random sample of 150 papers on computational linguistics and natural language processing and discovered that most studies conflate linguistic gender with social gender and assume there are only two genders. They came across only one study that explicitly discussed handling singular "they/them" pronouns when determining who or what a sentence refers to.

“We confirm that without acknowledging and building systems that recognize the complexity of gender, we build systems that lead to many potential harms,” their report read.

[Image made with Jasper Art]

Exactly two years later, a team of researchers at the University of Southern California built on that work and set out to address overlooked biases against the LGBTQIA+ community in their own study: Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models. The USC researchers introduced WinoQueer, a new benchmark dataset to measure whether large language models (LLMs) encode biases that are harmful to the LGBTQIA+ community. According to the researchers, it’s the first of its kind.

As part of this project, the team amassed 2,862,924 tweets from members of the LGBTQIA+ community and named the collection QueerTwitter. They also built QueerNews: a corpus of 90,495 news articles about anti-trans bills and LGBTQIA+ identity. They theorized that “off-the-shelf” AI language models like Google’s BERT could show less bias if fine-tuned on language used by, and about, the queer community.

The researchers found that off-the-shelf BERT showed strong anti-queer bias. That bias could be substantially reduced by fine-tuning the model on the community voices collected in QueerTwitter, which backed their hypothesis. They also found that mainstream media language was more homophobic than that of queer Twitter users, so BERT fine-tuned on QueerNews still produced biased outputs.
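To show what that general approach looks like in practice, here is a minimal sketch (not the researchers' actual code) of continued masked-language-model training of BERT on a community text corpus, using the Hugging Face libraries. The corpus file name, output directory and hyperparameters are placeholders.

```python
# Minimal sketch of continued masked-language-model training on a community
# corpus, in the spirit of the WinoQueer authors' QueerTwitter fine-tuning.
# "community_corpus.txt" and the hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# One tweet or article per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "community_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens so the model relearns word distributions from
# community language rather than relying only on its original pretraining data.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bert-community-ft",
                           per_device_train_batch_size=16,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

The key design choice is that the base model is not retrained from scratch; a relatively small amount of community-authored text shifts the model's learned associations, which is what made the QueerTwitter result practical.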

GPT-3 was found to be biased in this way as well. One study found that prompts referencing LGBTQIA+ identities produced more harmful outputs than heteronormative prompts did, especially when ethnicity was added to those inputs. Left unchecked, biases in these increasingly popular language models could have real consequences for queer individuals: the suppression of LGBTQIA+ concerns and histories, increased online harassment, poorer mental health and the perpetuation of harmful stereotypes.

It’s worth noting that both GPT-3 and BERT have since been succeeded by seemingly more capable and less problematic models, but the work is far from done. OpenAI’s technical report accompanying GPT-4’s release earlier this year reads: “We found that GPT-4-early and GPT-4-launch exhibit many of the same limitations as earlier language models, such as producing biased and unreliable content.”

A Misgendered Picture Is Worth A Thousand Woes

Automated gender recognition (AGR) is a type of AI technology that uses machine learning algorithms to identify and classify someone's gender. This typically involves training an AI model on images or voices labeled with a specific gender, usually male or female. The model uses physical traits like facial structure, hair and pitch of voice to make its predictions. Once trained, it can be used to predict the gender of subjects it hasn’t seen before.
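To see why the binary framing itself is the core design problem, here is a deliberately simplified, hypothetical sketch of the kind of classification head such systems typically sit on. It is not any vendor's real model; the class and dimensions are invented for illustration.

```python
# Illustrative-only sketch of the core design problem in automated gender
# recognition: the output space is fixed at training time.
import torch
import torch.nn as nn

class BinaryAGRHead(nn.Module):
    """A typical classification head over image features with exactly
    two output classes, e.g. {"male", "female"}."""
    def __init__(self, feature_dim: int = 512):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, 2)  # only two possible answers

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # Softmax forces every input into one of the two classes,
        # no matter who is actually in the image.
        return torch.softmax(self.classifier(features), dim=-1)

head = BinaryAGRHead()
print(head(torch.randn(1, 512)))  # probabilities always sum to 1 over two labels
```

No amount of extra training data fixes this by itself: a model whose output layer has two classes has no way to represent anyone outside that binary.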

This field is an increasingly important area of bias research because generative AI models like OpenAI’s GPT-4 and the upcoming Gemini from Google are multimodal: they can analyze images and other forms of data beyond the text that users type. AGR tools that only work in a binary of male and female pose risks for people outside those categories.

“Identifying someone’s gender by looking at them and not talking to them is sort of like asking what does the smell of blue taste like,” AI and gender researcher Os Keyes told The Verge. “The issue is not so much that your answer is wrong as your question doesn’t make any sense.”

Keyes published the study The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition, in which they reviewed 58 papers on AGR. Keyes found that AGR technology largely treats gender as immutable and binary, and operationalizes it in a way that excludes trans individuals. That carries greater risks for trans people who might encounter these systems (or whose data is used as an input in some capacity).

In generative AI, this means tools that employ unchecked, biased AGR could consistently misgender images of trans people and mislabel individuals whose presentation falls outside conventional gender norms. Much like biased LLMs, this increases the likelihood of spreading negative stereotypes, and it puts trans people at greater risk of experiencing feelings of erasure (and having that erasure promoted to others) or the emotional stress caused by misgendering.

[Image made with Jasper Art]

How Can We Do Better?

As you can see, there is still a lot of room to reduce LGBTQIA+ bias in generative AI models and tools. This technology is receiving enormous attention, use and investment, so the stakes are high. But so are the opportunities to get it right.

Many experts, like Google DeepMind researcher Kevin McKee, agree that “the first step that we need is to engage and talk with these communities. Improving representation of the LGBTQ+ community in the tech industry is one way of achieving that. We frequently see situations where including team members who are queer (and who belong to other marginalized communities) helps to identify issues that would not have been caught otherwise.”

Enhanced research methods can help identify and reduce potential AI risks for queer communities. But prioritizing these risks and setting goals requires input from both experts and queer communities so their specific needs are understood.

“Engaging with the marginalized folks who might be affected by new AI systems can help us recognize what real-world harms look like and what technical solutions to develop,” McKee told Codecademy. 

AI researchers from the University of Denver agree that user input is vital to recognizing and mitigating AI bias, but it doesn’t happen nearly enough. They analyzed 120 papers on gender-specific biases in machine learning and AI systems and found that only eight included user studies “to understand how users perceive, comprehend, and utilize these systems.”

“There is great digital divide between the creators of ML/AI systems and the users who benefit from these systems,” they wrote. “User likeability and trust into ML/AI assisted decision-making system is equally or more important than the functionality and efficiency of the system.”

During a model’s design phase, Os Keyes suggests that designers consider whether it’s even necessary to incorporate gender at all. Is it possible to get the desired outcomes without that qualifier? If gender distinctions are unavoidable, designers need to “ensure that the design includes space for users whose genders fall outside the binary and recognise the challenges that trans men and women face in spaces that are gendered according to default, ciscentric expectations,” Keyes said.

Another step is improving the training data for large language models with material that’s more inclusive and that contains more of the vernacular of the affected groups. While not a stand-in for actually talking to members of the community, fine-tuning on datasets like QueerTwitter can enrich off-the-shelf AI models to reduce LGBTQIA+ bias in their outputs.

“Systems applied directly ‘out of the box,’ without any modifications, learn from prior decisions and their effects,” McKee said. “That can include learning biases that affect minority communities. We’ll need to put in additional work to avoid ‘locking in’ bias and discrimination in these areas.”

While a model is being trained, it can be tested for bias with a technique known as adversarial learning, in which two neural networks compete in a game-like setting to challenge a model’s assumptions and expose its blind spots and biases. Benchmarks like WinoQueer can also probe a finished model for encoded biases.
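On the benchmark side, here is a rough sketch of how a paired-sentence probe can score a masked language model, loosely in the spirit of benchmarks like WinoQueer. The sentence pair and the scoring metric are simplified illustrations, not the benchmark's actual data or method.

```python
# Sketch of a paired-sentence bias probe for a masked language model.
# Each token is masked in turn and its log-probability summed (a
# pseudo-log-likelihood); higher means the model finds the sentence more
# "natural". The example pair below is illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def sentence_score(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):          # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

stereotyped = sentence_score("My coworker is gay and he is untrustworthy.")
counterpart = sentence_score("My coworker is straight and he is untrustworthy.")
# If the model systematically prefers the stereotyped framing across many
# such pairs, that gap is evidence of encoded anti-queer bias.
print(stereotyped - counterpart)
```

A single pair proves nothing; benchmarks aggregate this kind of comparison over thousands of pairs to estimate how often a model prefers the biased framing.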

Achieving algorithmic fairness for the queer community requires care and effort, but it’s important that we collectively see that work through. No matter where our identity lies — across the intersectionality of queerness, race, religion or anything else — people are people. And all people deserve to plan, build, test, be represented in and benefit from this impressive technology, and to be kept safe from its potential harms.

Meet The Author:

Alton Zenon III

Jasper Content Marketing Manager
