
LLMs frequently present dominant viewpoints while overlooking alternative perspectives from minority groups, producing biased output.

DATA BIAS

The data used to train generative AI can be biased from the start, or bias can be introduced, intentionally or unintentionally, as the data is refined.


Data bias: systematic errors within datasets that can lead to skewed or unfair outcomes when used in machine learning


Example: Facial recognition models performing poorly on darker skin tones due to lack of diverse training data.


Biases in Artificial Intelligence: How to Detect and Reduce Bias in AI Models - ONIX Systems Blog
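The facial-recognition example above can be reduced to a toy sketch (all data below is invented): when one group dominates the training set, even a trivial model that always predicts the most common outcome serves the majority group perfectly and fails the underrepresented one.

```python
from collections import Counter

# Hypothetical training set: (group, label) pairs. Group "A" dominates.
train = [("A", "match")] * 90 + [("B", "no_match")] * 10

# Naive "model": always predict the most common label in the training data.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def group_accuracy(true_label):
    """Accuracy for a group whose correct answer is `true_label`."""
    return 1.0 if majority_label == true_label else 0.0

print(group_accuracy("match"))     # group A: 1.0
print(group_accuracy("no_match"))  # group B: 0.0
```

Real vision models are far more complex, but the mechanism is the same: the skew in the data, not malice in the code, produces the unequal error rates.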

Example: Training data that associates certain jobs with a specific gender can lead the model to generate biased results.


Does AI Have a Bias Problem? - NEA Today
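A minimal sketch of how job–gender associations get learned from data (the corpus and counts below are invented): a toy next-word model simply mirrors whichever pronoun the training sentences pair with each occupation.

```python
from collections import Counter, defaultdict

# Invented, deliberately skewed corpus.
corpus = [
    "the nurse said she was tired",
    "the nurse said she went home",
    "the engineer said he was tired",
    "the engineer said he went home",
]

# Count which pronoun appears in each occupation's sentences.
pronoun_for = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    job = words[1]
    for w in words:
        if w in ("he", "she"):
            pronoun_for[job][w] += 1

def complete(job):
    """Return the pronoun the corpus most often pairs with this job."""
    return pronoun_for[job].most_common(1)[0][0]

print(complete("nurse"))     # "she" -- a learned association, not a fact
print(complete("engineer"))  # "he"
```

The model has no opinion about nurses or engineers; it reproduces the statistical pattern of its training text.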

LLM (Large Language Model): a type of artificial intelligence designed to understand and generate human language.

ALGORITHM BIAS

The algorithms and data structures underlying an LLM can themselves foster bias.


Algorithm: a set of step-by-step instructions that a computer follows to perform a specific task or solve a problem. It's the "recipe" that guides the AI's decision-making and actions, enabling it to learn, analyze data, and make predictions. 

Example: Algorithms are not neutral; they are a set of choices made by individuals. Algorithmic weightings prioritize certain patterns, and those priorities may reinforce bias.


When technology discriminates: How algorithmic bias can make an impact - CBC News
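The point that weightings are human choices can be shown with a toy ranking sketch (the candidates and weights below are invented): the same two applicants rank differently depending solely on which hand-picked weights are used.

```python
# Invented applicant data for illustration only.
candidates = {
    "applicant_1": {"years_experience": 10, "test_score": 60},
    "applicant_2": {"years_experience": 2, "test_score": 95},
}

def score(features, weights):
    # Weighted sum: the weights, not the data, decide what "merit" means.
    return sum(weights[k] * v for k, v in features.items())

weights_a = {"years_experience": 5, "test_score": 1}  # a choice favoring tenure
weights_b = {"years_experience": 1, "test_score": 1}  # a choice favoring the test

def top_candidate(weights):
    return max(candidates, key=lambda name: score(candidates[name], weights))

print(top_candidate(weights_a))  # applicant_1
print(top_candidate(weights_b))  # applicant_2
```

Neither weighting is "correct"; each encodes a value judgment that the algorithm then applies at scale.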

Example: Data labeling introduces bias as well. Sentiment analysis models are trained on labels that reflect annotators’ personal opinions.


Generic Risks and Biases: Cognitive Bias Types - Inter-Parliamentary Union
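A toy sketch of how labeling injects opinion (the sentence and annotator votes below are invented): the "ground truth" a sentiment model trains on is often just a majority vote over annotators' personal readings of the text.

```python
from collections import Counter

# Hypothetical annotations for one ambiguous sentence.
annotations = {
    "the movie was fine, I guess": ["negative", "negative", "positive"],
}

def gold_label(text):
    # Majority vote: two skeptical annotators outvote one charitable one,
    # and their reading becomes the model's "truth".
    return Counter(annotations[text]).most_common(1)[0][0]

print(gold_label("the movie was fine, I guess"))  # "negative"
```

A model trained on such labels learns the annotators' sensibilities along with the language.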

USER-GENERATED BIAS

User bias occurs when AI learns from biased user-generated data and from people’s ongoing interactions with the system.

User-generated feedback loops: when a user supplies the AI with source materials as examples for a request, the system mimics those sources, carrying their biases into its output.


Example: Reinforcement learning from human feedback (RLHF) is a machine learning technique that uses human ratings to optimize a model’s behavior; the model learns the annotators’ preferences, including any biases they carry.


What is RLHF - Amazon Web Services
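A heavily simplified sketch of the RLHF idea (all numbers and response names below are invented): annotator ratings act as rewards that nudge the model toward whatever those particular annotators preferred.

```python
# Hypothetical, heavily simplified RLHF-style update.
responses = {"formal": 0.5, "casual": 0.5}  # model's initial preference scores

# Human feedback: these particular annotators happen to favor "formal".
ratings = [("formal", +1), ("casual", -1), ("formal", +1)]

LEARNING_RATE = 0.1
for choice, reward in ratings:
    responses[choice] += LEARNING_RATE * reward  # nudge toward the feedback

best = max(responses, key=responses.get)
print(best)  # "formal": the annotators' taste, not an objective improvement
```

Production RLHF involves a learned reward model and policy optimization, but the core loop is the same: human judgments, biases included, steer what the model prefers to produce.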

Example: Stereotypes are introduced through user prompts. For instance, when a user asks an LLM to generate content and specifies only gendered pronouns, the output can reinforce those gendered associations.


Bias and Fairness in Large Language Models: A Survey - MIT Press Direct

This website was created without the use of artificial intelligence.
