BIAS Blog 1: Right to Fair Representation

News Article
AI’s Regimes of Representation: A Community-Centered Study of Text-to-Image Models in South Asia

Case Study Summary

This case study looks at how text-to-image models represent South Asian people and cultures. It shows that AI can easily repeat old patterns of bias, especially when the people being represented are not the ones shaping how the technology works. The purpose is to show how important it is for communities to be part of building and testing these models so they reflect real experiences instead of stereotypes.

Answering the Discussion Questions From the Article

Question 1 Response:
To me, cultural representation means showing people and their experiences in a way that feels honest and real. It is about being seen for who you truly are, not through someone else’s bias or imagination. When I think about how AI represents cultures, I look for balance. If all the images of South Asia look the same, that tells me the model isn’t seeing people as individuals. Real representation should show diversity within the culture, not just the most common or stereotypical version of it.

Question 2 Response:
I think small, community-based evaluations are more powerful because they come from real people who understand their own culture. Big benchmarks can measure accuracy, but they cannot tell if an image feels respectful or authentic. A community can explain why something is wrong or harmful in ways that a number cannot. These smaller evaluations help uncover the emotional and social side of bias, which can make the research more human.

Question 3 Response:
I believe AI can be made more inclusive globally if the people creating it take the time to include real voices from the cultures being represented. The article shows that text-to-image models often reflect Western or Indian-centered views of South Asia, but it also introduces the idea of a community-centered approach that could change that. If developers work directly with local artists, researchers, and community members, they can build systems that better understand cultural differences instead of repeating stereotypes. It will take effort, transparency, and shared decision-making, but I think AI can move closer to true global inclusivity if the people who are being represented have a real say in how the technology shows them.

Question 4 Response:
Developers could work directly with communities. That means inviting artists, photographers, and cultural experts to review AI outputs before releasing a model. They could fix patterns like using India as the default image for South Asia or showing only one type of lifestyle or clothing. Developers should also be transparent about how their data was collected and allow people to request changes or removals if something feels wrong. Testing AI with real communities before release should become a normal part of development.

Question 5 Response:
Culture is always changing, so AI needs to be flexible and updated often. Developers could update datasets regularly with help from people who actually live in those cultures. They could also have systems where people can give feedback or correct mistakes easily. Instead of trying to freeze culture, AI should learn that it evolves. That way, what the model shows will stay relevant and true to the people it represents.

Question 6 Response:
Older media, like film and television, often showed certain cultures through a narrow or biased lens. Those same mistakes can happen in AI if no one pays attention. The difference is that AI can spread those images much faster, which makes the harm bigger. We can learn from that history by being more careful now. If past media taught us that who tells the story matters, then in AI, who builds and trains the model matters just as much.

My new Discussion Question:

If AI systems continue to reflect cultural biases even after improvements, should we focus more on fixing the technology or on changing the global structures that shape the data it learns from?

I chose this question because the article made me realize that AI bias is not just a technical problem but comes from the world it is built on. Even if we improve the technology, it still learns from data that reflects real-world inequality. I wanted to ask whether the real solution is to make AI better at avoiding those biases or to focus on the deeper issues in society that create them in the first place.

Reflection:

I thought I had a good idea of how bias can exist in AI, but after reading this case, I was surprised by how deep and complex it can really get. I used to think bias mainly came from the data or how the models were trained, but this article helped me see that it also comes from the deeper inequalities in society that shape what stories are told and whose voices are heard. Bias can appear in subtle ways, like when certain cultures are shown inaccurately or when their full experiences are ignored. It also goes back to that idea of how much influence developers and companies have in making these models, and how harmful and disrespectful the results can be if they are not accurate. This case study just continues to prove how important it is to include real people and voices from different backgrounds in the design process. That is the only way technology can represent people honestly instead of repeating the same limited view of the world.