ALG Blog 2: How Generative AI Works and How It Fails
Published on:
Generative AI and the Question of Fairness
News Article: How Generative AI Works and How It Fails
Case Study Summary
In this case study, Narayanan and Kapoor explain how generative AI actually works and where it goes wrong. For text, large language models simply predict the next word based on patterns in huge amounts of internet data. That is why they can write essays, answer questions, or even sound like experts. But it also means they can be confidently wrong, since they are not thinking, only predicting. For images, models use a process called diffusion: they start from random noise and gradually remove it until a picture forms. It looks magical, but it is just pattern recognition at scale. The authors also point out the bigger issues. Generative AI can spread false information, it can produce deepfakes that erode trust in media, and it relies on low-paid workers to clean and filter the data. They argue that we need more transparency about training data, stronger rules for companies, and better protections for workers.
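The next-word idea can be sketched with a toy bigram model. This is a drastic simplification I am using only for illustration: a real LLM is a neural network trained over much longer contexts, but the core move is the same, counting which words tend to follow which.

```python
from collections import Counter, defaultdict

# Tiny toy corpus standing in for "huge amounts of internet data".
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat ate the fish ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
print(predict_next("sat"))  # "on" always follows "sat" here
```

Notice the model has no idea what a cat is; it only knows the statistics of the text it was given. That is exactly why such systems can be fluent and confidently wrong at the same time.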
Discussion Topic I Chose: The use of creative work for training
I would say this practice is unethical. These models are only strong because they are built on the work of journalists, writers, photographers, and artists, yet most of those people never gave consent, got credit, or were paid. If I created something, I would not want it taken without my knowledge. That feels unfair, and it makes me think the system is taking more from creators than it gives back.

I think people can push for more transparency from AI companies so we actually know what data is being used. Creators and publishers could also push for licensing agreements where their work can only be used if they agree and get compensated. Another option is to build new platforms or tools that train only on openly licensed or paid-for material, so creators have more control over what happens with their work. Other policies could focus on protecting the workers who filter harmful content; many of them are underpaid and deal with disturbing material every day, and they deserve fair pay and mental health support. I also think companies should be required to share more information about their training data and energy use so people know the real costs of these tools. Even though I still use AI to help me with drafts and ideas, I am becoming more cautious about which companies I support and how they treat both creators and workers.
New Discussion Question
If AI models need massive amounts of creative work to improve, should there be a system where creators automatically get paid when their work is used, similar to the royalties creators earn on certain sales in business? Would this make generative AI more fair, or would it create too many barriers to innovation?
Why I chose it
I picked this question because I think fairness should mean creators actually benefit when their work is part of something bigger. Royalties are a common practice in business, so it makes me wonder if something like that could also work for AI.
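A royalty system like the one in my question could work in many ways; here is one hypothetical sketch, where a fixed per-use fee is split among creators in proportion to how much of their material went into the training set. All the names and numbers are invented for illustration.

```python
def split_royalty(fee_cents, contributions):
    """Divide `fee_cents` among creators in proportion to their share
    of the training material. Returns whole-cent payouts; any rounding
    remainder goes to the largest contributor."""
    total = sum(contributions.values())
    payouts = {name: fee_cents * amount // total
               for name, amount in contributions.items()}
    remainder = fee_cents - sum(payouts.values())
    largest = max(contributions, key=contributions.get)
    payouts[largest] += remainder
    return payouts

# Invented example: pages (or images) each creator contributed.
contributions = {"photographer_a": 500, "writer_b": 300, "artist_c": 200}
print(split_royalty(100, contributions))
# {'photographer_a': 50, 'writer_b': 30, 'artist_c': 20}
```

Even this toy version shows why transparency matters: you cannot split the fee fairly unless the company discloses whose work is in the training data and how much of it was used.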
Reflection:
Writing this blog made me think more about the human side of generative AI. At first I only saw it as a cool tool that can save me time, but now I see how it also depends on the work of creators and the effort of workers who often do not get recognized. I learned that it is not just about how the technology works but also about whether the system is fair to the people behind it. Going forward, I will still use AI for drafts and ideas, but I want to be more thoughtful about which tools I support and whether they are transparent and respectful of both creators and workers.
