ALG Blog 1: The Right to Be an Exception to Data-Driven Rules
Published on:
Why Exceptions Matter in a Data-Driven World
News Article: The Right to Be an Exception to a Data-Driven Rule
Case Study Summary
This case study by Sarah H. Cen and Manish Raghavan examines how data-driven rules are used in high-stakes decisions like hiring, lending, and criminal justice. The main point is that people should have the right to be an exception to these rules. The authors argue that we need to judge algorithms not just on accuracy but also on how individualized they are, how much uncertainty they carry, and how much harm they can cause.
Exploring the Ideas Behind Data-Driven Rules
Understanding Data-Driven Rules and What It Means to Be an Exception — A data-driven rule is basically the logic an algorithm follows to take in information and produce a prediction or outcome. For example, a hiring algorithm might use GPA, work experience, and keywords on a resume to decide who moves on in the process. A data-driven exception is someone who does not fit what the model expects, even though they may be qualified or deserving. An exception is not always the same as an error. An error can be proven by comparing the outcome to reality, but being an exception just means the rule fails to capture the full truth of that person’s situation. For example, a loan applicant with unusual but stable income might be rejected because the model does not recognize their type of work. That is not necessarily an error in the system, but it still fails that person in a way that matters.
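The difference between a rule and an exception can be sketched in code. Below is a toy example, where every feature, threshold, and applicant is invented for illustration, of a hiring rule like the one described above, and an applicant the rule rejects without being provably wrong about them:

```python
# A toy data-driven hiring rule (all fields and cutoffs are hypothetical).
def hiring_rule(applicant):
    """Score an applicant using only the features the model was built on."""
    score = 0
    score += 2 if applicant["gpa"] >= 3.5 else 0
    score += 2 if applicant["years_experience"] >= 3 else 0
    score += len(applicant["resume_keywords"] & {"python", "sql", "ml"})
    return score >= 4  # does the applicant advance?

# A self-taught applicant whose real ability lives outside these features:
exception = {
    "gpa": 2.9,                     # below the cutoff
    "years_experience": 1,
    "resume_keywords": {"python"},  # open-source work is not a feature here
}

print(hiring_rule(exception))  # False: rejected by the rule, not proven unqualified
```

The point of the sketch is that nothing in the rule is "wrong" in a checkable sense; the applicant simply falls outside what the features can express.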
How Algorithmic Decisions Differ from Human Judgment
One thing I notice is scale. Data-driven systems are applied to thousands of people at once, while human judgment is slower and more varied. If an algorithm rejects you for a job, and every company uses that same algorithm, then you can be locked out of opportunities in a way that is systematic. With humans, even if one person doubts you, another might see your potential. Another difference is adaptability. Humans can hear context in the moment and weigh things that were not expected, while algorithms are limited to the features they were trained on. If something about your life matters but was not in the data, the system ignores it. I also think about transparency. With a human, you might at least get a reason for the decision. With algorithms, retraining can change outcomes silently, and people often have no idea why something happened. I see this even in smaller ways, like automated grading tools in school. Sometimes a program marks an answer wrong even when the logic was right, and unless a professor reviews it, there is no explanation or chance to argue.
The Promise and Problems of Individualization
Individualization is about tailoring decisions to each person instead of relying only on group averages. A benefit is that it can highlight unique skills or circumstances. For example, a student who has taught themselves coding outside of class may not have the same formal background, but individualization could help bring their real ability into focus. Another benefit is trust. If a decision feels like it reflects who I actually am, I am more likely to accept it, even if I do not get the outcome I wanted. At the same time, there are serious downsides. Individualization usually means sharing more personal information. That raises privacy concerns because once the data is collected it can be misused or leaked. There is also the risk of overfitting, where the model becomes too tuned to small details and performs worse in the real world. Finally, individualization can unintentionally favor people who know how to present themselves better or who have resources to provide extra context. Someone who struggles with language, for example, might not benefit as much from a system that asks for more detail, and that creates inequality in a different way.
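The overfitting risk mentioned above can be shown with a deliberately extreme toy example (all names and labels are made up): a "model" that individualizes all the way down to memorizing each training case has nothing to say about anyone new, while a cruder rule that ignores individual detail still generalizes:

```python
# Invented training data: person -> approved (1) or not (0).
train = {"ana": 1, "ben": 0, "cal": 1}

def memorizer(name):
    """Perfectly 'individualized' on the training data; silent off it."""
    return train.get(name)  # returns None for anyone not seen before

def majority_rule(name):
    """Ignores the individual entirely; always predicts the majority class."""
    return 1 if sum(train.values()) * 2 >= len(train) else 0

print(memorizer("ana"))       # 1: perfect on someone already seen
print(memorizer("dana"))      # None: the memorizer has nothing to say
print(majority_rule("dana"))  # 1: crude, but it at least produces an answer
```

Real overfitting is subtler than pure memorization, but the trade-off is the same: the more a model tunes itself to the details of the people it has seen, the less reliable it becomes for the people it has not.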
The Role of Uncertainty When Decisions Carry Real Consequences
Uncertainty is critical because no matter how good a model is, there will always be limits to what it can know. Some uncertainty comes from a lack of data, which can sometimes be reduced. Other uncertainty is built into life itself. For example, we cannot predict every factor that affects how a student will perform in college, from personal health to family situations. The authors call this aleatoric uncertainty, which is basically the part that is unknowable. High accuracy might sound convincing, but it averages results across a population. If you are in the small percentage the system gets wrong, and the decision is life changing, accuracy on average does not protect you. For example, if a system is 95 percent accurate at predicting reoffense in criminal cases, that still means 5 percent of people receive the wrong prediction. If you are in that 5 percent, the harm is huge and possibly permanent. For that reason, I do not think accuracy alone can ever justify high-stakes decisions without also looking at uncertainty and the harm that could come from being wrong.
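The arithmetic behind that point is simple, but worth making explicit (the numbers are the hypothetical ones from the paragraph above, not from any real system):

```python
# Hypothetical numbers: a 95%-accurate classifier applied to 100 people.
population = 100
accuracy = 0.95

correct = round(population * accuracy)
wrong = population - correct

print(f"{correct} people judged correctly, {wrong} judged wrongly")
# -> 95 people judged correctly, 5 judged wrongly
```

For those 5 people, "95 percent accurate" is no consolation, which is why accuracy averaged over a population says nothing about the severity of harm to the individuals it misses.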
My New Question:
How should systems be designed so that when the stakes are high, people who are exceptions are still protected from harm?
I thought of this because while the case study lays out a strong framework, it left me wondering about the practical steps. For example, systems could be required to show their level of uncertainty in a way decision makers cannot ignore. Humans could also be required to explain in writing when they follow or override an algorithm, adding accountability. Finally, there could be an appeal process that allows people to bring in their own evidence to challenge a decision.
Reflection:
Working on this assignment made me realize how often I rely on averages in daily life. In school we use averages in grades to see how a class is doing overall, and in sports we use stats to judge how good teams or players are. The case study’s focus on individualization, uncertainty, and harm gave me a checklist I can carry into my future work. If I build a project that uses data to make recommendations, I need to ask myself how well it represents the person in front of it, how confident the system really is, and what the consequences are if it is wrong. This shifted my perspective from only thinking about overall performance to also considering how each person might be affected. It also reminded me that ethics in technology is not just about creating better models but about protecting the people who are most vulnerable when the system fails. That is a lesson I want to hold onto both in class and as I move forward in my career.
