Reading Check: BHN
For each of the following soundbites, please write two sentences describing how Barocas, Hardt, and Narayanan (2023) would respond. Please also include the page number, in this PDF version of the book, on which you are basing your sentences. You might find useful discussion of the point on multiple pages; it's sufficient to list one.
You may need to check both the reading for today and the reading from the previous lecture.
- “Since the COMPAS algorithm didn’t use race as a predictor variable during training, it can’t be racially biased.”
- “For every decision-making task, it is possible to ethically deploy an appropriately trained and audited automated decision model.”
- “If two groups \(a\) and \(b\) are on average equally deserving of access to an opportunity, then the only requirement of fairness is that a decision-making system accepts members of group \(a\) at the same rate as members of group \(b\).”
- “Decision-making algorithms should have equal error rates between different groups.”
- “The data doesn’t lie, so the only fair approach to machine learning is to replicate patterns found in the data as accurately as possible.”
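The last two soundbites invoke two distinct statistical criteria: equality of acceptance rates across groups (demographic parity) and equality of error rates across groups. For a concrete sense of what these quantities measure, here is a minimal Python sketch of how one might compute them from a model's decisions; the data, variable names, and group labels are all hypothetical and are not drawn from the book.

```python
# A minimal sketch (not from BHN) of group-wise acceptance rates and
# group-wise error rates. All data below is randomly generated for
# illustration only.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical data: true outcomes y, model decisions y_hat, group labels g.
y     = rng.integers(0, 2, size=1000)      # true outcome (0 or 1)
y_hat = rng.integers(0, 2, size=1000)      # model's accept (1) / reject (0) decision
g     = rng.choice(["a", "b"], size=1000)  # group membership

for group in ["a", "b"]:
    mask = g == group
    acceptance_rate = y_hat[mask].mean()         # P(Y_hat = 1 | G = group)
    fpr = y_hat[mask & (y == 0)].mean()          # false positive rate within group
    fnr = 1 - y_hat[mask & (y == 1)].mean()      # false negative rate within group
    print(f"group {group}: acceptance={acceptance_rate:.2f}, "
          f"FPR={fpr:.2f}, FNR={fnr:.2f}")
```

Note that a classifier can equalize acceptance rates while having unequal error rates, and vice versa, so these two soundbites make genuinely different demands of a decision-making system.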
© Phil Chodrow, 2025
References
Barocas, Solon, Moritz Hardt, and Arvind Narayanan. 2023. Fairness and Machine Learning: Limitations and Opportunities. Cambridge, Massachusetts: The MIT Press.