Narayanan, Arvind. The Limits of the Quantitative Approach to Discrimination
2022, James Baldwin Lecture Series
Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer

Introduction: Let's set the stage. In 2016, ProPublica released a ground-breaking investigation called Machine Bias. You've probably heard of it. They examined a criminal risk prediction tool that's used across the country. These are tools that claim to predict the likelihood that a defendant will reoffend if released, and they are used to inform bail and parole decisions.

Comment (from this Blueprint): This is a written transcript of the James Baldwin Lecture, delivered by the computer scientist Arvind Narayanan at Princeton in 2022. Narayanan's prior research has examined algorithmic bias and standards of fairness in algorithmic decision-making. Here, he engages critically with his own discipline, arguing that there are serious limits to the quantitative methods computer scientists use to investigate potential biases in their own tools. Narayanan acknowledges that in voicing this critique he is echoing claims made by feminist researchers from fields beyond computer science. However, his own arguments, grounded in the details of the quantitative methods he knows best, home in on exactly why these prior criticisms hold up, and in doing so aim to speak more persuasively to his peers in computer science and other quantitative fields.

Vredenburgh, Kate. The Right to Explanation
2021, Journal of Political Philosophy 30 (2): 209-229
Added by: Deryn Mair Thomas

Abstract:
This article argues for a right to explanation, on the basis of its necessity to protect the interest in what I call informed self-advocacy from the serious threat of opacity. The argument for the right to explanation proceeds along the lines set out by an interest-based account of rights (Section II). Section III presents and motivates the moral importance of informed self-advocacy in hierarchical, non-voluntary institutions. Section IV argues for a right to so-called rule-based normative and causal explanations, on the basis of their necessity to protect that interest. Section V argues that this protection comes at a tolerable cost.
Comment: This paper asserts a right to explanation grounded in an interest in informed self-advocacy, the term the author uses to describe a cluster of abilities to represent one's interests and values to decision-makers and to further those interests and values within an institution. Vredenburgh also argues that such forms of self-advocacy are necessary for hierarchical, non-voluntary institutions to be legitimate and fair - and it is on these grounds that a person may reasonably reject institutional set-ups that prevent them from exercising these abilities. In this sense, Vredenburgh's argument applies to a broader set of problems than algorithmic opacity alone - it may feasibly be applied to cases in which other systems (such as bureaucratic ones) deny an individual this right to explanation. This paper would therefore be useful as further or specialised reading in a variety of classroom contexts, including courses or reading groups on technological and algorithmic ethics, basic political rights, and bureaucratic ethics, as well as more general social and political philosophy courses. It might be interesting, for example, to use it in an introductory social/political course to discuss with students some of the ethical questions particular to a 21st-century context. As systems become more complex and individuals become further removed from the institutional decision-making that guides, rules, and directs their lives, what right do we have to understand the processes that condition our experience? In what other situations might these rights be challenged?