Taylor, Elanor. Explanation and The Right to Explanation 2023, Journal of the American Philosophical Association 1:1-16
Added by: Deryn Mair Thomas
Abstract: In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the goods originally associated with explainability.
Narayanan, Arvind. The Limits of the Quantitative Approach to Discrimination 2022, James Baldwin Lecture Series
Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Introduction: Let’s set the stage. In 2016, ProPublica released a ground-breaking investigation called Machine Bias. You’ve probably heard of it. They examined a criminal risk prediction tool that’s used across the country. These are tools that claim to predict the likelihood that a defendant will reoffend if released, and they are used to inform bail and parole decisions.
Comment (from this Blueprint): This is a written transcript of the James Baldwin Lecture delivered by the computer scientist Arvind Narayanan at Princeton in 2022. Narayanan's prior research has examined algorithmic bias and standards of fairness in algorithmic decision making. Here, he engages critically with his own discipline, suggesting that there are serious limits to the quantitative methods computer scientists use to investigate potential biases in their own tools. Narayanan acknowledges that in voicing this critique he is echoing claims by feminist researchers from fields beyond computer science. However, his own arguments, grounded in the details of the quantitative methods he knows best, home in on exactly why these prior criticisms hold up, and thereby aim to speak more persuasively to his peers in computer science and other quantitative fields.
Dick, Stephanie. AfterMath: The Work of Proof in the Age of Human–Machine Collaboration 2011, Isis, 102(3): 494-505.
Added by: Fenner Stanley Tanswell
Abstract:
During the 1970s and 1980s, a team of Automated Theorem Proving researchers at the Argonne National Laboratory near Chicago developed the Automated Reasoning Assistant, or AURA, to assist human users in the search for mathematical proofs. The resulting hybrid humans+AURA system developed the capacity to make novel contributions to pure mathematics by very untraditional means. This essay traces how these unconventional contributions were made and made possible through negotiations between the humans and the AURA at Argonne and the transformation in mathematical intuition they produced. At play in these negotiations were experimental practices, nonhumans, and nonmathematical modes of knowing. This story invites an earnest engagement between historians of mathematics and scholars in the history of science and science studies interested in experimental practice, material culture, and the roles of nonhumans in knowledge making.
Comment (from this Blueprint): Dick traces the history of the AURA automated reasoning assistant in the 1970s and 80s, arguing that the introduction of the computer system led to novel contributions to mathematics by unconventional means. Dick’s emphasis is on the AURA system as changing the material culture of mathematics, and thereby leading to collaboration and even negotiations between the mathematicians and the computer system.
Series, Peggy, Mark Sprevak. From Intelligent machines to the human brain 2014, in M. Massimi (ed.), Philosophy and the Sciences for Everyone. Routledge
Added by: Laura Jimenez
Summary: How does one make a clever adaptive machine that can recognise speech, control an aircraft, and detect credit card fraud? Recent years have seen a revolution in the kinds of tasks computers can do. Underlying these advances are the burgeoning fields of machine learning and computational neuroscience. The same methods that allow us to make clever machines also appear to hold the key to understanding ourselves: to explaining how our brain and mind work. This chapter explores this exciting new field and some of the philosophical questions that it raises.
Comment: A really good chapter that could serve to introduce the scientific ideas behind the mind-computer analogy. The chapter zooms in on the actual functioning of the human mind as a computer able to perform computations. Recommended for undergraduate students in Philosophy of Mind or Philosophy of Science courses.
Shrader-Frechette, Kristin. Reductionist Philosophy of Technology: Stones Thrown from Inside a Glass House 1994, Techné: Research in Philosophy and Technology 5(1): 21-28.
Added by: Laura Jimenez
Introduction: Mark Twain said that, for people whose only tool is a hammer, everything looks like a nail. In Thinking about Technology, Joe Pitt's main tools appear to be those of the philosopher of science, so it is not surprising that he claims most problems of philosophy of technology are epistemic problems. As he puts it: 'The strategy here is straightforward. Philosophers of science have examined in detail a number of concepts integral to our understanding of what makes science what it is. The bottom line is this: philosophical questions about technology are first and foremost questions about what we can know about a specific technology and its effects and in what that knowledge consists'. Although Pitt points out important disanalogies between scientific and technological knowledge, he nevertheless emphasizes that philosophy of technology is primarily epistemology. Pitt has stipulatively defined ethical and political analyses of technology as falling outside philosophy and philosophy of technology. While claiming to assess the foundations of philosophy of technology, he has adopted a reductionist approach to his subject matter, one that ignores or denigrates the majority of work in the field. Does Pitt's bold, reductionist move succeed?
Comment: Good as further reading for philosophy of science courses, or as introductory reading for courses specializing in philosophy of technology. The paper is accessible, but its topic is quite narrow, so it is more suitable for postgraduates.
Sterrett, Susan G. Turing’s Two Tests For Intelligence 2000, Minds and Machines 10(4): 541-559.
Added by: Nick Novelli, Contributed by: Susan G. Sterrett
Abstract: On a literal reading of 'Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question 'Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as 'the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) 'Turing Test' has been dismissed.
Comment: This paper provides a good analysis of some of the problems with the Turing Test and how they can be avoided. It is useful when teaching Turing's classic 1950 paper, which considers the role of gender in the imitation-game version of the test, on the question of whether a computer could be said to 'think'. It could also contribute to an examination of the concept of intelligence, and machine intelligence in particular.
Taylor, Elanor. Explanation and The Right to Explanation 2023, Journal of the American Philosophical Association 1:1-16
Comment: This paper offers a clear overview of the literature on the right to explanation and counters the mainstream view that, in the context of automated decision-making technology, we hold such a right. It would therefore serve as a useful introduction to ideas about explainability in relation to the ethics of AI and automated technologies, and could be used in a reading-group context as well as in upper undergraduate and graduate level courses.