Taylor, Elanor. Explanation and The Right to Explanation
2023, Journal of the American Philosophical Association 1:1-16
Added by: Deryn Mair Thomas
Abstract:
In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the goods originally associated with explainability.
Comment: This paper offers a clear overview of the literature on the right to explanation and counters the mainstream view that, in the context of automated decision-making technology, we hold such a right. It would therefore offer a useful introduction to ideas about explainability in relation to the ethics of AI and automated technologies, and could be used in a reading group context as well as in upper undergraduate and graduate level courses.
Vredenburgh, Kate. The Right to Explanation
2021, Journal of Political Philosophy 30 (2): 209-229
Added by: Deryn Mair Thomas
Abstract:
This article argues for a right to explanation, on the basis of its necessity to protect the interest in what I call informed self-advocacy from the serious threat of opacity. The argument for the right to explanation proceeds along the lines set out by an interest-based account of rights (Section II). Section III presents and motivates the moral importance of informed self-advocacy in hierarchical, non-voluntary institutions. Section IV argues for a right to so-called rule-based normative and causal explanations, on the basis of their necessity to protect that interest. Section V argues that this protection comes at a tolerable cost.
Comment: This paper asserts a right to explanation grounded in an interest in informed self-advocacy, the term the author uses to describe a cluster of abilities to represent one's interests and values to decision-makers and to further those interests and values within an institution. Vredenburgh also argues that such forms of self-advocacy are necessary for hierarchical, non-voluntary institutions to be legitimate and fair - and it is on these grounds that a person may reasonably reject institutional set-ups that prevent them from exercising these abilities. In this sense, Vredenburgh's argument applies to a broader set of problems than simply algorithmic opacity - it may feasibly be applied to cases in which systems (such as bureaucratic ones) deny an individual this right to explanation. Therefore, this paper presents an argument which would be useful as further or specialised reading in a variety of classroom contexts, including courses or reading groups addressing technological and algorithmic ethics, basic political rights, and bureaucratic ethics, as well as more general social and political philosophy courses. It might be interesting, for example, to use it in an introductory social/political course to discuss with students some of the ethical questions that are particular to a 21st-century context. As systems become more complex and individuals become further removed from the institutional decision-making that guides/rules/directs their lives, what right do we have to understand the processes that condition our experience? In what other situations might these rights become challenged?
Vredenburgh, Kate. Freedom at Work: Understanding, Alienation, and the AI-Driven Workplace
2022, Canadian Journal of Philosophy 52 (1): 78-92
Added by: Deryn Mair Thomas
Abstract:
This paper explores a neglected normative dimension of algorithmic opacity in the workplace and the labor market. It argues that explanations of algorithms and algorithmic decisions are of noninstrumental value. That is because explanations of the structure and function of parts of the social world form the basis for reflective clarification of our practical orientation toward the institutions that play a central role in our life. Using this account of the noninstrumental value of explanations, the paper diagnoses distinctive normative defects in the workplace and economic institutions which a reliance on AI can encourage, and which lead to alienation.
Comment: This paper offers a novel approach to the exploration of alienation at work (i.e., what makes work bad) from an algorithmic ethics perspective. It relies on the noninstrumental value of explanation to make its central argument, and grounds this value in the role that explanation plays in our ability to form a practical orientation towards our social world. In this sense, it examines an interesting, and somewhat underexplored, connection between algorithmic ethics, justice, the future of work, and social capabilities. As such, it could be useful in a wide range of course contexts. That being said, the central argument is fairly complex, and relies on some previous understanding of analytic political philosophy and philosophy of AI. It also employs technical language from these domains, and would therefore be best utilised in master's-level or other advanced philosophy courses.
Radin, Joanna. 'Digital Natives': How Medical and Indigenous Histories Matter for Big Data
2017, Data Histories, 32 (1): 43-64
Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Abstract:
This case considers the politics of reuse in the realm of “Big Data.” It focuses on the history of a particular collection of data, extracted and digitized from patient records made in the course of a longitudinal epidemiological study involving Indigenous members of the Gila River Indian Community Reservation in the American Southwest. The creation and circulation of the Pima Indian Diabetes Dataset (PIDD) demonstrates the value of medical and Indigenous histories to the study of Big Data. By adapting the concept of the “digital native” itself for reuse, I argue that the history of the PIDD reveals how data becomes alienated from persons even as it reproduces complex social realities of the circumstances of its origin. In doing so, this history highlights otherwise obscured matters of ethics and politics that are relevant to communities who identify as Indigenous as well as those who do not.
Comment (from this Blueprint): In this 2017 paper, historian Joanna Radin explores how reusing big data can contribute to the continued subjugation of the Akimel O’odham, who live in the southwestern region of the US and are otherwise known as the "Pima". This reading also illustrates how data can, over time, come to be used for purposes it was never intended or collected for. Radin emphasizes the dangers of forgetting that data represent human beings.
Narayanan, Arvind. The Limits of the Quantitative Approach to Discrimination
2022, James Baldwin Lecture Series
Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Introduction: Let’s set the stage. In 2016, ProPublica released a ground-breaking investigation called Machine Bias. You’ve probably heard of it. They examined a criminal risk prediction tool that’s used across the country. These are tools that claim to predict the likelihood that a defendant will reoffend if released, and they are used to inform bail and parole decisions.
Comment (from this Blueprint): This is a written transcript of the James Baldwin lecture, delivered by the computer scientist Arvind Narayanan at Princeton in 2022. Narayanan's prior research has examined algorithmic bias and standards of fairness with respect to algorithmic decision-making. Here, he engages critically with his own discipline, suggesting that there are serious limits to the sorts of quantitative methods that computer scientists recruit to investigate the potential biases in their own tools. Narayanan acknowledges that in voicing this critique he is echoing claims made by feminist researchers from fields beyond computer science. However, his own arguments, centered as they are on the details of the quantitative methods he knows best, home in on exactly why these prior criticisms hold up, and thereby aim to speak more persuasively to his peers in computer science and other quantitative fields.
Martin, Ursula, Pease, Alison. Mathematical Practice, Crowdsourcing, and Social Machines
2013, in Intelligent Computer Mathematics. CICM 2013. Lecture Notes in Computer Science, Carette, J. et al. (eds.). Springer
Added by: Fenner Stanley Tanswell
Abstract:
The highest level of mathematics has traditionally been seen as a solitary endeavour, to produce a proof for review and acceptance by research peers. Mathematics is now at a remarkable inflexion point, with new technology radically extending the power and limits of individuals. Crowdsourcing pulls together diverse experts to solve problems; symbolic computation tackles huge routine calculations; and computers check proofs too long and complicated for humans to comprehend. The Study of Mathematical Practice is an emerging interdisciplinary field which draws on philosophy and social science to understand how mathematics is produced. Online mathematical activity provides a novel and rich source of data for empirical investigation of mathematical practice - for example, the community question-answering system mathoverflow contains around 40,000 mathematical conversations, and polymath collaborations provide transcripts of the process of discovering proofs. Our preliminary investigations have demonstrated the importance of “soft” aspects such as analogy and creativity, alongside deduction and proof, in the production of mathematics, and have given us new ways to think about the roles of people and machines in creating new mathematical knowledge. We discuss further investigation of these resources and what it might reveal. Crowdsourced mathematical activity is an example of a “social machine”, a new paradigm, identified by Berners-Lee, for viewing a combination of people and computers as a single problem-solving entity, and the subject of major international research endeavours. We outline a future research agenda for mathematics social machines, a combination of people, computers, and mathematical archives to create and apply mathematics, with the potential to change the way people do mathematics, and to transform the reach, pace, and impact of mathematics research.
Comment (from this Blueprint): In this paper, Martin and Pease look at how mathematics happens online, emphasising how this embodies the picture of mathematics given by Polya and Lakatos, two central figures in the philosophy of mathematical practice. They look at multiple venues of online mathematics, including the polymath projects of collaborative problem-solving, and mathoverflow, which is a question-and-answer forum. By looking at the discussions that take place when people are doing maths online, they argue that you can get rich new kinds of data about the processes of mathematical discovery and understanding. They discuss how online mathematics can become a “social machine”, and how this can open up new ways of doing mathematics.
Dick, Stephanie. AfterMath: The Work of Proof in the Age of Human–Machine Collaboration
2011, Isis, 102(3): 494-505
Added by: Fenner Stanley Tanswell
Abstract:
During the 1970s and 1980s, a team of Automated Theorem Proving researchers at the Argonne National Laboratory near Chicago developed the Automated Reasoning Assistant, or AURA, to assist human users in the search for mathematical proofs. The resulting hybrid humans+AURA system developed the capacity to make novel contributions to pure mathematics by very untraditional means. This essay traces how these unconventional contributions were made and made possible through negotiations between the humans and the AURA at Argonne and the transformation in mathematical intuition they produced. At play in these negotiations were experimental practices, nonhumans, and nonmathematical modes of knowing. This story invites an earnest engagement between historians of mathematics and scholars in the history of science and science studies interested in experimental practice, material culture, and the roles of nonhumans in knowledge making.
Comment (from this Blueprint): Dick traces the history of the AURA automated reasoning assistant in the 1970s and 80s, arguing that the introduction of the computer system led to novel contributions to mathematics by unconventional means. Dick’s emphasis is on the AURA system as changing the material culture of mathematics, and thereby leading to collaboration and even negotiations between the mathematicians and the computer system.
Bortolotti, Lisa, John Harris. Disability, Enhancement and the Harm-Benefit Continuum
2006, in John R. Spencer & Antje Du Bois-Pedain (eds.), Freedom and Responsibility in Reproductive Choice. Hart Publishing
Added by: Simon Fokt, Contributed by: Nils-Hennes Stear
Abstract:
Suppose that you are soon to be a parent and you learn that there are some simple measures that you can take to make sure that your child will be healthy. In particular, suppose that by following the doctor’s advice, you can prevent your child from having a disability, you can make your child immune from a number of dangerous diseases and you can even enhance its future intelligence. All that is required for this to happen is that you (or your partner) comply with lifestyle and dietary requirements. Do you and your partner have any moral reasons (or moral obligations) to follow the doctor’s advice? Would it make a difference if, instead of following some simple dietary requirements, you consented to genetic engineering to make sure that your child was free from disabilities, healthy and with above average intelligence? In this paper we develop a framework for dealing with these questions and we suggest some directions the answers might take.
Comment: This paper gives an account of enhancement and disability in terms of one's relative position on a continuum between harmed and benefitted states, and defends enhancement on completely general moral grounds according to which the pro tanto duty to enhance is the same as the pro tanto duty not to disable. It pairs well with criticisms of the 'new eugenics', such as Robert Sparrow's 'A Not-So-New Eugenics' (2011), and more generally with consequentialist or specifically harm-based accounts of moral obligation.
Wilson, Dawn M. Facing the Camera: Self-portraits of Photographers as Artists
2012, The Journal of Aesthetics and Art Criticism 70(1): 56-66
Added by: Hans Maes
Introduction: Self-portrait photography presents an elucidatory range of cases for investigating the relationship between automatism and artistic agency in photography - a relationship that is seen as a problem in the philosophy of art. I discuss self-portraits by photographers who examine and portray their own identities as artists working in the medium of photography. I argue that the automatism inherent in the production of a photograph has made it possible for artists to extend the tradition of self-portraiture in a way that is radically different from previous visual arts. In Section I, I explain why self-portraiture offers a way to address the apparent conflict between automatism and agency that is debated in the philosophy of art. In Section II, I explain why mirrors play an important function in the production of a traditional self-portrait. In Sections III and IV, I discuss how photographers may create self-portraits with and without the use of mirrors to show how photography offers unique and important new forms of self-portraiture.
Comment: Argues that the automatism inherent in the production of a photograph has made it possible for artists to extend the tradition of self-portraiture in a way that is radically different from previous visual arts. Demonstrates that automatism need not stand in competition or conflict with artistic agency.
Artworks to use with this text:
Ilse Bing, Self-portrait with Leica (1931)
It is usual for portraits to show a person's head either in profile or in a frontal position, but this self-portrait shows both alternatives simultaneously. It also depicts the presence of two mirrors in such a way that we are in a position to judge that the camera has recorded its own reflection. Thus, we see both the face of the artist and the "face" of the camera: it is a double self-portrait.
Shrader-Frechette, Kristin. Reductionist Philosophy of Technology: Stones Thrown from Inside a Glass House
1994, Techné: Research in Philosophy and Technology 5(1): 21-28
Added by: Laura Jimenez
Introduction: Mark Twain said that, for people whose only tool is a hammer, everything looks like a nail. In Thinking about Technology, Joe Pitt's main tools appear to be those of the philosopher of science, so it is not surprising that he claims most problems of philosophy of technology are epistemic problems. As he puts it: 'The strategy here is straightforward. Philosophers of science have examined in detail a number of concepts integral to our understanding of what makes science what it is. The bottom line is this: philosophical questions about technology are first and foremost questions about what we can know about a specific technology and its effects and in what that knowledge consists'. Although Pitt points out important disanalogies between scientific and technological knowledge, he nevertheless emphasizes that philosophy of technology is primarily epistemology. Pitt has stipulatively defined ethical and political analyses of technology as not part of philosophy and philosophy of technology. While claiming to assess the foundations of philosophy of technology, he has adopted a reductionist approach to his subject matter, one that ignores or denigrates the majority of work in philosophy of technology. Does Pitt's bold, reductionist move succeed?
Comment: Good as further reading for philosophy of science courses, or as introductory reading for courses specialising in philosophy of technology. It is an accessible paper, but its topic is quite specific, so it is more suitable for postgraduates.