Taylor, Elanor. Explanation and The Right to Explanation
2023, Journal of the American Philosophical Association 1:1-16
Added by: Deryn Mair Thomas
Abstract:

In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the goods originally associated with explainability.

Comment: This paper offers a clear overview of the literature on the right to explanation and counters the mainstream view that, in the context of automated decision-making technology, we hold such a right. It would therefore offer a useful introduction to ideas about explainability in relation to the ethics of AI and automated technologies, and could be used in a reading group context as well as in upper undergraduate and graduate level courses.

Vredenburgh, Kate. Freedom at Work: Understanding, Alienation, and the AI-Driven Workplace
2022, Canadian Journal of Philosophy 52 (1):78-92.
Added by: Deryn Mair Thomas
Abstract:

This paper explores a neglected normative dimension of algorithmic opacity in the workplace and the labor market. It argues that explanations of algorithms and algorithmic decisions are of noninstrumental value. That is because explanations of the structure and function of parts of the social world form the basis for reflective clarification of our practical orientation toward the institutions that play a central role in our life. Using this account of the noninstrumental value of explanations, the paper diagnoses distinctive normative defects in the workplace and economic institutions which a reliance on AI can encourage, and which lead to alienation.

Comment: This paper offers a novel approach to the exploration of alienation at work (i.e., what makes work bad) from an algorithmic ethics perspective. It relies on the noninstrumental value of explanation to make its central argument, and grounds this value in the role that explanation plays in our ability to form a practical orientation towards our social world. In this sense, it examines an interesting, and somewhat underexplored, connection between algorithmic ethics, justice, the future of work, and social capabilities. As such, it could be useful in a wide range of course contexts. That said, the central argument is fairly complex, relies on some previous understanding of analytic political philosophy and philosophy of AI, and employs technical language from these domains; it would therefore be best utilised in masters-level or other advanced philosophical courses and study.
