Dick, Stephanie. AfterMath: The Work of Proof in the Age of Human–Machine Collaboration
2011, Isis, 102(3): 494-505.
Added by: Fenner Stanley Tanswell
Abstract: During the 1970s and 1980s, a team of Automated Theorem Proving researchers at the Argonne National Laboratory near Chicago developed the Automated Reasoning Assistant, or AURA, to assist human users in the search for mathematical proofs. The resulting hybrid humans+AURA system developed the capacity to make novel contributions to pure mathematics by very untraditional means. This essay traces how these unconventional contributions were made and made possible through negotiations between the humans and the AURA at Argonne and the transformation in mathematical intuition they produced. At play in these negotiations were experimental practices, nonhumans, and nonmathematical modes of knowing. This story invites an earnest engagement between historians of mathematics and scholars in the history of science and science studies interested in experimental practice, material culture, and the roles of nonhumans in knowledge making.
Comment (from this Blueprint): Dick traces the history of the AURA automated reasoning assistant in the 1970s and 80s, arguing that the introduction of the computer system led to novel contributions to mathematics by unconventional means. Dick’s emphasis is on the AURA system as changing the material culture of mathematics, and thereby leading to collaboration and even negotiations between the mathematicians and the computer system.
Taylor, Elanor. Explanation and The Right to Explanation
2023, Journal of the American Philosophical Association 1:1-16
Added by: Deryn Mair Thomas
Abstract: In response to widespread use of automated decision-making technology, some have considered a right to explanation. In this paper I draw on insights from philosophical work on explanation to present a series of challenges to this idea, showing that the normative motivations for access to such explanations ask for something difficult, if not impossible, to extract from automated systems. I consider an alternative, outcomes-focused approach to the normative evaluation of automated decision-making, and recommend it as a way to pursue the goods originally associated with explainability.

Comment: This paper offers a clear overview of the literature on the right to explanation and counters the mainstream view that, in the context of automated decision-making technology, we hold such a right. It would therefore offer a useful introduction to ideas about explainability in relation to the ethics of AI and automated technologies, and could be used in a reading group context as well as in upper undergraduate and graduate level courses.