Topic: Social Philosophy -> Technology and Material Culture
Full text
Atencia-Linares, Paloma. Deepfakes, Shallow Epistemic Graves: On the Epistemic Robustness of Photography and Videos in the Era of Deepfakes
2022, Synthese, 518
Abstract:

The recent proliferation of deepfakes, AI and other digitally produced deceptive representations has revived the debate on the epistemic robustness of photography and other mechanically produced images. Authors such as Rini (2020) and Fallis (2021) claim that the proliferation of deepfakes poses a serious threat to the reliability and the epistemic value of photographs and videos. In particular, Fallis adopts a Skyrmsian account of how signals carry information (Skyrms, 2010) to argue that the existence of deepfakes significantly reduces the information that images carry about the world, which undermines their reliability as a source of evidence. In this paper, we focus on Fallis’ version of the challenge, but our results can be generalized to address similar pessimistic views such as Rini’s. More generally, we offer an account of the epistemic robustness of photography and videos that allows us to understand these systems of representation as continuous with other means of information transmission we find in nature. This account will then give us the necessary tools to put Fallis’ claims into perspective: using a richer approach to animal signaling based on the signaling model of communication (Maynard-Smith and Harper, 2003), we will claim that, while it might be true that deepfake technology increases the probability of obtaining false positives, the dimension of the epistemic threat involved might still be negligible.

Comment: This would be a good reading for a class that touches on the epistemic challenges of AI and digital images. What is distinctive and interesting about it is that (i) it takes a more optimistic view of the issue, arguing that the epistemic threat of AI and manipulated images is not as serious as many suggest, and (ii) it draws on the animal world to make its case.
Bortolotti, Lisa, John Harris. Disability, Enhancement and the Harm-Benefit Continuum
2006, in John R. Spencer & Antje Du Bois-Pedain (eds.), Freedom and Responsibility in Reproductive Choice. Hart Publishing
Added by: Simon Fokt, Contributed by: Nils-Hennes Stear
Abstract:

Suppose that you are soon to be a parent and you learn that there are some simple measures that you can take to make sure that your child will be healthy. In particular, suppose that by following the doctor’s advice, you can prevent your child from having a disability, you can make your child immune from a number of dangerous diseases and you can even enhance its future intelligence. All that is required for this to happen is that you (or your partner) comply with lifestyle and dietary requirements. Do you and your partner have any moral reasons (or moral obligations) to follow the doctor’s advice? Would it make a difference if, instead of following some simple dietary requirements, you consented to genetic engineering to make sure that your child was free from disabilities, healthy and with above average intelligence? In this paper we develop a framework for dealing with these questions and we suggest some directions the answers might take.

Comment: This paper gives an account of enhancement and disability in terms of one's relative position on a harm-benefit continuum, and defends enhancement on completely general moral grounds, according to which the pro tanto duty to enhance is the same as the pro tanto duty not to disable. It pairs well with criticisms of the 'new eugenics', such as Robert Sparrow's 'A Not-So-New Eugenics' (2011), and more generally with consequentialist or specifically harm-based accounts of moral obligation.
Full text | Read free | Blueprint
Dick, Stephanie. AfterMath: The Work of Proof in the Age of Human–Machine Collaboration
2011, Isis, 102(3): 494-505.


Added by: Fenner Stanley Tanswell
Abstract:
During the 1970s and 1980s, a team of Automated Theorem Proving researchers at the Argonne National Laboratory near Chicago developed the Automated Reasoning Assistant, or AURA, to assist human users in the search for mathematical proofs. The resulting hybrid humans+AURA system developed the capacity to make novel contributions to pure mathematics by very untraditional means. This essay traces how these unconventional contributions were made and made possible through negotiations between the humans and the AURA at Argonne and the transformation in mathematical intuition they produced. At play in these negotiations were experimental practices, nonhumans, and nonmathematical modes of knowing. This story invites an earnest engagement between historians of mathematics and scholars in the history of science and science studies interested in experimental practice, material culture, and the roles of nonhumans in knowledge making.
Comment (from this Blueprint): Dick traces the history of the AURA automated reasoning assistant in the 1970s and 80s, arguing that the introduction of the computer system led to novel contributions to mathematics by unconventional means. Dick’s emphasis is on the AURA system as changing the material culture of mathematics, and thereby leading to collaboration and even negotiations between the mathematicians and the computer system.
Full text | Read free | Blueprint
Martin, Ursula, Pease, Alison. Mathematical Practice, Crowdsourcing, and Social Machines
2013, in Intelligent Computer Mathematics. CICM 2013. Lecture Notes in Computer Science, Carette, J. et al. (eds.). Springer.


Added by: Fenner Stanley Tanswell
Abstract:
The highest level of mathematics has traditionally been seen as a solitary endeavour, to produce a proof for review and acceptance by research peers. Mathematics is now at a remarkable inflexion point, with new technology radically extending the power and limits of individuals. Crowdsourcing pulls together diverse experts to solve problems; symbolic computation tackles huge routine calculations; and computers check proofs too long and complicated for humans to comprehend. The Study of Mathematical Practice is an emerging interdisciplinary field which draws on philosophy and social science to understand how mathematics is produced. Online mathematical activity provides a novel and rich source of data for empirical investigation of mathematical practice - for example the community question-answering system mathoverflow contains around 40,000 mathematical conversations, and polymath collaborations provide transcripts of the process of discovering proofs. Our preliminary investigations have demonstrated the importance of “soft” aspects such as analogy and creativity, alongside deduction and proof, in the production of mathematics, and have given us new ways to think about the roles of people and machines in creating new mathematical knowledge. We discuss further investigation of these resources and what it might reveal. Crowdsourced mathematical activity is an example of a “social machine”, a new paradigm, identified by Berners-Lee, for viewing a combination of people and computers as a single problem-solving entity, and the subject of major international research endeavours. We outline a future research agenda for mathematics social machines, a combination of people, computers, and mathematical archives to create and apply mathematics, with the potential to change the way people do mathematics, and to transform the reach, pace, and impact of mathematics research.
Comment (from this Blueprint): In this paper, Martin and Pease look at how mathematics happens online, emphasising how this embodies the picture of mathematics given by Polya and Lakatos, two central figures in philosophy of mathematical practice. They look at multiple venues of online mathematics, including the polymath projects of collaborative problem-solving, and mathoverflow, which is a question-and-answer forum. By looking at the discussions that take place when people are doing maths online, they argue that you can get rich new kinds of data about the processes of mathematical discovery and understanding. They discuss how online mathematics can become a “social machine”, and how this can open up new ways of doing mathematics.
Read free
Mitchell, Melanie. Why AI is Harder Than We Think
2023, in Mind Design III, John Haugeland, Carl Craver, and Colin Klein (eds). The MIT Press
Added by: Alnica Visser
Abstract:

Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

Comment: Short easy read. Pairs well with Turing, giving a good summary of the technological progress that has been made since the 50s along with a more pessimistic interpretation of the theoretical import of the progress.
Read free | Blueprint
Narayanan, Arvind. The Limits of the Quantitative Approach to Discrimination
2022, James Baldwin Lecture Series


Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Introduction: Let’s set the stage. In 2016, ProPublica released a ground-breaking investigation called Machine Bias. You’ve probably heard of it. They examined a criminal risk prediction tool that’s used across the country. These are tools that claim to predict the likelihood that a defendant will reoffend if released, and they are used to inform bail and parole decisions.
Comment (from this Blueprint): This is a written transcript of the James Baldwin lecture, delivered by the computer scientist Arvind Narayanan at Princeton in 2022. Narayanan's prior research has examined algorithmic bias and standards of fairness with respect to algorithmic decision making. Here, he engages critically with his own discipline, suggesting that there are serious limits to the sorts of quantitative methods that computer scientists recruit to investigate the potential biases in their own tools. Narayanan acknowledges that in voicing this critique, he is echoing claims by feminist researchers from fields beyond computer science. However, his own arguments, centered as they are on the details of the quantitative methods he knows best, home in on exactly why these prior criticisms hold up, in a way intended to speak more persuasively to his peers in computer science and other quantitative fields.
Full text | Read free | Blueprint
Radin, Joanna. ‘Digital Natives’: How Medical and Indigenous Histories Matter for Big Data
2017, Osiris (special issue: Data Histories), 32(1): 43-64


Added by: Tomasz Zyglewicz, Shannon Brick, Michael Greer
Abstract:
This case considers the politics of reuse in the realm of “Big Data.” It focuses on the history of a particular collection of data, extracted and digitized from patient records made in the course of a longitudinal epidemiological study involving Indigenous members of the Gila River Indian Community Reservation in the American Southwest. The creation and circulation of the Pima Indian Diabetes Dataset (PIDD) demonstrates the value of medical and Indigenous histories to the study of Big Data. By adapting the concept of the “digital native” itself for reuse, I argue that the history of the PIDD reveals how data becomes alienated from persons even as it reproduces complex social realities of the circumstances of its origin. In doing so, this history highlights otherwise obscured matters of ethics and politics that are relevant to communities who identify as Indigenous as well as those who do not.
Comment (from this Blueprint): In this 2017 paper, historian Joanna Radin explores how reusing big data can contribute to the continued subjugation of the Akimel O’odham, who live in the southwestern region of the US and are otherwise known as the "Pima". This reading also illustrates how data can, over time, come to be used for purposes for which it was never intended or collected. Radin emphasizes the dangers of forgetting that data represent human beings.
Full text | Read free
Seavilleklein, Victoria. Challenging the Rhetoric of Choice in Prenatal Screening
2009, Bioethics 23(1): 68-77.


Added by: Simon Fokt
Abstract: Prenatal screening, consisting of maternal serum screening and nuchal translucency screening, is on the verge of expansion, both by being offered to more pregnant women and by screening for more conditions. The Society of Obstetricians and Gynaecologists of Canada and the American College of Obstetricians and Gynecologists have each recently recommended that screening be extended to all pregnant women regardless of age, disease history, or risk status. This screening is commonly justified by appeal to the value of autonomy, or women's choice. In this paper, I critically examine the value of autonomy in the context of prenatal screening to determine whether it justifies the routine offer of screening and the expansion of screening services. I argue that in the vast majority of cases the option of prenatal screening does not promote or protect women's autonomy. Both a narrow conception of choice as informed consent and a broad conception of choice as relational reveal difficulties in achieving adequate standards of free informed choice. While there are reasons to worry that women's autonomy is not being protected or promoted within the limited scope of current practice, we should hesitate before normalizing it as part of standard prenatal care for all.
Comment: The text introduces the notion of relational autonomy and argues that an increase in pre-natal screening can in fact act so as to restrict the autonomy of pregnant women. It is best used in teaching applied ethics modules on procreation and autonomy, and as a further reading offering a critique of approaches which do not take into account contextual features of particular situations in their moral assessment.
Full text | Read free
Sherman, Nancy. From Nuremberg to Guantánamo: Medical Ethics Then and Now
2007, Washington University Global Studies Law Review 6(3): 609-619.


Added by: John Baldari
Abstract: On October 25, 1946, three weeks after the International Military Tribunal at Nuremberg entered its verdicts, the United States established Military Tribunal I for the trial of twenty-three Nazi physicians. The charges, delivered by Brigadier General Telford Taylor on December 9, 1946, form a seminal chapter in the history of medical ethics and, specifically, medical ethics in war. The list of noxious experiments conducted on civilians and prisoners of war, and condemned by the Tribunal as war crimes and as crimes against humanity, is by now more or less familiar. That list included: high-altitude experiments; freezing experiments; malaria experiments; sulfanilamide experiments; bone, muscle, and nerve regeneration and bone transplantation experiments; sea water experiments; jaundice and spotted fever experiments; sterilization experiments; experiments with poison and with incendiary bombs. What remains less familiar is the moral mindset of doctors and health care workers who plied their medical skill for morally questionable uses in war. In his 1986 work, The Nazi Doctors, Robert Jay Lifton took up that question, interviewing doctors, many of whom for forty years continued to distance themselves psychologically from their deeds. The questions about moral distancing Lifton raised (though not the questions about criminal experiments) have immediate urgency for us now. Military medical doctors, psychiatrists and psychologists serve in U.S. military prisons in Guantánamo, Abu Ghraib, Kandahar, and, until very recently, in undisclosed CIA operated facilities around the world where medical ethics are again at issue. Moreover, they serve in top positions in the Pentagon, as civilian and military heads of command, who pass orders and regulations to military doctors in the field, and who are in charge of the health of enemy combatants, as well as U.S. soldiers.
Because we recently marked the sixtieth anniversary of the judgment at Nuremberg, I want to awaken our collective memory to the ways in which doctors in war, even in a war very different from the one the Nazis fought, can insulate themselves from their moral and professional consciences.
Comment: This text is best used as an additional reading in bioethics, or in just war theory (jus post bellum).
Full text | Read free
Shrader-Frechette, Kristin. Reductionist Philosophy of Technology: Stones Thrown from Inside a Glass House
1994, Techné: Research in Philosophy and Technology 5(1): 21-28.


Added by: Laura Jimenez
Introduction: Mark Twain said that, for people whose only tool is a hammer, everything looks like a nail. In Thinking about Technology, Joe Pitt's main tools appear to be those of the philosopher of science, so it is not surprising that he claims most problems of philosophy of technology are epistemic problems. As he puts it: 'The strategy here is straightforward. Philosophers of science have examined in detail a number of concepts integral to our understanding of what makes science what it is. The bottom line is this: philosophical questions about technology are first and foremost questions about what we can know about a specific technology and its effects and in what that knowledge consists' . Although Pitt points out important disanalogies between scientific and technological knowledge, nevertheless he emphasizes that philosophy of technology is primarily epistemology. Pitt has stipulatively defined ethical and political analyses of technology as not part of philosophy and philosophy of technology. While claiming to assess the foundations of philosophy of technology, he has adopted a reductionist approach to his subject matter, one that ignores or denigrates the majority of work in philosophy of technology. Does Pitt's bold, reductionist move succeed?
Comment: Good as further reading for philosophy of science courses, or as introductory reading for courses specializing in philosophy of technology. The paper is accessible, but its topic is quite specific, so it is more suitable for postgraduates.