-
Added by: Fenner Stanley Tanswell
Abstract: Over a period of more than 30 years, more than 100 mathematicians worked on a project to classify mathematical objects known as finite simple groups. The Classification, when officially declared completed in 1981, comprised between 300 and 500 articles and ran somewhere between 5,000 and 10,000 journal pages. Mathematicians have hailed the project as one of the greatest mathematical achievements of the 20th century, and it surpasses, both in scale and scope, any other mathematical proof of the 20th century. The history of the Classification points to the importance of face-to-face interaction and close teaching relationships in the production and transformation of theoretical knowledge. The techniques and methods that governed much of the work in finite simple group theory circulated via personal, often informal, communication, rather than in published proofs. Consequently, the printed proofs that would constitute the Classification Theorem functioned as a sort of shorthand for and formalization of proofs that had already been established during personal interactions among mathematicians. The proof of the Classification was at once both a material artifact and a crystallization of one community’s shared practices, values, histories, and expertise. However, beginning in the 1980s, the original proof of the Classification faced the threat of ‘uninvention’. The papers that constituted it could still be found scattered throughout the mathematical literature, but no one other than the dwindling community of group theorists would know how to find them or how to piece them together. Faced with this problem, finite group theorists resolved to produce a ‘second-generation proof’ to streamline and centralize the Classification. This project highlights that the proof and the community of finite simple group theorists who produced it were co-constitutive: one formed and reformed by the other.
-
Added by: Carl Fox
Introduction: There was once a luck egalitarian school of thought, according to which disadvantage arising due to bad luck was unjust—at the bar of egalitarian justice—while disadvantage arising due to choice was just, at least if the choice was exercised against the background of equal options. “Choice” in this context needed to be “genuine choice”—which, for some, meant “voluntary,” and for others, also “freely willed”—but if it was genuine, then it did not matter whether it was a silly mistake or a considered course of action: if it led to disadvantage, its presence was deemed sufficient to justify leaving the agent to bear the disadvantage. Let's call the view that choice leading to disadvantage is sufficient to justify the disadvantage, at least if choice was exercised against the background of equal options, the inflated view of choice. [...] The inflated view was so crude that in the face of criticism pointing out its crudeness, its supporters have adopted more sophisticated views, and no recent luck egalitarian has defended the crude version. These more sophisticated views recognize that the mere fact that an outcome has been chosen does not make the outcome just—not even by the standards of egalitarian justice alone. In what follows, I will argue that this dominant reading of early luck egalitarianism as committed to the inflated view is, at best, a one-sided interpretation of the iconic writings of the luck egalitarian literature advanced by its most famous proponents, namely Arneson, Cohen, and Dworkin. Their writings did not unambiguously point toward the inflated view; if the early texts were interpreted more charitably, we could have, perhaps, avoided associating luck egalitarianism with the inflated view, arriving immediately at the sophisticated versions of luck egalitarianism dominating the field today.
Comment: Defends luck egalitarianism in general, and the originators of the view in particular, from the common criticism that it is committed to the 'inflated view of choice' which generates unpalatable conclusions because it leaves people who have made choices to bear all the consequences of those choices. Would make good further reading for anyone working on this topic.
-
Added by: Carl Fox
Introduction: One of the main tasks that occupies political theorists, and arouses intense debate among them, is the construction of theories—so-called ideal theories—that share a common characteristic: much of what they say offers no immediate or workable solutions to any of the problems our societies face. This feature is not one that theorists strive to achieve but nor can it be described as an accidental one: these theories are constructed in the full knowledge that, whatever else they may offer, much of what they say will not be immediately applicable to the urgent problems of policy and institutional design. Since this may seem puzzling, and has been subjected to severe criticism, the main task of this paper is to ask what the point of ideal theory is and to show the nature of its value. I will also argue that, while the debate over the point of ideal theory can be productive, it will only be so if we avoid treating ideal and nonideal theories as rival approaches to political theory.
Comment: Does a good job of defending ideal theory from prominent criticisms and setting out an account of ideal and non-ideal theory in which they complement one another. Would work as a main text for a lecture or seminar developing the ideal/non-ideal theme, or as further reading for anyone writing about it.
-
Added by: Clotilde Torregrossa, Contributed by: Simon Fokt
Publisher's Note: Just Business provides the first comprehensive, reasoned framework for resolving questions of business ethics and corporate governance. Innovative, accessible, and global in scope, its powerful Ethical Decision Model can be used to manage the ethical problems of business as they arise in all their complexity and variety. Just Business combines business realism with philosophical rigor, and demonstrates that it is not necessary to emasculate or to adulterate business for business to be ethical. The book benefits from Elaine Sternberg's extensive experience as an academic philosopher, an international investment banker, and head of successful businesses. She is now Principal of a London-headquartered consultancy firm, and Research Fellow in Philosophy at the University of Leeds.
-
Added by: Helen Morley
Abstract: I address questions about values in model-making in engineering, specifically: might the role of values be attributable solely to interests involved in specifying and using the model? Selected examples illustrate the surprisingly wide variety of things one must take into account in the model-making itself. The notions of system, and of physically similar systems, are important and powerful in determining what is relevant to an engineering model. Another example illustrates how an idea to completely re-characterize, or reframe, an engineering problem arose during model-making. I employ a qualitative analogue of the notion of physically similar systems. Historical cases can thus be drawn upon; I illustrate with a comparison between a geoengineering proposal to inject, or spray, sulfate aerosols, and two different historical cases involving the spraying of DDT. The current geoengineering proposal is seen to be like the disastrous and counterproductive case, and unlike the successful case, of the spraying of DDT. I conclude by explaining my view that model-making in science is analogous to moral perception in action, drawing on a view in moral theory that has come to be called moral particularism.
Comment: Further reading, particularly in relation to geoengineering responses to climate change. Also of interest in relation to engineering & technology ethics.
-
Added by: Nick Novelli, Contributed by: Susan G. Sterrett
Abstract: On a literal reading of 'Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question 'Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as 'the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) 'Turing Test' has been dismissed.
Comment: This paper provides a good analysis of some of the problems with the Turing Test and how they can be avoided. It can be useful when teaching Turing's classic 1950 paper on the question of whether a computer could be said to 'think', since it considers the role of gender in the imitation-game version of the test. It could also contribute to an examination of the concept of intelligence, and machine intelligence in particular.
-
Added by: Chris Blake-Turner, Contributed by: Susan G. Sterrett
Abstract: The analogy Darwin drew between artificial and natural selection in "On the Origin of Species" has a detailed structure that has not been appreciated. In Darwin's analogy, the kind of artificial selection called Methodical selection is analogous to the principle of divergence in nature, and the kind of artificial selection called Unconscious selection is analogous to the principle of extinction in nature. This paper argues that it is the analogy between these two different principles familiar from his studies of artificial selection and the two different principles he claims are operative in nature that provides the main structure and force of the analogy he uses to make his case for the power of natural selection to produce new species. Darwin's statements explicitly distinguishing between these two kinds of principles at work in nature occur prominently in the text of the Origin. The paper also shows that a recent revisionist claim that Darwin did not appeal to the efficacy of artificial selection is mistaken.
Comment: This paper is useful in discussing Darwin's theory as he presented it, i.e., without knowledge of genetics. It could also be used in discussing analogy and/or metaphor in science.
-
Added by: Barbara Cohn, Contributed by: Georgina Stewart
Abstract: Goals for adding philosophy to the school curriculum centre on the perceived need to improve the general quality of critical thinking found in society. School philosophy also provides a means for asking questions of value and purpose about curriculum content across and between subjects, and, furthermore, it affirms the capability of children to think philosophically. Two main routes suggested are the introduction of philosophy as a subject, and processes of facilitating philosophical discussions as a way of establishing classroom 'communities of inquiry'. This article analyses the place of philosophy in the school curriculum, drawing on three relevant examples of school curriculum reform: social studies, philosophy of science and Kura Kaupapa Māori.
-
Added by: Jie Gao
Publisher's Note: This book puts forward a radical critique of the foundations of contemporary philosophy of mind, arguing that it relies too heavily on insecure assumptions about the nature of some of the sorts of mental entities it postulates: the nature of events, processes, and states. The book offers an investigation of these three categories, clarifying the distinction between them, and argues specifically that the assumption that states can be treated as particular, event-like entities has been a huge and serious mistake. The book argues that the category of token state should be rejected, and develops an alternative way of understanding those varieties of causal explanation which have sometimes been thought to require an ontology of token states for their elucidation. The book contends that many current theories of mind are rendered unintelligible once it is seen how these explanations really work. A number of prominent features of contemporary philosophy of mind (token identity theories, the functionalist conception of causal role, a common form of argument for eliminative materialism, and the structure of the debate about the efficacy of mental content) are impugned by the book's arguments. The book concludes that the modern mind-body problem needs to be substantially rethought.
Comment: The aim of this book is to argue that issues in metaphysics - in particular issues about the nature of states and causation - have a significant impact on philosophy of mind. The book has three parts, and each part can be used for different purposes in courses on metaphysics or philosophy of mind. The first part constitutes an attack on three highly influential theories of events (the views of Jaegwon Kim, Jonathan Bennett and Lawrence Lombard) and a defence of the view that events are "proper particulars". This part can be used as the main or secondary reading material in an upper-level metaphysics course on the topic of events. The second part defends the view that states are fundamentally different from events, and can be used for teaching on metaphysical theories of states or the causal relation. The third part argues that positions in philosophy of mind - in particular arguments for token-identity, epiphenomenalism, and eliminativism - need reconsideration. This part can be used as further reading on debates about those positions in philosophy of mind.
-
Added by: Emily Paul
Publisher's note: A Metaphysics for Freedom argues that determinism is incompatible with agency itself--not only the special human variety of agency, but also powers which can be accorded to animal agents. It offers a distinctive, non-dualistic version of libertarianism, rooted in a conception of what biological forms of organisation might make possible in the way of freedom.
Comment: Specific chapters (e.g. 1 and 4) would be useful for an advanced philosophy of mind/action course, but it could also be really nice to read the whole book in a dedicated Masters course, reading and discussing one chapter per seminar. Chapter 1 is especially useful because it outlines Steward's position, 'agency incompatibilism', which students could usefully discuss and compare with classical compatibilism and incompatibilism. Chapter 4 is also a great one to use because it discusses animal agency; this could perhaps come towards the end of an intermediate philosophy of mind course, once students have already learned something about agency in relation to humans.
Comment (from this Blueprint): Steingart is a sociologist who charts the history and sociology of the development of the extremely large and highly collaborative Classification Theorem. She shows that the proof involved a community deciding on shared values, standards of reliability, expertise, and ways of communicating. For example, the community became tolerant of so-called “local errors” so long as these did not put the main result at risk. Furthermore, Steingart discusses how the proof’s text is distributed across a wide number of places and requires expertise to navigate, leaving the proof in danger of uninvention if the experts retire from mathematics.