-
Added by: Björn Freter, Contributed by: Johanna Thoma
Abstract: There is a long tradition in development economics of collecting original data to test specific hypotheses. Over the last 10 years, this tradition has merged with an expertise in setting up randomized field experiments, resulting in an increasingly large number of studies where an original experiment has been set up to test economic theories and hypotheses. This paper extracts some substantive and methodological lessons from such studies in three domains: incentives, social learning, and time-inconsistent preferences. The paper argues that we need both to continue testing existing theories and to start thinking of how the theories may be adapted to make sense of the field experiment results, many of which are starting to challenge them. This new framework could then guide a new round of experiments.
Comment: Duflo, of the MIT Poverty Action Lab and recent Nobel Prize winner, summarizes some of the successes of randomized field evaluations in development economics. She then argues that the way forward for development economics should indeed involve some theorizing, but theorizing on the basis of our new empirical evidence - which might end up looking quite different from standard economic theory. This is a very useful (opinionated) introduction to field experiments for a week on field experiments in a philosophy of economics or philosophy of the social sciences course.
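A rough sense of the statistics behind such evaluations may help readers new to the topic. The sketch below is not Duflo's own procedure and uses entirely made-up numbers; it simply estimates an average treatment effect in a simulated randomized experiment as a difference in group means.

import random

# Simulate a randomized field experiment with made-up numbers:
# individuals are assigned to treatment or control purely at random,
# and the treatment is assumed to add 0.5 to the outcome on average.
random.seed(0)
n = 1000
treated = [random.random() < 0.5 for _ in range(n)]
outcome = [random.gauss(1.0, 1.0) + (0.5 if t else 0.0) for t in treated]

# Because assignment is random, the difference in group means is an
# unbiased estimate of the average treatment effect.
treat_mean = sum(y for y, t in zip(outcome, treated) if t) / sum(treated)
control_mean = sum(y for y, t in zip(outcome, treated) if not t) / (n - sum(treated))
print("estimated average treatment effect:", treat_mean - control_mean)

The point of the randomization is that the two groups differ only by chance, so the difference in means tracks the causal effect of the treatment rather than pre-existing differences between the groups.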
-
Added by: Nick Novelli
Abstract: Philosophers of experiment have acknowledged that experiments are often more than mere hypothesis-tests, once thought to be an experiment's exclusive calling. Drawing on examples from contemporary biology, I make an additional amendment to our understanding of experiment by examining the way that 'wide' instrumentation can, for reasons of efficiency, lead scientists away from traditional hypothesis-directed methods of experimentation and towards exploratory methods.
Comment: Good exploration of the role of experiments, challenging the idea that they are solely useful for testing clearly defined hypotheses. Uses many practical examples, but is very concise and clear. Suitable for undergraduate teaching in an examination of scientific methods in a philosophy of science course.
-
Added by: Sara Peppe
Abstract: I propose a framework that explicates and distinguishes the epistemic roles of data and models within empirical inquiry through consideration of their use in scientific practice. After arguing that Suppes' characterization of data models falls short in this respect, I discuss a case of data processing within exploratory research in plant phenotyping and use it to highlight the difference between practices aimed to make data usable as evidence and practices aimed to use data to represent a specific phenomenon. I then argue that whether a set of objects functions as data or models does not depend on intrinsic differences in their physical properties, level of abstraction or the degree of human intervention involved in generating them, but rather on their distinctive roles towards identifying and characterizing the targets of investigation. The paper thus proposes a characterization of data models that builds on Suppes' attention to data practices, without however needing to posit a fixed hierarchy of data and models or a highly exclusionary definition of data models as statistical constructs.
Comment: This article deepens our understanding of the roles of models and data in scientific investigation by attending closely to scientific practice. Some general familiarity with the themes the author discusses is needed.
-
Added by: Laura Jimenez
Summary: It is customary to distinguish experimental from purely observational sciences. The former include physics and molecular biology, the latter astronomy and palaeontology. Surprisingly, mainstream philosophy of science has had rather little to say about the observational/experimental distinction. For example, discussions of confirmation usually invoke a notion of 'evidence', to be contrasted with 'theory' or 'hypothesis'; the aim is to understand how the evidence bears on the hypothesis. But whether this 'evidence' comes from observation or experiment generally plays no role in the discussion; this is true of both traditional and modern confirmation theories, Bayesian and non-Bayesian. In this article, the author sketches one possible explanation, by suggesting that observation and experiment will often differ in their confirmatory power. Based on a simple Bayesian analysis of confirmation, Okasha argues that universal generalizations (or 'laws') are typically easier to confirm by experimental intervention than by pure observation. This is not to say that observational confirmation of a law is impossible, which would be flatly untrue. But there is a general reason why confirmation will accrue more easily from experimental data, based on a simple though oft-neglected feature of Bayesian conditionalization.
Comment: Previous knowledge of Bayesian conditionalization might be needed (a minimal sketch is given below). The article is suitable for postgraduate courses in philosophy of science focusing on the distinction between observational and experimental science.
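For readers without the background, here is the standard textbook statement of the formal machinery (not Okasha's own derivation): upon learning evidence E, an agent's new probability for a hypothesis H is fixed by conditionalization, and E confirms H just in case this raises H's probability.

\[
P_{\mathrm{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)},
\qquad
E \text{ confirms } H \text{ iff } P(H \mid E) > P(H).
\]

Okasha's argument concerns how strongly this probability boost accrues for a universal generalization when E comes from experimental intervention as opposed to passive observation.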
-
Added by: Fenner Stanley Tanswell
Abstract: The Four-Colour Theorem (4CT) proof, presented to the mathematical community in a pair of papers by Appel and Haken in the late 1970's, provoked a series of philosophical debates. Many conceptual points of these disputes still require some elucidation. After a brief presentation of the main ideas of Appel and Haken’s procedure for the proof and a reconstruction of Thomas Tymoczko’s argument for the novelty of 4CT’s proof, we shall formulate some questions regarding the connections between the points raised by Tymoczko and some Wittgensteinian topics in the philosophy of mathematics such as the importance of the surveyability as a criterion for distinguishing mathematical proofs from empirical experiments. Our aim is to show that the “characteristic Wittgensteinian invention” (Mühlhölzer 2006) – the strong distinction between proofs and experiments – can shed some light in the conceptual confusions surrounding the Four-Colour Theorem.
Comment (from this Blueprint): Secco and Pereira discuss the famous proof of the Four Colour Theorem, which involved the essential use of a computer to check a huge number of combinations. They look at whether this constitutes a real proof or whether it is more akin to a mathematical experiment, a distinction that they draw from Wittgenstein.
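To give a flavour of what checking a huge number of combinations by computer involves, here is a toy brute-force check of proper four-colourings on a small, hypothetical graph. It is nothing like Appel and Haken's actual procedure of reducible configurations and discharging, only an illustration of exhaustive machine checking.

from itertools import product

# A small, made-up graph given as a list of edges over five vertices.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (3, 4), (2, 4)]
n_vertices = 5

def is_proper(colouring):
    # A colouring is proper if no edge joins two vertices of the same colour.
    return all(colouring[u] != colouring[v] for u, v in edges)

# Exhaustively check every assignment of 4 colours to the vertices.
solutions = [c for c in product(range(4), repeat=n_vertices) if is_proper(c)]
print("proper 4-colourings found:", len(solutions))

Even this tiny example runs through 4^5 = 1024 assignments; the original proof required machine-checking well over a thousand configurations, which is what drove the worries about surveyability that Tymoczko raised.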