Similar Documents
20 similar documents found (search time: 15 ms)
1.
The physiologist Claude Bernard was an important nineteenth-century methodologist of the life sciences. Here I place his thought in the context of the history of the vera causa standard, arguably the dominant epistemology of science in the eighteenth and early nineteenth centuries. Its proponents held that in order for a cause to be legitimately invoked in a scientific explanation, the cause must be shown by direct evidence to exist and to be competent to produce the effects ascribed to it. Historians of scientific method have argued that in the course of the nineteenth century the vera causa standard was superseded by a more powerful consequentialist epistemology, which also admitted indirect evidence for the existence and competence of causes. The prime example of this is the luminiferous ether, which was widely accepted, in the absence of direct evidence, because it entailed verified observational consequences and, in particular, successful novel predictions. According to the received view, the vera causa standard's demand for direct evidence of existence and competence came to be seen as an impracticable and needless restriction on the scope of legitimate inquiry into the fine structure of nature. The Mill-Whewell debate has been taken to exemplify this shift in scientific epistemology, with Whewell's consequentialism prevailing over Mill's defense of the older standard. However, Bernard's reflections on biological practice challenge the received view. His methodology marked a significant extension of the vera causa standard that made it both powerful and practicable. In particular, Bernard emphasized the importance of detection procedures in establishing the existence of unobservable entities. Moreover, his sophisticated notion of controlled experimentation permitted inferences about competence even in complex biological systems. In the life sciences, the vera causa standard began to flourish precisely around the time of its alleged abandonment.

2.
In his book, The Material Theory of Induction, Norton argues that the quest for a universal formal theory or ‘schema’ for analogical inference should be abandoned. In its place, he offers the “material theory of analogy”: each analogical inference is “powered” by a local fact of analogy rather than by any formal schema. His minimalist model promises a straightforward, fact-based approach to the evaluation and justification of analogical inferences. This paper argues that although the rejection of universal schemas is justified, Norton's positive theory is limited in scope: it works well only for a restricted class of analogical inferences. Both facts and quasi-formal criteria have roles to play in a theory of analogical reasoning.

3.
Bogen and Woodward's distinction between data and phenomena raises the need to understand the structure of the data-to-phenomena and theory-to-phenomena inferences. I suggest that one way to study the structure of these inferences is to analyze the role of the assumptions involved in the inferences: What kind of assumptions are they? How do these assumptions contribute to the practice of identifying phenomena? In this paper, using examples from atmospheric dynamics, I develop an account of the practice of identifying the target in the data-to-phenomena and theory-to-phenomena inferences in which assumptions about spatiotemporal scales play a central role in the identification of parameters that describe the target system. I also argue that these assumptions are not only empirical but also idealizing and abstracting. I conclude the paper with a reflection on the role of idealizations in modeling.

4.
In this paper, I argue that in order to understand the process behind knowledge production in the historical sciences, we should shift our theoretical focus slightly and consider the historical sciences as technoscientific disciplines. If we investigate the intertwinement of technology and theory, we can provide new insights into historical scientific knowledge production, its preconditions, and its aims. I will provide evidence for my claim by showing the central features of paleontological and paleobiological data practices of the nineteenth and twentieth centuries. In order to work with something that is imperfect and incomplete (the fossil record), paleontologists used different technological devices. These devices process, extract, correct, simulate, and eventually present paleontological explananda. Therefore, the appearance of anatomical features of non-manipulable fossilized organisms, phenomena such as mass extinctions, or the life-like display of extinct specimens in a museum's hall depends both on the correct use of technological devices and on the interplay between these devices and theories. Consequently, in order to capture their underlying epistemology, the historical sciences should be analyzed and investigated alongside other technoscientific disciplines such as chemistry, synthetic biology, and nanotechnology, and not necessarily only against the classical experimental sciences. This approach will help us understand how historical scientists can obtain their epistemic access to deep time.

5.
This paper offers an epistemological framework for the debate about whether the results of scientific enquiry are inevitable or contingent. I argue in Sections 2 and 3 that inevitabilist stances are doubly guilty of epistemic hubris—a lack of epistemic humility—and that the real question concerns the scope and strength of our contingentism. The latter stages of the paper—Sections 4 and 5—address some epistemological and historiographical worries and sketch some examples of deep contingencies to guide further debate. I conclude by affirming that the concept of epistemic humility can usefully inform critical reflection on the contingency of the sciences and the practice of history of science.

6.
Peter Lipton argues that inference to the best explanation (IBE) involves the selection of a hypothesis on the basis of its loveliness. I argue that in optimal cases of IBE we may be able to eliminate all but one of the hypotheses. In such cases a form of eliminative induction takes place, which I call ‘Holmesian inference’. I argue that Lipton’s example in which Ignaz Semmelweis identified a cause of puerperal fever better illustrates Holmesian inference than Liptonian IBE. I consider in detail the conditions under which Holmesian inference is possible and conclude by considering the epistemological relations between Holmesian inference and Liptonian IBE.

7.
I take the phrase “the theory of nonlinear oscillations” to identify a historical phenomenon. Under this heading a powerful school in Soviet science, L. I. Mandelstam's school, developed its version of what was later called “nonlinear dynamics”. The theory of nonlinear oscillations was formed around the concept of self-oscillations, which was elaborated by Mandelstam's graduate student A. A. Andronov. This concept determined the paradigm of the theory of nonlinear oscillations as well as its ideology, that is, a set of characteristic ideas which, together with the corresponding examples and analogues, allowed the expansion of the theory into associated areas where it indicated new interesting phenomena and posed new problems. It was the ideology that made possible the broader application of the theory of nonlinear oscillations, whose domain was originally lumped systems, to continuous media and its subsequent progress toward synergetics. In the course of its ideological application, the concept of self-oscillations was greatly extended, became vague and diffuse, and related concepts such as self-waves and self-structures appeared.

8.
I address questions about values in model-making in engineering, specifically: Might the role of values be attributable solely to interests involved in specifying and using the model? Selected examples illustrate the surprisingly wide variety of things one must take into account in the model-making itself. The notions of system (as used in engineering thermodynamics) and physically similar systems (as used in the physical sciences) are important and powerful in determining what is relevant to an engineering model. Another example (windfarms) illustrates how an idea to completely re-characterize, or reframe, an engineering problem arose during model-making. I employ a qualitative analogue of the notion of physically similar systems. Historical cases can thus be drawn upon; I illustrate with a comparison between a geoengineering proposal to inject, or spray, sulfate aerosols, and two different historical cases involving the spraying of DDT (fire ant eradication; malaria eradication). The current geoengineering proposal is seen to be like the disastrous and counterproductive case, and unlike the successful case, of the spraying of DDT. I conclude by explaining my view that model-making in science is analogous to moral perception in action, drawing on a view in moral theory that has come to be called moral particularism.

9.
Though it is held that some models in science have explanatory value, there is no conclusive agreement on what provides them with this value. One common view is that models have explanatory value vis-à-vis some target systems because they are developed using an abstraction process (i.e., a process which involves omitting features). Though I think this is correct, I believe it is not the whole picture. In this paper, I argue that, in addition to the well-known process of abstraction understood as an omission of features or information, there is also a family of abstraction processes that involve aggregation of features or information and that these processes play an important role in endowing the models they are used to build with explanatory value. After offering a taxonomy of abstraction processes involving aggregation, I show by considering in detail several models drawn from different sciences that the abstraction processes involving aggregation that are used to build these models are responsible (at least partially) for their having explanatory value.

10.
I argue that Norton’s reconstruction of an example inductive inference in Ch.1 of The Material Theory of Induction mischaracterizes scientific induction by treating the ampliation as separable (and separated) from that inherent in conceptualization per se—effectively severing the putative ampliation from its putative warrant. I sketch an alternative analysis in which ampliation and warrant remain closely coupled through a common root in the primary conceptualizations.

11.
I argue that we should consider Norton's material theory of induction as consisting of two largely independent claims. First, there is the claim that material facts license inductions - a claim which I interpret as a type of contextualism about induction. Second, there is the claim that there are no universal rules of induction. While a good case can be made for the first claim, I believe that Norton's arguments for the second claim are lacking. In particular, I spell out Norton's argument against the claim that all induction may be reduced to inference to the best explanation, and argue that it is not persuasive. Rejecting this part of Norton's theory does not however require us to abandon the first claim that material facts license inductions. In this way, I distinguish the parts of the material theory of induction we should happily accept from the parts about which we should be more skeptical.

12.
Quine is routinely perceived as having changed his mind about the scope of the Duhem-Quine thesis, shifting from what has been called an 'extreme holism' to a more moderate view. Where the Quine of 'Two Dogmas of Empiricism' argues that “the unit of empirical significance is the whole of science” (1951, 42), the later Quine seems to back away from this “needlessly strong statement of holism” (1991, 393). In this paper, I show that the received view is incorrect. I distinguish three ways in which Quine's early holism can be said to be wide-scoped and show that he has never changed his mind about any one of these aspects of his early view. Instead, I argue that Quine's apparent change of mind can be explained away as a mere shift of emphasis.

13.
Work throughout the history and philosophy of biology frequently employs ‘chance’, ‘unpredictability’, ‘probability’, and many similar terms. One common way of understanding how these concepts were introduced in evolution focuses on two central issues: the first use of statistical methods in evolution (Galton), and the first use of the concept of “objective chance” in evolution (Wright). I argue that while this approach has merit, it fails to fully capture interesting philosophical reflections on the role of chance expounded by two of Galton's students, Karl Pearson and W.F.R. Weldon. Considering a question more familiar from contemporary philosophy of biology—the relationship between our statistical theories of evolution and the processes in the world those theories describe—is, I claim, a more fruitful way to approach both these two historical actors and the broader development of chance in evolution.

14.
In this paper, I argue that recent debates about Newton’s attitude toward action at a distance have been hampered by a lack of conceptual clarity. To clarify the metaphysical background of the debates, I distinguish three kinds of causes within Newton’s work: mechanical, dynamical, and substantial causes. This threefold distinction enables us to recognize that although Newton clearly regards gravity as an impressed force that operates across vast distances, he denies that this commitment requires him to think that some substance acts at a distance on another substance. (Dynamical causation is distinct from substantial causation.) Newton’s denial of substantial action at a distance may strike his interpreters as questionable, so I provide an argument to show that it is in fact acceptable.

15.
Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may or may not converge upon a correct estimate, depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the “replicability crisis” in psychology are limited, and I propose an alternative explanation in terms of biases. Finally, I conclude by suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures.

16.
In earlier work, I predicted that we would probably not be able to determine the colors of the dinosaurs. I lost this epistemic bet against science in dramatic fashion when scientists discovered that it is possible to draw inferences about dinosaur coloration based on the microstructure of fossil feathers (Vinther et al., 2008). This paper is an exercise in philosophical error analysis. I examine this episode with two questions in mind. First, does this case lend any support to epistemic optimism about historical science? Second, under what conditions is it rational to make predictions about what questions scientists will or will not be able to answer? In reply to the first question, I argue that the recent work on the colors of the dinosaurs matters less to the debate about the epistemology of historical science than it might seem. In reply to the second question, I argue that it is difficult to specify a policy that would rule out the failed bet without also being too conservative.

17.
Epigenesis has become a far more exciting issue in Kant studies recently, especially with the publication of Jennifer Mensch's Kant's Organicism. In my commentary, I propose to clarify my own position on epigenesis relative to that of Mensch and others by once again considering the discourse of epigenesis in the wider eighteenth century. Historically, I maintain that Kant was never fully an epigenesist because he feared its materialist implications. This makes it highly unlikely that he drew heavily, as other interpreters like Dupont and Huneman have suggested, on Caspar Friedrich Wolff for his ultimate theory of “generic preformation.” In order to situate more precisely what Kant made of epigenesis, I distinguish his metaphysical use of it, as elaborated by Mensch, from his view of it as a theory for life science. In that light, I raise questions about the scope and authority of philosophy vis-à-vis natural science.

18.
I compare Popper’s claims about Newton’s use of induction in Principia with the actual contents of Principia and draw two conclusions. Firstly, in common with most other philosophers of his generation, it appears that Popper had very little acquaintance with the contents and methodological complexities of Principia beyond what was in the famous General Scholium. Secondly, Popper’s ideas about induction were less sophisticated than those of Newton, who recognised that it did not provide logical proofs of the results obtained using it, because of the possibility of later, contrary evidence. I also trace the historical background to commonplace misconceptions about Newton’s method.

19.
Inferences from scientific success to the approximate truth of successful theories remain central to the most influential arguments for scientific realism. Challenges to such inferences, however, based on radical discontinuities within the history of science, have motivated a distinctive style of revision to the original argument. Conceding the historical claim, selective realists argue that accompanying even the most revolutionary change is the retention of significant parts of replaced theories, and that a realist attitude towards the systematically retained constituents of our scientific theories can still be defended. Selective realists thereby hope to secure the argument from success against apparent historical counterexamples. Independently of that objective, historical considerations have inspired a further argument for selective realism, where evidence for the retention of parts of theories is itself offered as justification for adopting a realist attitude towards them. Given the nature of these arguments from success and from retention, a reasonable expectation is that they would complement and reinforce one another, but although several theses purport to provide such a synthesis the results are often unconvincing. In this paper I reconsider the realist’s favoured type of scientific success, novel success, offer a revised interpretation of the concept, and argue that a significant consequence of reconfiguring the realist’s argument from success accordingly is a greater potential for its unification with the argument from retention.

20.
In this paper we take a close look at current interdisciplinary modeling practices in the environmental sciences, and suggest that closer attention needs to be paid to the nature of scientific practices when investigating and planning interdisciplinarity. While interdisciplinarity is often portrayed as a medium of novel and transformative methodological work, current modeling strategies in the environmental sciences are conservative, avoiding methodological conflict, while confining interdisciplinary interactions to a relatively small set of pre-existing modeling frameworks and strategies (a process we call crystallization). We argue that such practices can be rationalized as responses in part to cognitive constraints which restrict interdisciplinary work. We identify four salient integrative modeling strategies in environmental sciences, and argue that this crystallization, while contradicting somewhat the novel goals many have for interdisciplinarity, makes sense when considered in the light of common disciplinary practices and cognitive constraints. These results provide cause to rethink in more concrete methodological terms what interdisciplinarity amounts to, and what kinds of interdisciplinarity are obtainable in the environmental sciences and elsewhere.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号