Similar Documents
20 similar documents found.
1.
Descartes is always concerned about knowledge. However, the Galileo affair in 1633, the reactions to his Discourse on method, and later his need to reply to objections to his Meditations provoked crises in Descartes’s intellectual development the import of which has not been sufficiently recognized. These events are the major reasons why Descartes’s philosophical position concerning how we know and what we may know is radically different at the end of his life from what it was when he began. We call this later position Descartes’s epistemic stance and contrast it with his earlier methodological, metaphysical realism. Yet Descartes’s epistemic views cannot be separated from other aspects of his work, for example, his views concerning God, causality, metaphysics, and the nature of science. A further meta-implication is that serious errors await any scholar who cites early Cartesian texts in support of late Cartesian positions, or who uses later texts in conjunction with early ones to support a reading of Descartes’s philosophy.

2.
Kuhn argued against both the correspondence theory of truth and convergent realism. Although he likely misunderstood the nature of the correspondence theory, which it seems he wrongly believed to be an epistemic theory, Kuhn had an important epistemic point to make. He maintained that any assessment of correspondence between beliefs and reality is not possible, and therefore, the acceptance of beliefs and the presumption of their truthfulness have to be decided on the basis of other criteria. I will show that via Kuhn’s suggested epistemic values, specifically via problem-solving, his philosophy can be incorporated into a coherentist epistemology. Further, coherentism is, in principle, compatible with convergent realism. However, an argument for increasing likeness to truth requires appropriate historical continuity. Kuhn maintained that the history of science is full of discontinuity, and therefore, the historical condition of convergent realism is not satisfied.

3.
‘Epistemic structural realism’ (ESR) insists that all that we know of the world is its structure, and that the ‘nature’ of the underlying elements remains hidden. With structure represented via Ramsey sentences, the question arises as to how ‘hidden natures’ might also be represented. If the Ramsey sentence describes a class of realisers for the relevant theory, one way of answering this question is through the notion of multiple realisability. We explore this answer in the context of the work of Carnap, Hintikka and Lewis. Both Carnap and Hintikka offer clear structuralist perspectives which, crucially, accommodate the openness inherent in theory change. Unfortunately there is little purchase for a viable form of realism in either case. Lewis’s approach, on the other hand, offers more scope for realism but, as we shall see, concerns arise as to whether a relevant form of structuralism can be maintained. In particular, his thesis of Ramseyan humility undermines certain conceptions of scientific laws that the structural realist might naturally cleave to. Our overall conclusion is that the representational device of the Ramsey sentence plus multiple realisability can accommodate either the structuralist or the realist aspects of ESR but has difficulties capturing both.

4.
Between 1940 and 1945, while still a student of theoretical physics and without any contact with the history of science, Thomas S. Kuhn developed a general outline of a theory of the role of belief in science. This theory was well rooted in the philosophical tradition of Emerson Hall, Harvard, and particularly in H. M. Sheffer’s and C. I. Lewis’s logico-philosophical works—Kuhn was, actually, a graduate student of the former in 1945. In this paper I reconstruct the development of that general outline after Kuhn’s first years at Harvard. I examine his works on moral and aesthetic issues—where he displayed an already ‘anti-Whig’ stance concerning historiography—as well as his first ‘Humean’ approach to science and realism, where his earliest concern with belief is evident. Then I scrutinise his graduate work to show how his first account of the role of belief was developed. The main aim of this paper is to show that the history of science illustrated for Kuhn the epistemic role and effects of belief he had already been theorising about since around 1941.

5.
In this paper, I consider Kitcher’s (1993) account of reference for the expressions of past science. Kitcher’s case study is of Joseph Priestley and his expression ‘dephlogisticated air’. There is a strong intuitive case that ‘dephlogisticated air’ referred to oxygen, but it was underpinned by very mistaken phlogiston theory, so concluding that dephlogisticated air referred straightforwardly and concluding that it failed to refer both have unpalatable consequences. Kitcher argues that the reference of such terms is best considered relative to each token—some tokens refer, and others do not. His account thus relies crucially on how this distinction between tokens can be made good—a puzzle I call the discrimination problem. I argue that the discrimination problem cannot be solved. On any reading of Kitcher’s defence of the distinction, the grounds provided are either insufficient or illegitimate. On the first reading, Kitcher violates the principle of humanity by making Priestley’s referential success a matter of the mental contents of modern speakers. The second reading sidesteps the problem of beliefs by appealing to mind-independent facts, but I argue that these are insufficient to achieve reference because of the indeterminacy introduced by the qua problem. On the third and final reading, Priestley’s success is given by what he would say in counterfactual circumstances. I argue that even if there are facts about what Priestley would say, and there is reason for doubt, there is no motivation to think that such facts determine how Priestley referred in the actual world.

6.
In this paper I argue that the Strong Programme’s aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized—even relativized—they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle’s emphasis on ‘sameness of type’ is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of ‘similarity’ is to have any content, then we are not going to classify beliefs that are formed in accordance with deeply entrenched epistemic norms as ‘the same’ as ones formed without reflection on these norms, or ones formed in spite of these norms. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to allow us to delineate the unique impact epistemic norms have on those individuals who subscribe to them.

7.
Quantum mechanics is a theory whose foundations spark controversy to this day. Although many attempts to explain the underpinnings of the theory have been made, none has been unanimously accepted as satisfactory. Fuchs has recently claimed that the foundational issues can be resolved by interpreting quantum mechanics in the light of quantum information. The view proposed is that quantum mechanics should be interpreted along the lines of the subjective Bayesian approach to probability theory. The quantum state is not the physical state of a microscopic object. It is an epistemic state of an observer; it represents subjective degrees of belief about outcomes of measurements. The interpretation gives an elegant solution to the infamous measurement problem: measurement is nothing but Bayesian belief updating, in analogy to belief updating in a classical setting. In this paper, we analyze an argument that Fuchs gives in support of this latter claim. We suggest that the argument is not convincing since it rests on an ad hoc construction. We close with some remarks on the options left for Fuchs’ quantum Bayesian project.

8.
Otto Neurath’s thoroughgoing anti-foundationalism is connected to the recognition that protocol sentences are not inviolable, that is, they are fallible and their choice cannot be determined: ‘Poincaré, Duhem and others have adequately shown that even if we have agreed on the protocol statements, there is a not limited number of equally applicable, possible systems of hypotheses. We have extended this tenet of the uncertainty of systems of hypotheses to all statements, including protocol statements that are alterable in principle’ (Neurath, 1983, p. 105). Later historiography has called Neurath’s extension of Duhemian holism the Neurath principle. Based on a study of Neurath’s early works on the history of optics, the paper investigates a previously unnoticed influence on the development of this principle: Neurath’s reading of Goethe’s Theory of colours. The historical and polemical parts of Goethe’s tripartite book provided Neurath with ideal examples for the vertical extension of Duhem’s thesis to observation statements. Moreover, Goethe’s critique of the language of science and his views on the theory-ladenness of observation, as well as on the history of science, show strong parallels to many of Neurath’s ideas. These demonstrate the existence of surprisingly direct textual links between Romantic views on science and the development of twentieth-century philosophy of science. Neurath’s usage of Goethe’s examples also indicates that the birth of the Neurath principle is more tightly connected to actual scientific practice than to theory-testing, and that by admitting the theory-ladenness of observation reports and fallibility of protocol statements Neurath does not throw empiricism overboard.

9.
Programming languages are, at the same time, instruments and communicative artifacts that evolve rapidly through use. In this paper I describe an online computing platform called BioBike. BioBike is a trading zone where biologists and programmers collaborate in the development of an extended vocabulary and functionality for computational genomics. In the course of this work they develop interactional expertise with one another’s domains. The extended BioBike vocabulary operates on two planes: as a working programming language, and as a pidgin in the conversation between the biologists and engineers. The flexibility that permits this community to dynamically extend BioBike’s working vocabulary—to form new pidgins—makes BioBike unique among computational tools, which usually are not themselves adapted through the collaborations that they facilitate. Thus BioBike is itself a crucial feature—which it is tempting to refer to as a participant—in the developing interaction.

10.
Whereas it is well established that Aristotle allowed the possibility of error in some observations, it is often held that he was an infallibilist with respect to normal observational beliefs. We shall argue against this interpretation of Aristotle, and in particular show that it is not implicit in his view that observation is the ‘ultimate arbiter of truth’.

11.
By the middle of the eighteenth century the new science had challenged the intellectual primacy of common experience in favor of recondite, expert and even counter-intuitive knowledge increasingly mediated by specialized instruments. Meanwhile modern philosophy had also problematized the perceptions of common experience — in the case of David Hume this included our perception of causal relations in nature, a fundamental precondition of scientific endeavor. In this article I argue that, in responding to the ‘problem of induction’ as advanced by Hume, Reid reformulated Aristotelian foundationalism in distinctly modern terms. An educator and mathematician self-consciously working within the framework of the new science, Reid articulated a philosophical foundation for natural knowledge anchored in the human constitution and in processes of adjudication in an emerging modern public sphere of enlightened discourse. Reid thereby transformed one of the bases of Aristotelian science — common experience — into a philosophically and socially justified notion of ‘common sense’. Reid's intellectual concerns had as much to do with the philosophy of science as they did with moral philosophy or epistemology proper, and were bound up with wider social and scientific changes taking place in the early modern period.

12.
13.
Recent work in the history of philosophy of science details the Kantianism of philosophers often thought opposed to one another, e.g., Hans Reichenbach, C.I. Lewis, Rudolf Carnap, and Thomas Kuhn. Historians of philosophy of science in the last two decades have been particularly interested in the Kantianism of Reichenbach, Carnap, and Kuhn, and more recently, of Lewis. While recent historical work focuses on recovering the threatened-to-be-forgotten Kantian themes of early twentieth-century philosophy of science, we should not elide the differences between the Kantian strands running throughout this work. In this paper, I disentangle a few of these strands in the work of Reichenbach and Lewis, focusing especially on their theories of relativized, constitutive a priori principles in empirical knowledge. In particular, I highlight three related differences between Reichenbach and Lewis concerning their motivations in analyzing scientific knowledge and scientific practice, their differing conceptions of constitutivity, and their relativization of constitutive a priori principles. In light of these differences, I argue Lewis's Kantianism is more similar to Kuhn's Kantianism than to Reichenbach's, and so might be of more contemporary relevance to social and practice-based approaches to the philosophy of science.

14.
Recent work in the Everett interpretation has suggested that the problem of probability can be solved by understanding probability in terms of rationality. However, there are two problems relating to probability in Everett—one practical, the other epistemic—and the rationality-based program directly addresses only the practical problem. One might therefore worry that the problem of probability is only ‘half solved’ by this approach. This paper aims to dispel that worry: a solution to the epistemic problem follows from the rationality-based solution to the practical problem.

15.
We can distinguish ‘mechanical’ in the strict sense of the mechanical philosophers from ‘mechanical’ in the common sense. My claim is that Boyle's experimental science owed nothing to, and offered no support for, the mechanical philosophy in the strict sense. The attempts by my critics to undermine my case involve their interpreting ‘mechanical’ in something like the common sense. I certainly accept that Boyle's experimental science was productively informed by mechanical analogies, where ‘mechanical’ is interpreted in a common sense. But this leaves my original claim untouched and, in the main, unchallenged.

16.
We examine to what extent an adequate ontology of technical artefacts can be based on existing general accounts of the relation between higher-order objects and their material basis. We consider two of these accounts: supervenience and constitution. We take as our starting point the thesis that artefacts have a ‘dual nature’, that is, that they are both material bodies and functional objects. We present two criteria for an adequate ontology of artefacts, ‘Underdetermination’ (UD) and ‘Realizability Constraints’ (RC), which address aspects of the dual nature thesis. Assessing supervenience accounts, we find them either wanting with respect to these criteria or insufficiently informative. Next, we argue that a recent application of Lynne Rudder Baker’s constitution view to artefacts cannot (yet) meet our criteria, although the broader view leaves room for improvement. Based on our evaluation of the most promising candidates, we conclude that so far general metaphysical views fail to address the most salient features of artefacts. Although they can account for the fact that artefacts have a ‘dual nature’, they do not offer the conceptual resources needed to describe the relation between these natures; this relation raises a hard problem in metaphysics.

17.
The outputs of economic forecasting—predictions for national economic indicators such as GDP, unemployment rates and inflation—are all highly visible. The production of these forecasts is a much more private affair, however, typically being thought of as the work of individual forecasters or forecast teams using their economic model to produce a forecast that is then made public. This conception over-emphasises the individual and the technical whilst silencing the broader social context through which economic forecasters develop the expertise that is essential for the credibility of their predictions. In particular, economic forecasts are given meaning and fine-tuned through the social and institutional networks that give forecasters access to the expertise of a heterogeneous mix of academics, policy-makers and business people. Within these broader groups, individual forecasters often create private forecast ‘clubs’, where subscribers have privileged access to the expertise of the economist, but where the forecasters also have privileged access to their clients’ own expert knowledge. In examining these aspects of the forecasters’ work I show that the visible and audible activities of modelling and forecasting are made possible and plausible by virtue of the modeller’s invisible interaction with a wider network.

18.
This paper puts forward the hypothesis that the distinctive features of quantum statistics are exclusively determined by the nature of the properties it describes. In particular, all statistically relevant properties of identical quantum particles in many-particle systems are conjectured to be irreducible, ‘inherent’ properties only belonging to the whole system. This allows one to explain quantum statistics without endorsing the ‘Received View’ that particles are non-individuals, or postulating that quantum systems obey peculiar probability distributions, or assuming that there are primitive restrictions on the range of states accessible to such systems. With this, the need for an unambiguously metaphysical explanation of certain physical facts is acknowledged and satisfied.

19.
20.
In this paper I inquire into Bogen and Woodward’s (1988) data/phenomena distinction, which in a similar way to Cartwright’s construal of the model of superconductivity (1995)—although in a different domain—argues for a ‘bottom-up’ construction of phenomena from data without the involvement of theory. I criticise Bogen and Woodward’s account by analysing their melting point of lead example in depth, which is usually cited in the literature to illustrate the data/phenomenon distinction. Yet, the main focus of this paper lies on Matthias Kaiser’s (1995) case study of the plate tectonic revolution, the most extensive case study that has been put forth to support the bottom-up construction of phenomena. On the basis of new historical evidence, which has been overlooked not only by Kaiser but also by the entire historical literature on the plate tectonic revolution, I demonstrate that phenomena are not constructed from the bottom-up but rather, admittedly counter-intuitively, from the top-down.
