Similar Documents
1.
This article examines the origins of the correspondence principle formulated by Bohr in 1920 and tests the correctness of the argument that the essential elements of that principle were already present in the 1913 “trilogy”. Against this point of view, which is widely shared in the literature, this article argues that it is possible to find a connection between the formulation of the correspondence principle and the assessment that led Bohr to abandon the search for a Planck-type theory. In fact, a thorough examination of Bohr’s works shows that the birth of this principle coincided with the exhaustion of a research program whose origins may date back to Bohr’s stay at Rutherford’s laboratory (summer 1912). Finally, this article argues that the original research program was abandoned when it became clear that Planck’s quantum hypothesis for the harmonic oscillator could not adequately support the theoretical architecture of atomic physics; indeed, there was enough evidence, according to Bohr, to justify a far more drastic conclusion: “I do not think that a theory of the Planck type can be made logically consistent”.

2.
Computational neuroscientists not only employ computer models and simulations in studying brain functions; they also view the modeled nervous system itself as computing. What does it mean to say that the brain computes? And what is the utility of the ‘brain-as-computer’ assumption in studying brain functions? In previous work, I have argued that a structural conception of computation is not adequate to address these questions. Here I outline an alternative conception of computation, which I call the analog-model conception. The term ‘analog-model’ does not mean continuous, non-discrete or non-digital; it means that the functional performance of the system simulates the mathematical relations that hold, in some other system, between the entities being represented. The brain-as-computer view is invoked to demonstrate that the internal cellular activity is appropriate for the pertinent information-processing (often cognitive) task.

3.
In this paper, I consider Kitcher’s (1993) account of reference for the expressions of past science. Kitcher’s case study is Joseph Priestley and his expression ‘dephlogisticated air’. There is a strong intuitive case that ‘dephlogisticated air’ referred to oxygen, but the term was underpinned by a very mistaken phlogiston theory, so concluding that dephlogisticated air referred straightforwardly and concluding that it failed to refer both have unpalatable consequences. Kitcher argues that the reference of such terms is best considered relative to each token—some tokens refer, and others do not. His account thus relies crucially on how this distinction between tokens can be made good—a puzzle I call the discrimination problem. I argue that the discrimination problem cannot be solved: on any reading of Kitcher’s defence of the distinction, the grounds provided are either insufficient or illegitimate. On the first reading, Kitcher violates the principle of humanity by making Priestley’s referential success a matter of the mental contents of modern speakers. The second reading sidesteps the problem of beliefs by appealing to mind-independent facts, but I argue that these are insufficient to secure reference because of the indeterminacy introduced by the qua problem. On the third and final reading, Priestley’s success is given by what he would say in counterfactual circumstances. I argue that even if there are facts about what Priestley would say, and there is reason for doubt, there is no motivation to think that such facts determine how Priestley referred in the actual world.

4.
Reflection on the method of science has become increasingly thinner since Kant. If there's any upshot of that part of modern philosophy, it's that the scientists didn't have a secret. There isn't something there that's either effable or ineffable. To understand how they do what they do is pretty much like understanding how any other bunch of skilled craftsmen do what they do. Kuhn's reduction of philosophy of science to sociology of science doesn't point to an ineffable secret of success; it leaves us without the notion of the secret of success.

Relativism is the view that every belief on a certain topic, or perhaps about any topic, is as good as every other. No one holds this view. Except for the occasional co-operative freshman, one cannot find anybody who says that two incompatible opinions on an important topic are equally good. The philosophers who get called ‘relativists’ are those who say that the grounds for choosing between such opinions are less algorithmic than had been thought.

Richard Rorty

5.
This article seeks to provide a historically well-informed analysis of an important post-Newtonian area of research in experimental physics between 1798 and 1898, namely the determination of the mean density of the earth and, by the end of the nineteenth century, of the gravitational constant. Traditionally, research on these matters is seen as a case of “puzzle solving.” In this article, the author shows that this focus does not do justice to the evidential significance of eighteenth- and nineteenth-century experimental research on the mean density of the earth and the gravitational constant. As Newton’s theory of universal gravitation was based mainly on astronomical observation, it remained to be shown that Newton’s law of universal gravitation did not break down at terrestrial distances. In this context, Cavendish’s experiment and related nineteenth-century experiments played a decisive role, for they provided converging and increasingly strong evidence for the universality of Newton’s theory of gravitation. More precisely, the author argues that, as the accuracy and precision of the experimental apparatus, and of the procedures for eliminating external disturbances, improved, the empirical support for the universality of Newton’s theory of gravitation improved correspondingly.

6.
Part of the distinction between artefacts, objects made by humans for particular purposes, and natural objects is that artefacts are subject to normative judgements. A drill, say, can be a good drill or a poor drill; it can function well or correctly, or it can malfunction. In this paper I investigate how such judgements fit into the domain of the normative in general and what the grounds for their normativity are. Taking as a starting point a general characterization of normativity proposed by Dancy, I show how statements such as ‘this is a good drill’ or ‘this drill is malfunctioning’ can be seen to express normative facts, or the content of normative statements. What they say is that a user who desires to achieve a particular relevant outcome has a reason to use, or not to use, the artefact in question. Next, this analysis is extended to show that not just statements saying that an artefact performs its function well or poorly, but all statements that ascribe a function to an artefact, can be seen as expressing a normative fact. On this approach the normativity of artefacts is analyzed in terms of reasons grounded in practical, and to a lesser extent theoretical, rationality. I close by briefly investigating to what extent reasons on moral grounds are, on the analysis adopted here, involved in the normativity of artefacts.

7.
Georg Cantor, the founder of set theory, cared much about a philosophical foundation for his theory of infinite numbers. To that end, he studied intensively the works of Baruch de Spinoza. In this paper, we survey the influence of Spinozean thought on Cantor’s; we discuss Spinoza’s philosophy of infinity as it is contained in his Ethics; and we attempt to draw a parallel between Spinoza’s and Cantor’s ontologies. Our conclusion is that the study of Spinoza provides deeper insight into Cantor’s philosophical theory, while Cantor cannot be called a ‘Spinozist’ in any stricter sense of the word.

8.
This paper criticizes the traditional philosophical account of the quantization of gauge theories and offers an alternative. On the received view, gauge theories resist quantization because they feature distinct mathematical representatives of the same physical state of affairs. This resistance is overcome by a sequence of ad hoc modifications, justified in part by reference to semiclassical electrodynamics. Among other things, these modifications introduce “ghosts”: particles with unphysical properties which do not appear in asymptotic states and which are said to be purely a notational convenience. I argue that this sequence of modifications is unjustified and inadequate, making it a poor basis for the interpretation of ghosts. I then argue that gauge theories can be quantized by the same method as any other theory. On this account, ghosts are not purely notation: they are coordinates on the classical configuration space of the theory—specifically, on its gauge structure. This interpretation does not fall prey to the standard philosophical arguments against the significance of ghosts, due to Weingard. Weingard’s argumentative strategy, properly applied, in fact tells in favor of ghosts’ physical significance.

9.
The aim of this paper is to put in place some cornerstones in the foundations for an objective theory of confirmation by considering lessons from the failures of predictivism. Discussion begins with a widely accepted challenge: to find out what is needed, in addition to the right kind of inferential–semantic relations between hypothesis and evidence, for a complete account of confirmation, one that gives a definitive answer to the question whether hypotheses branded as “post hoc monsters” can be confirmed. The predictivist view is then presented as a way to meet this challenge. Particular attention is paid to Worrall’s version of predictivism, as it appears to be the most sophisticated of the lot. It is argued that, despite its faults, his view turns our heads in the right direction by attempting to remove contingent considerations from confirmational matters. The demand to remove such considerations becomes the first of four cornerstones. Each cornerstone is put in place with the aim of steering clear of the sort of failures that plague various kinds of predictivism. In the process, it becomes obvious that the original challenge is wrongheaded and in need of revision. The paper ends with just such a revision.

10.
The paper argues that Helen Longino’s pluralism implies circularity: it demands a preferably high number of qualified contributions to any scientific discussion that aims at objectivity, yet does not address the question of who or what sets and applies the standards that determine who is qualified to contribute and who is not. Objectivity is thus presupposed by the very process that is supposed to generate it. Philip Kitcher’s ideal of a democratization of science seems only to bypass the problem by introducing ideal deliberators tutored by appropriate experts, for in implementing this ideal the deliberators and experts, again, would have to be appointed by someone. However, Kitcher’s approach is based on a Rawlsian egalitarianism and in this sense calls for political intrusion, which could rest on case-by-case decisions. This offers a solution. I illustrate the problem with some examples from climatology and demonstrate, through a final case study of pluralism in the Intergovernmental Panel on Climate Change, how Kitcher’s approach can help to tackle it.

11.
Extensional scientific realism is the view that each believable scientific theory is supported by the unique first-order evidence for it and that if we want to believe that it is true, we should rely on its unique first-order evidence. In contrast, intensional scientific realism is the view that all believable scientific theories have a common feature and that we should rely on that feature to determine whether a theory is believable or not. Fitzpatrick argues that extensional realism is immune to the pessimistic induction, while intensional realism is not. I reply that if extensional realism overcomes the pessimistic induction at all, it does so because it implicitly relies on the theoretical resources of intensional realism. I also argue that extensional realism, by its nature, cannot embed a criterion for distinguishing between believable and unbelievable theories.

12.
Thomas Kuhn and Paul Feyerabend promote incommensurability as a central component of their conflicting accounts of the nature of science. This paper argues that, in so doing, both develop Albert Einstein's views, albeit in different directions. Einstein describes scientific revolutions as conceptual replacements, not mere revisions, endorsing ‘Kant-on-wheels’ metaphysics in light of ‘world change’. Einstein emphasizes the underdetermination of theory by evidence, rational disagreement in theory choice, and the non-neutrality of empirical evidence. In 1949, more than a decade before Kuhn and Feyerabend, Einstein even used the term ‘incommensurable’ specifically for the challenges posed by the comparative evaluation of scientific theories. This analysis shows how Einstein anticipates substantial components of Kuhn's and Feyerabend's views, and suggests that there are strong reasons to suspect that Kuhn and Feyerabend were directly inspired by Einstein's use of the term ‘incommensurable’, as well as by his more general methodological and philosophical reflections.

13.
The view that the fundamental kind properties are intrinsic properties enjoys reflexive endorsement by most metaphysicians of science. But ontic structural realists deny that there are any fundamental intrinsic properties at all. Given that structuralists distrust intuition as a guide to truth, and given that we currently lack a fundamental physical theory that we could consult instead in order to settle the issue, it might seem as if there is simply nowhere for this debate to go at present. However, I will argue that there exists an as-yet untapped resource for arguing for ontic structuralism – namely, the way that fundamentality is conceptualized in our most fundamental physical frameworks. By arguing that physical objects must be subject to the ‘Goldilocks principle’ if they are to count as fundamental at all, I argue that we can no longer view the majority of the properties defining them as intrinsic. As such, ontic structural realism can be regarded as the most promising metaphysics for fundamental physics, and this is so even though we do not yet claim to know precisely what that fundamental physics is.

14.
In a recent paper entitled “Truth does not explain predictive success” (Analysis, 2011), Carsten Held argues that the so-called “No-Miracles Argument” for scientific realism is easily refuted when the consequences of the underdetermination of theories by the evidence are taken into account. We contend that the No-Miracles Argument, when it is deployed within the context of sophisticated versions of realism based on the notion of truthlikeness (or verisimilitude), survives Held’s criticism unscathed.

15.
In the Second Analogy, Kant argues that every event has a cause. It remains disputed what this conclusion amounts to. Does Kant argue only for the Weak Causal Principle that every event has some cause, or for the Strong Causal Principle that every event is produced according to a universal causal law? Existing interpretations have assumed that, by Kant’s lights, there is a substantive difference between the two. I argue that this is false. Kant holds that the concept of cause contains the notion of lawful connection, so it is analytic that causes operate according to universal laws. He is explicit about this commitment, not least in his derivation of the Categorical Imperative in Groundwork III. Consequently, Kant’s move from causal rules to universal laws is much simpler than previously assumed. Given his commitments, establishing the Strong Causal Principle requires no more argument than establishing the Weak Causal Principle.

16.
Over the last three decades, string theory has emerged as one of the leading hopes for a consistent theory of quantum gravity that unifies particle physics with general relativity. Yet despite being a thriving research program for the better part of that period, it has been subjected to extensive criticism from a number of prominent physicists. The aim of this paper is to obtain a clearer picture of where the conflict lies in competing assessments of string theory, through a close reading of the argumentative strategies employed by protagonists on both sides. Although it has become commonplace to construe this debate as stemming from different attitudes to the absence of testable predictions, we argue that this presents an overly simplified view of the controversy, one that ignores the critical role of heuristic appraisal. While string theorists and their defenders see the theoretical achievements of the string theory program as a strong indication that it is ‘on the right track’, critics have challenged such claims by calling into question the status of certain ‘solved problems’ and the program’s purported ‘explanatory coherence’. The debates over string theory are therefore particularly instructive from a philosophical point of view, not only because they offer important insights into the nature of heuristic appraisal and theoretical progress, but also because they raise deep questions about what constitutes a solved problem and an explanation in fundamental physics.

17.
This article responds to Professor Andrew Janiak's recent attempt to defend the proposition that Isaac Newton did not believe in action at a distance between bodies, or any other kind of substance (Janiak, 2013). His argument rests on a distinction between “three concepts of causation in Newton”, which leads him to conclude that although Newton did not believe in action at a distance between bodies, he was able to accept that gravity was a “distant action”. I critically examine Janiak's arguments, and the historical evidence he brings to bear on them, and argue that his latest claims do nothing to undermine the view to which he is opposed, namely, that Newton did believe in the possibility of action at a distance between bodies.

18.
19.
We start from John Norton's (1985) analysis of the reach of Einstein's version of the principle of equivalence, which is not a local principle but an extension of the relativity principle to reference frames in constant acceleration against the background of Minkowski spacetime. We examine how such a point of view implies a profound, and not generally recognised, reconsideration of the concepts of inertial system and field in physics. We then re-evaluate the role that the infinitesimal principle, if adequately formulated, can legitimately be claimed to play in general relativity. We show that what we call the ‘punctual equivalence principle’ has significant physical content and that it permits the derivation of the geodesic law.

20.
Predictivism is the view that successful predictions of “novel” evidence carry more confirmational weight than accommodations of already known evidence. Novelty, in this context, has traditionally been conceived of as temporal novelty. However, temporal predictivism has been criticized for lacking a rationale: why should the time order of theory and evidence matter? Instead, it has been proposed, novelty should be construed in terms of use-novelty, according to which evidence is novel if it was not used in the construction of a theory. Only if evidence is use-novel can it fully support the theory entailing it. As I point out in this paper, the writings of the most influential proponent of use-novelty contain a weaker and a stronger version of the criterion. Both versions, I argue, are problematic. With regard to the appraisal of Mendeleev’s periodic table, the most contentious historical case in the predictivism debate, I argue that temporal predictivism is indeed supported, although in ways not previously appreciated. On the basis of this case, I argue for a form of so-called symptomatic predictivism, according to which temporally novel predictions carry more confirmational weight only insofar as they reveal the theory’s presumed coherence of facts to be real.
