Similar Literature
20 similar documents retrieved (search time: 31 ms)
1.
I argue that we should consider Norton's material theory of induction as consisting of two largely independent claims. First, there is the claim that material facts license inductions, a claim which I interpret as a type of contextualism about induction. Second, there is the claim that there are no universal rules of induction. While a good case can be made for the first claim, I believe that Norton's arguments for the second claim are lacking. In particular, I spell out Norton's argument against the claim that all induction may be reduced to inference to the best explanation, and argue that it is not persuasive. Rejecting this part of Norton's theory does not, however, require us to abandon the first claim that material facts license inductions. In this way, I distinguish the parts of the material theory of induction we should happily accept from the parts about which we should be more skeptical.

2.
The Marburg neo-Kantians argue that Hermann von Helmholtz’s empiricist account of the a priori does not account for certain knowledge, since it is based on a psychological phenomenon, trust in the regularities of nature. They argue that Helmholtz’s account raises the ‘problem of validity’ (Gültigkeitsproblem): how to establish a warranted claim that observed regularities are based on actual relations. I reconstruct Heinrich Hertz’s and Ludwig Wittgenstein’s Bild theoretic answer to the problem of validity: that scientists and philosophers can depict the necessary a priori constraints on states of affairs in a given system, and can establish whether these relations are actual relations in nature. The analysis of necessity within a system is a lasting contribution of the Bild theory. However, Hertz and Wittgenstein argue that the logical and mathematical sentences of a Bild are rules, tools for constructing relations, and the rules themselves are meaningless outside the theory. Carnap revises the argument for validity by attempting to give semantic rules for translation between frameworks. Russell and Quine object that pragmatics better accounts for the role of a priori reasoning in translating between frameworks. The conclusion of the tale, then, is a partial vindication of Helmholtz’s original account.

3.
We discuss the meaning of probabilities in the many worlds interpretation of quantum mechanics. We start by presenting very briefly the many worlds theory, how the problem of probability arises, and some unsuccessful attempts to solve it in the past. Then we criticize a recent attempt by Deutsch to derive the quantum mechanical probabilities from the non-probabilistic parts of quantum mechanics and classical decision theory. We further argue that the Born probability does not make sense even as an additional probability rule in the many worlds theory. Our conclusion is that the many worlds theory fails to account for the probabilistic statements of standard (collapse) quantum mechanics.

4.
Over many years, Aharonov and co-authors have proposed a new interpretation of quantum mechanics: the two-time interpretation. This interpretation assigns two wavefunctions to a system, one of which propagates forwards in time and the other backwards. In this paper, I argue that this interpretation does not solve the measurement problem. In addition, I argue that it is neither necessary nor sufficient to attribute causal power to the backwards-evolving wavefunction ⟨Φ| and thus its existence should be denied, contra the two-time interpretation. Finally, I follow Vaidman in giving an epistemological reading of ⟨Φ|.

5.
A theory is usually said to be time reversible if whenever a sequence of states S₁(t₁), S₂(t₂), S₃(t₃) is possible according to that theory, then the reverse sequence of time-reversed states S₃ᵀ(t₁), S₂ᵀ(t₂), S₁ᵀ(t₃) is also possible according to that theory; i.e., one normally not only inverts the sequence of states, but also operates on the states with a time reversal operator T. David Albert and Paul Horwich have suggested that one should not allow such time reversal operations T on states. I will argue that time reversal operations on fundamental states should be allowed. I will furthermore argue that the form that time reversal operations take is determined by the type of fundamental geometric quantities that occur in nature and that we have good reason to believe that the fundamental geometric quantities that occur in nature correspond to irreducible representations of the Lorentz transformations. Finally, I will argue that we have good reason to believe that space-time has a temporal orientation.
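The definition above can be made concrete with a standard textbook example (an editorial illustration, not taken from the paper): in classical particle mechanics a state is a position–momentum pair, and the conventional time reversal operator T fixes positions while flipping momenta.

```latex
% Editorial illustration (standard classical-mechanics convention, not from the paper):
% a state is S = (x, p); the time reversal operator acts as
T(x, p) = (x, -p)
% Time reversibility then says: if t \mapsto (x(t), p(t)) is a possible history,
% so is the reversed, T-operated history
t \mapsto \bigl(x(-t),\, -p(-t)\bigr)
```

This is exactly the operation the abstract's S₃ᵀ(t₁), S₂ᵀ(t₂), S₁ᵀ(t₃) notation generalizes: invert the order of the states and apply T to each.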

6.
In the Bayesian approach to quantum mechanics, probabilities—and thus quantum states—represent an agent's degrees of belief, rather than corresponding to objective properties of physical systems. In this paper we investigate the concept of certainty in quantum mechanics. In particular, we show how the probability-1 predictions derived from pure quantum states highlight a fundamental difference between our Bayesian approach, on the one hand, and Copenhagen and similar interpretations on the other. We first review the main arguments for the general claim that probabilities always represent degrees of belief. We then argue that a quantum state prepared by some physical device always depends on an agent's prior beliefs, implying that the probability-1 predictions derived from that state also depend on the agent's prior beliefs. Quantum certainty is therefore always some agent's certainty. Conversely, if facts about an experimental setup could imply agent-independent certainty for a measurement outcome, as in many Copenhagen-like interpretations, that outcome would effectively correspond to a preexisting system property. The idea that measurement outcomes occurring with certainty correspond to preexisting system properties is, however, in conflict with locality. We emphasize this by giving a version of an argument of Stairs (1983, 'Quantum logic, realism, and value-definiteness', Philosophy of Science, 50, 578), which applies the Kochen–Specker theorem to an entangled bipartite system.
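For readers unfamiliar with the terminology, the "probability-1 predictions" at issue can be stated compactly (an editorial gloss, not part of the abstract): a pure state assigns probability 1 to a measurement outcome exactly when the state is an eigenstate of the projector representing that outcome.

```latex
% Editorial gloss: Born-rule probability of the outcome represented by
% projector \Pi, for a system in pure state |\psi\rangle
p(\Pi) = \langle \psi | \Pi | \psi \rangle,
\qquad
p(\Pi) = 1 \iff \Pi|\psi\rangle = |\psi\rangle
```

The dispute the abstract describes is over whether such p = 1 assignments reflect an agent-independent fact about the system or only some agent's certainty.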

7.
We all know that, nowadays, physics and philosophy are housed in separate departments on university campuses. They are distinct disciplines with their own journals and conferences, and in general they are practiced by different people, using different tools and methods. We also know that this was not always the case: up until the early 17th century (at least), physics was a part of philosophy. So what happened? And what philosophical lessons should we take away? We argue that the split took place long after Newton's Principia (rather than before, as many standard accounts would have it), and offer a new account of the philosophical reasons that drove the separation. We argue that one particular problem, dating back to Descartes and persisting long into the 18th century, played a pivotal role. The failure to solve it, despite repeated efforts, precipitates a profound change in the relationship between physics and philosophy. The culprit is the problem of collisions. Innocuous though it may seem, this problem becomes the bellwether of deeper issues concerning the nature and properties of bodies in general. The failure to successfully address the problem led to a reconceptualization of the goals and subject-matter of physics, a change in the relationship between physics and mechanics, and a shift in who had authority over the most fundamental issues in physics.

8.
Constitutive mechanistic explanations are said to refer to mechanisms that constitute the phenomenon-to-be-explained. The most prominent approach of how to understand this relation is Carl Craver's mutual manipulability approach (MM) to constitutive relevance. Recently, MM has come under attack (Baumgartner and Casini 2017; Baumgartner and Gebharter 2015; Harinen 2014; Kästner 2017; Leuridan 2012; Romero 2015). It is argued that MM is inconsistent because, roughly, it is spelled out in terms of interventionism (which is an approach to causation), whereas constitutive relevance is said to be a non-causal relation. In this paper, I will discuss a strategy of how to resolve this inconsistency—so-called fat-handedness approaches (Baumgartner and Casini 2017; Baumgartner and Gebharter 2015; Romero 2015). I will argue that these approaches are problematic. I will present a novel suggestion for how to consistently define constitutive relevance in terms of interventionism. My approach is based on a causal interpretation of manipulability in terms of causal relations between the mechanism's components and what I will call temporal EIO-parts of the phenomenon. Still, this interpretation accounts for the fundamental difference between constitutive relevance and causal relevance.

9.
The development of nineteenth-century geodetic measurement challenges the dominant coherentist account of metric success. Coherentists argue that measurements of a parameter are successful if their numerical outcomes converge across varying contextual constraints. Aiming at numerical convergence, in turn, offers an operational aim for scientists to solve problems of coordination. Geodesists faced such a problem of coordination between two indicators of the earth's polar flattening, which were both based on imperfect ellipsoid models. While not achieving numerical convergence, their measurements produced novel data that grounded valuable theoretical hypotheses. Consequently, they ought to be regarded as epistemically successful. This insight warrants a dynamic revision of coherentism, which allows us to judge the success of a metric based on both its coherence and fruitfulness. On that view, scientific measurement aims to coordinate theoretical definitions and produce novel data and theoretical insights.

10.
Many philosophers contend that Turing’s work provides a conceptual analysis of numerical computability. In (Rescorla, 2007), I dissented. I argued that the problem of deviant notations stymies existing attempts at conceptual analysis. Copeland and Proudfoot respond to my critique. I argue that their putative solution does not succeed. We are still awaiting a genuine conceptual analysis.

11.
We distinguish two orientations in Weyl's analysis of the fundamental role played by the notion of symmetry in physics, namely an orientation inspired by Klein's Erlangen program and a phenomenological-transcendental orientation. By privileging the former to the detriment of the latter, we sketch a group(oid)-theoretical program—that we call the Klein-Weyl program—for the interpretation of both gauge theories and quantum mechanics in a single conceptual framework. This program is based on Weyl's notion of a “structure-endowed entity” equipped with a “group of automorphisms”. First, we analyze what Weyl calls the “problem of relativity” in the frameworks provided by special relativity, general relativity, and Yang-Mills theories. We argue that both general relativity and Yang-Mills theories can be understood in terms of a localization of Klein's Erlangen program: while the latter describes the group-theoretical automorphisms of a single structure (such as homogeneous geometries), local gauge symmetries and the corresponding gauge fields (Ehresmann connections) can be naturally understood in terms of the groupoid-theoretical isomorphisms in a family of identical structures. Second, we argue that quantum mechanics can be understood in terms of a linearization of Klein's Erlangen program. This stance leads us to an interpretation of the fact that quantum numbers are “indices characterizing representations of groups” (Weyl, 1931a, p. xxi) in terms of a correspondence between the ontological categories of identity and determinateness.

12.
The computational theory of mind construes the mind as an information-processor and cognitive capacities as essentially representational capacities. Proponents of the view (hereafter, ‘computationalists’) claim a central role for representational content in computational models of these capacities. In this paper I argue that the standard view of the role of representational content in computational models is mistaken; I argue that representational content is to be understood as a gloss on the computational characterization of a cognitive process.

13.
I argue for an interpretation of the connection between Descartes’ early mathematics and metaphysics that centers on the standard of geometrical intelligibility that characterizes Descartes’ mathematical work during the period 1619 to 1637. This approach remains sensitive to the innovations of Descartes’ system of geometry and, I claim, sheds important light on the relationship between his landmark Geometry (1637) and his first metaphysics of nature, which is presented in Le monde (1633). In particular, I argue that the same standard of clear and distinct motions for construction that allows Descartes to distinguish ‘geometric’ from ‘imaginary’ curves in the domain of mathematics is adopted in Le monde as Descartes details God’s construction of nature. I also show how, on this interpretation, the metaphysics of Le monde can fruitfully be brought to bear on Descartes’ attempted solution to the Pappus problem, which he presents in Book I of the Geometry. My general goal is to show that attention to the standard of intelligibility Descartes invokes in these different areas of inquiry grants us a richer view of the connection between his early mathematics and philosophy than an approach that assumes a common method is what binds his work in these domains together.

14.
In the area of social science, in particular, although we have developed methods for reliably discovering the existence of causal relationships, we are not very good at using these to design effective social policy. Cartwright argues that in order to improve our ability to use causal relationships, it is essential to develop a theory of causation that makes explicit the connections between the nature of causation, our best methods for discovering causal relationships, and the uses to which these are put. I argue that Woodward's interventionist theory of causation is uniquely suited to meet Cartwright's challenge. More specifically, interventionist mechanisms can provide the bridge from ‘hunting causes’ to ‘using them’, if interventionists (i) tell us more about the nature of these mechanisms, and (ii) endorse the claim that it is these mechanisms—or whatever constitutes them—that make causal claims true. I illustrate how having an understanding of interventionist mechanisms can allow us to put causal knowledge to use via a detailed example from organic chemistry.

15.
16.
This paper presents a survey of the literature on the problem of contingency in science. The survey is structured around three challenges faced by current attempts at understanding the conflict between “contingentist” and “inevitabilist” interpretations of scientific knowledge and practice. First, the challenge of definition: it proves hard to define the positions that are at stake in a way that is both conceptually rigorous and does justice to the plethora of views on the issue. Second, the challenge of distinction: some features of the debate suggest that the contingency issue may not be sufficiently distinct from other philosophical debates to constitute a genuine, independent philosophical problem. And third, the challenge of decidability: it remains unclear whether and how the conflict could be settled on the basis of empirical evidence from the actual history of science. The paper argues that in order to make progress in the present debate, we need to distinguish more systematically between different expressions that claims about contingency and inevitability in science can take. To this end, it introduces a taxonomy of different contingency and inevitability claims. The taxonomy has the structure of an ordered quadruple. Each contingency and each inevitability claim contains an answer to the following four questions: (how) are alternatives to current science possible, what types of alternatives are we talking about, how should the alternatives be assessed, and how different are they from actual science?

17.
Computer simulations are involved in numerous branches of modern science, and science would not be the same without them. Yet the question of how they can explain real-world processes remains an issue of considerable debate. In this context, a range of authors have highlighted the inferences back to the world that computer simulations allow us to draw. I will first characterize the precise relation between computer and target of a simulation that allows us to draw such inferences. I then argue that, in a range of scientifically interesting cases, these inferences are particular abductions, and defend this claim by appeal to two case studies.

18.
Philosophical work on values in science is held back by widespread ambiguity about how values bear on scientific choices. Here, I disambiguate several ways in which a choice can be value-laden and show that this disambiguation has the potential to solve and dissolve philosophical problems about values in science. First, I characterize four ways in which values relate to choices: values can motivate, justify, cause, or be impacted by the choices we make. Next, I put my proposed taxonomy to work, using it to clarify one version of the argument from inductive risk. The claim that non-epistemic values must play a role in scientific choices that run inductive risk makes most sense as a claim about values being needed to justify such choices. The argument from inductive risk is not unique: many philosophical arguments about values in science can be more clearly understood and assessed by paying close attention to how values and choices are related.

19.
In 1751, Leonhard Euler established “harmony” between two principles that had been stated by Pierre-Louis-Moreau de Maupertuis a few years earlier. These principles are intended to be the foundations of Mechanics; they are the principle of rest and the principle of least action. My claim is that the way in which “harmony” is achieved sets the foundations of so-called Analytical Mechanics: it discloses the physical bases of the general ideas, concepts, and motivations of the formalism. My paper intends to show what those physical bases are, and how a picture of the formalism issues from them. This picture is shown to be recast in Joseph-Louis Lagrange's justification of the formalism, which strengthens my claim.

20.
Difficulties over probability have often been considered fatal to the Everett interpretation of quantum mechanics. Here I argue that the Everettian can have everything she needs from ‘probability’ without recourse to indeterminism, ignorance, primitive identity over time or subjective uncertainty: all she needs is a particular rationality principle. The decision-theoretic approach recently developed by Deutsch and Wallace claims to provide just such a principle. But, according to Wallace, decision theory is itself applicable only if the correct attitude to a future Everettian measurement outcome is subjective uncertainty. I argue that subjective uncertainty is not available to the Everettian, but I offer an alternative: we can justify the Everettian application of decision theory on the basis that an Everettian should care about all her future branches. The probabilities appearing in the decision-theoretic representation theorem can then be interpreted as the degrees to which the rational agent cares about each future branch. This reinterpretation, however, reduces the intuitive plausibility of one of the Deutsch–Wallace axioms (measurement neutrality).

