Similar Articles
20 similar articles found (search time: 15 ms)
1.
As it is standardly conceived, Inference to the Best Explanation (IBE) is a form of ampliative inference in which one infers a hypothesis because it provides a better potential explanation of one's evidence than any other available, competing explanatory hypothesis. Bas van Fraassen famously objected to IBE thus formulated that we may have no reason to think that any of the available, competing explanatory hypotheses are true. While revisionary responses to the Bad Lot Objection concede that IBE needs to be reformulated in light of this problem, reactionary responses argue that the Bad Lot Objection is fallacious, incoherent, or misguided. This paper shows that the most influential reactionary responses to the Bad Lot Objection do nothing to undermine the original objection. This strongly suggests that proponents of IBE should focus their efforts on revisionary responses, i.e. on finding a more sophisticated characterization of IBE for which the Bad Lot Objection loses its bite.

2.
In this paper we offer a formal-logical analysis of the famous reversibility objection against the Second Law of thermodynamics. We reconstruct the objection as a deductive argument leading to a contradiction, employing resources of standard quantified modal logic and thereby highlighting explicit and implicit assumptions with respect to possibility, identity, and their interaction. We then describe an alternative framework, case-intensional first order logic, that has greater expressive resources than standard quantified modal logic. We show that in that framework we can account for the role of sortals in possibility judgments. This allows us to formalize the relevant truths involved in the reversibility objection in such a way that no contradiction ensues. We claim that this analysis helps to understand in which way the Second Law is, specifically, a law of thermodynamics, but not of systems of particles in general.

3.
I argue that the Oxford school Everett interpretation is internally incoherent, because we cannot claim that in an Everettian universe the kinds of reasoning we have used to arrive at our beliefs about quantum mechanics would lead us to form true beliefs. I show that in an Everettian context, the experimental evidence that we have available could not provide empirical confirmation for quantum mechanics, and moreover that we would not even be able to establish reference to the theoretical entities of quantum mechanics. I then consider a range of existing Everettian approaches to the probability problem and show that they do not succeed in overcoming this incoherence.

4.
In this paper I consider the objection that the Enhanced Indispensability Argument (EIA) is circular and hence fails to support mathematical platonism. The objection is that the explanandum in any mathematical explanation of a physical phenomenon is itself identified using mathematical concepts. Hence the explanandum is only genuine if the truth of some mathematical theory is already presupposed. I argue that this objection deserves to be taken seriously, that it does sometimes undermine support for EIA, but that there is no reason to think that circularity is an unavoidable feature of mathematical explanation in science.

5.
Predicting the future evolution of GDP growth and inflation is a central concern in economics. Forecasts are typically produced either from economic theory‐based models or from simple linear time series models. While a time series model can provide a reasonable benchmark to evaluate the value added of economic theory relative to the pure explanatory power of the past behavior of the variable, recent developments in time series analysis suggest that more sophisticated time series models could provide more serious benchmarks for economic models. In this paper we evaluate whether these complicated time series models can outperform standard linear models for forecasting GDP growth and inflation. We consider a large variety of models and evaluation criteria, using a bootstrap algorithm to evaluate the statistical significance of our results. Our main conclusion is that in general linear time series models can hardly be beaten if they are carefully specified. However, we also identify some important cases where the adoption of a more complicated benchmark can alter the conclusions of economic analyses about the driving forces of GDP growth and inflation. Copyright © 2008 John Wiley & Sons, Ltd.
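The forecast comparison the abstract describes can be sketched in a few lines: fit a linear benchmark, compute out-of-sample errors for it and for a more complicated model, and bootstrap the loss differential to gauge significance. The synthetic AR(1) data, the tanh "complicated" model, and the squared-error loss below are illustrative assumptions, not the paper's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(1) "growth" series (illustrative, not the paper's data).
n = 300
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=1.0)

# One-step forecasts: linear AR(1) benchmark vs. a stand-in nonlinear rule.
train, test = y[:200], y[200:]
phi = np.dot(train[1:], train[:-1]) / np.dot(train[:-1], train[:-1])  # OLS AR(1)

e_lin = test[1:] - phi * test[:-1]      # linear benchmark forecast errors
e_nl = test[1:] - np.tanh(test[:-1])    # "complicated" model forecast errors

# Bootstrap the mean loss differential under squared-error loss.
d = e_lin**2 - e_nl**2                  # d > 0 means the nonlinear model wins
boot = np.array([rng.choice(d, size=d.size, replace=True).mean()
                 for _ in range(2000)])
share_nl_wins = (boot > 0).mean()       # bootstrap share favoring the nonlinear model
print(round(d.mean(), 3), round(share_nl_wins, 3))
```

A carefully specified linear benchmark typically holds its own in such comparisons, which is the paper's main conclusion; the bootstrap step is what turns the raw loss difference into a statement about statistical significance.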

6.
Surveys collecting data on consumer attitudes and buying intentions have been performed in Sweden since 1973. This paper examines the usefulness of these data as quick indicators of the development of household expenditures on automobiles. In the evaluation we consider explanatory power as well as prediction accuracy. It turns out that the best single indicator is among the plan indices. However, an indicator based on car registration statistics is found to be at least as good. Our study shows that considerable improvements can be obtained by combining plan/attitude indices with car registrations.

7.
Michel Janssen and Harvey Brown have driven a prominent recent debate concerning the direction of an alleged arrow of explanation between Minkowski spacetime and Lorentz invariance of dynamical laws in special relativity. In this article, I critically assess this controversy with the aim of clarifying the explanatory foundations of the theory. First, I show that two assumptions shared by the parties—that the dispute is independent of issues concerning spacetime ontology, and that there is an urgent need for a constructive interpretation of special relativity—are problematic and negatively affect the debate. Second, I argue that the whole discussion relies on a misleading conception of the link between Minkowski spacetime structure and Lorentz invariance, a misconception that in turn sheds more shadows than light on our understanding of the explanatory nature and power of Einstein's theory. I state that the arrow connecting Lorentz invariance and Minkowski spacetime is not explanatory and unidirectional, but analytic and bidirectional, and that this analytic arrow grounds the chronogeometric explanations of physical phenomena that special relativity offers.

8.
In this second paper, I continue my discussion of the problem of reference for scientific realism. First, I consider a final objection to Kitcher’s account of reference, which I generalise to other accounts of reference. Such accounts make attributions of reference by appeal to our pretheoretical intuitions about how true statements ought to be distributed among the scientific utterances of the past. I argue that in the cases that merit discussion, this strategy fails because our intuitions are unstable. The interesting cases are importantly borderline—it really isn’t clear what we ought to say about how those terms referred. I conclude that in many relevant cases, our grounds for thinking that the theoretical terms of the past referred are matched by our grounds for thinking that they failed to refer, in such a way that deciding on either result is arbitrary and bad news for the realist. In response to this problem, in the second part of the paper I expand upon Field’s (1973) account of partial reference to sketch a new way of thinking about the theoretical terms of the past—that they partially referred and partially failed to refer.

9.
I present an account of classical genetics to challenge theory-biased approaches in the philosophy of science. Philosophers typically assume that scientific knowledge is ultimately structured by explanatory reasoning and that research programs in well-established sciences are organized around efforts to fill out a central theory and extend its explanatory range. In the case of classical genetics, philosophers assume that the knowledge was structured by T. H. Morgan’s theory of transmission and that research throughout the later 1920s, 30s, and 40s was organized around efforts to further validate, develop, and extend this theory. I show that classical genetics was structured by an integration of explanatory reasoning (associated with the transmission theory) and investigative strategies (such as the ‘genetic approach’). The investigative strategies, which have been overlooked in historical and philosophical accounts, were as important as the so-called laws of Mendelian genetics. By the later 1920s, geneticists of the Morgan school were no longer organizing research around the goal of explaining inheritance patterns; rather, they were using genetics to investigate a range of biological phenomena that extended well beyond the explanatory domain of transmission theories. Theory-biased approaches in history and philosophy of science fail to reveal the overall structure of scientific knowledge and obscure the way it functions.

10.
What realization is has been convincingly presented in relation to the way we determine what counts as the realizers of realized properties. The way we explain a fact of realization includes a reference to what realization should be; it therefore informs in turn our understanding of the nature of realization. Conceptions of explanation are thereby included in views of realization as a metaphysical property. Recently, several major views of realization, such as Polger and Shapiro's or Gillett and Aizawa's, however competing, have relied on the neo-mechanicist theory of explanation (e.g., Darden and Craver 2013), currently popular among philosophers of science. However, it has also been increasingly argued that some explanations are not mechanistic (e.g., Batterman 2009). Using an account given in Huneman (2017), I argue that within those explanations the fact that some mathematical properties are instantiated is itself explanatory, and that this defines a specific explanatory type called “structural explanation”, whose subtypes include optimality explanations (usually found in economics), topological explanations, etc. This paper thereby argues that the subtypes of structural explanation define several kinds of realizability, which are not equivalent to the usual notion of realization tied to mechanistic explanations, on which many philosophical investigations are focused. It then draws some consequences concerning the notion of multiple realizability.

11.
There is growing evidence that explanatory considerations influence how people change their degrees of belief in light of new information. Recent studies indicate that this influence is systematic and may result from people’s following a probabilistic update rule. While formally very similar to Bayes’ rule, the rule or rules people appear to follow are different from, and inconsistent with, that better-known update rule. This raises the question of the normative status of those updating procedures. Is the role explanation plays in people’s updating their degrees of belief a bias? Or are people right to update on the basis of explanatory considerations, in that this offers benefits that could not be had otherwise? Various philosophers have argued that any reasoning at variance with Bayesian principles is to be rejected, and so explanatory reasoning, insofar as it deviates from Bayes’ rule, can only be fallacious. We challenge this claim by showing how the kind of explanation-based update rules to which people seem to adhere make it easier to strike the best balance between being fast learners and being accurate learners. Borrowing from the literature on ecological rationality, we argue that what counts as the best balance is intrinsically context-sensitive, and that a main advantage of explanatory update rules is that, unlike Bayes’ rule, they have an adjustable parameter which can be fine-tuned per context. The main methodology to be used is agent-based optimization, which also allows us to take an evolutionary perspective on explanatory reasoning.
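An update rule of the general shape the abstract describes can be sketched as follows: standard Bayesian conditionalization plus a bonus for the hypothesis judged the best explanation, with the size of the bonus as the adjustable, context-tunable parameter. The exact functional form, the parameter name `c`, and the numbers are illustrative assumptions, not the authors' own rule.

```python
def explanationist_update(priors, likelihoods, best, c=0.1):
    """Update priors on evidence; `best` indexes the best-explaining hypothesis.

    With c == 0 this is exactly Bayes' rule; c > 0 tilts the posterior toward
    the best explanation. c is the adjustable parameter that, per the abstract,
    can be fine-tuned per context (unlike parameter-free Bayesian updating).
    """
    scores = [p * l + (c if i == best else 0.0)
              for i, (p, l) in enumerate(zip(priors, likelihoods))]
    total = sum(scores)
    return [s / total for s in scores]

priors = [0.5, 0.5]
likelihoods = [0.8, 0.4]   # hypothesis 0 explains the evidence better
bayes = explanationist_update(priors, likelihoods, best=0, c=0.0)
ibe = explanationist_update(priors, likelihoods, best=0, c=0.1)
print(bayes, ibe)          # the bonus shifts mass toward hypothesis 0
```

Tuning `c` upward makes the agent converge faster on the favored hypothesis at the cost of accuracy when the "best explanation" judgment is wrong, which is exactly the speed/accuracy trade-off the paper explores with agent-based optimization.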

12.
In this paper, we develop and refine the idea that understanding is a species of explanatory knowledge. Specifically, we defend the idea that S understands why p if and only if S knows that p, and, for some q, S's true belief that q correctly explains p is produced/maintained by reliable explanatory evaluation. We then show how this model explains the reception of James Bjorken’s explanation of scaling by the broader physics community in the late 1960s and early 1970s. The historical episode is interesting because Bjorken’s explanation initially did not provide understanding to other physicists, but was subsequently deemed intelligible when Feynman provided a physical interpretation that led to experimental tests that vindicated Bjorken’s model. Finally, we argue that other philosophical models of scientific understanding are best construed as limiting cases of our more general model.

13.
Approaches to the Internalism–Externalism controversy in the philosophy of mind often involve both (broadly) metaphysical and explanatory considerations. Whereas originally most emphasis seems to have been placed on metaphysical concerns, recently the explanation angle is getting more attention. Explanatory considerations promise to offer more neutral grounds for cognitive systems demarcation than (broadly) metaphysical ones. However, it has been argued that explanation-based approaches are incapable of determining the plausibility of internalist-based conceptions of cognition vis-à-vis externalist ones. On this perspective, improved metaphysics is the route along which to solve the Internalist–Externalist stalemate. In this paper we challenge this claim. Although we agree that explanation-oriented approaches have indeed so far failed to deliver solid means for cognitive system demarcation, we elaborate a more promising explanation-oriented framework to address this issue. We argue that the mutual manipulability account of constitutive relevance in mechanisms, extended with the criterion of ‘fat-handedness’, is capable of plausibly addressing the cognitive systems demarcation problem, and thus able to decide on the explanatory traction of Internalist vs. Externalist conceptions, on a case-by-case basis. Our analysis also highlights why some other recent mechanistic takes on the problem of cognitive systems demarcation have been unsuccessful. We illustrate our claims with a case on gestures and learning.

14.
In this paper, I explore Rosen’s (1994) ‘transcendental’ objection to constructive empiricism—the argument that in order to be a constructive empiricist, one must be ontologically committed to just the sort of abstract, mathematical objects constructive empiricism seems committed to denying. In particular, I assess Bueno’s (1999) ‘partial structures’ response to Rosen, and argue that such a strategy cannot succeed, on the grounds that it cannot provide an adequate metalogic for our scientific discourse. I conclude by arguing that this result provides some interesting consequences in general for anti-realist programmes in the philosophy of science.

15.
We defend the many-worlds interpretation of quantum mechanics (MWI) against the objection that it cannot explain why measurement outcomes are predicted by the Born probability rule. We understand quantum probabilities in terms of an observer's self-location probabilities. We formulate a probability postulate for the MWI: the probability of self-location in a world with a given set of outcomes is the absolute square of that world's amplitude. We provide a proof of this postulate, which assumes the quantum formalism and two principles concerning symmetry and locality. We also show how a structurally similar proof of the Born rule is available for collapse theories. We conclude by comparing our account to the recent account offered by Sebens and Carroll.
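The probability postulate stated above is easy to make concrete: each world's self-location probability is the squared modulus of that world's amplitude. The two-branch state below is an illustrative example, not taken from the paper.

```python
import numpy as np

# A normalized superposition over two worlds (illustrative amplitudes).
# Amplitudes may be complex; only their squared moduli matter for self-location.
amplitudes = np.array([np.sqrt(1 / 3), np.sqrt(2 / 3) * 1j])
assert np.isclose(np.sum(np.abs(amplitudes) ** 2), 1.0)  # state is normalized

# Self-location probability per world = |amplitude|^2 (the Born weights).
probs = np.abs(amplitudes) ** 2
print(probs)
```

Note that the complex phase of the second amplitude drops out entirely, as the postulate requires: only the absolute square of each world's amplitude contributes to the observer's self-location probability.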

16.
In this paper I argue that the Strong Programme’s aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized—even relativized—they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle’s emphasis on ‘sameness of type’ is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of ‘similarity’ is to have any content, then we will not classify beliefs formed in accordance with deeply entrenched epistemic norms as ‘the same’ as beliefs formed without reflection on those norms, or formed in spite of them. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to let us delineate the unique impact epistemic norms have on those individuals who subscribe to them.

17.
It is frequently said that belief aims at truth, in an explicitly normative sense—that is, that one ought to believe the proposition that p if, and only if, p is true. This truth norm is frequently invoked to explain why we should seek evidential justification in our beliefs, or why we should try to be rational in our belief formation—it is because we ought to believe the truth that we ought to follow the evidence in belief revision. In this paper, I argue that this view is untenable. The truth norm clashes with plausible evidential norms in a wide range of cases, such as when we have excellent but misleading evidence for a falsehood or no evidence for a truth. I will consider various ways to resolve this conflict and argue that none of them work. However, I will ultimately attempt to vindicate the love of truth, by arguing that knowledge is the proper epistemic goal. The upshot is that we should not aim merely to believe the truth; we should aim to know it.

18.
Contemporary scholars set the Greek conception of an immanent natural order in opposition to the seventeenth-century mechanistic conception of extrinsic laws imposed upon nature from without. By contrast, we argue that in the making of the concept of a law of nature, forms and laws were coherently used together in theories of natural causation. We submit that such a combination can be found in the thirteenth century. The heroes of our claim are Robert Grosseteste, who turned the idea of corporeal form into the common feature of matter, and Roger Bacon, who described the effects of that common feature. Bacon detached the explanatory principle from matter and rendered it independent and therefore external to natural substances. Our plausibility argument, anchored in close reading of the relevant texts, facilitates a coherent conception of both ‘natures’ and ‘laws’.

19.
In this paper we introduce the overlapping design consensus for the construction of models in design and the related value judgments. The overlapping design consensus is inspired by Rawls’ overlapping consensus. The overlapping design consensus is a well-informed, mutual agreement among all stakeholders based on fairness. Fairness is respected if all stakeholders’ interests are given due and equal attention. For reaching such fair agreement, we apply Rawls’ original position and reflective equilibrium to modeling. We argue that by striving for the original position, stakeholders expel invalid arguments, hierarchies, unwarranted beliefs, and bargaining effects from influencing the consensus. The reflective equilibrium requires that stakeholders’ beliefs cohere with the final agreement and its justification. Therefore, the overlapping design consensus is not only an agreement to decisions, as in most other stakeholder approaches; it is also an agreement to their justification, and a commitment that this justification is consistent with each stakeholder’s beliefs. In support of fairness, we argue that fairness qualifies as a maxim in modeling. We furthermore distinguish values embedded in a model from values that are implied by its context of application. Finally, we conclude that reaching an overlapping design consensus requires communication about the properties of, and values related to, a model.

20.
In the Bayesian approach to quantum mechanics, probabilities—and thus quantum states—represent an agent's degrees of belief, rather than corresponding to objective properties of physical systems. In this paper we investigate the concept of certainty in quantum mechanics. In particular, we show how the probability-1 predictions derived from pure quantum states highlight a fundamental difference between our Bayesian approach, on the one hand, and Copenhagen and similar interpretations on the other. We first review the main arguments for the general claim that probabilities always represent degrees of belief. We then argue that a quantum state prepared by some physical device always depends on an agent's prior beliefs, implying that the probability-1 predictions derived from that state also depend on the agent's prior beliefs. Quantum certainty is therefore always some agent's certainty. Conversely, if facts about an experimental setup could imply agent-independent certainty for a measurement outcome, as in many Copenhagen-like interpretations, that outcome would effectively correspond to a preexisting system property. The idea that measurement outcomes occurring with certainty correspond to preexisting system properties is, however, in conflict with locality. We emphasize this by giving a version of an argument of Stairs (1983, ‘Quantum logic, realism, and value-definiteness’, Philosophy of Science, 50, 578), which applies the Kochen–Specker theorem to an entangled bipartite system.

