Similar Documents
20 similar documents found (search time: 15 ms).
1.
The incommensurability of two theories seems to problematize theory comparisons of the kind that would allow scientists to select the better of the two. If so, it becomes puzzling how the quality of theories can improve with time, i.e. how science can progress across changes in incommensurable theories. I argue that in papers published in the 1990s, Kuhn provided a novel way to resolve this apparent tension between incommensurability and scientific progress. He put forward an account of their compatibility which worked not by downplaying the negative consequences of incommensurability but by allowing them to reach their natural end: a process of specialisation. This development in Kuhn’s thought has yet to be properly recorded, but it is also interesting in its own right. It shows how a robust version of incommensurability—one which really does have severe negative consequences for scientists’ capacity to perform comparative evaluations of incommensurable theories—need make no puzzle of the progress of science.

2.
The critics of rational choice theory (RCT) frequently build on the contrast between so-called thick and thin applications of RCT to argue that thin RCT lacks the potential to explain the choices of real-world agents. In this paper, I draw on often-cited RCT applications in several decision sciences to demonstrate that despite this prominent critique there are at least two different senses in which thin RCT can explain real-world agents’ choices. I then defend this thesis against the most influential objections put forward by the critics of RCT. In doing so, I explicate the implications of my thesis for the ongoing philosophical debate concerning the explanatory potential of RCT and the comparative merits of widely endorsed accounts of explanation.

3.
Scientists often diverge widely when choosing between research programs. This can seem to be rooted in disagreements about which of several theories, competing to address shared questions or phenomena, is currently the most epistemically or explanatorily valuable—i.e. most successful. But many such cases are actually more directly rooted in differing judgments of pursuit-worthiness, concerning which theory will be best down the line, or which addresses the most significant data or questions. Using case studies from 16th-century astronomy and 20th-century geology and biology, I argue that divergent theory choice is thus often driven by considerations of scientific process, even where direct epistemic or explanatory evaluation of its final products appears more relevant. Broadly following Kuhn's analysis of theoretical virtues, I suggest that widely shared criteria for pursuit-worthiness function as imprecise, mutually conflicting values. However, even Kuhn and others sensitive to pragmatic dimensions of theory ‘acceptance’, including the virtue of fruitfulness, still commonly understate the role of pursuit-worthiness—especially by exaggerating the impact of more present-oriented virtues, or by failing to stress how ‘competing’ theories excel at addressing different questions or data. This framework clarifies the nature of the choice and competition involved in theory choice, and the role of alternative theoretical virtues.

4.
In recent papers, Zurek [(2005). Probabilities from entanglement, Born's rule p_k = |ψ_k|² from envariance. Physical Review A, 71, 052105] has objected to the decision-theoretic approach of Deutsch [(1999). Quantum theory of probability and decisions. Proceedings of the Royal Society of London A, 455, 3129–3137] and Wallace [(2003). Everettian rationality: defending Deutsch's approach to probability in the Everett interpretation. Studies in History and Philosophy of Modern Physics, 34, 415–438] to deriving the Born rule for quantum probabilities, on the grounds that it courts circularity. Deutsch and Wallace assume that the many worlds theory is true and that decoherence gives rise to a preferred basis. However, decoherence arguments use the reduced density matrix, which relies upon the partial trace and hence upon the Born rule for its validity. Using the Heisenberg picture and quantum Darwinism—the notion, pioneered in Ollivier et al. [(2004). Objective properties from subjective quantum states: Environment as a witness. Physical Review Letters, 93, 220401 and (2005). Environment as a witness: Selective proliferation of information and emergence of objectivity in a quantum universe. Physical Review A, 72, 042113], that classical information is quantum information that can proliferate in the environment—I show that measurement interactions between two systems only create correlations between a specific set of commuting observables of system 1 and a specific set of commuting observables of system 2. This argument picks out a unique basis in which information flows in the correlations between those sets of commuting observables. I then derive the Born rule for both pure and mixed states and answer some other criticisms of the decision-theoretic approach to quantum probability.
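For reference, the Born rule at issue can be stated in its standard textbook form (a generic statement, not the paper's own derivation or notation):

$$p_k = |\langle k \mid \psi \rangle|^2 \quad \text{(pure state } |\psi\rangle\text{)}, \qquad p_k = \operatorname{Tr}(\rho\,\Pi_k), \;\; \Pi_k = |k\rangle\langle k| \quad \text{(mixed state } \rho\text{)}$$

Here the states $|k\rangle$ form the orthonormal basis of measurement outcomes; the mixed-state form reduces to the pure-state form when $\rho = |\psi\rangle\langle\psi|$.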

5.
Claims that the standard procedure for testing scientific theories is inapplicable to Everettian quantum theory, and hence that the theory is untestable, are due to misconceptions about probability and about the logic of experimental testing. Refuting those claims by correcting those misconceptions leads to an improved theory of scientific methodology (based on Popper's) and testing, which allows various simplifications, notably the elimination of everything probabilistic from the methodology (‘Bayesian’ credences) and from fundamental physics (stochastic processes).

6.
A question at the intersection of scientific modeling and public choice is how to deal with uncertainty about model predictions. This “high-level” uncertainty is necessarily value-laden, and thus must be treated as irreducibly subjective. Nevertheless, formal methods of uncertainty analysis should still be employed for the purpose of clarifying policy debates. I argue that such debates are best informed by models which integrate objective features (which model the world) with subjective ones (modeling the policy-maker). This integrated subjectivism is illustrated with a case study from the literature on monetary policy. The paper concludes with some morals for the use of models in determining climate policy.

7.
I argue that the Oxford school Everett interpretation is internally incoherent, because we cannot claim that in an Everettian universe the kinds of reasoning we have used to arrive at our beliefs about quantum mechanics would lead us to form true beliefs. I show that in an Everettian context, the experimental evidence that we have available could not provide empirical confirmation for quantum mechanics, and moreover that we would not even be able to establish reference to the theoretical entities of quantum mechanics. I then consider a range of existing Everettian approaches to the probability problem and show that they do not succeed in overcoming this incoherence.

8.
String theory has been the dominant research field in theoretical physics for the last few decades. Despite the considerable time that has elapsed, no new testable predictions have been derived by string theorists, and it is understandable that doubts have been voiced. Some have argued that it is time to give up, since testability is wanting. But the majority has not been convinced, and they continue to believe that string theory is the right way to go. This situation is interesting for the philosophy of science, since it highlights several of its central issues. In this paper we discuss string theory from a number of different perspectives in general methodology. We also relate the realism/antirealism debate to the current status of string theory. Our goal is two-fold: to examine string theory from philosophical perspectives, and to use string theory as a test case for some philosophical issues.

9.
The study of brand choice decisions with multiple alternatives has been successfully modelled for more than a decade using the Multinomial Logit model. Recently, neural network modelling has received increasing attention and has been applied to an array of marketing problems such as market response or segmentation. We show that a Feedforward Neural Network with Softmax output units and shared weights can be viewed as a generalization of the Multinomial Logit model. The main difference between the two approaches lies in the ability of neural networks to model non-linear preferences with few (if any) a priori assumptions about the nature of the underlying utility function, while the Multinomial Logit can suffer from a specification bias. Since the two approaches are complementary, they are combined into a single framework: the neural network is used as a diagnostic and specification tool for the Logit model, which in turn provides interpretable coefficients and significance statistics. The method is illustrated on an artificial dataset where the market is heterogeneous. We then apply the approach to panel scanner data of purchase records, using the Logit to analyse the non-linearities detected by the neural network. Copyright © 2000 John Wiley & Sons, Ltd.
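The generalization claim can be made concrete with a minimal sketch (ours, not the paper's code; the single tanh hidden layer, variable names, and toy data are illustrative assumptions): with no hidden layer the softmax network is exactly the Multinomial Logit, while a shared non-linear hidden layer lets utilities depend non-linearly on attributes.

```python
import numpy as np

def softmax(u):
    # Numerically stable softmax over a vector of utilities (one per alternative).
    z = u - u.max()
    e = np.exp(z)
    return e / e.sum()

def mnl_choice_probs(X, beta):
    # Multinomial Logit: utility is linear in attributes, u_j = x_j . beta.
    return softmax(X @ beta)

def nn_choice_probs(X, W1, b1, w2):
    # Feedforward net with softmax output and weights (W1, b1, w2) shared
    # across alternatives, so utilities can be non-linear in attributes.
    # With an identity "hidden layer" (h = X, w2 = beta) this collapses
    # to the MNL above.
    h = np.tanh(X @ W1 + b1)   # shared non-linear transform per alternative
    return softmax(h @ w2)     # softmax turns utilities into choice probabilities

# Toy example: 3 brands described by 2 attributes (e.g. price, promotion).
X = np.array([[1.0, 0.0], [0.8, 1.0], [1.2, 0.0]])
beta = np.array([-1.5, 0.7])
print(mnl_choice_probs(X, beta))

rng = np.random.default_rng(0)
W1, b1, w2 = rng.normal(size=(2, 4)), rng.normal(size=4), rng.normal(size=4)
print(nn_choice_probs(X, W1, b1, w2))
```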

10.
This paper reviews, extends and applies alternative normative decision models for the assessment of the value of forecast information, concentrating primarily on the factors influencing the tractability of assessment and interpretation in specific decision problems. As an empirical illustration of the models, the paper presents a valuation analysis of the published judgmental price forecasts of a veteran analyst of the hog market to a perfectly competitive farm enterprise.
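The core quantity such normative models assess can be written in a standard decision-theoretic form (a generic textbook formulation; the paper's specific models may differ):

$$\mathrm{EVI} \;=\; \mathbb{E}_{m}\!\left[\,\max_{a}\, \mathbb{E}\big[u(a,\theta)\,\big|\,m\big]\right] \;-\; \max_{a}\, \mathbb{E}\big[u(a,\theta)\big]$$

where $\theta$ is the uncertain state (e.g. the future hog price), $a$ the decision-maker's action, $u$ the payoff function, and $m$ the forecast message: the expected value of the information is the gain from optimizing after seeing the forecast rather than before.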

11.
12.
This paper situates the metaphysical antinomy between chance and determinism in the historical context of some of the earliest developments in the mathematical theory of probability. Since Hacking's seminal work on the subject, it has been a widely held view that the classical theorists of probability were guilty of an unwitting equivocation between a subjective, or epistemic, interpretation of probability, on the one hand, and an objective, or statistical, interpretation, on the other. While there is some truth to this account, I argue that the tension at the heart of the classical theory of probability is not best understood in terms of the duality between subjective and objective interpretations of probability. Rather, the apparent paradox of chance and determinism, when viewed through the lens of the classical theory of probability, manifests itself in a much deeper ambivalence on the part of the classical probabilists as to the rational commensurability of causal and probabilistic reasoning.

13.
Summary: Both male and female Orchesella cincta (Collembola) were able to discriminate between spermatophores of different origin. Females chose spermatophores deposited by closely related males, while males preferentially destroyed spermatophores of other males.

14.
While forecasting involves forward/predictive thinking, it depends crucially on prior diagnosis for suggesting a model of the phenomenon, for defining ‘relevant’ variables, and for evaluating forecast accuracy via the model. The nature of diagnostic thinking is examined with respect to these activities. We first consider the difficulties of evaluating forecast accuracy without a causal model of what generates outcomes. We then discuss the development of models by considering how attention is directed to variables via analogy and metaphor, as well as by what is unusual or abnormal. The causal relevance of variables is then assessed by reference to probabilistic signs called ‘cues to causality’. These are: temporal order, constant conjunction, contiguity in time and space, number of alternative explanations, similarity, predictive validity, and robustness. The probabilistic nature of the cues is emphasized by discussing the concept of spurious correlation and how causation does not necessarily imply correlation. Implications for improving forecasting are considered with respect to the above issues.
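The point that causation need not imply correlation can be made concrete with a toy example (ours, not the paper's): a deterministic but symmetric dependence has essentially zero linear correlation.

```python
import numpy as np

# Causation without (linear) correlation: Y is fully determined by X,
# yet the Pearson correlation is ~0 because the dependence is symmetric.
rng = np.random.default_rng(42)
x = rng.normal(size=100_000)   # symmetric about zero
y = x**2                       # y is caused by x, deterministically

r = np.corrcoef(x, y)[0, 1]
print(f"corr(X, Y) = {r:.4f}")  # close to 0 despite perfect causal dependence
```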

15.
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Second, one argues that actual measurements of thermodynamic quantities yield time-averaged quantities, since measurements take a long time. The combination of these two points is held to explain why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory may nevertheless have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution, as is standardly done; instead, a weaker definition is presented.
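For reference, the equality the abstract invokes is the pointwise (Birkhoff) ergodic theorem, which for an ergodic, measure-preserving flow $\phi_t$ on a phase space $(\Gamma, \mu)$ states (a standard formulation, not the paper's notation):

$$\lim_{T\to\infty} \frac{1}{T}\int_0^T f(\phi_t x)\,dt \;=\; \int_\Gamma f\,d\mu \qquad \text{for } \mu\text{-almost every } x \in \Gamma,$$

for any integrable observable $f$: the infinite time average on the left equals the microcanonical phase average on the right, up to the measure-zero set of exceptional initial conditions mentioned above.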

16.
The development of evolutionary game theory (EGT) is closely linked with two interdisciplinary exchanges: the import of game theory into biology, and the import of biologists’ version of game theory into economics. This paper traces the history of these two import episodes. In each case the investigation covers what exactly was imported, what the motives for the import were, how the imported elements were put to use, and how they related to existing practices in the respective disciplines. Two conclusions emerge from this study. First, concepts derived from the unity-of-science discussion or from unificationist accounts of explanation are too strong and too narrow to be useful for analysing these interdisciplinary exchanges. Second, biology and economics—at least in relation to EGT—show significant differences in modelling practices: biologists seek to link EGT models to concrete empirical situations, whereas economists pursue conceptual exploration and possible explanation.

17.
The paper challenges a recent attempt by Jouni-Matti Kuukkanen to show that since Thomas Kuhn’s philosophical standpoint can be incorporated into coherentist epistemology, it does not necessarily lead to: (Thesis 1) an abandonment of rationality and rational inter-paradigm theory comparison, nor to (Thesis 2) an abandonment of convergent realism. Leaving aside the interpretation of Kuhn as a coherentist, we show that Kuukkanen’s first thesis is not sufficiently explicated, while the second one fails entirely. With regard to Thesis 1, we argue that Kuhn’s view of inter-paradigm theory comparison allows only for (what we shall dub) ‘the weak notion of rationality’, and that Kuukkanen’s argument is thus acceptable only in view of such a notion. With regard to Thesis 2, we show that even if we interpret Kuhn as a coherentist, his philosophical standpoint cannot be seen as compatible with convergent realism, since Kuhn’s argument against it is not ‘ultimately empirical’, as Kuukkanen takes it to be.

18.
In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term ‘psychometrics’. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must: 1) serve as a representation of the measurement interaction; and 2) operate in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards, despite the latter’s popularity with health outcomes researchers.

19.
MicroRNAs (miRNAs) are natural, single-stranded, small RNA molecules which subtly control gene expression. Several studies indicate that specific miRNAs can regulate heart function in both development and disease. Despite prevention programs and new therapeutic agents, cardiovascular disease remains the main cause of death in developed countries. The elevated number of heart failure episodes is mostly due to myocardial infarction (MI). An increasing number of studies have been carried out reporting changes in miRNA gene expression and exploring their role in MI and heart failure. In this review, we offer a critical analysis of the current frontier of knowledge in basic and translational research on miRNAs in cardiac ischemia. We first summarize basic information on miRNA biology and regulation, concentrating especially on the feedback loops which control cardiac-enriched miRNAs. A focus on the role of miRNAs in the pathogenesis of myocardial ischemia and in the attenuation of injury is presented. Particular attention is given to cardiomyocyte death (apoptosis and necrosis), fibrosis, neovascularization, and heart failure. We then address the potential of miR-diagnosis (miRNAs as disease biomarkers) and miR-drugs (miRNAs as therapeutic targets) for cardiac ischemia and heart failure. Finally, we evaluate the use of miRNAs in the emerging field of regenerative medicine.

20.
Kuhn argued against both the correspondence theory of truth and convergent realism. Although he likely misunderstood the nature of the correspondence theory, which it seems he wrongly believed to be an epistemic theory, Kuhn had an important epistemic point to make. He maintained that any assessment of correspondence between beliefs and reality is not possible, and that therefore the acceptance of beliefs and the presumption of their truthfulness have to be decided on the basis of other criteria. I will show that via Kuhn’s suggested epistemic values, specifically via problem-solving, his philosophy can be incorporated into a coherentist epistemology. Further, coherentism is, in principle, compatible with convergent realism. However, an argument for increasing likeness to truth requires appropriate historical continuity. Kuhn maintained that the history of science is full of discontinuity, and therefore the historical condition of convergent realism is not satisfied.
