Similar literature (20 records found)
1.
In his book, The Material Theory of Induction, Norton argues that the quest for a universal formal theory or ‘schema’ for analogical inference should be abandoned. In its place, he offers the “material theory of analogy”: each analogical inference is “powered” by a local fact of analogy rather than by any formal schema. His minimalist model promises a straightforward, fact-based approach to the evaluation and justification of analogical inferences. This paper argues that although the rejection of universal schemas is justified, Norton's positive theory is limited in scope: it works well only for a restricted class of analogical inferences. Both facts and quasi-formal criteria have roles to play in a theory of analogical reasoning.

2.
The existence of unitarily inequivalent representations in quantum field theory has been presented as a serious problem for structural realism. In this paper I explore two possible responses. The first involves adopting Wallace's ‘naïve Lagrangian’ interpretation of QFT and dismissing the generation of inequivalent representations as either a mathematical artefact or as non-pathological. The second takes up Ruetsche's ‘Swiss Army Knife’ approach and understands the relevant structure as spanning a range of possibilities. Both options present interesting implications for structural realism and I shall also consider related issues to do with underdetermination, the significance of spontaneous symmetry breaking and how we should understand superselection rules in the context of quantum statistics. Finally, I shall suggest a way in which these options might be combined.

3.
This paper advocates reducing the inference of common cause to that of common origins. It distinguishes, and subjects to critical analysis, thirteen interpretations of ‘the inference of common cause’ whose conclusions do not follow from their assumptions. In their place, I introduce six types of inferences of the common origins of information signals from their receivers, intended to supersede and replace the thirteen inferences of common causes. Inferences of origins from information-rich coherences between receivers of information signals fit more closely, and explain better, both the paradigmatic examples traditionally associated with inferences of common cause and a broader scope of inferences in the historical sciences. Shannon's concept of information as reduction in uncertainty, rather than physicalist concepts that relate information to entropy or waves, simplifies the inferences, preempts objections, and avoids the underdetermination of conclusions that challenges models of inferences of common causes. In the second part of the paper I model inferences of common origins from information preserved in their receivers. I distinguish information-poor inferences that there were some common origins of receivers from information-richer inferences of the ranges of possible common origins and of the information-transmission channels by which they transmitted signals to receivers. Lastly, and most information-rich, I distinguish the inference of the defining properties of common origins. The information-transmission model from origins to receivers allows the reconceptualization of ‘independence’ as the absence of intersections between information channels, and of ‘reliability’ as the preservation of information from origins in receivers. Finally, I show how inferences of origins form the epistemic basis of the historical sciences.
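For readers who want the underlying notion pinned down: Shannon's ‘reduction in uncertainty’ is standardly formalized as follows (our gloss on the textbook definitions, not a quotation from the paper):

```latex
% Shannon entropy of an origin X, and the information a receiver Y
% carries about it, i.e. the reduction in uncertainty about X given Y:
H(X)    = -\sum_{x} p(x) \log_2 p(x)
I(X;Y)  = H(X) - H(X \mid Y)
```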

4.
While in speculative markets forward prices could be regarded as natural predictors for future spot rates, empirically, forward prices often fail to indicate ex ante the direction of price movements. In terms of forecasting, the random walk approximation of speculative prices has been established to provide ‘naive’ predictors that are most difficult to outperform by both purely backward-looking time series models and more structural approaches processing information from forward markets. We empirically assess the implicit predictive content of forward prices by means of wavelet-based prediction of two foreign exchange (FX) rates and the price of Brent oil quoted either in US dollars or euros. Essentially, wavelet-based predictors are smoothed auxiliary (padded) time series quotes that are added to the sample information beyond the forecast origin. We compare wavelet predictors obtained from padding with constant prices (i.e. random walk predictors) and forward prices. For the case of FX markets, padding with forward prices is more effective than padding with constant prices, and, moreover, respective wavelet-based predictors outperform purely backward-looking time series approaches (ARIMA). For the case of Brent oil quoted in US dollars, wavelet-based predictors do not signal predictive content of forward prices for future spot prices.
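A minimal sketch of the padding idea described above, assuming the PyWavelets library; the wavelet choice, smoothing rule, and forward quote are illustrative assumptions, not the authors' specification:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_predictor(history, pad_value, horizon, wavelet="db4", level=3):
    """Pad the series beyond the forecast origin, smooth it by discarding
    the finest-scale wavelet detail, and read off the padded endpoint."""
    padded = np.concatenate([history, np.full(horizon, pad_value)])
    coeffs = pywt.wavedec(padded, wavelet, level=level)
    coeffs[-1] = np.zeros_like(coeffs[-1])        # crude smoothing rule
    smoothed = pywt.waverec(coeffs, wavelet)[: len(padded)]
    return smoothed[-1]

# Padding with the last spot quote mimics the random-walk predictor;
# padding with a forward quote injects forward-market information.
spot = 100.0 + np.cumsum(np.random.randn(256))    # synthetic spot series
forward_quote = spot[-1] + 0.3                    # hypothetical forward price
rw_pred  = wavelet_predictor(spot, pad_value=spot[-1], horizon=5)
fwd_pred = wavelet_predictor(spot, pad_value=forward_quote, horizon=5)
```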

5.
We first present scenario analysis as a qualitative forecasting technique useful for strategic planning. Then we develop an overview of the two classes of methods for scenario analysis described in the literature. Based on both classes, a new method is developed which especially fits the needs of strategic planning. The method can be divided into three stages: 1. Determination of compatible scenarios, 2. Determination of scenario probabilities, and 3. Determination of main scenarios. An example is given to illustrate the method.
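A hedged sketch of what stage 1 might look like computationally, using an invented consistency-matrix toy example; the paper's actual procedure may differ:

```python
from itertools import combinations, product

# Illustrative factor set and incompatibility data (not from the paper).
factors = {"demand": ["high", "low"], "regulation": ["strict", "lax"]}
incompatible = {frozenset(["low", "lax"])}   # outcome pairs judged inconsistent

def compatible_scenarios():
    """Stage 1: keep only outcome combinations with no incompatible pair."""
    for combo in product(*factors.values()):
        if not any(frozenset(pair) in incompatible
                   for pair in combinations(combo, 2)):
            yield dict(zip(factors, combo))

for scenario in compatible_scenarios():
    print(scenario)   # stages 2 and 3 would then weight and cluster these
```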

6.
This paper offers a step-by-step analysis of a heuristic approach to scenario planning, taking a managerial perspective. The scenario method is contrasted in general with more traditional planning techniques, which tend to perform less well when faced with high uncertainty and complexity. An actual case involving a manufacturing company is used to illustrate the main steps of the proposed heuristic. Its essence is to identify relevant trends and uncertainties, and blend them into scenarios that are internally consistent. In addition, the scenarios should bound the range of plausible uncertainties and challenge managerial thinking. Links to decision making are examined next, including administrative policies as well as integrative techniques. At the strategic level, a key-success-factor matrix is proposed for integrating scenarios, competitor analysis and strategic vision. At the operational level, Monte Carlo simulation is suggested and illustrated as one useful technique for combining scenario thinking with formal project evaluation (after appropriate translations). The paper concludes with a general discussion of scenario planning, to place it in a broader perspective.
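To make the Monte Carlo suggestion concrete, here is a minimal sketch of scenario-conditional project evaluation; all scenario probabilities and cash-flow figures are invented for illustration:

```python
import random

# Invented scenario set: (probability, mean annual cash flow, std. dev.).
scenarios = [(0.5, 120.0, 15.0),   # baseline
             (0.3, 150.0, 25.0),   # boom
             (0.2,  70.0, 30.0)]   # bust

def npv_draw(rate=0.10, years=5, outlay=400.0):
    """Draw one scenario, then one cash-flow path, and discount it."""
    _, mu, sigma = random.choices(scenarios,
                                  weights=[s[0] for s in scenarios])[0]
    flows = (random.gauss(mu, sigma) for _ in range(years))
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(flows)) - outlay

draws = [npv_draw() for _ in range(10_000)]
print(f"mean NPV {sum(draws) / len(draws):.1f}; "
      f"P(NPV < 0) {sum(d < 0 for d in draws) / len(draws):.2%}")
```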

7.
8.
Ethnographic analogy, the use of comparative data from anthropology to inform reconstructions of past human societies, has a troubled history. Archaeologists often express concern about, or outright reject, the practice—and sometimes do so in problematically general terms. This is odd, as (or so I argue) the use of comparative data in archaeology is the same pattern of reasoning as the ‘comparative method’ in biology, which is a well-developed and robust set of inferences which play a central role in discovering the biological past. In pointing out this continuity, I argue that there is no ‘special pleading’ on the part of archaeologists in this regard: biologists must overcome analogous epistemic difficulties in their use of comparative data. I then go on to emphasize the local, empirically tractable ways in which particular ethnographic analogies may be licensed.

9.
We often rely on symmetries to infer outcomes’ probabilities, as when we infer that each side of a fair coin is equally likely to come up on a given toss. Why are these inferences successful? I argue against answering this question with an a priori indifference principle. Reasons to reject such a principle are familiar, yet instructive. They point to a new, empirical explanation for the success of our probabilistic predictions. This has implications for indifference reasoning generally. I argue that a priori symmetries need never constrain our probability attributions, even for initial credences.

10.
According to inference to the best explanation (IBE), scientists infer the loveliest of competing hypotheses, ‘loveliness’ being explanatory virtue. This generates two key objections: that loveliness is too subjective to guide inference, and that it is no guide to truth. I defend IBE using Thomas Kuhn’s notion of exemplars: the scientific theories, or applications thereof, that define Kuhnian normal science and facilitate puzzle-solving. I claim that scientists infer the explanatory puzzle-solution that best meets the standard set by the relevant exemplar of loveliness. Exemplars are the subject of consensus, eliminating subjectivity; divorced from Kuhnian relativism, they give loveliness the context-sensitivity required to be truth-tropic. The resulting account, ‘Kuhnian IBE’, is independently plausible and offers a partial rapprochement between IBE and Kuhn’s account of science.

11.
The reported experiment took place in a professional forecasting organization accustomed to giving verbal probability assessments (‘likely’, ‘probable’, etc.). It attempts to highlight the communication problems caused by verbal probability expressions and to offer possible solutions that are compatible with the forecasters’ overall perspective on their jobs. Experts in the organization were first asked to give a numerical translation of 30 different verbal probability expressions, most of which were taken from the organization's own published political forecasts. In the second part of the experiment the experts were given 15 paragraphs selected from the organization's political publications, each of which contained at least one verbal expression of probability. Subjects were again asked to give a numerical translation of each verbal probability expression. The results indicate that (a) there is high variability in the interpretation of verbal probability expressions and (b) the variability is even higher in context. Possible reasons for the context effect are discussed and practical implications are suggested.

12.
A Hidden Markov Model (HMM) is used to classify an out-of-sample observation vector into either of two regimes. This leads to a procedure for making probability forecasts for changes of regimes in a time series, i.e. for turning points. Instead of estimating past turning points using maximum likelihood, the model is estimated with respect to known past regimes. This makes it possible to perform feature extraction and estimation for different forecasting horizons. The inference aspect is emphasized by including a penalty for a wrong decision in the cost function. The method, here called a ‘Markov Bayesian Classifier (MBC)’, is tested by forecasting turning points in the Swedish and US economies, using leading data. Clear and early turning point signals are obtained, contrasting favourably with earlier HMM studies. Some theoretical arguments for this are given.
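A minimal sketch (not the authors' code) of the two ideas that distinguish the MBC: estimating parameters from known past regimes rather than by maximum-likelihood decoding, and penalizing wrong regime calls asymmetrically. Data, costs, and the univariate Gaussian emissions are invented for illustration:

```python
import numpy as np

def fit_from_labels(x, regimes):
    """Estimate per-regime Gaussian emissions and the transition matrix
    directly from labelled past regimes (no ML decoding of hidden states)."""
    mu = np.array([x[regimes == k].mean() for k in (0, 1)])
    sd = np.array([x[regimes == k].std(ddof=1) for k in (0, 1)])
    A = np.ones((2, 2))                        # add-one smoothing of counts
    for a, b in zip(regimes[:-1], regimes[1:]):
        A[a, b] += 1
    return mu, sd, A / A.sum(axis=1, keepdims=True)

def classify(x_new, prev_probs, mu, sd, A, cost_fn=3.0, cost_fp=1.0):
    """One-step regime call with an asymmetric misclassification penalty."""
    prior = prev_probs @ A                     # propagate regime probabilities
    lik = np.exp(-0.5 * ((x_new - mu) / sd) ** 2) / sd
    post = prior * lik
    post /= post.sum()
    signal = int(post[1] * cost_fn > post[0] * cost_fp)
    return signal, post

# Tiny synthetic example: regime 1 shifts the leading indicator downwards.
rng = np.random.default_rng(0)
regimes = np.array([0] * 60 + [1] * 20 + [0] * 40)
x = rng.normal(1.0 - 1.5 * regimes, 0.7)
mu, sd, A = fit_from_labels(x, regimes)
print(classify(-0.8, np.array([0.9, 0.1]), mu, sd, A))
```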

13.
According to the historian and sociologist of science Terry Shinn, the creator of the concept of ‘research technologies’: “Research technologies may sometimes generate promising packets of instrumentation for yet undefined ends. They may offer technological answers to questions that have hardly been raised. Research technologists’ instruments are then generic in the sense that they are base-line apparatus which can subsequently be transformed by experimenters into products tailored to specific economic ends or adapted by experimenters to further cognitive ends in academic research.” Genericity thus manifests one of three fundamental characteristics of research technologies. At the same time, however, each research technology emerges out of the specific disciplinary context in which it is initially developed with entirely concrete aims. Consequently, genericity does not exist from the outset but first has to form, along a path that remains to be clarified. It is produced or constructed by the actors on two levels: as an instrument in the laboratory and as a way of speaking at the representational level. This issue yields the structure of this paper. Three options for the transition of a specific technique into a generic research technology are compared. One of them proves to be the most frequent pattern of this dynamic. This is explored further, taking as paradigmatic examples ‘computed tomography’ (CT), ‘nuclear magnetic resonance’ (NMR) and its application known as ‘magnetic resonance imaging’ (MRI), together with several additional examples.

14.
15.
The role of γ-aminobutyric acid (GABA) as a signal in animals has been documented for over 60 years. In contrast, evidence that GABA is a signal in plants has only emerged in the last 15 years, and it was not until last year that a mechanism by which this could occur was identified—a plant ‘GABA receptor’ that inhibits anion passage through the aluminium-activated malate transporter family of proteins (ALMTs). ALMTs are multigenic, expressed in different organs and present on different membranes. We propose GABA regulation of ALMT activity could function as a signal that modulates plant growth, development, and stress response. In this review, we compare and contrast the plant ‘GABA receptor’ with mammalian GABAA receptors in terms of their molecular identity, predicted topology, mode of action, and signalling roles. We also explore the implications of the discovery that GABA modulates anion flux in plants, its role in signal transduction for the regulation of plant physiology, and predict the possibility that there are other GABA interaction sites in the N termini of ALMT proteins through in silico evolutionary coupling analysis; we also explore the potential interactions between GABA and other signalling molecules.

16.
This paper provides an account of the ‘use-value’ of case-based research by showing how social scientists exploit cases, and case studies, in a variety of practices of inference and extension. The critical basis for making such extensions relies on the power of a case, or the account given of a case (the case-study account), to exemplify certain features of the social world in ways which prove valuable for further analysis: either of the same case, or in many domains beyond the original case study. Framing use-values in terms of exemplification compares favourably with understanding reasoning beyond the case either as a form of analogical reasoning or as taking cases as experimentable objects.

17.
B. R. Frieden uses a single procedure, called extreme physical information, with the aim of deriving ‘most known physics, from statistical mechanics and thermodynamics to quantum mechanics, the Einstein field equations and quantum gravity’. His method, which is based on Fisher information, is given a detailed exposition in this book, and we attempt to assess the extent to which he succeeds in his task.
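For reference, the standard definition at the core of Frieden's apparatus, together with the usual one-line summary of his extremum principle (our gloss, not the book's notation):

```latex
% Single-parameter Fisher information, the quantity underlying EPI:
I(\theta) = \int p(x \mid \theta)
            \left( \frac{\partial \ln p(x \mid \theta)}{\partial \theta} \right)^{2} dx
% Frieden's procedure extremizes the "physical information"
K = I - J
% where I is the Fisher information of the data and J the bound
% ("source") information of the phenomenon under study.
```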

18.
In Descartes, the concept of a ‘universal science’ differs from that of a ‘mathesis universalis’, in that the latter is simply a general theory of quantities and proportions. Mathesis universalis is closely linked with mathematical analysis; the theorem to be proved is taken as given, and the analyst seeks to discover that from which the theorem follows. Though the analytic method is followed in the Meditations, Descartes is not concerned with a mathematisation of method; mathematics merely provides him with examples. Leibniz, on the other hand, stressed the importance of a calculus as a way of representing and adding to what is known, and tried to construct a ‘universal calculus’ as part of his proposed universal symbolism, his ‘characteristica universalis’. The characteristica universalis was never completed—it proved impossible, for example, to list its basic terms, the ‘alphabet of human thoughts’—but parts of it did come to fruition, in the shape of Leibniz's infinitesimal calculus and his various logical calculi. By his construction of these calculi, Leibniz proved that it is possible to operate with concepts in a purely formal way.

19.
Most of our knowledge of Greek and Roman scientific practice and its place in ancient culture is derived from our study of ancient texts. In the last few decades, this written evidence—ancient technical or specialist literature—has begun to be studied using tools of literary analysis to help answer questions about, for instance, how these works were composed, their authors’ intentions and the expectations of their readers. This introduction to Structures and strategies in ancient Greek and Roman technical writing provides an overview of recent scholarship in the area, and of the difficulty of pinning down what ‘technical/specialist literature’ might mean in an ancient context, since Greek and Roman authors communicated scientific knowledge using a wide variety of styles and forms of text (e.g. poetry, dialogues, letters). An outline of the three sections is provided: Form as a mirror of method, in which Sabine Föllinger and Alexander Mueller explore ways in which the structures of texts by Aristotle and Plutarch may reflect methodological concerns; Authors and their implied readers, with contributions by Oliver Stoll, David Creese, Boris Dunsch and Paula Olmos, which examines what ancient texts can tell us about the place of technical knowledge in antiquity; and Science and the uses of poetry, with articles by Jochen Althoff, Michael Coxhead and Laurence Totelin, and a new English translation of the Aetna poem by Harry Hine, which explores the (to us) unexpected roles of poetry in ancient scientific culture.

20.
The promise of treatments for common complex diseases (CCDs) is understood as an important force driving large-scale genetics research over the last few decades. This paper considers the phenomenon of the Genome-Wide Association Study (GWAS) via one high-profile example, the Wellcome Trust Case Control Consortium (WTCCC). The WTCCC, despite not fulfilling promises of new health interventions, is still understood as an important step towards tackling CCDs clinically. The ‘sociology of expectations’ has considered many examples of failure to fulfil promises and the subsequent negative consequences, including disillusionment, disappointment and disinvestment. In order to explore why some domains remain resilient in the face of apparent failure, I employ the concept of the ‘problematic’ found in the work of Gilles Deleuze. This alternative theoretical framework challenges the idea that the failure to reach promised goals results in largely negative outcomes for a given field. I will argue that collective scientific action is motivated not only by hopes for the future but also by the drive to create solutions to the actual setbacks and successes which scientists encounter in their day-to-day work. The analysis draws on eighteen interviews.
