Similar Documents
20 similar documents found (search time: 46 ms)
1.
Climate scientists have been engaged in a decades-long debate over the standing of satellite measurements of the temperature trends of the atmosphere above the surface of the earth. This is especially significant because skeptics of global warming and the greenhouse effect have utilized this debate to spread doubt about global climate models used to predict future states of climate. I use this case from an understudied science to illustrate two distinct philosophical approaches to the relations among data, scientist, measurement, models, and theory. I argue that distinguishing between ‘direct’ empiricist and ‘complex’ empiricist approaches helps us understand and analyze this important scientific episode. I also introduce a complex empiricist account of testing and evaluation, and contrast it with the basic Hypothetico-Deductive approach to the climate models used by the direct empiricists. This more developed complex empiricist approach will serve philosophy of science well, as computational models become more widespread in the sciences.  相似文献   

2.
In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term ‘psychometrics’. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers.  相似文献   

3.
This paper shows how monthly data and forecasts can be used in a systematic way to improve the predictive accuracy of a quarterly macroeconometric model. The problem is formulated as a model pooling procedure (equivalent to non-recursive Kalman filtering) where a baseline quarterly model forecast is modified through ‘add-factors’ or ‘constant adjustments’. The procedure ‘automatically’ constructs these adjustments in a covariance-minimizing fashion to reflect the revised expectation of the quarterly model's forecast errors, conditional on the monthly information set. Results obtained using Federal Reserve Board models indicate the potential for significant reduction in forecast error variance through application of these procedures.  相似文献   
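As a rough illustration of the idea (a minimal sketch with hypothetical variable names, not the Federal Reserve Board procedure itself), the covariance-minimizing 'add-factor' can be thought of as the linear projection of the quarterly model's forecast error on the surprise in the monthly indicators:

```python
import numpy as np

def add_factor(past_errors, past_monthly, monthly_now, monthly_expected):
    """Covariance-minimizing 'add-factor' for a baseline quarterly forecast.

    past_errors:      (T,)   historical quarterly forecast errors
    past_monthly:     (T, k) historical monthly indicators, aligned by quarter
    monthly_now:      (k,)   monthly indicator values observed this quarter
    monthly_expected: (k,)   indicator values implied by the baseline forecast
    """
    X = past_monthly - past_monthly.mean(axis=0)
    e = past_errors - past_errors.mean()
    # Linear projection of the forecast error on the monthly surprise,
    # beta = Var(m)^{-1} Cov(m, e); adding beta'(m - E[m]) to the baseline
    # minimizes the adjusted forecast's error variance among linear rules.
    beta = np.linalg.solve(X.T @ X, X.T @ e)
    return float(beta @ (monthly_now - monthly_expected))

# adjusted = baseline_forecast + add_factor(err_hist, m_hist, m_obs, m_base)
```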

4.
This article stresses how little is known about the quality, particularly the relative quality, of macroeconometric models. Most economists make a strict distinction between the quality of a model per se and the accuracy of solutions based on that model. While this distinction is valid, it leaves unanswered how to compare the 'validity' of conditional models. The standard test, the accuracy of ex post simulations, is not definitive when models with differing degrees of exogeneity are compared. In addition, it is extremely difficult to estimate the relative quantitative importance of conceptual problems of models, such as parameter instability across 'policy regimes'. In light of the difficulty in comparisons of conditional macroeconometric models, many model-builders and users assume that the best models are those that have been used to make the most accurate forecasts, and that the most accurate forecasts are those made with the best models. Forecasting experience indicates that forecasters using macroeconometric models have produced more accurate macroeconomic forecasts than either naive or sophisticated unconditional statistical models. It also suggests that judgementally adjusted forecasts have been more accurate than model-based forecasts generated mechanically. The influence of econometrically-based forecasts is now so pervasive that it is difficult to find examples of 'purely judgemental' forecasts.  相似文献

5.
6.
Probabilistic forecasts have good ‘external correspondence’ if events that are assigned probabilities close to 1 tend to occur frequently, whereas those assigned probabilities near 0 tend to occur rarely. This paper describes simple procedures for analysing external correspondence into meaningful components that might guide efforts to understand and improve forecasting performance. The procedures focus on differences between the judgements made by the forecaster when the target event occurs, as compared to when it does not. The illustrations involve a professional oddsmaker's predictions of baseball game outcomes, meteorologists' precipitation forecasts and physicians' diagnoses of pneumonia. The illustrations demonstrate the ability of the procedures to highlight important forecasting tendencies that are sometimes more difficult to discern by other means.  相似文献   
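As a toy illustration of the occurrence/non-occurrence contrast these procedures build on (a sketch only, not the paper's actual decomposition), the simplest such summary is the gap between the mean probability assigned when the target event occurs and the mean probability assigned when it does not:

```python
import numpy as np

def occurrence_gap(probs, outcomes):
    """Mean forecast probability on occasions when the event occurred,
    minus the mean on occasions when it did not; larger gaps indicate
    forecasts that separate occurrences from non-occurrences better."""
    probs = np.asarray(probs, dtype=float)
    occurred = np.asarray(outcomes, dtype=bool)
    return probs[occurred].mean() - probs[~occurred].mean()

# e.g. four precipitation forecasts: gap = 0.8 - 0.15 = 0.65
print(occurrence_gap([0.9, 0.7, 0.2, 0.1], [True, True, False, False]))
```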

7.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Specifically of importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on ‘trending’ procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by ‘detrending’ the trended forecasts, and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy in Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.  相似文献   

8.
In the light of the still topical nature of ‘bananas and petrol’ being blamed for driving much of the inflationary pressures in Australia in recent times, the ‘headline’ and ‘underlying’ rates of inflation are scrutinised in terms of forecasting accuracy. A general structural time‐series modelling strategy is applied to estimate models for alternative types of Consumer Price Index (CPI) measures. From this, out‐of‐sample forecasts are generated from the various models. The underlying forecasts are subsequently adjusted to facilitate comparison. The Ashley, Granger and Schmalensee (1980) test is then performed to determine whether there is a statistically significant difference between the root mean square errors of the models. The results lend weight to the recent findings of Song (2005) that forecasting models using underlying rates are not systematically inferior to those based on the headline rate. In fact, strong evidence is found that underlying measures produce superior forecasts. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献   
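A hedged sketch of the comparison step (a simplified stand-in using a paired t-test on squared errors; the paper itself applies the Ashley, Granger and Schmalensee (1980) test to the root mean square errors):

```python
import numpy as np
from scipy import stats

def compare_forecast_errors(errors_a, errors_b):
    """RMSE of two competing out-of-sample forecast error series on the
    same targets, plus a naive paired t-test on squared-error differences
    as a rough indication of whether the accuracy gap is systematic."""
    errors_a = np.asarray(errors_a, dtype=float)
    errors_b = np.asarray(errors_b, dtype=float)
    rmse_a = np.sqrt(np.mean(errors_a ** 2))
    rmse_b = np.sqrt(np.mean(errors_b ** 2))
    t_stat, p_value = stats.ttest_rel(errors_a ** 2, errors_b ** 2)
    return rmse_a, rmse_b, t_stat, p_value
```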

9.
This paper presents the writer's experience, over a period of 25 years, in analysing organizational systems and, in particular, concentrates on the overall forecasting activity. The paper first looks at the relationship between forecasting and decision taking – with emphasis on the fact that forecasting is a means to aid decision taking and not an end in itself. It states that there are many types of forecasting problems, each requiring different methods of treatment. The paper then discusses attitudes which are emerging about the relative advantages of different forecasting techniques. It suggests a model building process which requires 'experience' and 'craftsmanship', extensive practical application, frequent interaction between theory and practice and a methodology that eventually leads to models that contain no detectable inadequacies. Furthermore, it argues that although models which forecast a time series from its past history have a very important role to play, for effective policy making it is necessary to augment the model by introducing policy variables, again in a systematic not an 'ad hoc' manner. Finally, the paper discusses how forecasting systems can be introduced into the management process in the first place and how they should be monitored and updated when found wanting.  相似文献

10.
This paper investigates the significance of T-duality in string theory: the indistinguishability with respect to all observables, of models attributing radically different radii to space—larger than the observable universe, or far smaller than the Planck length, say. Two interpretational branch points are identified and discussed. First, whether duals are physically equivalent or not: by considering a duality of the familiar simple harmonic oscillator, I argue that they are. Unlike the oscillator, there are no measurements ‘outside’ string theory that could distinguish the duals. Second, whether duals agree or disagree on the radius of ‘target space’, the space in which strings evolve according to string theory. I argue for the latter position, because the alternative leaves it unknown what the radius is. Since duals are physically equivalent yet disagree on the radius of target space, it follows that the radius is indeterminate between them. Using an analysis of Brandenberger and Vafa (1989), I explain why—even so—space is observed to have a determinate, large radius. The conclusion is that observed, ‘phenomenal’ space is not target space, since a space cannot have both a determinate and indeterminate radius: instead phenomenal space must be a higher-level phenomenon, not fundamental.  相似文献   

11.
The London and Bauer monograph occupies a central place in the debate concerning the quantum measurement problem. Gavroglu has previously noted the influence of Husserlian phenomenology on London's scientific work. However, he has not explored the full extent of this influence in the monograph itself. I begin this paper by outlining the important role played by the monograph in the debate. In effect, it acted as a kind of ‘lens’ through which the standard, or Copenhagen, ‘solution’ to the measurement problem came to be perceived and, as such, it was robustly criticized, most notably by Putnam and Shimony. I then spell out the Husserlian understanding of consciousness in order to illuminate the traces of this understanding within the London and Bauer text. This, in turn, yields a new perspective on this ‘solution’ to the measurement problem, one that I believe has not been articulated before and, furthermore, which is immune to the criticisms of Putnam and Shimony.  相似文献   

12.
The development of nineteenth-century geodetic measurement challenges the dominant coherentist account of metric success. Coherentists argue that measurements of a parameter are successful if their numerical outcomes converge across varying contextual constraints. Aiming at numerical convergence, in turn, offers an operational aim for scientists to solve problems of coordination. Geodesists faced such a problem of coordination between two indicators of the earth's polar flattening, which were both based on imperfect ellipsoid models. While not achieving numerical convergence, their measurements produced novel data that grounded valuable theoretical hypotheses. Consequently, they ought to be regarded as epistemically successful. This insight warrants a dynamic revision of coherentism, which allows us to judge the success of a metric based on both its coherence and fruitfulness. On that view, scientific measurement aims to coordinate theoretical definitions and produce novel data and theoretical insights.  相似文献

13.
While forecasting involves forward/predictive thinking, it depends crucially on prior diagnosis for suggesting a model of the phenomenon, for defining 'relevant' variables, and for evaluating forecast accuracy via the model. The nature of diagnostic thinking is examined with respect to these activities. We first consider the difficulties of evaluating forecast accuracy without a causal model of what generates outcomes. We then discuss the development of models by considering how attention is directed to variables via analogy and metaphor as well as by what is unusual or abnormal. The causal relevance of variables is then assessed by reference to probabilistic signs called 'cues to causality'. These are: temporal order, constant conjunction, contiguity in time and space, number of alternative explanations, similarity, predictive validity, and robustness. The probabilistic nature of the cues is emphasized by discussing the concept of spurious correlation and how causation does not necessarily imply correlation. Implications for improving forecasting are considered with respect to the above issues.  相似文献

14.
A number of researchers have developed models that use test market data to generate forecasts of a new product's performance. However, most of these models have ignored the effects of marketing covariates. In this paper we examine what impact these covariates have on a model's forecasting performance and explore whether their presence enables us to reduce the length of the model calibration period (i.e. shorten the duration of the test market). We develop from first principles a set of models that enable us to systematically explore the impact of various model ‘components’ on forecasting performance. Furthermore, we also explore the impact of the length of the test market on forecasting performance. We find that it is critically important to capture consumer heterogeneity, and that the inclusion of covariate effects can improve forecast accuracy, especially for models calibrated on fewer than 20 weeks of data. Copyright © 2003 John Wiley & Sons, Ltd.  相似文献   

15.
This study reports the results of an experiment that examines (1) the effects of forecast horizon on the performance of probability forecasters, and (2) the alleged existence of an inverse expertise effect, i.e., an inverse relationship between expertise and probabilistic forecasting performance. Portfolio managers are used as forecasters with substantive expertise. Performance of this ‘expert’ group is compared to the performance of a ‘semi-expert’ group composed of other banking professionals trained in portfolio management. It is found that while both groups attain their best discrimination performances in the four-week forecast horizon, they show their worst calibration and skill performances in the 12-week forecast horizon. Also, while experts perform better in all performance measures for the one-week horizon, semi-experts achieve better calibration for the four-week horizon. It is concluded that these results may signal the existence of an inverse expertise effect that is contingent on the selected forecast horizon.  相似文献   

16.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.  相似文献   
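A small Monte Carlo sketch of the trade-off the article characterises analytically (illustrative only; the function and parameter names are arbitrary): under a local-to-unity AR(1) with rho = 1 + c/T, compare the h-step mean squared forecast error of the OLS-estimated AR(1) with that of the random-walk (no-change) forecast.

```python
import numpy as np

def longhorizon_mse(c=-5.0, T=200, h=40, n_sims=2000, seed=0):
    """Simulated h-step MSE of OLS-AR(1) vs random-walk forecasts when the
    true process is y_t = rho * y_{t-1} + e_t with rho = 1 + c/T."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    mse_ols = mse_rw = 0.0
    for _ in range(n_sims):
        e = rng.standard_normal(T + h)
        y = np.zeros(T + h)
        for t in range(1, T + h):
            y[t] = rho * y[t - 1] + e[t]
        # OLS slope from the first T observations (no intercept), iterated h steps ahead
        rho_hat = (y[:T - 1] @ y[1:T]) / (y[:T - 1] @ y[:T - 1])
        mse_ols += (rho_hat ** h * y[T - 1] - y[T - 1 + h]) ** 2
        mse_rw += (y[T - 1] - y[T - 1 + h]) ** 2  # no-change forecast
    return mse_ols / n_sims, mse_rw / n_sims

# Whichever model yields the lower simulated MSE is preferred at that (c, h) pair.
```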

17.
This re-examination of the earliest version of Maxwell's most important argument for the electromagnetic theory of light—the equality between the speed of wave propagation in the electromagnetic ether and the ratio of electrostatic to electromagnetic measures of electrical quantity—establishes unforeseen connections between Maxwell's theoretical electrical metrology and his mechanical theory of the electromagnetic field. Electrical metrology was not neutral with respect to field-theoretic versus action-at-a-distance conceptions of electro-magnetic interaction. Mutual accommodation between these conceptions was reached by Maxwell on the British Association for the Advancement of Science (BAAS) Committee on Electrical Standards by exploiting the measurement of the medium parameters—electric inductive capacity and magnetic permeability—on an arbitrary scale. While he always worked within this constraint in developing the ‘ratio-of-units’ argument mathematically, I maintain that Maxwell came to conceive of the ratio ‘as a velocity’ by treating the medium parameters as physical quantities that could be measured absolutely, which was only possible via the correspondences between electrical and mechanical quantities established in the mechanical theory. I thereby correct two closely-related misconceptions of the ratio-of-units argument—the counterintuitive but widespread notion that the ratio is naturally a speed, and the supposition that Maxwell either inferred or proved this from its dimensional formula.  相似文献   

18.
The promise of treatments for common complex diseases (CCDs) is understood as an important force driving large-scale genetics research over the last few decades. This paper considers the phenomenon of the Genome Wide Association Study (GWAS) via one high profile example, the Wellcome Trust Case Control Consortium (WTCCC). The WTCCC, despite not fulfilling promises of new health interventions, is still understood as an important step towards tackling CCDs clinically. The 'sociology of expectations' has considered many examples of failure to fulfil promises and the subsequent negative consequences, including disillusionment, disappointment and disinvestment. In order to explore why some domains remain resilient in the face of apparent failure, I employ the concept of the 'problematic' found in the work of Gilles Deleuze. This alternative theoretical framework challenges the idea that the failure to reach promised goals results in largely negative outcomes for a given field. I will argue that collective scientific action is motivated not only by hopes for the future but also by the drive to create solutions to the actual setbacks and successes which scientists encounter in their day-to-day work. I draw on eighteen interviews.  相似文献

19.
Careful forecasts, as accurate as possible, are central to the successful implementation of policy. There are fundamental reasons why policy makers cannot ‘play by ear’, adjusting policy quickly to each unexpected deviation in economic outcomes. Specific incidents are described where economic policy went awry because of faulty forecasts. The policy process is described in detail to show precisely where the forecast enters. Forecasting as a validation tool for establishing credibility in policy formation is analysed and discussed. Some estimated measure of forecast accuracy is presented, together with commentary on the necessary degrees of precision for successful implementation of policy.  相似文献   

20.
The analytical notions of ‘thought style’, ‘paradigm’, ‘episteme’ and ‘style of reasoning’ are some of the most popular frameworks in the history and philosophy of science. Although their proponents, Ludwik Fleck, Thomas Kuhn, Michel Foucault, and Ian Hacking, are all part of the same philosophical tradition that closely connects history and philosophy, the extent to which they share similar assumptions and objectives is still under debate. In the first part of the paper, I shall argue that, despite the fact that these four thinkers disagree on certain assumptions, their frameworks have the same explanatory goal – to understand how objectivity is possible. I shall present this goal as a necessary element of a common project -- that of historicising Kant's a priori. In the second part of the paper, I shall make an instrumental use of the insights of these four thinkers to form a new model for studying objectivity. I shall also propose a layered diagram that allows the differences between the frameworks to be mapped, while acknowledging their similarities. This diagram will show that the frameworks of style of reasoning and episteme illuminate conditions of possibility that lie at a deeper level than those considered by thought styles and paradigms.  相似文献   

