Similar Documents
1.
In this paper we investigate the feasibility of algorithmically deriving precise probability forecasts from imprecise forecasts. We provide an empirical evaluation of precise probabilities that have been derived from two types of imprecise probability forecasts: probability intervals and probability intervals with second-order probability distributions. The minimum cross-entropy (MCE) principle is applied to the former to derive precise (i.e. additive) probabilities; expectation (EX) is used to derive precise probabilities in the latter case. Probability intervals that were constructed without second-order probabilities tended to be narrower than and contained in those that were amplified by second-order probabilities. Evidence that this narrowness is due to motivational bias is presented. Analysis of forecasters' mean Probability Scores for the derived precise probabilities indicates that it is possible to derive precise forecasts whose external correspondence is as good as directly assessed precise probability forecasts. The forecasts of the EX method, however, are more like the directly assessed precise forecasts than those of the MCE method.
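The MCE step can be sketched concretely. With a uniform reference distribution, minimum cross-entropy picks the most nearly uniform additive probability vector that respects the interval bounds, which reduces to clipping a common level into each interval and bisecting on that level until the clipped values sum to one. This is an illustrative sketch under that uniform-reference assumption (the function name and interval values are our own), not the paper's implementation:

```python
def mce_point_probs(intervals, tol=1e-10):
    """Derive additive point probabilities from probability intervals by
    maximizing entropy (minimum cross-entropy against a uniform reference).
    The optimum has the form p_i = clip(t, low_i, high_i) for a common
    level t, found here by bisection so that the p_i sum to one."""
    if sum(lo for lo, hi in intervals) > 1 or sum(hi for lo, hi in intervals) < 1:
        raise ValueError("intervals admit no additive probability vector")
    a, b = 0.0, 1.0
    while b - a > tol:
        t = (a + b) / 2
        # clipped sum is nondecreasing in t, so bisection applies
        if sum(min(max(t, lo), hi) for lo, hi in intervals) < 1:
            a = t
        else:
            b = t
    t = (a + b) / 2
    return [min(max(t, lo), hi) for lo, hi in intervals]
```

For intervals [(0.1, 0.2), (0.2, 0.5), (0.1, 0.6)] the level settles at t = 0.4, giving the additive vector (0.2, 0.4, 0.4).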

2.
In this paper we make an empirical investigation of the relationship between the consistency, coherence and validity of probability judgements in a real-world forecasting context. Our results indicate that these measures of the adequacy of an individual's probability assessments are not as closely related as we anticipated. Twenty-nine of our thirty-six subjects were better calibrated in point probabilities than in odds, and our subjects were, in general, more coherent using point probabilities than odds forecasts. Contrary to our expectations, we found very little difference in forecasting response and performance between simple and compound holistic forecasts. This result is evidence against the ‘divide-and-conquer’ rationale underlying most applications of normative decision theory. In addition, our recompositions of marginal and conditional assessments into compound forecasts were no better calibrated or resolved than their holistic counterparts. These findings convey two implications for forecasting. First, untrained judgemental forecasters should use point probabilities in preference to odds. Second, judgemental forecasts of complex compound probabilities may be as well assessed holistically as they are using methods of decomposition and recomposition. In addition, our study provides a paradigm for further studies of the relationship between consistency, coherence and validity in judgemental probability forecasting.
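The calibration and validity comparisons in this literature rest on a proper scoring rule; the mean Probability Score is the Brier score. A minimal sketch (names and data are illustrative, not the study's own):

```python
def brier_score(probs, outcomes):
    """Mean probability (Brier) score: the average squared difference
    between each forecast probability and the realized 0/1 outcome.
    Lower is better; 0 means perfect sharp forecasts."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

A forecaster who always says 0.5 scores 0.25 regardless of outcomes, which is why calibration alone does not capture forecast validity.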

3.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
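Pitman's test for comparing paired forecast errors can be stated compactly: two forecasts have equal expected squared error exactly when the sums and differences of their paired errors are uncorrelated, so the test statistic is that correlation. The sketch below computes the statistic only (the significance assessment against its null distribution is omitted, and names and data are illustrative, not the paper's lynx results):

```python
def pitman_corr(e1, e2):
    """Correlation between (e1 + e2) and (e1 - e2) for two paired error
    series; near zero under the null of equal forecast accuracy.
    Assumes neither series of sums nor differences is constant."""
    s = [a + b for a, b in zip(e1, e2)]
    d = [a - b for a, b in zip(e1, e2)]
    ms, md = sum(s) / len(s), sum(d) / len(d)
    cov = sum((x - ms) * (y - md) for x, y in zip(s, d))
    vs = sum((x - ms) ** 2 for x in s)
    vd = sum((y - md) ** 2 for y in d)
    return cov / (vs * vd) ** 0.5
```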

4.
Multi-process models are particularly useful when observations appear extreme relative to their forecasts, because they allow for explanations of any behaviour of a time series by considering several generating sources simultaneously. In this paper, the multi-process approach is extended by developing a dynamic procedure to assess the weights of the various sources, that is, the prior probabilities of the rival models that compete in the collection to make forecasts. The new criterion helps the forecasting system learn the most plausible scenarios for the time series by treating all combinations of consecutive models as a function of the magnitude of the one-step-ahead forecast error. Throughout the paper, the different treatments of outliers and structural changes are highlighted using the concepts of robustness and sensitivity. Finally, the dynamic selection procedure is tested on the CP6 dataset, showing an effective improvement in the overall predictive ability of multi-process models whenever anomalous observations occur. © 1997 John Wiley & Sons, Ltd.

5.
Ashley (Journal of Forecasting 1983; 2(3): 211–223) proposes a criterion (known as Ashley's index) to judge whether external macroeconomic variables are forecast well enough to serve as explanatory variables in forecasting models, which is crucial for policy makers. In this article, we extend Ashley's work by providing three testing procedures: a ratio‐based test, a difference‐based test, and a Bayesian approach. The Bayesian approach has the advantage of flexibly adapting to all available information within a decision‐making environment, such as a change in a variable's definition due to the evolving system of national accounts. We demonstrate the proposed methods by applying them to six macroeconomic forecasts from the Survey of Professional Forecasters. Researchers and practitioners can thus formally test whether the external information is helpful. Copyright © 2010 John Wiley & Sons, Ltd.

6.
Forecasting methods are often evaluated by means of simulation studies. For intermittent demand items there are often very few non-zero observations, so it is hard to check any assumptions, because the statistical information is often too weak to determine, for example, the distribution of a variable. Therefore, it seems important to verify forecasting methods on the basis of real data. The main aim of the article is an empirical verification of several forecasting methods applicable to intermittent demand. Some items are sold only in specific subperiods (in a given month of each year, for example), but most forecasting methods (such as Croston's method) give non-zero forecasts for all periods. For example, summer work clothes should have non-zero forecasts only for summer months, yet many methods will usually provide non-zero forecasts for all months under consideration. This was the motivation for proposing and testing a new forecasting technique applicable to seasonal items. In the article six methods were applied to construct separate forecasting systems: Croston's, SBA (Syntetos-Boylan Approximation), TSB (Teunter, Syntetos, Babai), MA (Moving Average), SES (Simple Exponential Smoothing) and SESAP (Simple Exponential Smoothing for Analogous subPeriods). The latter method (SESAP) is the authors' own proposal, dedicated to companies facing the problem of seasonal items. Analogous subperiods are understood to be the same subperiods in each year, for example the same months in each year. A data set from a real company, containing monthly time series for about nine thousand products, was used to apply all of the above forecasting procedures. Forecast accuracy was tested by means of both parametric and non-parametric measures: the scaled mean error and the scaled root mean squared error were used to check bias and efficiency, and the mean absolute scaled error and the shares of best forecasts were also estimated.
The general conclusion is that in the analyzed company a forecasting system should be based on two forecasting methods, TSB and SESAP, with the latter applied only to seasonal items (products sold only in specific subperiods). It also turned out that Croston's and SBA methods work worse than much simpler methods, such as SES or MA. The presented analysis may be helpful for enterprises facing the problem of forecasting intermittent items (including seasonal intermittent items).
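As a point of reference for the methods compared above, Croston's method smooths the non-zero demand sizes and the intervals between them separately and forecasts their ratio as the per-period demand rate. A minimal sketch (function name, smoothing constant, and demand series are illustrative assumptions, not the article's implementation):

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: apply simple exponential
    smoothing separately to non-zero demand sizes (z) and to inter-demand
    intervals (p), then forecast the demand rate as z / p."""
    z = p = None
    q = 1  # periods elapsed since the last non-zero demand
    for d in demand:
        if d > 0:
            if z is None:          # initialize at the first non-zero demand
                z, p = d, q
            else:
                z += alpha * (d - z)
                p += alpha * (q - p)
            q = 1
        else:
            q += 1
    return z / p if z is not None else 0.0
```

Note how the forecast stays non-zero for every future period once any demand has been seen, which is exactly the behaviour that motivates the SESAP proposal for items sold only in specific subperiods.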

7.
Recent studies have shown that composite forecasting produces superior forecasts when compared to individual forecasts. This paper extends the existing literature by employing linear constraints and robust regression techniques in composite model building. Security analysts' forecasts may be improved when combined with time series forecasts for a diversified sample of 261 firms with a 1980-1982 post-sample estimation period. The mean square error of analyst forecasts may be reduced by combining analyst and univariate time series model forecasts in constrained and unconstrained ordinary least squares regression models. These reductions are particularly interesting given that the univariate time series model forecasts do not deviate substantially from those produced by ARIMA (0,1,1) processes. Moreover, security analysts' forecast errors may be significantly reduced when constrained and unconstrained robust regression analyses are employed.
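For two forecasts, the sum-to-one constraint used in constrained combination regressions has a convenient closed form: regressing (actual − f2) on (f1 − f2) yields the constrained weight directly. A minimal sketch of that special case (names and data are illustrative, not the paper's 261-firm analyst setup, and the robust-regression variants are not reproduced):

```python
def combine_two(actual, f1, f2):
    """Combine two forecast series with weights constrained to sum to one,
    choosing w to minimize the in-sample squared error of w*f1 + (1-w)*f2.
    Equivalent to OLS of (actual - f2) on (f1 - f2) with no intercept."""
    num = sum((a - b) * (x - b) for a, x, b in zip(actual, f1, f2))
    den = sum((x - b) ** 2 for x, b in zip(f1, f2))
    w = num / den
    return w, [w * x + (1 - w) * b for x, b in zip(f1, f2)]
```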

8.
When managers make revisions to sales forecasts initially generated by a rational quantitative model, it is important that the particular forecasts selected for adjustment are those which would benefit most from the adjustment process (i.e. those with high errors). This study reports an empirical investigation of this issue, spanning six quarterly forecasting periods and incorporating forecasting data on over 850 products. The results show that the errors of the forecasts chosen for revision are, in general, higher than those which were not chosen. In addition, it is shown that managers tend to revise forecasts which are initially low, hence possibly introducing some degree of bias into the overall forecasts.

9.
This paper describes procedures for forecasting countries' output growth rates and medians of a set of output growth rates using Hierarchical Bayesian (HB) models. The purpose of this paper is to show how the γ‐shrinkage forecast of Zellner and Hong (1989) emerges from a hierarchical Bayesian model and to describe how the Gibbs sampler can be used to fit this model to yield possibly improved output growth rate and median output growth rate forecasts. The procedures described in this paper offer two primary methodological contributions to previous work on this topic: (1) the weights associated with widely‐used shrinkage forecasts are determined endogenously, and (2) the posterior predictive density of the future median output growth rate is obtained numerically, from which optimal point and interval forecasts are calculated. Using IMF data, we find that the HB median output growth rate forecasts outperform forecasts obtained from a variety of benchmark models. Copyright © 2001 John Wiley & Sons, Ltd.
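The γ-shrinkage forecast referred to above has a simple form: each country's individual forecast is pulled toward the cross-country average, with γ = 1 corresponding to full pooling. A sketch of that form only (the γ value and data are illustrative; the HB model in the paper determines the shrinkage weight endogenously rather than fixing it):

```python
def gamma_shrinkage(individual, gamma=0.8):
    """Zellner-Hong style shrinkage: pull each country's individual
    forecast toward the cross-country grand mean by a factor gamma."""
    grand = sum(individual) / len(individual)
    return [(1 - gamma) * f + gamma * grand for f in individual]
```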

10.
Tests of forecast encompassing are used to evaluate one‐step‐ahead forecasts of S&P Composite index returns and volatility. It is found that forecasts over the 1990s made from models that include macroeconomic variables tend to be encompassed by those made from a benchmark model which does not include macroeconomic variables. However, macroeconomic variables are found to add significant information to forecasts of returns and volatility over the 1970s. Often in empirical research on forecasting stock index returns and volatility, in‐sample information criteria are used to rank potential forecasting models. Here, none of the forecasting models for the 1970s that include macroeconomic variables are, on the basis of information criteria, preferred to the relevant benchmark specification. Thus, had investors used information criteria to choose between the models used for forecasting over the 1970s considered in this paper, the predictability that tests of encompassing reveal would not have been exploited. Copyright © 2005 John Wiley & Sons, Ltd.

11.
Earnings forecasts have received a great deal of attention, much of which has centered on the comparative accuracy of judgmental and objective forecasting methods. Recently, studies have focused on the use of combinations of subjective and objective forecasts to improve forecast accuracy. This research extends this theme by subjectively modifying an objective forecast. Specifically, ARIMA forecasts are judgmentally adjusted by analysts using a structured approach based on Saaty's (1980) analytic hierarchy process. The results show that the accuracy of the unadjusted objective forecasts can be improved by judgmental adjustment.

12.
We consider a forecasting problem that arises when an intervention is expected to occur in an economic system during the forecast horizon. The time series model employed is seen as a statistical device that serves to capture the empirical regularities of the observed data on the variables of the system without relying on a particular theoretical structure. Either the deterministic or the stochastic structure of a vector autoregressive error correction model of the system is assumed to be affected by the intervention. The information about the intervention effect is provided solely by linear restrictions imposed on the future values of the variables involved. Formulas for restricted forecasts with intervention effects and their mean squared errors are derived as a particular case of Catlin's static updating theorem. An empirical illustration uses Mexican macroeconomic data on five variables, and the restricted forecasts consider targets for the years 2011–2014. Copyright © 2013 John Wiley & Sons, Ltd.

13.
This paper addresses the issue of forecasting individual items within a product line, where each line includes several independent but closely related products. The purpose of the research was to reduce the overall forecasting burden by developing and assessing schemes for disaggregating forecasts of a total product line to the related individual items. Measures were developed to determine appropriate disaggregation methodologies and to compare the forecast accuracy of individual product forecasts versus disaggregated totals. Several of the procedures used were based upon extensions of combination-of-forecasts research, applied to disaggregations of total forecasts of product lines. The objective was to identify situations in which it was advantageous to produce disaggregated forecasts and, if advantageous, which method of disaggregation to utilize. This involved identifying the general conceptual characteristics within a set of product line data that might cause a disaggregation method to produce relatively accurate forecasts. These conceptual characteristics provide guidelines for forecasters on how to select a disaggregation method and under what conditions a particular method is applicable.

14.
Forecasts for the seven major industrial countries, Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, are published on a regular basis in the OECD's Economic Outlook. This paper analyses the accuracy of the OECD annual forecasts of output and price changes and of the current balance in the balance of payments. As a reference basis, the forecasts are compared with those generated by a naive model, a random walk process. The measures of forecasting accuracy used are the mean-absolute error, the root-mean-square error, the median-absolute error, and Theil's inequality coefficient. The OECD forecasts of real GNP changes are significantly superior to those generated by the random walk process; however, the OECD price and current balance forecasts are not significantly more accurate than those obtained from the naive model. The OECD's forecasting performance has neither improved nor deteriorated over time.
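Theil's inequality coefficient has more than one definition in the literature; the sketch below uses the common form that compares forecast RMSE to the RMSE of a no-change random walk, matching the naive reference model in the abstract. U < 1 means the forecast beats the naive model (names and data are illustrative, not the OECD series):

```python
import math

def theil_u(actual, forecast):
    """Theil's U: RMSE of the forecast divided by the RMSE of a no-change
    (random-walk) forecast over the same periods; U < 1 beats naive."""
    f_err = [(f - a) ** 2 for f, a in zip(forecast[1:], actual[1:])]
    n_err = [(actual[t - 1] - actual[t]) ** 2 for t in range(1, len(actual))]
    return math.sqrt(sum(f_err) / len(f_err)) / math.sqrt(sum(n_err) / len(n_err))
```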

15.
The literature on combining forecasts has almost exclusively focused on combining point forecasts. The issues and methods of combining ordinal forecasts have not yet been fully explored, even though ordinal forecasting has many practical applications in business and social research. In this paper, we consider the case of forecasting the movement of the stock market, which has three possible states (bullish, bearish and sluggish). Given the sample of states predicted by different forecasters, several statistical and operations research methods can be applied to determine the optimal weight assigned to each forecaster in combining the ordinal forecasts. The performance of these methods is examined using Hong Kong stock market forecasting data, and their accuracies are found to be better than the consensus method and individual forecasts.
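Once per-forecaster weights have been estimated, the combination step itself is a weighted vote over the ordinal states. A minimal sketch of that step (state names and weights are illustrative; the paper's statistical and operations-research methods for choosing the weights are not reproduced here, and equal weights recover the consensus method used as the benchmark):

```python
from collections import Counter

def combine_ordinal(forecasts, weights):
    """Weighted-vote combination of ordinal forecasts: each forecaster's
    predicted state receives that forecaster's weight, and the state with
    the largest total weight is the combined forecast."""
    tally = Counter()
    for state, w in zip(forecasts, weights):
        tally[state] += w
    return max(tally, key=tally.get)
```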

16.
Recently, analysts' cash flow forecasts have become widely available through financial information services. Cash flow information enables practitioners to better understand the real operating performance and financial stability of a company, particularly when earnings information is noisy and of low quality. However, research suggests that analysts' cash flow forecasts are less accurate and more dispersed than earnings forecasts. We thus investigate factors influencing cash flow forecast accuracy and build a practical model to distinguish more accurate from less accurate cash flow forecasters, using past cash flow forecast accuracy and analyst characteristics. We find significant power in our cash flow forecast accuracy prediction models. We also find that analysts develop cash flow‐specific forecasting expertise and know-how, which are distinct from those that analysts acquire from forecasting earnings. In particular, cash flow‐specific information is more useful in identifying accurate cash flow forecasters than earnings‐specific information. Copyright © 2011 John Wiley & Sons, Ltd.

17.
Category management—a relatively new function in marketing—involves large-scale, real-time forecasting of multiple data series in complex environments. In this paper, we illustrate how Bayesian vector autoregression (BVAR) fulfils the category manager's decision-support requirements by providing accurate forecasts of a category's state variables (prices, volumes and advertising levels), incorporating management interventions (merchandising events such as end-aisle displays), and revealing competitive dynamics through impulse response analyses. Using 124 weeks of point-of-sale scanner data comprising 31 variables for four brands, we compare the out-of-sample forecasts from BVAR to forecasts from exponential smoothing, univariate and multivariate Box-Jenkins transfer function analyses, and multivariate ARMA models. Theil's U statistics indicate that BVAR forecasts are superior to those from the alternative approaches. In large-scale forecasting applications, BVAR's ease of identification and parsimonious use of degrees of freedom are particularly valuable.

18.
This paper is concerned primarily with the evaluation and comparison of objective and subjective weather forecasts. Operational forecasts of three weather elements are considered: (1) probability forecasts of precipitation occurrence, (2) categorical (i.e. non-probabilistic) forecasts of maximum and minimum temperatures and (3) categorical forecasts of cloud amount. The objective forecasts are prepared by numerical-statistical procedures, whereas the subjective forecasts are based on the judgements of individual forecasters. In formulating the latter, the forecasters consult information from a variety of sources, including the objective forecasts themselves. The precipitation probability forecasts are found to be both reliable and skilful, and evaluation of the temperature/cloud amount forecasts reveals that they are quite accurate/skilful. Comparison of the objective and subjective forecasts of precipitation occurrence indicates that the latter are generally more skilful than the former for shorter lead times (e.g. 12–24 hours), whereas the two types of forecasts are of approximately equal skill for longer lead times (e.g. 36–48 hours). Similar results are obtained for the maximum and minimum temperature forecasts. Objective cloud amount forecasts are more skilful than subjective cloud amount forecasts for all lead times. Examination of trends in performance over the last decade reveals that both types of forecasts for all three elements increased in skill (or accuracy) over the period, with improvements in objective forecasts equalling or exceeding improvements in subjective forecasts. The role and impact of the objective forecasts in the subjective weather forecasting process are discussed in some detail. The need to conduct controlled experiments and other studies of this process, with particular reference to the assimilation of information from different sources, is emphasized. 
Important characteristics of the forecasting system in meteorology are identified, and they are used to describe similarities and differences between weather forecasting and forecasting in other fields. Acquisition of some of these characteristics may be beneficial to other forecasting systems.

19.
Time-series data are often contaminated with outliers due to the influence of unusual and non-repetitive events. Forecast accuracy in such situations is reduced due to (1) a carry-over effect of the outlier on the point forecast and (2) a bias in the estimates of model parameters. Hillmer (1984) and Ledolter (1989) studied the effect of additive outliers on forecasts. It was found that forecast intervals are quite sensitive to additive outliers, but that point forecasts are largely unaffected unless the outlier occurs near the forecast origin, in which case the carry-over effect of the outlier can be quite substantial. In this study, we investigate the issues of forecasting when outliers occur near or at the forecast origin. We propose a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage. One aspect of this study concerns the carry-over effects of outliers on forecasts. Four types of outliers are considered: innovational outlier, additive outlier, temporary change, and level shift. The effects due to a misidentification of an outlier type are examined. The performance of the outlier detection procedure is studied for cases where outliers are near the end of the series. In such cases, we demonstrate that statistical procedures may not be able to effectively determine the outlier types due to insufficient information. Some strategies are recommended to reduce potential difficulties caused by incorrectly detected outlier types. These findings may serve as a justification for forecasting in conjunction with judgment. Two real examples are employed to illustrate the issues discussed.

20.
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately, the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
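The AIC/BIC preference discussed above can be made concrete for Gaussian models fit by least squares, where (up to an additive constant) both criteria reduce to a log-likelihood term plus a complexity penalty; BIC's penalty k·log(n) exceeds AIC's 2k once n ≥ 8, which is why BIC favors smaller models in within-sample comparisons. A sketch under that Gaussian least-squares assumption (variable names and the numbers in the test are our own):

```python
import math

def aic(n, rss, k):
    """AIC for a Gaussian model fit by least squares, up to a constant:
    n*log(rss/n) + 2k, where k counts estimated parameters."""
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """BIC replaces AIC's 2k penalty with k*log(n), penalizing extra
    parameters more heavily whenever n >= 8; lower is better."""
    return n * math.log(rss / n) + k * math.log(n)
```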
