Similar Documents (20 results)
1.
This paper examines a strategy for structuring one type of domain knowledge for use in extrapolation. It does so by representing information about causality and using this domain knowledge to select and combine forecasts. We use five categories to express causal impacts upon trends: growth, decay, supporting, opposing, and regressing. An identification of causal forces aided in the determination of weights for combining extrapolation forecasts. These weights improved average ex ante forecast accuracy when tested on 104 annual economic and demographic time series. Gains in accuracy were greatest when (1) the causal forces were clearly specified and (2) stronger causal effects were expected, as in longer-range forecasts. One rule suggested by this analysis was: ‘Do not extrapolate trends if they are contrary to causal forces.’ We tested this rule by comparing forecasts from a method that implicitly assumes supporting trends (Holt's exponential smoothing) with forecasts from the random walk. Use of the rule improved accuracy for 20 series where the trends were contrary; the MdAPE (Median Absolute Percentage Error) was 18% less for the random walk on 20 one-year-ahead forecasts and 40% less for 20 six-year-ahead forecasts. We then applied the rule to four other data sets. Here, the MdAPE for the random walk forecasts was 17% less than Holt's error for 943 short-range forecasts and 43% less for 723 long-range forecasts. Our study suggests that the causal assumptions implicit in traditional extrapolation methods are inappropriate for many applications.
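The test behind the rule above — score a trend-extrapolating method against a random walk by MdAPE — can be sketched as follows. The smoothing parameters and the toy "contrary trend" series are illustrative assumptions, not the paper's data:

```python
import numpy as np

def mdape(actual, forecast):
    """Median Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return float(np.median(np.abs((actual - forecast) / actual)) * 100)

def random_walk_forecast(history, horizon):
    """Random walk: repeat the last observed value at every horizon."""
    return np.full(horizon, history[-1], dtype=float)

def holt_forecast(history, horizon, alpha=0.5, beta=0.3):
    """Holt's linear exponential smoothing: extrapolates the local trend."""
    level, trend = history[0], history[1] - history[0]
    for y in history[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend * np.arange(1, horizon + 1)

# A rising history whose trend is contrary to the (assumed) causal force:
# the actuals turn down, so the random walk beats trend extrapolation.
history = np.array([10.0, 11.0, 12.0, 13.0, 14.0])
actual = np.array([13.0, 12.0, 11.0])
rw_err = mdape(actual, random_walk_forecast(history, 3))
holt_err = mdape(actual, holt_forecast(history, 3))
```

On this contrived series `rw_err` comes out well below `holt_err`, mirroring the paper's finding for contrary-trend series.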

2.
The paper examines combined forecasts based on two components: forecasts produced by Chase Econometrics and those produced using the Box-Jenkins ARIMA technique. Six series of quarterly ex ante and simulated ex ante forecasts are used over 37 time periods and ten horizons. The forecasts are combined using seven different methods. The best combined forecasts, judged by average relative root-mean-square error, are superior to the Chase forecasts for three variables and inferior for two, though averaged over all six variables the Chase forecasts are slightly better. A two-step procedure produces forecasts for the last half of the sample which, on average, are slightly better than the Chase forecasts.

3.
Forecasts from quarterly econometric models are typically revised on a monthly basis to reflect the information in current economic data. The revision process usually involves setting targets for the quarterly values of endogenous variables for which monthly observations are available and then altering the intercept terms in the quarterly forecasting model to achieve the target values. A formal statistical approach to the use of monthly data to update quarterly forecasts is described and the procedure is applied to the Michigan Quarterly Econometric Model of the US Economy. The procedure is evaluated in terms of both ex post and ex ante forecasting performance. The ex ante results for 1986 and 1987 indicate that the method is quite promising. With a few notable exceptions, the formal procedure produces forecasts of GNP growth that are very close to the published ex ante forecasts.

4.
This study investigates possible improvements in medium-term VAR forecasting of state retail sales and personal income when the two series are co-integrated and represent an error-correction system. For each of North Carolina and New York, three regional vector autoregression (VAR) models are specified: an unrestricted two-equation model consisting of the two state variables, a five-equation unrestricted model with three national variables added, and a Bayesian (BVAR) version of the second model. For each state, the co-integration and error-correction relationship of the two state variables is verified and an error-correction version of each model specified. Twelve successive ex ante five-year forecasts are then generated for each of the state models. The results show that including an error-correction mechanism when statistically significant improves medium-term forecasting accuracy in every case.

5.
Forecasts for the seven major industrial countries, Canada, France, Germany, Italy, Japan, the United Kingdom and the United States, are published on a regular basis in the OECD's Economic Outlook. This paper analyses the accuracy of the OECD annual forecasts of output and price changes and of the current balance in the balance of payments. As a reference basis, the forecasts are compared with those generated by a naive model, a random walk process. The measures of forecasting accuracy used are the mean-absolute error, the root-mean-square error, the median-absolute error, and Theil's inequality coefficient. The OECD forecasts of real GNP changes are significantly superior to those generated by the random walk process; however, the OECD price and current balance forecasts are not significantly more accurate than those obtained from the naive model. The OECD's forecasting performance has neither improved nor deteriorated over time.
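The four accuracy measures named in this abstract have standard definitions; a minimal sketch follows, using the U1 form of Theil's inequality coefficient (one of several variants in the literature — the paper's exact variant is not specified here):

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(actual, float) - np.asarray(forecast, float))))

def rmse(actual, forecast):
    """Root-mean-square error."""
    return float(np.sqrt(np.mean((np.asarray(actual, float) - np.asarray(forecast, float)) ** 2)))

def mdae(actual, forecast):
    """Median absolute error."""
    return float(np.median(np.abs(np.asarray(actual, float) - np.asarray(forecast, float))))

def theil_u(actual, forecast):
    """Theil's inequality coefficient, U1 form: 0 is a perfect forecast."""
    a = np.asarray(actual, float)
    f = np.asarray(forecast, float)
    num = np.sqrt(np.mean((f - a) ** 2))
    den = np.sqrt(np.mean(f ** 2)) + np.sqrt(np.mean(a ** 2))
    return float(num / den)
```

Each function takes the realized series and the forecast series and returns a scalar, so the same code scores the OECD forecasts and the random-walk benchmark symmetrically.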

6.
This paper examines the effects of combining three econometric and three time-series forecasts of growth and inflation in the U.K. If forecasts are unbiased then a combination exploiting this fact will be more efficient than an unrestricted combination. Ex post econometric forecasts may be biased but ex ante they are unbiased. The results of the study are that a restricted linear combination of the econometric forecasts is superior to an unrestricted combination and also to the unweighted mean of the forecasts. However, it is not preferred to the best of the individual forecasts.

7.
System-based combination weights for series r/step-length h incorporate relative accuracy information from other forecast step-lengths for r and from other series for step-length h. Such weights are examined utilizing the West and Fullerton (1996) data set: 4275 ex ante employment forecasts from structural simultaneous equation econometric models for 19 metropolitan areas at 10 quarterly step-lengths and a parallel set of 4275 ARIMA forecasts. The system-based weights yielded combined forecasts of higher average accuracy and lower risk of large inaccuracy than seven alternative strategies: (1) averaging; (2) relative MSE weights; (3) outperformance (per cent best) weights; (4) Bates and Granger (1969) optimal weights with a convexity constraint imposed; (5) unconstrained optimal weights; (6) select a ‘best’ method (ex ante) by series; and (7) experiment in the Bischoff (1989) sense and select either method (2) or (6) based on the outcome of the experiment. Accuracy gains of the system-based combination were concentrated at step-lengths two to five. Although alternative (5) was generally outperformed, none of the six other alternatives was systematically most accurate when evaluated relative to each other. This contrasts with Bischoff's (1989) results that held promise for an empirically applicable guideline to determine whether or not to combine.
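For two forecasts, the Bates and Granger (1969) optimal weight (alternative 4 above) has a closed form based on the error variances and covariance. A sketch follows, with clipping to [0, 1] standing in for the convexity constraint; this is the textbook two-forecast case, not the paper's system-based scheme:

```python
import numpy as np

def bates_granger_weight(e1, e2):
    """Weight on forecast 1 that minimizes the combined error variance
    for two forecasts with historical error series e1 and e2:
    w1 = (v2 - c12) / (v1 + v2 - 2*c12), clipped to [0, 1]."""
    e1 = np.asarray(e1, float)
    e2 = np.asarray(e2, float)
    v1, v2 = np.mean(e1 ** 2), np.mean(e2 ** 2)
    c12 = np.mean(e1 * e2)
    w1 = (v2 - c12) / (v1 + v2 - 2 * c12)
    return float(np.clip(w1, 0.0, 1.0))

def combine(f1, f2, w1):
    """Convex combination of two forecasts."""
    return w1 * np.asarray(f1, float) + (1 - w1) * np.asarray(f2, float)
```

When one method's historical errors are much smaller (and roughly uncorrelated with the other's), the weight on that method approaches one, so the combination collapses toward the better individual forecast.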

8.
A methodology for estimating high‐frequency values of an unobserved multivariate time series from low‐frequency values of the series and related information is presented in this paper. This is an optimal solution, in the multivariate setting, to the problem of ex post prediction, disaggregation, benchmarking or signal extraction of an unobservable stochastic process. Also, the problem of extrapolation or ex ante prediction is optimally solved and, in this context, statistical tests are developed for checking online the occurrence of extreme values of the unobserved time series and consistency of future benchmarks with the present and past observed information. The procedure is based on structural or unobserved component models, whose assumptions and specification are validated with the data alone. Copyright © 2007 John Wiley & Sons, Ltd.

9.
This paper compares the forecasts of recession and recovery made by five non-government U.K. teams modelling the economy (Cambridge Econometrics, the London Business School, the National Institute of Economic and Social Research, the Cambridge Economic Policy Group and the Liverpool Research Group). The paper concentrates on annual ex ante projections as published over the period 1978-1982, i.e. forecasts made, before the event, of the onset, length, depth and character of the economic recession in the U.K. which began in 1979. The comparison is in terms of year by year changes in production, unemployment, prices and other variables. It concludes that no group was systematically better or worse than other groups (confirming U.S. experience) and that the groups tended to perform better in their chosen areas of specialization, e.g. medium-term groups did better at forecasting the medium-term outcome.

10.
This study addresses for the first time systematic evaluation of a widely used class of forecasts, regional economic forecasts. Ex ante regional structural equation model forecasts are analysed for 19 metropolitan areas. One- to ten-quarter-ahead forecasts are considered and the seven-year sample spans a complete business cycle. Counter to previous speculation in the literature, (1) dependency on macroeconomic forecasting model inputs does not substantially erode accuracy relative to univariate extrapolative methodologies and (2) stochastic time series models do not, on average, yield more accurate regional economic predictions than structural models. Similar to findings in other studies, clear preferences among extrapolative methodologies do not emerge. Most general conclusions, however, are subject to caveats based on step-length effects and region-specific effects.

11.
This paper presents an autoregressive fractionally integrated moving‐average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long‐memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long‐memory model is more efficient than the random walk model in steps‐ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi‐step‐ahead forecasts. This new finding strongly suggests that the long‐memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd.
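The long memory in an ARFIMA model enters through the fractional difference operator (1 − L)^d. A sketch of its power-series expansion follows, independent of any particular estimation routine (the paper itself uses Sowell's exact maximum likelihood, which is not reproduced here):

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the expansion of (1 - L)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply (1 - L)^d to a series with an expanding window of weights."""
    x = np.asarray(x, float)
    w = frac_diff_weights(d, len(x))
    return np.array([np.dot(w[: t + 1], x[t::-1]) for t in range(len(x))])
```

As sanity checks, d = 1 reproduces ordinary first differences and d = 0 leaves the series unchanged; fractional d between 0 and 1 gives the slowly decaying weights that generate long memory.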

12.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or the Akaike (AIC) Information Criteria. Monte Carlo experiments based on combination forecasts constructed using (possibly misspecified) models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other ‘standard’ combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
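One simple way to turn SIC values into combination weights is exponential ("information-criterion") weighting. The sketch below illustrates that scheme under the least-squares form of the SIC; it is an illustrative construction, not necessarily the exact weighting used in the paper:

```python
import numpy as np

def sic(rss, n, k):
    """Schwarz Information Criterion for a least-squares model with
    residual sum of squares rss, n observations and k parameters:
    n * ln(rss / n) + k * ln(n)."""
    return float(n * np.log(rss / n) + k * np.log(n))

def ic_weights(ics):
    """Exponential weights from information criteria:
    w_i proportional to exp(-(IC_i - IC_min) / 2), normalized to sum to 1."""
    ics = np.asarray(ics, float)
    delta = ics - ics.min()
    w = np.exp(-delta / 2)
    return w / w.sum()
```

The model with the smallest SIC receives the largest weight, and extra parameters are penalized through the k·ln(n) term, so overfit models are down-weighted automatically.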

13.
In this study the interaction of forecasting method (econometric versus exponential smoothing) and two situational factors are evaluated for their effects upon accuracy. Data from two independent sets of ex ante quarterly forecasts for 19 classes of mail were used to test hypotheses. Counter to expectations, the findings revealed that forecasting method did not interact with the forecast time horizon (short versus long term). However, as hypothesized, forecasting method interacted significantly with product/market definition (First Class versus other mail), an indicator of buyer sensitivity to marketing/environmental changes. Results are discussed in the context of future research on forecast accuracy.

14.
This paper proposes an adjustment of linear autoregressive conditional mean forecasts that exploits the predictive content of uncorrelated model residuals. The adjustment is motivated by non‐Gaussian characteristics of model residuals, and implemented in a semiparametric fashion by means of conditional moments of simulated bivariate distributions. A pseudo ex ante forecasting comparison is conducted for a set of 494 macroeconomic time series recently collected by Dees et al. (Journal of Applied Econometrics 2007; 22: 1–38). In total, 10,374 time series realizations are contrasted against competing short‐, medium‐ and longer‐term purely autoregressive and adjusted predictors. With regard to all forecast horizons, the adjusted predictions consistently outperform conditionally Gaussian forecasts according to cross‐sectional mean group evaluation of absolute forecast errors and directional accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

15.
16.
There exists theoretical and empirical evidence on the efficiency and robustness of Non-negativity Restricted Least Squares combinations of forecasts. However, the computational complexity of the method hinders its widespread use in practice. We examine various optimizing and heuristic computational algorithms for estimating NRLS combination models and provide certain CPU-time reducing implementations. We empirically compare the combination weights identified by the alternative algorithms and their computational demands based on a total of more than 66,000 models estimated to combine the forecasts of 37 firm-specific accounting earnings series. The ex ante prediction accuracies of combined forecasts from the optimizing versus heuristic algorithms are compared. The effects of fit sample size, model specification, multicollinearity, correlations of forecast errors, and series and forecast variances on the relative accuracy of the optimizing versus heuristic algorithms are analysed. The results reveal that, in general, the computationally simple heuristic algorithms perform as well as the optimizing algorithms. No generalizable conclusions could be reached, however, about which algorithm should be used based on series and forecast characteristics. © 1997 John Wiley & Sons, Ltd.
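The NRLS combination the abstract describes can be posed as a non-negative least squares problem: regress the actual series on the individual forecasts with weights constrained to be non-negative. A minimal sketch using SciPy's NNLS solver follows (the paper's own optimizing and heuristic algorithms are not reproduced here):

```python
import numpy as np
from scipy.optimize import nnls

def nrls_combination_weights(forecasts, actual):
    """Non-negativity Restricted Least Squares combination weights:
    solve min_w || actual - F w ||^2 subject to w >= 0, where each
    column of F holds one individual forecast series."""
    F = np.column_stack([np.asarray(f, float) for f in forecasts])
    w, _residual_norm = nnls(F, np.asarray(actual, float))
    return w
```

If one forecast tracks the actuals exactly, NNLS places all weight on it and zero on the rest, which illustrates why non-negativity acts as a built-in guard against unstable negative weights.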

17.
Using the method of ARIMA forecasting with benchmarks developed in this paper, it is possible to obtain forecasts which take into account the historical information of a series, captured by an ARIMA model (Box and Jenkins, 1970), as well as partial prior information about the forecasts. Prior information takes the form of benchmarks. These originate from the advice of experts, from forecasts of an annual econometric model or simply from pessimistic, realistic or optimistic scenarios contemplated by the analyst of the current economic situation. The benchmarks may represent annual levels to be achieved, neighbourhoods to be reached for a given time period, movements to be displayed or more generally any linear criteria to be satisfied by the forecasted values. The forecaster may then exercise current economic evaluation and judgement to the fullest extent in deriving forecasts, avoiding the laborious adjustments required in the absence of a systematic method.

18.
In this paper we suggest a framework to assess the degree of reliability of provisional estimates as forecasts of final data, and we re‐examine the question of the most appropriate way in which available data should be used for ex ante forecasting in the presence of a data‐revision process. Various desirable properties for provisional data are suggested, as well as procedures for testing them, taking into account the possible non‐stationarity of economic variables. For illustration, the methodology is applied to assess the quality of the US M1 data production process and to derive a conditional model whose performance in forecasting is then tested against other alternatives based on simple transformations of provisional data or of past final data. Copyright © 1999 John Wiley & Sons, Ltd.

19.
In this paper, we forecast real house price growth of 16 OECD countries using information from domestic macroeconomic indicators and global measures of the housing market. Consistent with the findings for the US housing market, we find that the forecasts from an autoregressive model dominate the forecasts from the random walk model for most of the countries in our sample. More importantly, we find that the forecasts from a bivariate model that includes economically important domestic macroeconomic variables and two global indicators of the housing market significantly improve upon the univariate autoregressive model forecasts. Among all the variables, the model with the country's domestic interest rates yields the lowest mean square forecast error for most of the countries. The country's income, industrial production, and stock markets are also found to have valuable information about the future movements in real house price growth. There is also some evidence supporting the influence of the global housing price growth in out‐of‐sample forecasting of real house price growth in these OECD countries.

20.
The use of correlation between forecasts and actual returns is commonplace in the literature, often used as a measurement of investors' skill. A prominent application of this is the concept of the information coefficient (IC). Not only can the IC be used as a tool to rate analysts and fund managers but it also represents an important parameter in the asset allocation and portfolio construction process. Nevertheless, a theoretical understanding of it has typically been limited to the partial equilibrium context where the investing activities of each agent have no effect on other market participants. In this paper we show that this can be an undesirable oversimplification and we demonstrate plausible circumstances in which conventional empirical measurements of IC can be highly misleading. We suggest that improved understanding of IC in a general equilibrium setting can lead to refined portfolio decision making ex ante and more informative analysis of performance ex post. Copyright © 2015 John Wiley & Sons, Ltd.

