Similar articles (20 results)
1.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term horizons for series with no moving‐average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
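The rolling‐window exercise described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: an OLS‐estimated AR(1) stands in for the paper's ARIMA specifications, and the last observed value serves as a naive benchmark; the function name and window settings are ours.

```python
import numpy as np

def rolling_forecasts(y, window, horizon):
    """Rolling-window out-of-sample forecasts: refit an AR(1) by OLS on each
    window and iterate it `horizon` steps ahead; the naive (random-walk style)
    forecast is simply the last observed value in the window."""
    ar_fc, naive_fc, actual = [], [], []
    for start in range(len(y) - window - horizon + 1):
        train = y[start:start + window]
        x, z = train[:-1], train[1:]
        X = np.column_stack([np.ones(len(x)), x])
        c, phi = np.linalg.lstsq(X, z, rcond=None)[0]   # OLS intercept and slope
        f = train[-1]
        for _ in range(horizon):                        # iterate h steps ahead
            f = c + phi * f
        ar_fc.append(f)
        naive_fc.append(train[-1])
        actual.append(y[start + window + horizon - 1])  # realized value h steps out
    ar_fc, naive_fc, actual = map(np.array, (ar_fc, naive_fc, actual))
    rmse = lambda e: np.sqrt(np.mean(e ** 2))
    return rmse(ar_fc - actual), rmse(naive_fc - actual)

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=500))      # synthetic I(1) "exchange rate" series
print(rolling_forecasts(y, window=200, horizon=10))
```

Each window produces one forecast per model, so accuracy comparisons rest on many out‐of‐sample errors rather than a single forecast path.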

2.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Of specific importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on ‘trending’ procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by ‘detrending’ the trended forecasts, and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy in Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.
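For reference, the symmetric Henderson moving‐average weights used in such trending procedures have a standard closed form; a sketch (the function name is ours):

```python
import numpy as np

def henderson_weights(length):
    """Symmetric Henderson moving-average weights for an odd filter length,
    via the standard closed-form expression (the filter reproduces cubic
    trends, so the weights sum to one)."""
    p = (length - 1) // 2
    n = p + 2
    j = np.arange(-p, p + 1)
    num = (315 * ((n - 1) ** 2 - j ** 2) * (n ** 2 - j ** 2)
           * ((n + 1) ** 2 - j ** 2) * (3 * n ** 2 - 16 - 11 * j ** 2))
    den = (8 * n * (n ** 2 - 1) * (4 * n ** 2 - 1)
           * (4 * n ** 2 - 9) * (4 * n ** 2 - 25))
    return num / den

w = henderson_weights(13)   # the 13-term filter common for monthly data
# Centre weight is about 0.240; weights are symmetric and sum to one
```

Applying these weights as a centred moving average gives the smoothed ("trended") series whose forecastability the paper examines.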

3.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
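The OLS-versus-RW trade-off can be illustrated by a small Monte Carlo sketch under the local to unity parameterization ρ = 1 + c/T (the simulation settings, no-intercept OLS fit and function name are illustrative assumptions, not the paper's derivations):

```python
import numpy as np

def forecast_mse(c, T, h, n_sims=500, seed=0):
    """Monte Carlo h-step forecast MSE for an OLS-estimated AR(1) versus the
    random walk, when the true root is local to unity: rho = 1 + c/T."""
    rng = np.random.default_rng(seed)
    rho = 1 + c / T
    mse_ols = mse_rw = 0.0
    for _ in range(n_sims):
        e = rng.normal(size=T + h)
        y = np.zeros(T + h)
        for t in range(1, T + h):
            y[t] = rho * y[t - 1] + e[t]
        x, z = y[:T - 1], y[1:T]
        rho_hat = (x @ z) / (x @ x)                       # OLS slope, no intercept
        mse_ols += (rho_hat ** h * y[T - 1] - y[T - 1 + h]) ** 2
        mse_rw += (y[T - 1] - y[T - 1 + h]) ** 2          # RW forecast = last value
    return mse_ols / n_sims, mse_rw / n_sims

print(forecast_mse(c=-5.0, T=100, h=20))
```

Varying `c` and `h` traces out the kind of threshold the paper characterizes analytically: for a given drift, one model's expected squared error overtakes the other's as the horizon lengthens.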

4.
In practical econometric forecasting exercises, incomplete data on current and immediate past values of endogenous variables are available. This paper considers various approaches to this ‘ragged edge’ problem, including the common device of treating as ‘temporarily exogenous’ an endogenous variable whose value is known, by deleting it from the set of endogenous variables for whose forecast values the model is solved and suppressing the corresponding structural equation. It is seen that this forecast can be adjusted to coincide with the optimal forecast. The initial discussion concerns the textbook linear simultaneous equation model; extensions to non-linear dynamic models are described.

5.
In order to provide short‐run forecasts of headline and core HICP inflation for France, we assess the forecasting performance of a large set of economic indicators, individually and jointly, as well as using dynamic factor models. We run out‐of‐sample forecasts implementing the Stock and Watson (1999) methodology. We find that, according to usual statistical criteria, the combination of several indicators—in particular those derived from surveys—provides better results than factor models, even after pre‐selection of the variables included in the panel. However, factors included in VAR models exhibit more stable forecasting performance over time. Results for the HICP excluding unprocessed food and energy are very encouraging. Moreover, we show that the aggregation of forecasts on subcomponents exhibits the best performance for projecting total inflation and that it is robust to data snooping. Copyright © 2007 John Wiley & Sons, Ltd.
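A minimal sketch of Stock–Watson-style factor extraction by principal components, on synthetic data (the function name, panel dimensions and signal structure are ours, not from the paper):

```python
import numpy as np

def first_factor(panel):
    """Extract the first principal-component factor from a (T x N) panel of
    indicators, after standardizing each series."""
    z = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    u, s, vt = np.linalg.svd(z, full_matrices=False)
    return u[:, 0] * s[0]            # first factor (length T), sign arbitrary

rng = np.random.default_rng(1)
T, N = 120, 30
common = rng.normal(size=T)                               # latent common factor
panel = np.outer(common, rng.normal(size=N)) + rng.normal(size=(T, N))
f = first_factor(panel)
# The extracted factor should be highly correlated (up to sign) with `common`
```

The estimated factor would then enter a forecasting regression for inflation alongside, or instead of, the individual indicators.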

6.
This paper considers univariate and multivariate models to forecast monthly conflict events in the Sudan over the out‐of‐sample period 2009–2012. The models used to generate these forecasts were based on a specification from a machine learning algorithm fit to 2000–2008 monthly data. The model that includes the previous month's wheat price performs better than a similar model which does not include past wheat prices (the univariate model). Neither model performed well in forecasting conflict in a neighborhood of the 2012 ‘Heglig crisis’. Copyright © 2015 John Wiley & Sons, Ltd.

7.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
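The two volatility proxies contrasted above can be sketched as follows, on illustrative synthetic intraday returns (the variable names and sampling frequency are ours):

```python
import numpy as np

def realized_variance(intraday_returns):
    """Daily realized variance: the cumulative squared intra-day returns,
    the less noisy 'true volatility' proxy described above."""
    return np.sum(np.asarray(intraday_returns) ** 2)

rng = np.random.default_rng(2)
true_var = 0.01 ** 2                     # assumed daily return variance
m = 288                                  # e.g. 5-minute returns in a 24h FX day
r = rng.normal(0, np.sqrt(true_var / m), size=m)

rv = realized_variance(r)                # intraday-based proxy
daily_sq = r.sum() ** 2                  # noisy proxy: squared daily return
```

With many intraday observations the realized variance concentrates tightly around the true daily variance, while the squared daily return is an unbiased but very noisy single draw, which is why the choice of proxy changes the apparent ranking of GARCH against smoothing methods.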

8.
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short‐term interest rates from October 2008. Out‐of‐sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson–Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium‐ to longer‐term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near‐zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson–Siegel models. Copyright © 2014 John Wiley & Sons, Ltd.
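The cross-sectional building block of the dynamic Nelson–Siegel model is the standard Nelson–Siegel yield function, with level, slope and curvature loadings; a sketch (parameter values are illustrative):

```python
import numpy as np

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (in years): level (beta0),
    slope (beta1) and curvature (beta2) factors with decay parameter lam."""
    tau = np.asarray(tau, dtype=float)
    x = lam * tau
    slope = (1 - np.exp(-x)) / x         # loading -> 1 as tau -> 0, -> 0 as tau -> inf
    curv = slope - np.exp(-x)            # hump-shaped loading
    return beta0 + beta1 * slope + beta2 * curv

maturities = np.array([0.25, 1, 2, 5, 10, 30])
y = nelson_siegel(maturities, beta0=0.04, beta1=-0.02, beta2=0.01, lam=0.6)
# Short end approaches beta0 + beta1; long maturities approach the level beta0
```

In the dynamic version, the three betas are treated as time series and forecast (e.g. with autoregressions), then mapped back into a full yield curve.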

9.
This paper presents a comparative analysis of the sources of error in forecasts for the UK economy published over a recent four-year period by four independent groups. This analysis rests on the archiving at the ESRC Macroeconomic Modelling Bureau of the original forecasts together with all their accompanying assumptions and adjustments. A method of decomposing observed forecast errors so as to distinguish the contributions of forecaster and model is set out; the impact of future expectations treated in a ‘model-consistent’ or ‘rational’ manner is specifically considered. The results show that the forecaster's adjustments make a substantial contribution to forecast performance, a good part of which comes from adjustments that bring the model on track at the start of the forecast period. The published ex-ante forecasts are usually superior to pure model-based ex-post forecasts, whose performance indicates some misspecification of the underlying models.

10.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using possibly misspecified models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other ‘standard’ combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
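One common way to turn the SIC into combination weights (not necessarily the authors' exact rule) is exponential weighting in SIC differences; a sketch on synthetic residuals:

```python
import numpy as np

def sic(residuals, k):
    """Schwarz information criterion for a fitted model with k parameters."""
    n = len(residuals)
    return n * np.log(np.mean(residuals ** 2)) + k * np.log(n)

def sic_weights(sics):
    """Exponential-SIC combination weights: a model's weight falls off with
    its SIC distance from the best (lowest-SIC) model."""
    d = np.asarray(sics) - np.min(sics)
    w = np.exp(-0.5 * d)
    return w / w.sum()

rng = np.random.default_rng(3)
res_a = rng.normal(0, 1.0, 200)   # worse-fitting model, 2 parameters
res_b = rng.normal(0, 0.5, 200)   # better-fitting model, 4 parameters
w = sic_weights([sic(res_a, 2), sic(res_b, 4)])
# The better-fitting model receives (almost all of) the combination weight
```

Simple averaging corresponds to replacing these weights with 1/n for every model, which is the benchmark the Monte Carlo comparison above is run against.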

11.
The track record of a 20‐year history of density forecasts of state tax revenue in Iowa is studied, and potential improvements sought through a search for better‐performing ‘priors’ similar to that conducted three decades ago for point forecasts by Doan, Litterman and Sims (Econometric Reviews, 1984). Comparisons of the point and density forecasts produced under the flat prior are made to those produced by the traditional (mixed estimation) ‘Bayesian VAR’ methods of Doan, Litterman and Sims, as well as to fully Bayesian ‘Minnesota Prior’ forecasts. The actual record and, to a somewhat lesser extent, the record of the alternative procedures studied in pseudo‐real‐time forecasting experiments, share a characteristic: subsequently realized revenues are in the lower tails of the predicted distributions ‘too often’. An alternative empirically based prior is found by working directly on the probability distribution for the vector autoregression parameters—the goal being to discover a better‐performing entropically tilted prior that minimizes out‐of‐sample mean squared error subject to a Kullback–Leibler divergence constraint that the new prior not differ ‘too much’ from the original. We also study the closely related topic of robust prediction appropriate for situations of ambiguity. Robust ‘priors’ are competitive in out‐of‐sample forecasting; despite the freedom afforded the entropically tilted prior, it does not perform better than the simple alternatives. Copyright © 2014 John Wiley & Sons, Ltd.

12.
Clemen's (1989) review of the forecast-combining literature amply illustrates both the interest in and the importance of this subject. This article stresses the tautological properties of various consensus measures that assure their success relative to most individual forecasts. It confirms the finding of earlier studies that for each specific macroeconomic variable roughly one-third of individual forecasters are more accurate than a consensus. However, each individual does relatively poorly for some variable while the consensus, in contrast, necessarily never fails relative to most individuals. These results, like most previous studies, describe consensus measures that are synthetic constructs derived from a pre-existing set of individual forecasts. Strictly speaking, this contemporaneous consensus is not available to individual forecasters when their forecasts are made. A prior consensus measure, which is in their information sets, was relatively much less accurate than the contemporaneous measure. Nevertheless, a small subset of individual forecasters were generally inferior to the known, prior consensus forecast.

13.
Although ‘La Prospective’ is not well known in the Anglo-Saxon forecasting literature, it has for many years been widely used in France and other Latin countries with considerable success. Lately, because of the inaccuracy of forecasting and the large forecasting errors that have been experienced, it has been suggested that the Prospective approach can be used as a way of dealing with these problems. The main characteristics of ‘La Prospective’ are that it does not look at the future as a continuation of the past but rather as the outcome of the wishes of various actors and the constraints imposed on them by the environment. Its purpose is to assist in creating alternative futures and then selecting an alternative that allows for maximum freedom of action.

14.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short‐term forecasting models. Our analysis shows that there is considerable variation of the relative performance of the different models over time. To take that into account we suggest employing performance‐based forecast combination methods—in particular, one with more weight on the recent forecast performance. We compare such an approach with equal forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time‐varying weights also indicate how much weight is given over time to the economic interpretations of the forecasts stemming from the different models. We also include a number of benchmark models in our analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre‐crisis times, the period after the global financial crisis and the full evaluation period, including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that forecast combination helps hedge against bad forecast performance and that performance‐based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.
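A minimal sketch of performance-based weighting using inverse recent MSE (the window length, names, and the toy error series are illustrative assumptions; the paper's exact discounting scheme may differ):

```python
import numpy as np

def performance_weights(errors, window=30):
    """Inverse-MSE combination weights computed over the most recent `window`
    forecast errors, so models that have done well lately get more weight."""
    recent = errors[-window:]                  # (window x n_models) error matrix
    mse = np.mean(recent ** 2, axis=0)
    w = 1.0 / mse
    return w / w.sum()

rng = np.random.default_rng(4)
e1 = rng.normal(0, 1.0, 60)
errors = np.column_stack([e1, 0.5 * e1])       # toy case: model 2's errors are
                                               # exactly half of model 1's
w = performance_weights(errors)
combined = lambda forecasts: forecasts @ w     # weighted combination of forecasts
```

Equal weighting replaces `w` with 1/n for every model; recomputing `w` each period as new errors arrive gives the time-varying weights discussed above.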

15.
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven‐variable system that comprises, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non‐stationary, stationary and error‐correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find that there is some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non‐stationary specification outperformed those of the stationary and error‐correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error‐correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak. Copyright © 2001 John Wiley & Sons, Ltd.

16.
This study compares the performance of two forecasting models of the 10‐year Treasury rate: a random walk (RW) model and an augmented‐autoregressive (A‐A) model which utilizes the information in the expected inflation rate. For 1993–2008, the RW and A‐A forecasts (with different lead times and forecast horizons) are generally unbiased and accurately predict directional change under symmetric loss. However, the A‐A forecasts outperform the RW, suggesting that the expected inflation rate (as a leading indicator) helps improve forecast accuracy. This finding is important since bond market efficiency implies that the RW forecasts are optimal and cannot be improved. Copyright © 2009 John Wiley & Sons, Ltd.

17.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
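Pitman's test mentioned above compares two models' paired forecast errors through the correlation between their sums and differences: under the null of equal mean squared errors this correlation is zero. A sketch on synthetic errors (the function name and error series are illustrative):

```python
import numpy as np

def pitman_statistic(e1, e2):
    """Pitman's correlation test for equal forecast accuracy: correlate the
    sums and differences of the two forecast-error series; the t-style
    statistic is compared with a Student t(n-2) distribution."""
    s, d = e1 + e2, e1 - e2
    r = np.corrcoef(s, d)[0, 1]
    n = len(e1)
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    return r, t

rng = np.random.default_rng(5)
e1 = rng.normal(0, 1.0, 100)       # more accurate model's errors
e2 = rng.normal(0, 2.0, 100)       # clearly less accurate model's errors
r, t = pitman_statistic(e1, e2)
# A significantly negative r indicates e1's variance is smaller than e2's
```

The sign of `r` tells which model is more accurate, since corr(e1+e2, e1−e2) has the sign of var(e1) − var(e2).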

18.
In recent years there has been a considerable development in modelling non‐linearities and asymmetries in economic and financial variables. The aim of the current paper is to compare the forecasting performance of different models for the returns of three of the most traded exchange rates in terms of the US dollar, namely the French franc (FF/$), the German mark (DM/$) and the Japanese yen (Y/$). The relative performance of non‐linear models of the SETAR, STAR and GARCH types is contrasted with their linear counterparts. The results show that if attention is restricted to mean square forecast errors, the performance of the models, when distinguishable, tends to favour the linear models. The forecast performance of the models is evaluated also conditional on the regime at the forecast origin and on density forecasts. This analysis produces more evidence of forecasting gains from non‐linear models. Copyright © 2002 John Wiley & Sons, Ltd.

19.
In this paper we make an empirical investigation of the relationship between the consistency, coherence and validity of probability judgements in a real-world forecasting context. Our results indicate that these measures of the adequacy of an individual's probability assessments are not as closely related as we had anticipated. Twenty-nine of our thirty-six subjects were better calibrated in point probabilities than in odds, and our subjects were, in general, more coherent using point probabilities than odds forecasts. Contrary to our expectations, we found very little difference in forecasting response and performance between simple and compound holistic forecasts. This result is evidence against the ‘divide-and-conquer’ rationale underlying most applications of normative decision theory. In addition, our recompositions of marginal and conditional assessments into compound forecasts were no better calibrated or resolved than their holistic counterparts. These findings convey two implications for forecasting. First, untrained judgemental forecasters should use point probabilities in preference to odds. Second, judgemental forecasts of complex compound probabilities may be as well assessed holistically as they are using methods of decomposition and recomposition. In addition, our study provides a paradigm for further studies of the relationship between consistency, coherence and validity in judgemental probability forecasting.

20.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
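The baseline HAR specification that the paper extends regresses next-period volatility on daily, weekly (5-day) and monthly (22-day) averages of past volatility; a minimal sketch on synthetic data (the paper's extensions with oil-price-change predictors are not included, and the data-generating choices are ours):

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors: a constant, daily RV, weekly (5-day average) and
    monthly (22-day average) RV, each aligned with next-day RV as the target."""
    T = len(rv)
    rows, y = [], []
    for t in range(21, T - 1):
        rows.append([1.0, rv[t],
                     rv[t - 4:t + 1].mean(),      # weekly component
                     rv[t - 21:t + 1].mean()])    # monthly component
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(6)
rv = np.abs(rng.normal(0.01, 0.002, 300))        # synthetic volatility series
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS HAR coefficients
forecast = X[-1] @ beta                          # one-step-ahead HAR forecast
```

The extensions discussed above add further regressors (leverage, net or scaled oil price changes) to `X`, or let the regression errors follow a conditional heteroskedasticity process.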
