Similar Documents
20 similar documents retrieved.
1.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term horizons for series with no moving‐average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
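The core of the exercise described here, rolling the estimation window forward and producing one- to forty-step-ahead out-of-sample forecasts, can be sketched as follows. This is only an illustration of the procedure, not the authors' code: the data are simulated, and the ARIMA(0,1,1) order, series length and window length are arbitrary placeholder choices.

```python
# Rolling-window, multi-horizon out-of-sample forecasting sketch (univariate ARIMA side).
# All data and model settings here are placeholders, not those of the paper.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=400))          # stand-in for one (integrated) exchange-rate series

window, max_h = 300, 40                      # rolling estimation window and maximum horizon
errors = {h: [] for h in range(1, max_h + 1)}

for origin in range(window, len(y) - max_h):
    fit = ARIMA(y[origin - window:origin], order=(0, 1, 1)).fit()   # example order only
    fcast = fit.forecast(steps=max_h)
    for h in range(1, max_h + 1):
        errors[h].append(y[origin + h - 1] - fcast[h - 1])

rmse = {h: float(np.sqrt(np.mean(np.square(e)))) for h, e in errors.items()}
print({h: round(rmse[h], 3) for h in (1, 5, 10, 20, 40)})
```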

2.
In this paper we examine how BVARs can be used for forecasting cointegrated variables. We propose an approach based on a Bayesian ECM model in which, contrary to the previous literature, the factor loadings are given informative priors. This procedure, applied to Italian macroeconomic series, produces more satisfactory forecasts than different prior specifications or parameterizations. Providing an informative prior on the factor loadings is a crucial point: a flat prior on the ECM terms combined with an informative prior on the lagged endogenous variables coefficients gives too much importance to the long‐run properties with respect to the short‐run dynamics. Copyright © 1999 John Wiley & Sons, Ltd.

3.
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven‐variable system that comprises, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non‐stationary, stationary and error‐correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non‐stationary specification outperformed those of the stationary and error‐correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error‐correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak. Copyright © 2001 John Wiley & Sons, Ltd.

4.
In the light of the still topical nature of ‘bananas and petrol’ being blamed for driving much of the inflationary pressures in Australia in recent times, the ‘headline’ and ‘underlying’ rates of inflation are scrutinised in terms of forecasting accuracy. A general structural time‐series modelling strategy is applied to estimate models for alternative types of Consumer Price Index (CPI) measures. From this, out‐of‐sample forecasts are generated from the various models. The underlying forecasts are subsequently adjusted to facilitate comparison. The Ashley, Granger and Schmalensee (1980) test is then performed to determine whether there is a statistically significant difference between the root mean square errors of the models. The results lend weight to the recent findings of Song (2005) that forecasting models using underlying rates are not systematically inferior to those based on the headline rate. In fact, strong evidence is found that underlying measures produce superior forecasts. Copyright © 2009 John Wiley & Sons, Ltd.

5.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Of specific importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on ‘trending’ procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by ‘detrending’ the trended forecasts and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of the difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy for Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.

6.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
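A small Monte Carlo sketch of the comparison at the heart of this framework, long-horizon forecasts from an OLS-estimated AR(1) versus a random walk when the true process is local to unity, is given below. The drift value, sample size and horizon are arbitrary choices meant only to illustrate the trade-off, not to reproduce the paper's forecasting thresholds.

```python
# OLS AR(1) versus random-walk long-horizon forecasts for a near-unit-root process.
# c, T, h and the number of replications are illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
T, h, c, n_rep = 200, 20, -5.0, 2000
rho = 1 + c / T                                  # local-to-unity autoregressive parameter

mse_ols = mse_rw = 0.0
for _ in range(n_rep):
    e = rng.normal(size=T + h)
    y = np.zeros(T + h)
    for t in range(1, T + h):
        y[t] = rho * y[t - 1] + e[t]
    rho_hat = (y[:T - 1] @ y[1:T]) / (y[:T - 1] @ y[:T - 1])     # OLS estimate on the first T points
    mse_ols += (y[T - 1 + h] - rho_hat ** h * y[T - 1]) ** 2     # h-step AR(1) forecast error
    mse_rw += (y[T - 1 + h] - y[T - 1]) ** 2                     # h-step random-walk (no-change) forecast error

print("OLS AR(1) MSE:", mse_ols / n_rep, "  RW MSE:", mse_rw / n_rep)
```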

7.
‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large‐scale comparison of time series forecasting methods, including the Bayesian approach. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts Bayesian with other time series methods.
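The weighting logic described in this abstract, conditional forecasts from a class of models combined using posterior model probabilities obtained from Bayes' theorem, can be illustrated with a deliberately simple two-model example. The two candidate "models", the Gaussian likelihoods and the data below are all invented for the sketch and are far cruder than the Harrison–Stevens multi-process formulation.

```python
# Toy two-model Bayesian forecast combination: posterior model probabilities
# weight the models' conditional forecasts. Everything here is illustrative.
import numpy as np
from scipy.stats import norm

y = np.array([10.1, 10.3, 9.8, 12.5, 12.7, 12.4])   # hypothetical series with a level shift
prior = np.array([0.5, 0.5])                         # P(stable level), P(level shift of +2)
level = y[0]

for t in range(1, len(y)):
    preds = np.array([level, level + 2.0])           # conditional one-step forecasts of each model
    lik = norm.pdf(y[t], loc=preds, scale=0.5)       # likelihood of the realised point under each model
    post = prior * lik / np.sum(prior * lik)         # Bayes' theorem
    combined = post @ preds                          # final forecast: weighted sum of conditional forecasts
    print(f"t={t}: posterior={np.round(post, 2)}, combined forecast={combined:.2f}, actual={y[t]}")
    level = y[t]                                     # naive level update, purely for the sketch
    prior = post
```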

8.
This paper investigates Bayesian forecasts for some cointegrated time series data. Suppose the data are derived from a cointegrated model but an unrestricted vector autoregressive model, without the cointegrating restrictions, is fitted; the implications of using this incorrect model are investigated from the Bayesian forecasting viewpoint. For some special cointegrated data and under the diffuse prior assumption, it can be proven analytically that the posterior predictive distributions for the true model and the fitted model are asymptotically the same for any future step. For a more general cointegrated model, examinations are performed via simulations. Some simulation results reveal that the unrestricted model will still provide a rather accurate forecast as long as the sample size is large enough or the forecasting period is not too far in the future. For a small sample size or for long‐term forecasting, more accurate forecasts are expected if the correct cointegrated model is actually applied. Copyright © 2002 John Wiley & Sons, Ltd.

9.
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
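The practical recommendation here, use BIC (or AIC) for within-sample identification but let out-of-sample accuracy provide the real test, is easy to demonstrate. The sketch below is a generic illustration with simulated data and autoregressions of hypothetical orders, not material from the article.

```python
# Compare candidate AR orders by AIC/BIC on a fitting sample, then by out-of-sample RMSE.
# The simulated series and the candidate orders are illustrative.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(2)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.6 * y[t - 1] + rng.normal()
train, test = y[:250], y[250:]

for p in (1, 2, 5, 10):
    fit = AutoReg(train, lags=p).fit()
    fcast = fit.forecast(steps=len(test))            # multi-step out-of-sample forecasts
    rmse = np.sqrt(np.mean((test - fcast) ** 2))
    print(f"AR({p}): AIC={fit.aic:8.1f}  BIC={fit.bic:8.1f}  out-of-sample RMSE={rmse:.3f}")
```

A richer model typically wins on in-sample fit and AIC, while BIC and the hold-out RMSE tend to favour the parsimonious specification, which is the pattern the abstract describes.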

10.
We consider the problem of forecasting a stationary time series when there is an unknown mean break close to the forecast origin. Based on the intercept‐correction methods suggested by Clements and Hendry (1998) and Bewley (2003), a hybrid approach is introduced, where the break and break point are treated in a Bayesian fashion. The hyperparameters of the priors are determined by maximizing the marginal density of the data. The distributions of the proposed forecasts are derived. Different intercept‐correction methods are compared using simulation experiments. Our hybrid approach compares favorably with both the uncorrected and the intercept‐corrected forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
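The simplest member of the intercept-correction family discussed here adds the residual observed at the forecast origin back onto the model forecast, shifting the forecast path onto the new level. The sketch below shows that basic idea only; the Bayesian treatment of the break and break point proposed in the paper is not reproduced, and the AR(1) model, break size and break timing are made up.

```python
# Basic intercept correction: shift model forecasts by the last in-sample residual
# after a suspected mean break near the forecast origin. Illustrative settings only.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(3)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + rng.normal(scale=0.5)
y[190:] += 2.0                                       # mean break close to the forecast origin

fit = AutoReg(y, lags=1).fit()
uncorrected = fit.forecast(steps=5)
correction = y[-1] - fit.fittedvalues[-1]            # residual at the forecast origin
corrected = uncorrected + correction                 # intercept-corrected forecasts
print("uncorrected:", np.round(uncorrected, 2))
print("corrected:  ", np.round(corrected, 2))
```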

11.
A Bayesian procedure for forecasting S‐shaped growth is introduced and compared to classical methods of estimation and prediction using three variants of the logistic functional form and annual time series of the diffusion of music compact discs in twelve countries. The Bayesian procedure was found not only to improve forecast accuracy, using the medians of the predictive densities as point forecasts, but also to produce intervals with a width and asymmetry more in accord with the outcomes than intervals from the classical alternative. While the analysis in this paper focuses on logistic growth, the problem is set up so that the methods are transportable to other characterizations of the growth process. Copyright © 2001 John Wiley & Sons, Ltd.
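As a point of reference for the functional forms involved, the classical benchmark against which the Bayesian procedure is compared can be sketched as a non-linear least-squares fit of a three-parameter logistic. The diffusion figures below are invented, and a Bayesian analogue would instead place priors on the saturation level, growth rate and midpoint and report predictive densities rather than point estimates.

```python
# Classical (non-linear least squares) fit of a three-parameter logistic growth curve.
# The 'diffusion' data are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Saturation level K, growth rate r, inflection point t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(1, 16)                                  # 15 annual observations
adoption = np.array([1, 2, 4, 7, 12, 20, 31, 44, 57, 68, 77, 83, 87, 90, 92], dtype=float)

(K, r, t0), _ = curve_fit(logistic, t, adoption, p0=(100.0, 0.5, 8.0))
print(f"K={K:.1f}, r={r:.2f}, t0={t0:.1f}")
print("point forecasts for years 16-18:", np.round(logistic(np.arange(16, 19), K, r, t0), 1))
```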

12.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of an FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
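The basic setup of an FFNN forecaster, lagged values in and the one-step-ahead value out, is easy to sketch. The example below uses scikit-learn's MLPRegressor on a simulated nonlinear series, since the lynx data are not reproduced here; the network size and lag order are arbitrary, and no claim is made that this matches the architecture used in the paper.

```python
# Feed-forward network as a one-step-ahead forecaster: inputs are lagged values.
# Simulated nonlinear series and an arbitrary small network, for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
y = np.zeros(300)
for t in range(2, 300):
    y[t] = 0.8 * y[t - 1] - 0.5 * y[t - 2] ** 2 / (1.0 + y[t - 2] ** 2) + rng.normal(scale=0.1)

p = 2                                                            # number of lags used as inputs
X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
target = y[p:]
X_train, X_test = X[:250], X[250:]
t_train, t_test = target[:250], target[250:]

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_train, t_train)
rmse = np.sqrt(np.mean((net.predict(X_test) - t_test) ** 2))
print("one-step-ahead RMSE on the hold-out sample:", round(rmse, 3))
```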

13.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
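The measurement point driving this argument, that the daily squared return is a very noisy proxy for 'true volatility' whereas cumulative intra-day squared returns (realized variance) are much less noisy, can be shown in a few lines. The simulation below is a stylized illustration of that point, not the paper's data or estimation.

```python
# Squared daily return versus realized variance as proxies for the (known, simulated)
# daily variance. The intra-day returns are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(5)
n_days, m = 500, 48                                              # m intra-day intervals per day
true_var = 0.0001 * np.exp(rng.normal(scale=0.5, size=n_days))   # hypothetical daily variances

intraday = rng.normal(scale=np.sqrt(true_var[:, None] / m), size=(n_days, m))
daily_ret = intraday.sum(axis=1)

proxy_squared = daily_ret ** 2                       # standard ex post proxy: daily squared return
realized_var = (intraday ** 2).sum(axis=1)           # cumulative intra-day squared returns

def mse(est):
    return np.mean((est - true_var) ** 2)

print("MSE of squared daily return:", mse(proxy_squared))
print("MSE of realized variance:   ", mse(realized_var))
```

The realized-variance proxy tracks the simulated daily variance far more closely, which is why evaluating GARCH forecasts against it, rather than against squared daily returns, can overturn earlier forecast-comparison results.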

14.
We propose a simple and flexible framework for forecasting the joint density of asset returns. The multinormal distribution is augmented with a polynomial in (time‐varying) non‐central co‐moments of assets. We estimate the coefficients of the polynomial via the method of moments for a carefully selected set of co‐moments. In an extensive empirical study, we compare the proposed model with a range of other models widely used in the literature. Employing both recently proposed and standard techniques to evaluate multivariate forecasts, we conclude that the augmented joint density provides highly accurate forecasts of the ‘negative tail’ of the joint distribution. Copyright © 2010 John Wiley & Sons, Ltd.

15.
We introduce a new strategy for the prediction of linear temporal aggregates; we call it ‘hybrid’ and study its performance using asymptotic theory. This scheme consists of carrying out model parameter estimation with data sampled at the highest available frequency and the subsequent prediction with data and models aggregated according to the forecasting horizon of interest. We develop explicit expressions that approximately quantify the mean square forecasting errors associated with the different prediction schemes and that take into account the estimation error component. These approximate estimates indicate that the hybrid forecasting scheme tends to outperform the so‐called ‘all‐aggregated’ approach and, in some instances, the ‘all‐disaggregated’ strategy that is known to be optimal when model selection and estimation errors are neglected. Unlike other related approximate formulas existing in the literature, those proposed in this paper are totally explicit and require neither assumptions on the second‐order stationarity of the sample nor Monte Carlo simulations for their evaluation. Copyright © 2014 John Wiley & Sons, Ltd.

16.
This article discusses the use of Bayesian methods for inference and forecasting in dynamic term structure models through integrated nested Laplace approximations (INLA). This method of analytical approximation allows accurate inferences for latent factors, parameters and forecasts in dynamic models with reduced computational cost. In the estimation of dynamic term structure models it also avoids some simplifications in the inference procedures, such as the inefficient two‐step ordinary least squares (OLS) estimation. The results obtained in the estimation of the dynamic Nelson–Siegel model indicate that this method produces more accurate out‐of‐sample forecasts compared to the methods of two‐stage estimation by OLS and also Bayesian estimation methods using Markov chain Monte Carlo (MCMC). These analytical approaches also allow efficient calculation of measures of model selection such as generalized cross‐validation and marginal likelihood, which may be computationally prohibitive in MCMC estimations. Copyright © 2014 John Wiley & Sons, Ltd.
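For context, the dynamic Nelson–Siegel model estimated here is conventionally written with the Diebold–Li factor loadings shown below; this is the standard textbook parameterization rather than a detail taken from the article. Here $y_t(\tau)$ is the yield at maturity $\tau$, the $\beta$'s are the latent level, slope and curvature factors, and $\lambda$ governs the loading decay:

$$
y_t(\tau) \;=\; \beta_{1t} \;+\; \beta_{2t}\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau} \;+\; \beta_{3t}\left(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\right).
$$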

17.
This paper introduces a Bayesian forecasting model that accommodates innovative outliers. The hierarchical specification of prior distributions allows an identification of observations contaminated by these outliers and endogenously determines the hyperparameters of the Minnesota prior. Estimation and prediction are performed using Markov chain Monte Carlo (MCMC) methods. The model forecasts the Hong Kong economy more accurately than the standard VAR and performs in line with other complicated BVAR models. It is also shown that the model is capable of finding most of the outliers in various simulation experiments. Copyright © 2004 John Wiley & Sons, Ltd.

18.
The specification choices of vector autoregressions (VARs) in forecasting are often not straightforward, as they are complicated by various factors. To deal with model uncertainty and better utilize multiple VARs, this paper adopts the dynamic model averaging/selection (DMA/DMS) algorithm, in which forecasting models are updated and switch over time in a Bayesian manner. In an empirical application to a pool of Bayesian VAR (BVAR) models whose specifications include level and difference, along with differing lag lengths, we demonstrate that specification‐switching VARs are flexible and powerful forecast tools that yield good performance. In particular, they beat the overall best BVAR in most cases and are comparable to or better than the individual best models (for each combination of variable, forecast horizon, and evaluation metrics) for medium‐ and long‐horizon forecasts. We also examine several extensions in which forecast model pools consist of additional individual models in partial differences as well as all level/difference models, and/or time variations in VAR innovations are allowed, and discuss the potential advantages and disadvantages of such specification choices. Copyright © 2016 John Wiley & Sons, Ltd.
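The DMA/DMS recursion referred to above can be reduced to two steps per period: flatten the model probabilities with a forgetting factor, then update them with each model's predictive likelihood of the realized observation; DMA averages forecasts with these probabilities, while DMS picks the single most probable model. The two candidate "models" and the Gaussian predictive densities below are toy stand-ins, not the BVAR pool used in the paper.

```python
# Stylized dynamic model averaging/selection recursion over two toy forecasting models.
# The forgetting factor, models and data are illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
y = np.cumsum(rng.normal(size=200))                  # hypothetical target series

alpha = 0.99                                         # forgetting factor
probs = np.array([0.5, 0.5])                         # probabilities of the two candidate models

for t in range(2, len(y)):
    preds = np.array([0.9 * y[t - 1],                         # toy "level" model forecast
                      y[t - 1] + (y[t - 1] - y[t - 2])])      # toy "difference/trend" model forecast
    probs = probs ** alpha                           # prediction step: forget towards uniformity
    probs /= probs.sum()
    lik = norm.pdf(y[t], loc=preds, scale=1.0)       # predictive likelihood of the realized value
    probs = probs * lik / np.sum(probs * lik)        # update step

next_preds = np.array([0.9 * y[-1], y[-1] + (y[-1] - y[-2])])  # one-step forecasts from the final origin
dma_forecast = probs @ next_preds                    # dynamic model averaging
dms_forecast = next_preds[np.argmax(probs)]          # dynamic model selection
print("final probabilities:", np.round(probs, 3))
print("DMA forecast:", round(dma_forecast, 2), " DMS forecast:", round(dms_forecast, 2))
```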

19.
This paper proposes new methods for ‘targeting’ factors estimated from a big dataset. We suggest that forecasts of economic variables can be improved by tuning factor estimates: (i) so that they are both more relevant for a specific target variable; and (ii) so that variables with considerable idiosyncratic noise are down‐weighted prior to factor estimation. Existing targeted factor methodologies are limited to estimating the factors with only one of these two objectives in mind. We therefore combine these ideas by providing new weighted principal components analysis (PCA) procedures and a targeted generalized PCA (TGPCA) procedure. These methods offer a flexible combination of both types of targeting that is new to the literature. We illustrate this empirically by forecasting a range of US macroeconomic variables, finding that our combined approach yields important improvements over competing methods, consistently surviving elimination in the model confidence set procedure. Copyright © 2016 John Wiley & Sons, Ltd.
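A stripped-down sketch of the two targeting ideas combined here, keeping predictors that are relevant for the target and down-weighting noisy series before principal components are extracted, is given below. The correlation screen, the use of the (here known) noise standard deviations as weights, and the data are all simplifications for illustration; they do not reproduce the paper's weighted PCA or TGPCA procedures.

```python
# Illustrative "targeted, weighted" factor extraction: screen predictors by relevance
# for the target, down-weight noisy series, then take the first principal component.
import numpy as np

rng = np.random.default_rng(7)
n, big_n = 200, 60
factor = rng.normal(size=n)
noise_sd = rng.uniform(0.2, 3.0, size=big_n)                  # idiosyncratic noise differs by series
X = np.outer(factor, rng.normal(size=big_n)) + rng.normal(size=(n, big_n)) * noise_sd
target = 0.8 * factor + rng.normal(scale=0.5, size=n)

# (i) targeting: screen out predictors weakly correlated with the target variable
corr = np.array([np.corrcoef(X[:, j], target)[0, 1] for j in range(big_n)])
keep = np.abs(corr) > 0.2

# (ii) weighting: give noisier predictors less influence before the PCA step
Xk = X[:, keep] / noise_sd[keep]      # true noise s.d. used here; in practice it would be estimated
Xk = Xk - Xk.mean(axis=0)

_, _, vt = np.linalg.svd(Xk, full_matrices=False)             # first principal component = estimated factor
factor_hat = Xk @ vt[0]
print("correlation of estimated factor with the true factor:",
      round(abs(np.corrcoef(factor_hat, factor)[0, 1]), 3))
```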

20.
In this paper we forecast daily returns of crypto‐currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non‐normality of the measurement errors and sharply increasing trends, we develop a time‐varying parameter VAR with t‐distributed measurement errors and stochastic volatility. To control for overparametrization, we rely on the Bayesian literature on shrinkage priors, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real‐time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we, moreover, run a simple trading exercise.
