Similar Articles
20 similar articles found; search took 15 ms
1.
For leverage heterogeneous autoregressive (LHAR) models with jumps and other covariates, called LHARX models, multistep forecasts are derived. Some optimal properties of the forecasts in terms of conditional volatilities are discussed, which tell us to model conditional volatility for the return but not for the LHARX regression error or the other covariates. Forecast standard errors are constructed, for which we need to model conditional volatilities both for the return and for the LHARX regression error and other covariates. The proposed methods are well illustrated by forecast analysis for the realized volatilities of the US stock price indexes: the S&P 500, the NASDAQ, the DJIA, and the RUSSELL.
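The HAR family underlying this abstract builds regressors from daily, weekly, and monthly averages of realized volatility, and the leverage term adds the negative part of the lagged return. Below is a minimal sketch on synthetic data; the `har_features` helper, the coefficient values, and the data-generating process are illustrative assumptions, not the paper's LHARX specification (which also includes jumps and further covariates):

```python
import numpy as np

def har_features(rv, ret):
    """Build leverage-HAR regressors from a realized-volatility series `rv`
    and a return series `ret`. Row t predicts rv[t]; regressors use
    information up to t-1 only."""
    rows = []
    for t in range(22, len(rv)):
        daily = rv[t - 1]
        weekly = rv[t - 5:t].mean()
        monthly = rv[t - 22:t].mean()
        lev = min(ret[t - 1], 0.0)          # leverage: negative return only
        rows.append([1.0, daily, weekly, monthly, lev])
    return np.array(rows), rv[22:]

rng = np.random.default_rng(0)
n = 2000
rv = np.abs(rng.normal(1.0, 0.1, n))
ret = rng.normal(0, 1, n)
# generate rv recursively from a known LHAR data-generating process
for t in range(22, n):
    rv[t] = (0.1 + 0.4 * rv[t - 1] + 0.3 * rv[t - 5:t].mean()
             + 0.2 * rv[t - 22:t].mean() - 0.1 * min(ret[t - 1], 0.0)
             + rng.normal(0, 0.05))

X, y = har_features(rv, ret)
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # OLS recovers the HAR coefficients
```

With enough observations the daily coefficient is recovered near its true value of 0.4 and the leverage coefficient comes out negative, as specified in the data-generating process above.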

2.
This paper uses high-frequency continuous intraday electricity price data from the EPEX market to estimate and forecast realized volatility. Three different jump tests are used to break down the variation into jump and continuous components using quadratic variation theory. Several heterogeneous autoregressive models are then estimated for the logarithmic and standard deviation transformations. Generalized autoregressive conditional heteroskedasticity (GARCH) structures are included in the error terms of the models when evidence of conditional heteroskedasticity is found. Model selection is based on various out-of-sample criteria. Results show that decomposition of realized volatility is important for forecasting and that the decision whether to include GARCH-type innovations might depend on the transformation selected. Finally, results are sensitive to the jump test used in the case of the standard deviation transformation.
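The continuous/jump split can be illustrated with realized variance and bipower variation: under quadratic-variation theory, bipower variation is robust to jumps while realized variance is not, so their truncated difference estimates the jump component. A sketch with synthetic 5-minute returns; the injected jump size and the simple truncation rule are illustrative choices, not the three formal jump tests the paper applies:

```python
import numpy as np

def rv_bv_decomposition(r):
    """Split daily realized variance into continuous and jump parts via
    bipower variation (a textbook construction, not the paper's tests)."""
    r = np.asarray(r)
    rv = np.sum(r ** 2)                        # realized variance
    mu1 = np.sqrt(2.0 / np.pi)                 # E|Z| for Z ~ N(0, 1)
    bv = mu1 ** -2 * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))  # bipower variation
    jump = max(rv - bv, 0.0)                   # truncate negative estimates at zero
    return rv, bv, jump

rng = np.random.default_rng(1)
r = rng.normal(0, 0.01, 288)                   # one day of 5-minute returns
r_jump = r.copy()
r_jump[100] += 0.1                             # inject a single price jump
rv0, bv0, j0 = rv_bv_decomposition(r)
rv1, bv1, j1 = rv_bv_decomposition(r_jump)
```

The jump inflates realized variance much more than bipower variation, so the estimated jump component rises on the contaminated day.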

3.
This paper is concerned with model averaging estimation for conditional volatility models. Given a set of candidate models with different functional forms, we propose a model averaging estimator and forecast for conditional volatility, and construct the corresponding weight-choosing criterion. Under some regularity conditions, we show that the weight selected by the criterion asymptotically minimizes the true Kullback–Leibler divergence, which is the distributional approximation error, as well as the Itakura–Saito distance, which is the distance between the true and estimated or forecast conditional volatility. Monte Carlo experiments support our newly proposed method. As for the empirical applications of our method, we investigate a total of nine major stock market indices and make a 1-day-ahead volatility forecast for each data set. Empirical results show that the model averaging forecast achieves the highest accuracy in terms of all types of loss functions in most cases, capturing the movement of the unknown true conditional volatility.
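As a toy illustration of weight choosing, one can pick the convex combination weight for two candidate volatility forecasts by minimizing an in-sample loss over a grid. The sketch below uses squared error purely for simplicity, not the paper's Kullback–Leibler/Itakura–Saito criterion, and all data are synthetic:

```python
import numpy as np

def averaging_weight(y, f1, f2, grid=101):
    """Grid-search the convex weight w minimizing the mean squared error of
    w*f1 + (1-w)*f2 (a simplified stand-in for the paper's asymptotically
    optimal weight-choosing criterion)."""
    ws = np.linspace(0.0, 1.0, grid)
    losses = [np.mean((y - (w * f1 + (1 - w) * f2)) ** 2) for w in ws]
    return ws[int(np.argmin(losses))]

rng = np.random.default_rng(2)
vol = np.exp(rng.normal(0, 0.3, 500))          # stand-in "true" conditional volatility
f_good = vol + 0.05 * rng.normal(size=500)     # accurate candidate forecast
f_bad = vol + 0.50 * rng.normal(size=500)      # noisy candidate forecast
w = averaging_weight(vol, f_good, f_bad)       # weight on the accurate candidate
```

With independent forecast errors, the loss-minimizing weight is close to the noise-variance ratio, so nearly all weight lands on the accurate candidate here.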

4.
It is well understood that the standard formulation for the variance of a regression-model forecast produces interval estimates that are too narrow, principally because it ignores regressor forecast error. While the theoretical problem has been addressed, there has not been an adequate explanation of the effect of regressor forecast error, and the empirical literature has supplied a disparate variety of bits and pieces of evidence. Most business-forecasting software programs continue to supply only the standard formulation. This paper extends existing analysis to derive and evaluate large-sample approximations for the forecast error variance in a single-equation regression model. We show how these approximations substantially clarify the expected effects of regressor forecast error. We then present a case study, which (a) demonstrates how rolling out-of-sample evaluations can be applied to obtain empirical estimates of the forecast error variance, (b) shows that these estimates are consistent with our large-sample approximations and (c) illustrates, for 'typical' data, how seriously the standard formulation can understate the forecast error variance. Copyright © 2000 John Wiley & Sons, Ltd.
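The effect of regressor forecast error can be demonstrated with a rolling out-of-sample exercise: refit the regression on a moving window, then forecast once using the true future regressor (the standard formulation's implicit assumption) and once using a forecast of it. The AR(1) regressor and the deliberately crude window-mean regressor forecast below are illustrative assumptions, not the paper's case study:

```python
import numpy as np

def rolling_oos_variances(y, x, window=60):
    """Rolling one-step evaluation of a regression forecast. Returns the
    empirical forecast error variance when the regressor is known versus
    when it must itself be forecast (here, crudely, by its window mean)."""
    e_known, e_fcst = [], []
    for t in range(window, len(y)):
        X = np.column_stack([np.ones(window), x[t - window:t]])
        a, b = np.linalg.lstsq(X, y[t - window:t], rcond=None)[0]
        e_known.append(y[t] - (a + b * x[t]))                  # regressor known
        e_fcst.append(y[t] - (a + b * x[t - window:t].mean())) # regressor forecast
    return np.var(e_known), np.var(e_fcst)

rng = np.random.default_rng(5)
n = 500
x = np.zeros(n)
for t in range(1, n):                          # persistent AR(1) regressor
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=n)   # regression with small noise
v_known, v_fcst = rolling_oos_variances(y, x)
```

The gap between the two variances is exactly what the standard formulation ignores: here the regressor's own forecast error dominates the residual noise by a wide margin.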

5.
We investigate the forecast performance of the fractionally integrated error correction model against several competing models for the prediction of the Nikkei stock average index. The competing models include the martingale model, the vector autoregressive model and the conventional error correction model. We consider models with and without conditional heteroscedasticity. For forecast horizons of over twenty days, the best forecasting performance is obtained for the model when fractional cointegration is combined with conditional heteroscedasticity. Our results reinforce the notion that cointegration and fractional cointegration are important for long-horizon prediction. Copyright © 1999 John Wiley & Sons, Ltd.

6.
This paper investigates the effects of imposing invalid cointegration restrictions or ignoring valid ones on the estimation, testing and forecasting properties of the bivariate, first-order, vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λmax, lies in the neighborhood of unity, while the other root, λmin, is safely smaller than unity. In this context, we define the 'forecast cost of type I' to be the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reasons, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases can arise when λmax equals unity and λmin is less than but near to unity. The effects of this type of misspecification on forecasting will be referred to as 'forecast cost of type II'. By means of Monte Carlo simulations, we measure both types of forecast cost in actual situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples. On the other hand, the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd.

7.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high-dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the comparisons are corroborated by the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

8.
It has been suggested that a major problem in window selection, when estimating models for forecasting, is empirically determining the timing of the break. However, if the choice between a post-break and a full-sample window is based on mean square forecast error ratios, it is difficult to see why such a problem arises, since break detectability and these ratios seem to have the same determinants. This paper analyses this issue first for the expected values in conditional models and then by Monte Carlo simulations for more general cases. Results show similar behaviour between rejection frequencies and the ratios, but only for break tests that do not take into account forecasting error covariances, as is the case with mean square forecast error measures. Moreover, the asymmetric shape of the frequency distribution of the ratios could help us to better grasp empirical problems. An illustration using actual data is given. Copyright © 2011 John Wiley & Sons, Ltd.

9.
Most long memory forecasting studies assume that long memory is generated by the fractional difference operator. We argue that the most cited theoretical arguments for the presence of long memory do not imply the fractional difference operator and assess the performance of the autoregressive fractionally integrated moving average (ARFIMA) model when forecasting series with long memory generated by nonfractional models. We find that ARFIMA models dominate in forecast performance regardless of the long memory generating mechanism and forecast horizon. Nonetheless, forecasting uncertainty at the shortest forecast horizon could make short memory models provide suitable forecast performance, particularly for smaller degrees of memory. Additionally, we analyze the forecasting performance of the heterogeneous autoregressive (HAR) model, which imposes restrictions on high-order AR models. We find that the structure imposed by the HAR model produces better short and medium horizon forecasts than unconstrained AR models of the same order. Our results have implications for, among others, climate econometrics and financial econometrics models dealing with long memory series at different forecast horizons.

10.
The linear multiregression dynamic model (LMDM) is a Bayesian dynamic model which preserves any conditional independence and causal structure across a multivariate time series. The conditional independence structure is used to model the multivariate series by separate (conditional) univariate dynamic linear models, where each series has contemporaneous variables as regressors in its model. Calculating the forecast covariance matrix (which is required for calculating forecast variances in the LMDM) is not always straightforward in its current formulation. In this paper we introduce a simple algebraic form for calculating LMDM forecast covariances. Calculation of the covariance between model regression components can also be useful and we shall present a simple algebraic method for calculating these component covariances. In the LMDM formulation, certain pairs of series are constrained to have zero forecast covariance. We shall also introduce a possible method to relax this restriction. Copyright © 2008 John Wiley & Sons, Ltd.

11.
Consider forecasting the economic variable Y_{t+h} with predictors X_t, where h is the forecast horizon. This paper introduces a semiparametric method that generates forecast intervals of Y_{t+h}|X_t from point forecast models. First, the point forecast model is estimated, thereby taking advantage of its predictive power. Then, nonparametric estimation of the conditional distribution function (CDF) of the forecast error conditional on X_t builds the rest of the forecast distribution around the point forecast, from which symmetric and minimum-length forecast intervals for Y_{t+h}|X_t can be constructed. Under mild regularity conditions, asymptotic analysis shows that (1) regardless of the quality of the point forecast model (i.e., it may be misspecified), forecast quantiles are consistent and asymptotically normal; (2) minimum-length forecast intervals are consistent. Proposals for bandwidth selection and dimension reduction are made. Three sets of simulations show that for reasonable point forecast models the method has significant advantages over two existing approaches to interval forecasting: one that requires the point forecast model to be correctly specified, and one that is based on a fully nonparametric CDF estimate of Y_{t+h}|X_t. An application to exchange rate forecasting is presented. Copyright © 2010 John Wiley & Sons, Ltd.
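The construction can be caricatured in one step: attach empirical quantiles of past forecast errors to the point forecast. The paper's method estimates the error CDF conditionally on X_t; the unconditional version below is only a simplified sketch on synthetic Gaussian errors:

```python
import numpy as np

def error_quantile_interval(point, errors, alpha=0.10):
    """Forecast interval built by attaching empirical quantiles of historical
    forecast errors to a point forecast (unconditional simplification of
    the paper's conditional-CDF construction)."""
    lo_q, hi_q = np.quantile(errors, [alpha / 2, 1 - alpha / 2])
    return point + lo_q, point + hi_q

rng = np.random.default_rng(3)
past_errors = rng.normal(0, 1, 5000)           # in-sample forecast errors
lo, hi = error_quantile_interval(2.0, past_errors)

# pseudo out-of-sample check: fresh errors should fall inside ~90% of the time
new_errors = rng.normal(0, 1, 5000)
coverage = np.mean((new_errors >= lo - 2.0) & (new_errors <= hi - 2.0))
```

Because the interval inherits whatever shape the error distribution has, it remains valid even when the point forecast model is misspecified, which is the property the asymptotic analysis formalizes.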

12.
Using the 'standard' approach to forecasting in the vector autoregressive moving average model, we establish basic general results on exact finite sample forecasts and their mean squared error matrices. Comparison between the exact and conditional methods of initiating the finite sample forecast calculations is presented, and a few illustrative cases are given.

13.
This paper proposes a parsimonious threshold stochastic volatility (SV) model for financial asset returns. Instead of imposing a threshold value on the dynamics of the latent volatility process of the SV model, we assume that the innovation of the mean equation follows a threshold distribution in which the mean innovation switches between two regimes. In our model, the threshold is treated as an unknown parameter. We show that the proposed threshold SV model can not only capture the time-varying volatility of returns, but can also accommodate the asymmetric shape of the conditional distribution of the returns. Parameter estimation is carried out by using Markov chain Monte Carlo methods. For model selection and volatility forecasting, an auxiliary particle filter technique is employed to approximate the filter and prediction distributions of the returns. Several experiments are conducted to assess the robustness of the proposed model and estimation methods. In the empirical study, we apply our threshold SV model to three return time series. The empirical results show that the threshold parameter has a non-zero value and the mean innovations belong to two distinct regimes. We also find that the model with an unknown threshold parameter value consistently outperforms the model with a known threshold parameter value. Copyright © 2016 John Wiley & Sons, Ltd.

14.
In this paper the relative forecast performance of nonlinear models to linear models is assessed by the conditional probability that the absolute forecast error of the nonlinear forecast is smaller than that of the linear forecast. This comparison probability is expressed explicitly and is shown to be an increasing function of the distance between the nonlinear and linear forecasts under certain conditions. The expression may be useful not only in deciding between a more accurate and a simpler predictor, but it also provides a good explanation for an odd phenomenon discussed by Pemberton. The relative forecast performance of a nonlinear model to a linear model is demonstrated to be sensitive to its forecast origins. A new forecast is thus proposed to improve the relative forecast performance of nonlinear models based on forecast origins. © 1997 John Wiley & Sons, Ltd.

15.
We introduce a long-memory autoregressive conditional Poisson (LMACP) model to model highly persistent time series of counts. The model is applied to forecast quoted bid–ask spreads, a key parameter in stock trading operations. It is shown that the LMACP nicely captures salient features of bid–ask spreads such as the strong autocorrelation and discreteness of observations. We discuss theoretical properties of LMACP models and evaluate rolling-window forecasts of quoted bid–ask spreads for stocks traded at NYSE and NASDAQ. We show that Poisson time series models significantly outperform forecasts from AR, ARMA, ARFIMA, ACD and FIACD models. The economic significance of our results is supported by the evaluation of a trade schedule. Scheduling trades according to spread forecasts, we realize cost savings of up to 14% of spread transaction costs. Copyright © 2013 John Wiley & Sons, Ltd.

16.
We extend the analysis of Christoffersen and Diebold (1998) on long-run forecasting in cointegrated systems to multicointegrated systems. For the forecast evaluation we consider several loss functions, each of which has a particular interpretation in the context of stock-flow models where multicointegration typically occurs. A loss function based on a standard mean square forecast error (MSFE) criterion focuses on the forecast errors of the flow variables alone. Likewise, a loss function based on the triangular representation of cointegrated systems (suggested by Christoffersen and Diebold) considers forecast errors associated with changes in both stock (modelled through the cointegrating restrictions) and flow variables. We suggest a new loss function based on the triangular representation of multicointegrated systems which further penalizes deviations from the long-run relationship between the levels of stock and flow variables as well as changes in the flow variables. Among other things, we show that if one is concerned with all possible long-run relations between stock and flow variables, this new loss function entails high and increasing forecasting gains compared to both the standard MSFE criterion and Christoffersen and Diebold's criterion. This paper demonstrates the importance of carefully selecting loss functions in forecast evaluation of models involving stock and flow variables. Copyright © 2004 John Wiley & Sons, Ltd.

17.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized by the median probability model. Third, model averaging excels in predicting the direction of change. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We finally apply the aforementioned recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.

18.
The autoregressive conditional heteroscedastic (ARCH) model and its extensions have been widely used in modelling changing variances in financial time series. Since asset return distributions frequently display tails heavier than normal distributions, it is worthwhile studying robust ARCH modelling without a specific distribution assumption. In this paper, rather than modelling the conditional variance, we study ARCH modelling for the conditional scale. We examine the L1-estimation of ARCH models and derive the limiting distributions of the estimators. A robust standardized absolute residual autocorrelation based on least absolute deviation estimation is proposed. Then a robust portmanteau statistic is constructed to test the adequacy of the model, especially the specification of the conditional scale. We obtain their asymptotic distributions under mild conditions. Examples show that the suggested L1-norm estimators and the goodness-of-fit test are robust against error distributions and are accurate for moderate sample sizes. This paper provides a useful tool in modelling conditional heteroscedastic time series data. Copyright © 2001 John Wiley & Sons, Ltd.

19.
Structural change and the combination of forecasts
Forecasters are generally concerned about the properties of model-based predictions in the presence of structural change. In this paper, it is argued that forecast errors can under those conditions be greatly reduced through systematic combination of forecasts. We propose various extensions of the standard regression-based theory of forecast combination. Rolling weighted least squares and time-varying parameter techniques are shown to be useful generalizations of the basic framework. Numerical examples, based on various types of structural change in the constituent forecasts, indicate that the potential reduction in forecast error variance through these methods is very significant. The adaptive nature of these updating procedures greatly enhances the effect of risk-spreading embodied in standard combination techniques.  相似文献   
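The rolling weighted least squares idea can be sketched directly: re-estimate regression-based combination weights on a moving window so they adapt when one constituent forecast degrades. The structural-change scenario below (two forecasters whose accuracies swap halfway through) is an illustrative construction, not the paper's numerical examples:

```python
import numpy as np

def rolling_combination(y, f, window=100):
    """Regression-based forecast combination (constant plus weights on each
    constituent forecast), with weights re-estimated on a rolling window --
    a minimal sketch of the adaptive combination the paper advocates."""
    n, _ = f.shape
    combined = np.full(n, np.nan)
    for t in range(window, n):
        X = np.column_stack([np.ones(window), f[t - window:t]])
        w = np.linalg.lstsq(X, y[t - window:t], rcond=None)[0]
        combined[t] = w[0] + f[t] @ w[1:]
    return combined

rng = np.random.default_rng(6)
n = 600
s = rng.normal(size=n)                             # target: y_t = s_t
y = s.copy()
noise1, noise2 = rng.normal(size=n), rng.normal(size=n)
scale1 = np.where(np.arange(n) < 300, 0.1, 1.0)    # forecaster 1 degrades at the break
scale2 = np.where(np.arange(n) < 300, 1.0, 0.1)    # forecaster 2 improves at the break
f = np.column_stack([s + scale1 * noise1, s + scale2 * noise2])

combined = rolling_combination(y, f)
mask = ~np.isnan(combined)
mse_c = np.mean((combined[mask] - y[mask]) ** 2)
mse_1 = np.mean((f[mask, 0] - y[mask]) ** 2)
mse_2 = np.mean((f[mask, 1] - y[mask]) ** 2)
```

Because the weights migrate toward whichever forecast is currently accurate, the combination's error variance stays well below that of either constituent across the break.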

20.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non-linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non-linearity in the unemployment series. Only recently have there been some developments in applying non-linear models to estimate and forecast unemployment rates. A major concern of non-linear modelling is the model specification problem; it is very hard to test all possible non-linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non-linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out-of-sample forecast results obtained by the ANN models with those obtained by several linear and non-linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
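A back-propagation network of the kind applied here can be written out in a few lines: one tanh hidden layer trained by plain gradient descent on squared error. Everything below is a toy on synthetic nonlinear data; the architecture, hyperparameters, and target function are illustrative assumptions, not the paper's unemployment models:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, (400, 1))
y = np.sin(2 * x) + 0.05 * rng.normal(size=(400, 1))   # nonlinear target

H = 16                                     # hidden units
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)               # hidden layer
    return h, h @ W2 + b2                  # linear output

_, pred0 = forward(x)
mse0 = np.mean((pred0 - y) ** 2)           # error before training

lr = 0.1
for _ in range(3000):                      # full-batch gradient descent
    h, pred = forward(x)
    err = pred - y
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)       # back-propagate through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(x)
mse1 = np.mean((pred1 - y) ** 2)           # error after training
```

The network learns the nonlinear shape without any specification search, which is precisely the appeal the abstract attributes to ANN models over hand-specified nonlinear forms.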
