20 similar documents found.
1.
In this paper, we investigate the time series properties of S&P 100 volatility and the forecasting performance of different volatility models. We consider several nonparametric and parametric volatility measures, such as implied, realized and model‐based volatility, and show that these volatility processes exhibit an extremely slow mean‐reverting behavior and possible long memory. For this reason, we explicitly model the near‐unit root behavior of volatility and construct median unbiased forecasts by approximating the finite‐sample forecast distribution using bootstrap methods. Furthermore, we produce prediction intervals for the next‐period implied volatility that provide important information about the uncertainty surrounding the point forecasts. Finally, we apply intercept corrections to forecasts from misspecified models, which dramatically improves the accuracy of the volatility forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
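The bootstrap idea in the abstract above can be sketched in a few lines: fit an AR(1) to the volatility series, resample its residuals to approximate the finite-sample forecast distribution, and read off the median as the point forecast and percentiles as a prediction interval. This is an illustrative stand-in, not the paper's exact median-unbiased procedure; the function names and parameter values are ours.

```python
import random


def ols_ar1(x):
    """OLS estimates (intercept, slope, residuals) for x_t = a + b*x_{t-1} + e_t."""
    y, lag = x[1:], x[:-1]
    n = len(y)
    mlag, my = sum(lag) / n, sum(y) / n
    b = sum((l - mlag) * (v - my) for l, v in zip(lag, y)) / sum((l - mlag) ** 2 for l in lag)
    a = my - b * mlag
    resid = [v - a - b * l for v, l in zip(y, lag)]
    return a, b, resid


def bootstrap_forecast(x, horizon=1, reps=2000, seed=0):
    """Approximate the finite-sample forecast distribution by resampling residuals."""
    rng = random.Random(seed)
    a, b, resid = ols_ar1(x)
    sims = []
    for _ in range(reps):
        level = x[-1]
        for _ in range(horizon):
            level = a + b * level + rng.choice(resid)
        sims.append(level)
    sims.sort()
    median = sims[len(sims) // 2]                              # point forecast
    lo, hi = sims[int(0.05 * reps)], sims[int(0.95 * reps)]    # 90% prediction interval
    return median, (lo, hi)
```

For a highly persistent series (slope near one, as the abstract describes), the resulting interval is wide, which is exactly the forecast uncertainty the prediction intervals are meant to convey.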
2.
In a conditional predictive ability test framework, we investigate whether market factors influence the relative conditional predictive ability of realized measures (RMs) and implied volatility (IV). This framework allows us to examine the asynchronism in their forecasting accuracy and to analyze their unconditional performance in volatility forecasting. Our results show that the asynchronism is statistically significant and strongly related to certain market factors, and that our comparison of the average forecast performance of RMs and IV is more efficient than in previous studies. Finally, we use the factors to extend the empirical similarity (ES) approach for combining forecasts derived from RMs and IV.
3.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
4.
While much research related to forecasting return volatility does so in a univariate setting, this paper includes proxies for information flows to forecast intra‐day volatility for the IBEX 35 futures market. The belief is that volume or the number of transactions conveys important information about the market that may be useful in forecasting. Our results suggest that augmenting a variety of GARCH‐type models with these proxies leads to improved forecasts across a range of intra‐day frequencies. Furthermore, our results present an interesting picture whereby the PARCH model generally performs well at the highest frequencies and shorter forecasting horizons, whereas the component model performs well at lower frequencies and longer forecast horizons. Both models attempt to capture long memory; the PARCH model allows for exponential decay in the autocorrelation function, while the component model captures trend volatility, which dominates over a longer horizon. These characteristics are likely to explain the success of each model. Copyright © 2013 John Wiley & Sons, Ltd.
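The kind of augmentation described above can be illustrated with a GARCH(1,1) variance recursion extended by a lagged volume proxy. The linear form of the volume term and the coefficient values below are illustrative assumptions, not estimates from the paper.

```python
def garch_x_filter(returns, volume, omega=0.01, alpha=0.05, beta=0.9, gamma=0.02):
    """Filter conditional variances from a GARCH(1,1) recursion augmented
    with an exogenous volume proxy:

        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1} + gamma * v_{t-1}
    """
    # Initialize at the unconditional variance of the plain GARCH(1,1) part.
    sigma2 = [omega / (1 - alpha - beta)]
    for r, v in zip(returns[:-1], volume[:-1]):
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1] + gamma * v)
    return sigma2
```

In practice the coefficients would be estimated by maximum likelihood; the filter above only shows how the volume proxy enters the one-step-ahead variance.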
5.
Martin Feldkircher, Journal of Forecasting, 2012, 31(4): 361–376
In this study we evaluate the forecast performance of model‐averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed ‘marginal’ likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of change. Lastly, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold‐out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction of change forecasts (model averaging). We finally apply these recommendations to forecasting the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.
6.
Gabriele Di Filippo, Journal of Forecasting, 2015, 34(8): 619–648
The paper forecasts consumer price inflation in the euro area (EA) and in the USA between 1980:Q1 and 2012:Q4 based on a large set of predictors, with dynamic model averaging (DMA) and dynamic model selection (DMS). DMA/DMS allows not only the coefficients but also the entire forecasting model to change over time. DMA/DMS provides on average the best inflation forecasts relative to alternative approaches (such as the random walk). DMS outperforms DMA. These results are robust across different sample periods and forecast horizons. The paper highlights common features between the USA and the EA. First, two groups of predictors forecast inflation: temporary fundamentals that have a frequent impact on inflation but only for short time periods; and persistent fundamentals whose switches are less frequent over time. Second, the importance of some variables (particularly international food commodity prices, house prices and oil prices) as predictors for consumer price index inflation increases when such variables experience large shocks. The paper also shows that significant differences prevail in the forecasting models between the USA and the EA. Such differences can be explained by the structure of these respective economies. Copyright © 2015 John Wiley & Sons, Ltd.
7.
This paper evaluates the performance of conditional variance models using high‐frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH (1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is, however, degraded by the non‐normality of mean‐adjusted returns, normality being an assumption of the conditional variance models. Nevertheless, the optimum frequency remained the same even in the case of different models (EGARCH and PARCH) and a different error distribution (the generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations, or EGARCH and PARCH models, would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.
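The realized-volatility benchmark used above is simply the sum of squared intraday log returns at a chosen sampling step; the optimal 30-minute frequency is an empirical finding of the paper, not something the formula implies. A minimal sketch, assuming a series of intraday log-prices:

```python
import math


def realized_variance(log_prices, step):
    """Sum of squared log returns sampled every `step` observations."""
    sampled = log_prices[::step]
    return sum((b - a) ** 2 for a, b in zip(sampled, sampled[1:]))


def realized_volatility(log_prices, step):
    """Realized volatility is the square root of realized variance."""
    return math.sqrt(realized_variance(log_prices, step))
```

Coarser sampling (a larger `step`) reduces microstructure noise but uses fewer observations; the paper's comparison across frequencies is a way of trading off these two effects.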
8.
Nima Nonejad, Journal of Forecasting, 2020, 39(7): 1119–1141
We investigate whether crude oil price volatility is predictable by conditioning on macroeconomic variables. We consider a large number of predictors, take into account the possibility that relative predictive performance varies over the out-of-sample period, and shed light on the economic drivers of crude oil price volatility. Results using monthly data from 1983:M1 to 2018:M12 document that variables related to crude oil production, economic uncertainty and variables that either describe the current stance or provide information about the future state of the economy forecast crude oil price volatility at the population level 1 month ahead. On the other hand, evidence of finite-sample predictability is very weak. A detailed examination of our out-of-sample results using the fluctuation test suggests that this is because relative predictive performance changes drastically over the out-of-sample period. The predictive power associated with the more successful macroeconomic variables concentrates around the Great Recession until 2015. They also generate the strongest signal of a decrease in the price of crude oil towards the end of 2008.
9.
We observe that daily highs and lows of stock prices do not diverge over time and, hence, adopt the cointegration concept and the related vector error correction model (VECM) to model the daily high, the daily low, and the associated daily range data. The in‐sample results attest to the importance of incorporating high–low interactions in modeling the range variable. In evaluating the out‐of‐sample forecast performance using both mean‐squared forecast error and direction of change criteria, it is found that the VECM‐based low and high forecasts offer some advantages over alternative forecasts. The VECM‐based range forecasts, on the other hand, do not always dominate—the forecast rankings depend on the choice of evaluation criterion and the variables being forecast. Copyright © 2008 John Wiley & Sons, Ltd.
10.
We decompose economic uncertainty into "good" and "bad" components according to the sign of innovations. Our results indicate that bad uncertainty provides stronger predictive content regarding future market volatility than good uncertainty. The asymmetric models with good and bad uncertainties forecast market volatility better than the symmetric models with overall uncertainty. The combination of asymmetric uncertainty models significantly outperforms the autoregressive benchmark, as well as the combination of symmetric models. The revealed volatility predictability is further demonstrated to be economically significant in a portfolio allocation framework.
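The sign-based decomposition described above can be sketched directly: square each innovation and attribute it to the "good" or "bad" component according to its sign. This is a simplified stand-in for the paper's uncertainty measures, intended only to show the mechanics.

```python
def decompose_uncertainty(innovations):
    """Split squared innovations into 'good' (positive sign) and
    'bad' (negative sign) uncertainty components."""
    good = sum(e * e for e in innovations if e > 0)
    bad = sum(e * e for e in innovations if e < 0)
    return good, bad
```

By construction the two components add up to the total sum of squared innovations, so the decomposition reallocates, rather than changes, overall uncertainty.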
11.
Volatility models such as GARCH, although misspecified with respect to the data‐generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non‐linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non‐linear transformations of these forecasts will be biased estimators of the same non‐linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non‐linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non‐linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in‐sample and out‐of‐sample forecasts of three non‐linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
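The Jensen's-inequality mechanism this abstract analyzes is easy to reproduce: for the concave square-root transform, the mean of transformed variances always lies below the transform of the mean, so a variance-unbiased forecast yields a downward-biased volatility (standard deviation) forecast. A small stdlib simulation, with a distributional choice that is ours:

```python
import math
import random

rng = random.Random(0)

# Simulated integrated variances: positive and dispersed, so the Jensen gap is visible.
iv = [rng.gauss(0.0, 1.0) ** 2 + 0.5 for _ in range(10_000)]

mean_of_sqrt = sum(math.sqrt(v) for v in iv) / len(iv)   # E[sqrt(X)]
sqrt_of_mean = math.sqrt(sum(iv) / len(iv))              # sqrt(E[X])

# Concavity of sqrt implies E[sqrt(X)] < sqrt(E[X]) for any non-degenerate X,
# which is the source of the unconditional bias the paper approximates.
jensen_gap = sqrt_of_mean - mean_of_sqrt
```

The gap grows with the dispersion of the variance process, consistent with the paper's finding that the bias depends on the volatility of the forecast variance and of the integrated variance.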
12.
Peter McAdam, Anders Warne, Journal of Forecasting, 2024, 43(5): 1153–1172
Euro area real-time density forecasts from three dynamic stochastic general equilibrium (DSGE) and three Bayesian vector autoregression (BVAR) models are compared with six combination methods over the sample 2001Q1–2019Q4. The terms information lag and observation lag are introduced to distinguish the time shifts between data vintages and actuals used, respectively, to compute model weights and to evaluate the forecasts. Bounds for finite mixture combinations are presented, allowing for benchmarking them given the models. Empirically, combinations with limited weight variation often improve upon the individual models for the output forecasts and the joint forecasts with inflation. This reflects overconfident BVAR forecasts before the Great Recession. For inflation, a BVAR model typically performs best.
13.
The linear multiregression dynamic model (LMDM) is a Bayesian dynamic model which preserves any conditional independence and causal structure across a multivariate time series. The conditional independence structure is used to model the multivariate series by separate (conditional) univariate dynamic linear models, where each series has contemporaneous variables as regressors in its model. Calculating the forecast covariance matrix (which is required for calculating forecast variances in the LMDM) is not always straightforward in its current formulation. In this paper we introduce a simple algebraic form for calculating LMDM forecast covariances. Calculation of the covariance between model regression components can also be useful and we shall present a simple algebraic method for calculating these component covariances. In the LMDM formulation, certain pairs of series are constrained to have zero forecast covariance. We shall also introduce a possible method to relax this restriction. Copyright © 2008 John Wiley & Sons, Ltd.
14.
James W. Taylor, Journal of Forecasting, 1999, 18(2): 111–128
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate before introducing a more suitable procedure which is based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression‐based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non‐parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
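A crude companion to the quantile-evaluation procedure discussed above is the unconditional coverage check: the fraction of returns falling below a tau-quantile forecast should be close to tau. This is far simpler than the paper's regression-based bias test; the function below is our illustration, not the paper's procedure.

```python
def empirical_coverage(returns, quantile_forecasts):
    """Fraction of observations falling below the forecast quantile.
    For an unbiased tau-quantile estimator this should be close to tau."""
    hits = sum(r < q for r, q in zip(returns, quantile_forecasts))
    return hits / len(returns)
```

A coverage rate systematically above or below the nominal tau signals quantile estimator bias, which is exactly what the regression-based procedure tests more formally.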
15.
Previous research found that the US business cycle leads the European one by a few quarters, and can therefore be useful in predicting euro area gross domestic product (GDP). In this paper we investigate whether additional predictive power can be gained by adding selected financial variables belonging to either the USA or the euro area. We use vector autoregressions (VARs) that include the US and euro area GDPs as well as growth in the Rest of the World and selected combinations of financial variables. Out‐of‐sample root mean square forecast errors (RMSEs) show that adding financial variables produces a slightly smaller error in forecasting US economic activity. This weak macro‐financial linkage is even weaker in the euro area, where financial indicators do not improve short‐ and medium‐term GDP forecasts even when their timely availability relative to GDP is exploited. It can be conjectured that neither US nor European financial variables help predict euro area GDP because the US GDP has already embodied this information. However, we show that the finding that financial variables have no predictive power for future activity in the euro area relates to the unconditional nature of the RMSE metric. When forecasting ability is assessed as if in real time (i.e. conditionally on the information available at the time when forecasts are made), we find that models using financial variables would have been preferred in several episodes and in particular between 1999 and 2002. Copyright © 2011 John Wiley & Sons, Ltd.
16.
For leverage heterogeneous autoregressive (LHAR) models with jumps and other covariates, called LHARX models, multistep forecasts are derived. Some optimal properties of the forecasts in terms of conditional volatilities are discussed, which suggests modeling conditional volatility for the return but not for the LHARX regression error or the other covariates. Forecast standard errors are constructed, for which we need to model conditional volatilities both for the return and for the LHAR regression error and other covariates. The proposed methods are well illustrated by forecast analysis for the realized volatilities of the US stock price indexes: the S&P 500, the NASDAQ, the DJIA, and the RUSSELL indexes.
17.
Mohammad Mohammadi, Journal of Forecasting, 2017, 36(7): 859–866
Best predictions are proposed for generalized autoregressive conditional heteroskedasticity (GARCH) models with α‐stable innovations, α‐stable power‐GARCH models, and autoregressive moving average models with GARCH-in-mean effects (ARMA‐GARCH‐M). We present a sufficient condition for stationarity of α‐stable GARCH models. The prediction methods are easy to implement in practice. The proposed prediction methods are applied to predicting future values of the daily S&P 500 stock market index and wind speed data.
18.
The variance of a portfolio can be forecast using a single‐index model or the covariance matrix of the portfolio. Using univariate and multivariate conditional volatility models, this paper evaluates the performance of the single‐index and portfolio models in forecasting value‐at‐risk (VaR) thresholds of a portfolio. Likelihood ratio tests of unconditional coverage, independence and conditional coverage of the VaR forecasts suggest that the single‐index model leads to excessive and often serially dependent violations, while the portfolio model leads to too few violations. The single‐index model also leads to lower daily Basel Accord capital charges. The univariate models which display correct conditional coverage lead to higher capital charges than models which lead to too many violations. Overall, the Basel Accord penalties appear to be too lenient and favour models which have too many violations. Copyright © 2008 John Wiley & Sons, Ltd.
19.
S. Mahdi Barakchian, Journal of Forecasting, 2012, 31(5): 401–422
Do long‐run equilibrium relations suggested by economic theory help to improve the forecasting performance of a cointegrated vector error correction model (VECM)? In this paper we try to answer this question in the context of a two‐country model developed for the Canadian and US economies. We compare the forecasting performance of the exactly identified cointegrated VECMs to the performance of the over‐identified VECMs with the long‐run theory restrictions imposed. We allow for model uncertainty and conduct this comparison for every possible combination of the cointegration ranks of the Canadian and US models. We show that the over‐identified structural cointegrated models generally outperform the exactly identified models in forecasting Canadian macroeconomic variables. We also show that the pooled forecasts generated from the over‐identified models beat most of the individual exactly identified and over‐identified models as well as the VARs in levels and in differences. Copyright © 2011 John Wiley & Sons, Ltd.
20.
We investigate the dynamic properties of the realized volatility of five agricultural commodity futures by employing high‐frequency data from Chinese markets and find that the realized volatility exhibits both long memory and regime switching. To capture these properties simultaneously, we utilize a Markov switching autoregressive fractionally integrated moving average (MS‐ARFIMA) model to forecast the realized volatility by combining the long memory process with a regime‐switching component, and compare its forecast performance with that of competing models at various horizons. The full‐sample estimation results show that the dynamics of the realized volatility of agricultural commodity futures are characterized by two levels of long memory: one associated with the low‐volatility regime and the other with the high‐volatility regime, and the probability of staying in the low‐volatility regime is higher than that of staying in the high‐volatility regime. The out‐of‐sample forecast results show that combining long memory with switching regimes improves the performance of realized volatility forecasts, and the proposed model delivers superior out‐of‐sample realized volatility forecasts relative to the competing models. Copyright © 2016 John Wiley & Sons, Ltd.