20 similar documents found.
1.
In this paper we compare several multi‐period volatility forecasting models, specifically from the MIDAS and HAR families. We perform our comparisons in terms of out‐of‐sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra‐daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.
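The realized measures used as regressors above can be sketched in a few lines. This is a minimal pure-Python illustration, not the authors' code; function names and the toy inputs are my own, and the bipower-variation scaling uses the standard constant μ₁ = √(2/π).

```python
import math

def realized_variance(returns):
    """Sum of squared intraday returns: estimates the full quadratic variation."""
    return sum(r * r for r in returns)

def realized_bipower_variation(returns):
    """Scaled sum of products of adjacent absolute returns; robust to jumps,
    so it estimates only the continuous (integrated-variance) part."""
    mu1 = math.sqrt(2.0 / math.pi)  # E|Z| for Z ~ N(0, 1)
    return (1.0 / mu1 ** 2) * sum(
        abs(a) * abs(b) for a, b in zip(returns[1:], returns[:-1])
    )

def jump_component(returns):
    """Non-negative estimate of the discontinuous (jump) part: RV - BPV, floored at 0."""
    return max(realized_variance(returns) - realized_bipower_variation(returns), 0.0)
```

Truncating the difference RV − BPV at zero is the usual way of separating the jump part from the continuous sample path variation, as in the nonparametric procedure referred to above.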
2.
Philip Hans Franses; Jiahui Zou; Wendun Wang 《Journal of forecasting》2024,43(8):3194-3202
This paper puts forward a new and simple method to combine forecasts, which is particularly useful when the forecasts are strongly correlated. It is based on the Mincer-Zarnowitz regression and a subsequent determination, using Shapley values, of the weights of the forecasts in a new combination. For a stylized case, it is proved that such a Shapley-value-based combination improves upon an equal-weight combination. Simulation experiments and a detailed illustration show the merits of the Shapley-value-based forecast combination.
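For intuition, here is a stylized two-forecast sketch of a Shapley-value-based combination. It is only an illustration under a simplifying assumption of my own: a coalition's value is the MSE improvement of its equal-weight average over an unconditional-mean benchmark. The paper itself builds the combination from the Mincer-Zarnowitz regression, which this sketch does not reproduce.

```python
def mse(pred, actual):
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

def shapley_weights_two(f1, f2, y):
    """Shapley values of the MSE reduction achieved by each forecast.
    Coalition value v(S) = MSE(mean benchmark) - MSE(average of forecasts in S)."""
    ybar = sum(y) / len(y)
    base = mse([ybar] * len(y), y)           # benchmark: unconditional mean
    avg = [(a + b) / 2 for a, b in zip(f1, f2)]
    v1, v2 = base - mse(f1, y), base - mse(f2, y)
    v12 = base - mse(avg, y)
    # Closed-form two-player Shapley values (average marginal contributions)
    phi1 = 0.5 * v1 + 0.5 * (v12 - v2)
    phi2 = 0.5 * v2 + 0.5 * (v12 - v1)
    return phi1, phi2

def combine_with_shapley(f1, f2, y):
    phi1, phi2 = shapley_weights_two(f1, f2, y)
    # Normalize Shapley values into weights (assumes both are positive).
    w1 = phi1 / (phi1 + phi2)
    return [w1 * a + (1 - w1) * b for a, b in zip(f1, f2)]
```

By the efficiency property, the two Shapley values sum to the value of the grand coalition, so the normalization simply shares that total between the forecasts.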
3.
James W. Taylor 《Journal of forecasting》1999,18(2):111-128
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate before introducing a more suitable procedure which is based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression‐based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non‐parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
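The regression framework referred to here is the Mincer-Zarnowitz-style regression of realized values on forecasts, where an unbiased forecast implies an intercept of 0 and a slope of 1 jointly. A minimal closed-form OLS sketch (toy inputs of my own; the paper's actual procedure and test statistics are not reproduced):

```python
def mincer_zarnowitz(forecast, actual):
    """Simple OLS of realized values on forecasts:
    actual = alpha + beta * forecast + error.
    Forecast unbiasedness corresponds to alpha = 0 and beta = 1."""
    n = len(actual)
    fbar = sum(forecast) / n
    ybar = sum(actual) / n
    sxx = sum((f - fbar) ** 2 for f in forecast)
    sxy = sum((f - fbar) * (y - ybar) for f, y in zip(forecast, actual))
    beta = sxy / sxx
    alpha = ybar - beta * fbar
    return alpha, beta
```

In practice one would test alpha = 0 and beta = 1 jointly (e.g. with a Wald test) rather than read the point estimates alone; that inference step is omitted here.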
4.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are implemented on returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
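One of the simple performance-based schemes in this family weights each model by the inverse of its historical MSE. A small sketch; the input layout (one forecast-error series per model) is assumed for illustration and is not taken from the paper:

```python
def inverse_mse_weights(errors_by_model):
    """Combination weights proportional to the inverse of each model's
    historical mean squared forecast error. Better past performance
    (lower MSE) earns a larger weight; weights sum to one."""
    mses = [sum(e * e for e in errs) / len(errs) for errs in errors_by_model]
    inv = [1.0 / m for m in mses]
    total = sum(inv)
    return [w / total for w in inv]
```

Equal weights, the median forecast, and regression-based weights are the other common baselines such comparisons include.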
5.
Carlos Díaz 《Journal of forecasting》2018,37(3):316-326
This paper shows how to extract the density of information shocks from revisions of the Bank of England's inflation density forecasts. An information shock is defined in this paper as a random variable that contains the set of information made available between two consecutive forecasting exercises and that has been incorporated into a revised forecast for a fixed point event. Studying the moments of these information shocks can be useful in understanding how the Bank has changed its assessment of risks surrounding inflation in the light of new information, and how it has modified its forecasts accordingly. The variance of the information shock is interpreted in this paper as a new measure of ex ante inflation uncertainty: the uncertainty that the Bank anticipates the information perceived in a particular quarter will pose for inflation. A measure of information absorption is also proposed, indicating the approximate proportion of the information content in a revised forecast that is attributable to information made available since the last forecast release.
6.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of the out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
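One generic way to emphasise certain losses within the estimation period is to discount past losses exponentially before inverting them, so that recent performance dominates the weights. This is a sketch under assumptions of my own, not the authors' persistence-based scheme; the discount factor `delta` and the input layout are illustrative:

```python
def discounted_inverse_loss_weights(losses_by_model, delta=0.9):
    """Combination weights inversely proportional to exponentially
    discounted past losses. Each inner list is ordered oldest-to-newest,
    so the most recent loss gets discount weight delta**0 = 1."""
    scores = []
    for losses in losses_by_model:
        T = len(losses)
        scores.append(sum(delta ** (T - 1 - t) * losses[t] for t in range(T)))
    inv = [1.0 / s for s in scores]
    total = sum(inv)
    return [w / total for w in inv]
```

With `delta` close to 1 this collapses toward plain inverse-average-loss weighting; smaller `delta` makes the weights react faster to shifts in relative performance.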
7.
In multivariate volatility prediction, identifying the optimal forecasting model is not always a feasible task. This is mainly due to the curse of dimensionality typically affecting multivariate volatility models. In practice only a subset of the potentially available models can be effectively estimated, after imposing severe constraints on the dynamic structure of the volatility process. It follows that in most applications the working forecasting model can be severely misspecified. This situation leaves scope for the application of forecast combination strategies as a tool for improving the predictive accuracy. The aim of the paper is to propose some alternative combination strategies and compare their performances in forecasting high‐dimensional multivariate conditional covariance matrices for a portfolio of US stock returns. In particular, we will consider the combination of volatility predictions generated by multivariate GARCH models, based on daily returns, and dynamic models for realized covariance matrices, built from intra‐daily returns. Copyright © 2015 John Wiley & Sons, Ltd.
8.
In this paper, we examine the use of non‐parametric Neural Network Regression (NNR) and Recurrent Neural Network (RNN) regression models for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. Both the results of the NNR and RNN models are benchmarked against the simpler GARCH alternative and implied volatility. Two simple model combinations are also analysed. The intuitively appealing idea of developing a nonlinear nonparametric approach to forecast FX volatility, identify mispriced options and subsequently develop a trading strategy based upon this process is implemented for the first time on a comprehensive basis. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. These models are then tested out‐of‐sample over the period April 1999–May 2000, not only in terms of forecasting accuracy, but also in terms of trading efficiency: in order to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most trading strategies retained produce positive returns. RNN models appear as the best single modelling approach; yet, somewhat surprisingly, model combination, which has the best overall performance in terms of forecasting accuracy, fails to improve the RNN‐based volatility trading results. Another conclusion from our results is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate. Copyright © 2002 John Wiley & Sons, Ltd.
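The GARCH benchmark mentioned above produces multi-step variance forecasts by iterating its recursion, which mean-reverts toward the unconditional variance. A self-contained sketch with illustrative parameter values (the paper estimates its parameters from FX data; nothing here reproduces those estimates):

```python
def garch11_forecast(returns, omega, alpha, beta, horizon=1):
    """Filter the GARCH(1,1) recursion
        sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t
    through the sample, then iterate forward: each extra step shrinks the
    forecast toward the unconditional variance omega / (1 - alpha - beta)."""
    sigma2 = omega / (1.0 - alpha - beta)   # initialize at the unconditional variance
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    forecasts = []
    f = sigma2                               # one-step-ahead forecast after filtering
    for _ in range(horizon):
        forecasts.append(f)
        f = omega + (alpha + beta) * f       # multi-step recursion
    return forecasts
```

The multi-step recursion follows from taking expectations of the one-step equation, since E[r_t**2] equals the conditional variance.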
9.
In this paper we study the performance of the GARCH model and two of its non-linear modifications to forecast weekly stock market volatility. The models are the Quadratic GARCH (Engle and Ng, 1993) and the Glosten, Jagannathan and Runkle (1992) models which have been proposed to describe, for example, the often observed negative skewness in stock market indices. We find that the QGARCH model is best when the estimation sample does not contain extreme observations such as the 1987 stock market crash and that the GJR model cannot be recommended for forecasting.
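The GJR model referred to above augments the GARCH variance recursion with an indicator for negative returns, so that bad news raises next-period volatility by more than good news of the same size. A minimal sketch with hypothetical parameter values (not estimates from the paper):

```python
def gjr_garch_filter(returns, omega, alpha, gamma, beta, sigma2_0):
    """GJR conditional variance path:
        sigma2_t = omega + (alpha + gamma * I[r_{t-1} < 0]) * r_{t-1}**2 + beta * sigma2_{t-1}
    The indicator term (gamma > 0) captures the leverage effect behind the
    negative skewness discussed in the abstract."""
    sigma2 = sigma2_0
    path = []
    for r in returns:
        indicator = 1.0 if r < 0 else 0.0
        sigma2 = omega + (alpha + gamma * indicator) * r * r + beta * sigma2
        path.append(sigma2)
    return path
```

With gamma = 0 this reduces to plain GARCH(1,1); the QGARCH variant instead shifts the squared-return term, which this sketch does not implement.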
10.
This paper uses high‐frequency continuous intraday electricity price data from the EPEX market to estimate and forecast realized volatility. Three different jump tests are used to break down the variation into jump and continuous components using quadratic variation theory. Several heterogeneous autoregressive models are then estimated for the logarithmic and standard deviation transformations. Generalized autoregressive conditional heteroskedasticity (GARCH) structures are included in the error terms of the models when evidence of conditional heteroskedasticity is found. Model selection is based on various out‐of‐sample criteria. Results show that decomposition of realized volatility is important for forecasting and that the decision whether to include GARCH‐type innovations might depend on the transformation selected. Finally, results are sensitive to the jump test used in the case of the standard deviation transformation.
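Heterogeneous autoregressive (HAR) models regress realized volatility on its lagged daily value and its weekly and monthly averages. A sketch of the regressor construction only (the 1/5/22-day windows are the conventional choice in the HAR literature, assumed here rather than taken from this paper):

```python
def har_regressors(rv):
    """Build HAR design rows from a realized-volatility series `rv`
    (one observation per day, oldest first). For each target date t the
    regressors are: yesterday's RV, the 5-day average, and the 22-day
    average. Returns (X, y) for a one-step-ahead OLS fit."""
    X, y = [], []
    for t in range(22, len(rv)):
        daily = rv[t - 1]
        weekly = sum(rv[t - 5:t]) / 5
        monthly = sum(rv[t - 22:t]) / 22
        X.append((daily, weekly, monthly))
        y.append(rv[t])
    return X, y
```

For the logarithmic or standard deviation transformations studied above, one would simply pass log(rv) or sqrt(rv) through the same construction.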
11.
Mauro Costantini; Jesus Crespo Cuaresma; Jaroslava Hlouskova 《Journal of forecasting》2016,35(7):652-668
We provide a comprehensive study of out‐of‐sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, tests that are free of data-snooping bias are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, help to improve over benchmark trading strategies, although the excess return per unit of deviation is limited. Copyright © 2016 John Wiley & Sons, Ltd.
12.
This paper introduces a novel generalized autoregressive conditional heteroskedasticity–mixed data sampling–extreme shocks (GARCH-MIDAS-ES) model for stock volatility to examine whether the importance of extreme shocks changes in different time ranges. Based on different combinations of the short- and long-term effects caused by extreme events, we extend the standard GARCH-MIDAS model to characterize the different responses of the stock market for short- and long-term horizons, separately or in combination. The unique timespan of nearly 100 years of the Dow Jones Industrial Average (DJIA) daily returns allows us to understand the stock market volatility under extreme shocks from a historical perspective. The in-sample empirical results clearly show that the DJIA stock volatility is best fitted to the GARCH-MIDAS-SLES model by including the short- and long-term impacts of extreme shocks for all forecasting horizons. The out-of-sample results and robustness tests emphasize the significance of decomposing the effect of extreme shocks into short- and long-term effects to improve the accuracy of the DJIA volatility forecasts.
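The MIDAS component of such models aggregates many lagged low-frequency observations into the long-run variance using beta-polynomial lag weights. A sketch of that standard weighting scheme only, with illustrative shape parameters (the paper's specific extreme-shock terms are not reproduced):

```python
def midas_beta_weights(K, a=1.0, b=5.0):
    """Beta-polynomial MIDAS lag weights:
        w_k ∝ (x_k)**(a-1) * (1 - x_k)**(b-1),  x_k = k / (K + 1),
    normalized to sum to one. With a = 1 and b > 1 the weights decay
    smoothly as the lag k grows, so recent observations dominate."""
    grid = [k / (K + 1) for k in range(1, K + 1)]   # keep endpoints off 0 and 1
    raw = [x ** (a - 1) * (1 - x) ** (b - 1) for x in grid]
    total = sum(raw)
    return [r / total for r in raw]
```

In a GARCH-MIDAS model these weights are applied to lagged realized variances (e.g. monthly RVs) to form the slowly moving long-run component, which multiplies a unit-mean short-run GARCH component.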
13.
Multifractal models have recently been introduced as a new type of data‐generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out‐of‐sample performance is compared against other popular time series specifications. Using an intra‐day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV‐ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and evaluation criteria), the new model performs often significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
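The Levinson-Durbin algorithm mentioned above solves the Yule-Walker equations recursively, yielding the coefficients of the best linear forecast of a given order from a (model-implied) autocovariance sequence. A standard textbook implementation, not the authors' code:

```python
def levinson_durbin(autocov, order):
    """Solve the Yule-Walker equations by the Levinson-Durbin recursion.
    `autocov` holds gamma_0 .. gamma_order. Returns the predictor
    coefficients phi_1 .. phi_order and the final prediction-error variance."""
    phi = [0.0] * (order + 1)   # phi[1..k] after step k; phi[0] unused
    error = autocov[0]
    for k in range(1, order + 1):
        acc = autocov[k] - sum(phi[j] * autocov[k - j] for j in range(1, k))
        kappa = acc / error      # reflection (partial autocorrelation) coefficient
        new_phi = phi[:]
        new_phi[k] = kappa
        for j in range(1, k):    # update lower-order coefficients
            new_phi[j] = phi[j] - kappa * phi[k - j]
        phi = new_phi
        error *= (1.0 - kappa * kappa)
    return phi[1:], error
```

For an AR(1)-type autocovariance sequence gamma_k = rho**k the recursion recovers phi_1 = rho with all higher coefficients zero, a useful sanity check.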
14.
This study examines the intraday S&P 500 implied volatility index (VIX) to determine when the index contains the most information for volatility forecasting. The findings indicate that, in general, VIX levels around noon are most informative for predicting realized volatility. We posit that the VIX performs better during this time period because trading motivation around noon is less complex, and therefore trades contain more information on the market expectation of future volatility. Further investigation on the 2008 financial crisis period suggests that market participants become more cautious, and thus the forecasting performance is sustained until the market's close. Copyright © 2015 John Wiley & Sons, Ltd.
15.
Fabian Baetje 《Journal of forecasting》2018,37(1):37-63
A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low‐dimensional information sets. In this study, we evaluate the predictive ability of various lately refined forecasting strategies, which handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful to predict US stock returns, that is, the market excess return, size, value, and the momentum premium. Our results show that methods combining information have remarkable in‐sample predictive ability. However, the out‐of‐sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias–efficiency trade‐off, yielding a consistently superior forecast performance for the market excess return and the size premium even after the 1970s.
16.
This paper illustrates the importance of density forecasting and forecast evaluation in portfolio decision making. The decision‐making environment is fully described for an investor seeking to optimally allocate her portfolio between long and short Treasury bills, over investment horizons of up to 2 years. We examine the impact of parameter uncertainty and predictability in bond returns on the investor's allocation and we describe how the forecasts are computed and used in this context. Both statistical and decision‐based criteria are used to assess the predictability of returns. Our results show sensitivity to the evaluation criterion used and, in the context of investment decision making under an economic value criterion, we find some potential gain for the investor from assuming predictability. Copyright © 2015 John Wiley & Sons, Ltd.
17.
In a conditional predictive ability test framework, we investigate whether market factors influence the relative conditional predictive ability of realized measures (RMs) and implied volatility (IV); this framework makes it possible to examine the asynchronism in their forecasting accuracy, and we further analyze their unconditional performance in volatility forecasting. Our results show that the asynchronism is detected significantly and is strongly related to certain market factors, and that the comparison between RMs and IV in terms of average forecast performance is more efficient than in previous studies. Finally, we use the factors to extend the empirical similarity (ES) approach for combining forecasts derived from RMs and IV.
18.
While much research related to forecasting return volatility does so in a univariate setting, this paper includes proxies for information flows to forecast intra‐day volatility for the IBEX 35 futures market. The belief is that volume or the number of transactions conveys important information about the market that may be useful in forecasting. Our results suggest that augmenting a variety of GARCH‐type models with these proxies leads to improved forecasts across a range of intra‐day frequencies. Furthermore, our results present an interesting picture whereby the PARCH model generally performs well at the highest frequencies and shorter forecasting horizons, whereas the component model performs well at lower frequencies and longer forecast horizons. Both models attempt to capture long memory; the PARCH model allows for exponential decay in the autocorrelation function, while the component model captures trend volatility, which dominates over a longer horizon. These characteristics are likely to explain the success of each model. Copyright © 2013 John Wiley & Sons, Ltd.
19.
In this study, we explore the effect of cojumps within the agricultural futures market, and cojumps between the agricultural futures market and the stock market, on stock volatility forecasting. Also, we take into account large and small components of cojumps. We have several noteworthy findings. First, large jumps may lead to more substantial fluctuations and are more powerful than small jumps. The effect of cojumps and their decompositions on future volatility are mixed. Second, a model including large and small cojumps between the agricultural futures market and the stock market can achieve a higher forecasting accuracy, implying that large and small cojumps contain more useful predictive information than cojumps themselves. Third, our conclusions are robust based on various robustness tests such as the realized kernel, expanding forecasts, different forecasting windows, different jump tests, and different threshold values.
20.
Nima Nonejad 《Journal of forecasting》2020,39(7):1119-1141
We investigate whether crude oil price volatility is predictable by conditioning on macroeconomic variables. We consider a large number of predictors, take into account the possibility that relative predictive performance varies over the out-of-sample period, and shed light on the economic drivers of crude oil price volatility. Results using monthly data from 1983:M1 to 2018:M12 document that variables related to crude oil production, economic uncertainty and variables that either describe the current stance or provide information about the future state of the economy forecast crude oil price volatility at the population level 1 month ahead. On the other hand, evidence of finite-sample predictability is very weak. A detailed examination of our out-of-sample results using the fluctuation test suggests that this is because relative predictive performance changes drastically over the out-of-sample period. The predictive power associated with the more successful macroeconomic variables concentrates around the Great Recession until 2015. They also generate the strongest signal of a decrease in the price of crude oil towards the end of 2008.