Similar Articles
20 similar articles found.
1.
Volatility models such as GARCH, although misspecified with respect to the data‐generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non‐linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non‐linear transformations of these forecasts will be biased estimators of the same non‐linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non‐linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non‐linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in‐sample and out‐of‐sample forecasts of three non‐linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
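The Jensen's-inequality bias described above can be seen in a small simulation. This is an illustrative sketch, not the paper's analytical approximation: the "integrated variance" process, its uniform distribution, and the constant (unconditionally unbiased) variance forecast are all hypothetical choices made for the demonstration.

```python
import math
import random

def jensen_bias_demo(n_days=100_000, seed=7):
    """Even if a variance forecast is unconditionally unbiased for the
    integrated variance, its square root is a biased forecast of the
    integrated standard deviation, because sqrt is concave."""
    rng = random.Random(seed)
    bias_sum = 0.0
    for _ in range(n_days):
        # Hypothetical integrated variance, varying day to day.
        iv = 0.5 + rng.random()        # uniform on [0.5, 1.5], mean 1.0
        forecast_var = 1.0             # unbiased for E[iv] = 1.0
        bias_sum += math.sqrt(forecast_var) - math.sqrt(iv)
    return bias_sum / n_days
```

By Jensen's inequality E[sqrt(iv)] < sqrt(E[iv]), so the average bias of the transformed forecast is positive, and it grows with the volatility of the integrated variance, as the abstract notes.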

2.
In this paper we compare several multi‐period volatility forecasting models, specifically from MIDAS and HAR families. We perform our comparisons in terms of out‐of‐sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra‐daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.
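The two main regressors mentioned above, realized variance and realized bipower variation, are straightforward to compute from intraday returns. The sketch below uses the standard definitions (bipower variation scaled by the square of E|Z| for a standard normal Z); the toy return series is invented for illustration only.

```python
import math

def realized_variance(intraday_returns):
    """Realized variance: sum of squared intraday returns."""
    return sum(r * r for r in intraday_returns)

def realized_bipower_variation(intraday_returns):
    """Realized bipower variation: mu1^(-2) * sum |r_i| |r_{i-1}|,
    with mu1 = E|Z| = sqrt(2/pi). Robust to jumps, because a jump
    enters each product multiplied only by an adjacent small return."""
    mu1 = math.sqrt(2.0 / math.pi)
    s = sum(abs(a) * abs(b)
            for a, b in zip(intraday_returns[1:], intraday_returns[:-1]))
    return s / (mu1 * mu1)

# Toy data: a single large jump inflates RV far more than BV.
smooth = [0.001] * 100
with_jump = smooth[:50] + [0.05] + smooth[50:]
```

Comparing `realized_variance(with_jump)` with `realized_bipower_variation(with_jump)` shows why jump-robust measures can behave differently as forecasting regressors.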

3.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has in recent years been used as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
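The basic HAR specification used here regresses tomorrow's realized volatility on daily, weekly, and monthly averages of past realized volatility. The sketch below builds those regressors and forms a one-step forecast; the coefficient values are illustrative placeholders, not estimates from the paper.

```python
def har_regressors(rv, t):
    """HAR regressors at day t: yesterday's RV, and the average RV over
    the past week (5 trading days) and month (22 trading days)."""
    daily = rv[t - 1]
    weekly = sum(rv[t - 5:t]) / 5
    monthly = sum(rv[t - 22:t]) / 22
    return daily, weekly, monthly

def har_forecast(rv, t, beta=(0.0, 0.4, 0.3, 0.25)):
    """One-step HAR-RV forecast. The betas here are illustrative only;
    in practice they are estimated by least squares."""
    b0, bd, bw, bm = beta
    d, w, m = har_regressors(rv, t)
    return b0 + bd * d + bw * w + bm * m
```

Extensions such as those in the abstract add further regressors (e.g., leverage or scaled oil price changes) to the same linear structure.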

4.
Multifractal models have recently been introduced as a new type of data‐generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out‐of‐sample performance is compared against other popular time series specifications. Using an intra‐day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV‐ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and evaluation criteria), the new model performs often significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
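The Levinson–Durbin algorithm mentioned above turns a sequence of autocovariances into the coefficients of the best linear forecast. This is a generic sketch of the recursion, independent of the multifractal model itself; the example autocovariances are a made-up AR(1)-like sequence.

```python
def levinson_durbin(gamma, order):
    """Best linear forecast coefficients phi_1..phi_order from
    autocovariances gamma[0..order], via the Levinson-Durbin recursion.
    phi[j] (0-based) multiplies the (j+1)-th most recent observation."""
    phi = [gamma[1] / gamma[0]]
    err = gamma[0] * (1.0 - phi[0] ** 2)    # prediction error variance
    for k in range(2, order + 1):
        acc = gamma[k] - sum(phi[j] * gamma[k - 1 - j] for j in range(k - 1))
        kappa = acc / err                   # reflection coefficient
        phi = [phi[j] - kappa * phi[k - 2 - j] for j in range(k - 1)] + [kappa]
        err *= 1.0 - kappa * kappa
    return phi

def one_step_forecast(x, phi):
    """Linear forecast of the next value from the most recent observations."""
    return sum(p * x[-1 - j] for j, p in enumerate(phi))
```

For autocovariances decaying like 0.5**h, the recursion correctly recovers an AR(1) predictor with coefficient 0.5 and zeros elsewhere.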

5.
This paper examines the relative importance of allowing for time‐varying volatility and country interactions in a forecast model of economic activity. Allowing for these issues is done by augmenting autoregressive models of growth with cross‐country weighted averages of growth and the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one‐period‐ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.
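The GARCH framework referred to above yields a simple closed form for multi-step variance forecasts: the forecast decays geometrically from next period's variance toward the unconditional variance at rate alpha + beta. A minimal sketch, with illustrative parameter values:

```python
def garch11_multistep(omega, alpha, beta, sigma2_next, h):
    """h-step-ahead GARCH(1,1) variance forecast:
    E[sigma^2_{t+h}] = var_bar + (alpha+beta)**(h-1) * (sigma^2_{t+1} - var_bar),
    where var_bar = omega / (1 - alpha - beta) is the unconditional variance.
    Requires alpha + beta < 1 (covariance stationarity)."""
    persistence = alpha + beta
    var_bar = omega / (1.0 - persistence)
    return var_bar + persistence ** (h - 1) * (sigma2_next - var_bar)
```

At h = 1 the forecast is just next period's conditional variance; as h grows it converges to the unconditional variance, which is why time-varying volatility matters most for short horizons.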

6.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are implemented on returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
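Two of the simplest combination schemes in the spectrum described above are equal weighting and performance-based weighting with weights inversely proportional to each model's past mean squared error. A minimal sketch (the numbers in the test are invented):

```python
def equal_weights(n):
    """Simple average: each of n models gets weight 1/n."""
    return [1.0 / n] * n

def inverse_mse_weights(squared_errors):
    """Performance-based weights proportional to 1/MSE of each model;
    squared_errors[i] is the history of model i's squared forecast errors."""
    mses = [sum(e) / len(e) for e in squared_errors]
    inv = [1.0 / m for m in mses]
    total = sum(inv)
    return [w / total for w in inv]

def combine(forecasts, weights):
    """Weighted combination forecast."""
    return sum(f * w for f, w in zip(forecasts, weights))
```

More elaborate schemes (regression-based or time-varying weights) replace `inverse_mse_weights` but keep the same combining step.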

7.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability also allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

8.
This paper considers the forecast accuracy of a wide range of volatility models, with particular emphasis on the use of power transformations. Where one‐period‐ahead forecasts are considered, the power autoregressive models are ranked first by a range of error metrics. Over longer forecast horizons, however, generalized autoregressive conditional heteroscedasticity models are preferred. A value‐at‐risk‐based forecast assessment indicates that, while the forecast errors are independent, they are not independent and identically distributed, although this latter result is sensitive to the choice of forecast horizon. Our results are robust across a number of different asset markets. Copyright © 2008 John Wiley & Sons, Ltd.

9.
We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad news on volatility (risk) transmission. We introduce a coherent framework for testing asymmetries in the conditional mean and the conditional variance, separately or jointly. To this end we derive both a Wald and a Lagrange multiplier test. Some of the new asymmetric model's moment properties are investigated. Detailed empirical results are given for the daily returns of the composite index of the New York Stock Exchange. There is strong evidence of asymmetry in both the conditional mean and the conditional variance functions. In a genuine out‐of‐sample forecasting experiment the performance of the best fitted asymmetric model, having asymmetries in both conditional mean and conditional variance, is compared with an asymmetric model for the conditional mean, and with no‐change forecasts. This is done both in terms of conditional mean forecasting as well as in terms of risk forecasting. Finally, the paper presents some evidence of asymmetries in the index stock returns of the Group of Seven (G7) industrialized countries. Copyright © 2004 John Wiley & Sons, Ltd.
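The asymmetric impact of good and bad news on volatility can be illustrated with a GJR-type conditional variance recursion, where negative returns carry an extra leverage term. This is a generic sketch of that mechanism, not the specific model of the paper, and the parameter values are invented:

```python
def gjr_garch_variance(returns, omega, alpha, gamma, beta, sigma2_0):
    """Conditional variance recursion with an asymmetric (leverage) term:
    sigma2_t = omega + (alpha + gamma*1[r_{t-1}<0]) * r_{t-1}^2 + beta*sigma2_{t-1}.
    gamma > 0 means bad news (negative returns) raises volatility more
    than good news of the same magnitude."""
    sigma2 = [sigma2_0]
    for r in returns:
        shock = (alpha + (gamma if r < 0 else 0.0)) * r * r
        sigma2.append(omega + shock + beta * sigma2[-1])
    return sigma2
```

Feeding the recursion a positive and a negative return of equal size makes the asymmetry visible directly in the resulting variances.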

10.
Value at risk (VaR) is a risk measure widely used by financial institutions in allocating risk. VaR forecast estimation involves the conditional evaluation of quantiles based on the currently available information. Recent advances in VaR evaluation incorporate a proxy for conditional variance, yielding the conditional autoregressive VaR (CAViaR) models. However, early work in the finance literature has shown that the introduction of power transformations has resulted in improvements in volatility forecasting. Given the direct association between volatility and conditional VaR, we adopt power-transformed CAViaR models. We investigate whether the flexible conditional VaR dynamics associated with power-transformed CAViaR models can result in better forecasting results than those assumed by the nontransformed CAViaR models. Estimation in CAViaR models is based on an early-rejection Markov chain Monte Carlo algorithm. We illustrate our forecasting evaluation results using simulated and financial daily return data series. The results demonstrate that there is strong evidence that supports the use of power-transformed CAViaR models when forecasting VaR.
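The core of a CAViaR model is an autoregressive recursion for the VaR itself rather than for the variance. The sketch below shows the symmetric absolute value specification; the power-transformed variants studied in the paper would replace |r| by |r| raised to a power, and the parameter values here are invented, not estimated:

```python
def sav_caviar(returns, beta0, beta1, beta2, var0):
    """Symmetric absolute value CAViaR recursion:
    VaR_t = beta0 + beta1 * VaR_{t-1} + beta2 * |r_{t-1}|.
    A power-transformed variant would use abs(r) ** k instead of abs(r)."""
    var = [var0]
    for r in returns:
        var.append(beta0 + beta1 * var[-1] + beta2 * abs(r))
    return var
```

In practice the betas are estimated by minimizing the quantile loss (here, via the paper's early-rejection MCMC algorithm); the recursion itself stays the same.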

11.
Standard statistical loss functions, such as mean‐squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user's decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results. Copyright © 2001 John Wiley & Sons, Ltd.

12.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short‐term forecasting models. Our analysis shows that there is considerable variation in the relative performance of the different models over time. To take that into account we suggest employing performance‐based forecast combination methods—in particular, one with more weight on recent forecast performance. We compare such an approach with equal forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time‐varying weights also attach importance to the economic interpretations of the forecasts stemming from the different models. We also include a number of benchmark models in our analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre‐crisis times, the period after the global financial crisis and the full evaluation period, including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that forecast combination helps hedge against bad forecast performance and that performance‐based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.
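A natural way to put "more weight on recent forecast performance" is to discount past squared errors exponentially before inverting them into weights. This is one plausible implementation sketch of such a scheme, not necessarily the paper's exact formula; the discount factor and error histories below are invented:

```python
def discounted_mse_weights(squared_errors, discount=0.9):
    """Combination weights from exponentially discounted squared errors,
    so recent forecast performance counts more than distant performance.
    squared_errors[i] is model i's error history, ordered oldest -> newest."""
    scores = []
    for errs in squared_errors:
        w = [discount ** (len(errs) - 1 - i) for i in range(len(errs))]
        scores.append(sum(wi * e for wi, e in zip(w, errs)) / sum(w))
    inv = [1.0 / s for s in scores]
    total = sum(inv)
    return [x / total for x in inv]
```

With discount = 1 this reduces to ordinary inverse-MSE weighting; smaller discounts shift weight toward models that have done well recently.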

13.
This paper assesses the informational content of alternative realized volatility estimators, daily range and implied volatility in multi‐period out‐of‐sample Value‐at‐Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovations process and a Monte Carlo simulation approach in order to produce the multi‐period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce statistically and regulatory precise VaR forecasts across forecasting horizons, with the implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune against microstructure noise bias or price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.

14.
We investigate the realized volatility forecast of stock indices under structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high‐frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in‐sample tests and out‐of‐sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX‐FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in‐sample tests show that there are multiple breaks in all realized volatility series. The results of the out‐of‐sample point forecasts indicate that the combination forecasts with time‐varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non‐parametric kernel regression and linear combination forecasts with the weights chosen based on the non‐negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non‐parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.

15.
This paper uses high‐frequency continuous intraday electricity price data from the EPEX market to estimate and forecast realized volatility. Three different jump tests are used to break down the variation into jump and continuous components using quadratic variation theory. Several heterogeneous autoregressive models are then estimated for the logarithmic and standard deviation transformations. Generalized autoregressive conditional heteroskedasticity (GARCH) structures are included in the error terms of the models when evidence of conditional heteroskedasticity is found. Model selection is based on various out‐of‐sample criteria. Results show that decomposition of realized volatility is important for forecasting and that the decision whether to include GARCH‐type innovations might depend on the transformation selected. Finally, results are sensitive to the jump test used in the case of the standard deviation transformation.
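Once realized variance (RV) and a jump-robust measure such as bipower variation (BV) are in hand, the standard decomposition into continuous and jump components is a one-liner: the jump part is the positive part of RV − BV. A minimal sketch (the numbers in the test are invented):

```python
def jump_decomposition(rv, bv):
    """Split realized variance into continuous and jump components:
    jump = max(rv - bv, 0), continuous = rv - jump.
    In practice, the jump component is set to zero on days where a
    formal jump test does not reject the no-jump null."""
    jump = max(rv - bv, 0.0)
    return rv - jump, jump
```

The three jump tests mentioned in the abstract differ in how they decide when RV − BV is large enough to count as a genuine jump, which is why the forecasting results can be sensitive to the test chosen.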

16.
In a conditional predictive ability test framework, we investigate whether market factors influence the relative conditional predictive ability of realized measures (RMs) and implied volatility (IV), which is able to examine the asynchronism in their forecasting accuracy, and further analyze their unconditional forecasting performance for volatility forecast. Our results show that the asynchronism can be detected significantly and is strongly related to certain market factors, and the comparison between RMs and IV on average forecast performance is more efficient than previous studies. Finally, we use the factors to extend the empirical similarity (ES) approach for combination of forecasts derived from RMs and IV.  相似文献   

17.
18.
This paper evaluates the performance of conditional variance models using high‐frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as a benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH (1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is, however, degraded by the non‐normality of mean‐adjusted returns, which is an assumption in conditional variance models. Nevertheless, the optimal frequency remained the same even in the case of different models (EGARCH and PARCH) and a different error distribution (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations, or EGARCH and PARCH models, would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.
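Comparing sampling frequencies, as above, requires aggregating a fine grid of intraday log returns to coarser grids and recomputing realized variance at each. A minimal sketch (the constant toy return series is purely illustrative; with real data, microstructure noise is what inflates RV at the very finest frequencies):

```python
def sample_returns(fine_returns, step):
    """Aggregate fine-grid log returns to a coarser grid by summing
    non-overlapping blocks of `step` returns."""
    return [sum(fine_returns[i:i + step])
            for i in range(0, len(fine_returns) - step + 1, step)]

def rv_at_frequency(fine_returns, step):
    """Realized variance computed at the coarser sampling frequency."""
    sampled = sample_returns(fine_returns, step)
    return sum(r * r for r in sampled)
```

Running `rv_at_frequency` over a grid of `step` values (1-minute, 5-minute, 30-minute, ...) traces out how the realized measure, and hence the benchmark for evaluating GARCH forecasts, depends on the sampling frequency.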

19.
Using quantile regression, this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out‐of‐sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecasted mean and variance are compared to forecasts based on quantile estimates and, in general, the latter yield the best performance. Copyright © 2015 John Wiley & Sons, Ltd.
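Quantile forecasts like those above are evaluated with the quantile ("pinball") loss, which is minimized in expectation by the true tau-quantile. A minimal sketch (the evaluation data in the test are invented):

```python
def pinball_loss(y, q_forecast, tau):
    """Quantile (pinball) loss for a single observation: penalizes
    under-prediction with weight tau and over-prediction with 1 - tau."""
    diff = y - q_forecast
    return tau * diff if diff >= 0 else (tau - 1.0) * diff

def avg_pinball(ys, q_forecasts, tau):
    """Average pinball loss over an out-of-sample evaluation period."""
    return sum(pinball_loss(y, q, tau)
               for y, q in zip(ys, q_forecasts)) / len(ys)
```

Evaluating `avg_pinball` over a fine grid of tau values compares competing models across the whole return distribution, which is how relative accuracy can differ between the tails and the center.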

20.
This paper evaluates the accuracy of 1‐month‐ahead systematic (beta) risk forecasts in three return measurement settings: monthly, daily and 30 minutes. It was found that the popular Fama–MacBeth beta from 5 years of monthly returns generates the most accurate beta forecast among estimators based on monthly returns. A realized beta estimator from daily returns over the prior year generates the most accurate beta forecast among estimators based on daily returns. A realized beta estimator from 30‐minute returns over the prior 2 months generates the most accurate beta forecast among estimators based on 30‐minute returns. In environments where low‐, medium‐ and high‐frequency returns are accurately available, beta forecasting with low‐frequency returns is the least accurate and beta forecasting with high‐frequency returns is the most accurate. The improvements in precision of the beta forecasts are demonstrated in portfolio optimization for a targeted beta exposure. Copyright © 2016 John Wiley & Sons, Ltd.
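The realized beta estimator referred to above is the high-frequency analogue of a regression slope: the sum of cross-products of asset and market returns over the window, divided by the sum of squared market returns. A minimal sketch (the return series in the test are invented):

```python
def realized_beta(asset_returns, market_returns):
    """Realized beta from high-frequency returns over a window:
    sum(r_asset * r_market) / sum(r_market^2)."""
    cov = sum(a * m for a, m in zip(asset_returns, market_returns))
    var = sum(m * m for m in market_returns)
    return cov / var
```

Applied to 30-minute returns over the prior two months, this is the estimator the abstract reports as the most accurate among the high-frequency candidates.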


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号