Similar articles
20 similar articles found (search time: 921 ms)
1.
In this paper, we investigate the time series properties of S&P 100 volatility and the forecasting performance of different volatility models. We consider several nonparametric and parametric volatility measures, such as implied, realized and model-based volatility, and show that these volatility processes exhibit extremely slow mean-reverting behavior and possible long memory. For this reason, we explicitly model the near-unit-root behavior of volatility and construct median-unbiased forecasts by approximating the finite-sample forecast distribution using bootstrap methods. Furthermore, we produce prediction intervals for next-period implied volatility that provide important information about the uncertainty surrounding the point forecasts. Finally, we apply intercept corrections to forecasts from misspecified models, which dramatically improves the accuracy of the volatility forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
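The bootstrap idea in this abstract can be illustrated with a minimal numpy sketch. This is a plain residual bootstrap on simulated data, not the authors' exact median-unbiased procedure, and the series and parameters below are invented stand-ins for the S&P 100 volatility measures: fit a persistent AR(1) to a log-volatility series, resample residuals, and read a percentile interval off the bootstrap forecast distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a persistent AR(1) "log-volatility" series (stand-in data).
n, c, phi, sigma = 500, 0.05, 0.97, 0.2
x = np.empty(n)
x[0] = c / (1 - phi)
for t in range(1, n):
    x[t] = c + phi * x[t - 1] + sigma * rng.standard_normal()

# OLS fit of x_t on (1, x_{t-1}).
X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
resid = x[1:] - X @ beta

# Residual bootstrap: distribution of the one-step-ahead forecast.
B = 2000
draws = beta[0] + beta[1] * x[-1] + rng.choice(resid, size=B, replace=True)
lo, med, hi = np.percentile(draws, [5, 50, 95])
print(f"90% interval for next-period log-volatility: [{lo:.3f}, {hi:.3f}]")
```

The interval conveys the forecast uncertainty that a point forecast alone hides; the near-unit-root persistence (phi close to 1) is what makes the paper's finite-sample bias corrections matter.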

2.
This paper compares the information content of realized measures constructed from high-frequency data and implied volatilities from options in the context of forecasting volatility. The comparison is based on within-sample and out-of-sample (over horizons of 1–22 days) forecasts of daily S&P 500 index return volatility. The paper adds to the findings of previous studies by considering recent developments in the related practice and literature. It is shown that, for within-sample fitting, the realized measure is more informative than the implied volatility. In contrast, the implied volatility is more informative than the realized measure for out-of-sample forecasting, in particular for multi-step-ahead forecasting. Moreover, we show that it is helpful to use all the information provided by the realized measure and the implied volatility for within-sample fitting. For multi-step-ahead forecasting, however, it is better to use only the implied volatility. Copyright © 2013 John Wiley & Sons, Ltd.

3.
This paper assesses the informational content of alternative realized volatility estimators, the daily range and implied volatility in multi-period out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovation process and a Monte Carlo simulation approach to produce the multi-period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce VaR forecasts that are precise in both statistical and regulatory terms across forecasting horizons, with implied volatility being especially accurate for monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune to microstructure noise bias and price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.
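The simulation step behind multi-period VaR can be sketched with a plain GARCH(1,1) and normal innovations, a deliberate simplification of the paper's Realized GARCH with skewed Student's t. The parameters and the "current state" below are assumed, not estimated: simulate many h-day return paths forward and take the left-tail quantile of the cumulative return.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed GARCH(1,1) parameters and current state (illustrative values).
omega, alpha, beta = 0.02, 0.08, 0.90
h, n_paths, level = 10, 50_000, 0.01
r_last, sigma2_last = -0.5, 1.2      # yesterday's return and variance

# Tomorrow's conditional variance, then h daily steps per simulated path.
sigma2 = np.full(n_paths, omega + alpha * r_last**2 + beta * sigma2_last)
cum_ret = np.zeros(n_paths)
for _ in range(h):
    r = np.sqrt(sigma2) * rng.standard_normal(n_paths)
    cum_ret += r
    sigma2 = omega + alpha * r**2 + beta * sigma2

# 99% VaR of the h-day return: negated 1% quantile of the simulated paths.
var_99 = -np.quantile(cum_ret, level)
print(f"{h}-day 99% VaR (in return units): {var_99:.2f}")
```

Because volatility compounds through the GARCH recursion, the h-day VaR is not simply the 1-day VaR scaled by the square root of h, which is why simulation is used for multi-period horizons.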

4.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are applied to return forecasting and evaluate performance solely on statistical criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods, ranging from simple techniques to time-varying techniques based on the past performance of the single models, as well as regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis yields an important conclusion: although combination forecasts based on more complex methods perform better than the simple combinations and single models, no dominant combination technique outperforms the rest in both statistical and economic terms.
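Two of the standard combination schemes mentioned here, equal weighting and weights inversely proportional to past mean squared error, can be sketched on toy forecasts. Everything in this snippet (the "true" volatility path and the three single-model forecasts) is simulated for illustration; it is not the paper's model set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: a true volatility path and three imperfect single-model forecasts.
T = 300
true_vol = 0.1 + 0.05 * np.abs(np.sin(np.arange(T) / 10))
forecasts = np.column_stack([
    true_vol + 0.02 * rng.standard_normal(T),          # model 1
    true_vol + 0.04 * rng.standard_normal(T),          # model 2 (noisier)
    true_vol + 0.01 + 0.03 * rng.standard_normal(T),   # model 3 (biased)
])

train, test = slice(0, 200), slice(200, T)

# Scheme 1: simple average of the single-model forecasts.
simple = forecasts[test].mean(axis=1)

# Scheme 2: weights inversely proportional to past (training-window) MSE.
mse = ((forecasts[train] - true_vol[train, None]) ** 2).mean(axis=0)
w = (1 / mse) / (1 / mse).sum()
weighted = forecasts[test] @ w

def oos_mse(f):
    """Out-of-sample mean squared error against the true volatility."""
    return ((f - true_vol[test]) ** 2).mean()

print("single-model MSEs:", [round(oos_mse(forecasts[test][:, j]), 6) for j in range(3)])
print("simple combo MSE:", round(oos_mse(simple), 6))
print("inverse-MSE combo MSE:", round(oos_mse(weighted), 6))
```

The inverse-MSE weights downweight the noisier model, which is the mechanism by which performance-based combinations usually beat the worst single model even when no scheme dominates overall.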

5.
In this paper, we detect and correct abnormal returns in 17 French stock returns and the French CAC40 index using the additive-outlier detection method for GARCH models developed by Franses and Ghijsels (1999), extended to innovative outliers by Charles and Darné (2005). We study the effects of outlying observations on several popular econometric tests. Moreover, we show that the parameters of the equation governing the volatility dynamics are biased when additive and innovative outliers are not taken into account. Finally, we show that volatility forecasts are better when the data are cleaned of outliers, for several forecast horizons (short, medium and long term), even when a GARCH-t process is considered. Copyright © 2008 John Wiley & Sons, Ltd.

6.
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility and, undoubtedly, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for this purpose. It appears, however, from some empirical studies that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures. As the magnitude of the outliers increases, the one-step-ahead forecasting performance of the RGARCH model improves ever more markedly, on two forecast evaluation criteria, over both the standard GARCH and random walk models. An empirical application provides strong evidence in favour of the RGARCH model over competing models: using a sample of two daily exchange rate series, we find that the out-of-sample volatility forecasts of the RGARCH model are clearly superior to those of the other models. Copyright © 2002 John Wiley & Sons, Ltd.
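Why least absolute deviation (LAD) estimation resists additive outliers can be shown on a toy linear regression rather than a GARCH recursion. The data, the outlier scheme and the iteratively reweighted least squares (IRLS) solver below are all illustrative assumptions, not the paper's RGARCH estimator.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy linear data with a handful of large additive outliers injected.
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(n)
y[::40] += 15.0                      # every 40th observation is contaminated

X = np.column_stack([np.ones(n), x])
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# LAD fit via IRLS: weighted least squares with weights ~ 1/|residual|,
# so large residuals (the outliers) are progressively downweighted.
b = ols.copy()
for _ in range(50):
    w = np.sqrt(1.0 / np.maximum(np.abs(y - X @ b), 1e-6))
    b = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)[0]

print("true slope 2.0 | OLS slope:", round(ols[1], 3), "| LAD slope:", round(b[1], 3))
```

The squared-error criterion lets a few huge residuals dominate the OLS fit, while the absolute-error criterion caps each observation's influence; this is the same intuition the RGARCH model applies to the conditional variance equation.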

7.
For leverage heterogeneous autoregressive (LHAR) models with jumps and other covariates, called LHARX models, multistep forecasts are derived. Some optimal properties of the forecasts in terms of conditional volatilities are discussed, which tell us to model conditional volatility for the return but not for the LHARX regression error or the other covariates. Forecast standard errors are constructed, for which we need to model conditional volatilities both for the return and for the LHARX regression error and other covariates. The proposed methods are illustrated by forecast analysis for the realized volatilities of US stock price indexes: the S&P 500, NASDAQ, DJIA and RUSSELL indexes.
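The LHAR/LHARX family builds on the basic HAR regression, which forecasts tomorrow's realized volatility from daily, weekly and monthly RV averages. Below is a minimal sketch of that plain HAR step on simulated data; the leverage, jump and covariate terms that define LHARX are omitted, and the RV series is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated daily realized volatility (stand-in for index RV data).
n = 600
rv = np.empty(n)
rv[0] = 1.0
for t in range(1, n):
    rv[t] = 0.1 + 0.9 * rv[t - 1] + 0.1 * rng.standard_normal() ** 2

def back_mean(x, t, k):
    """Average of x over the k days ending at day t (inclusive)."""
    return x[t - k + 1 : t + 1].mean()

# HAR regressors: yesterday's RV, past-week and past-month averages.
rows, y = [], []
for t in range(21, n - 1):
    rows.append([1.0, rv[t], back_mean(rv, t, 5), back_mean(rv, t, 22)])
    y.append(rv[t + 1])
X, y = np.array(rows), np.array(y)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-step-ahead forecast from the last observed day.
t = n - 1
x_last = np.array([1.0, rv[t], back_mean(rv, t, 5), back_mean(rv, t, 22)])
f1 = float(x_last @ beta)
print("HAR coefficients (const, daily, weekly, monthly):", np.round(beta, 3))
print("one-step-ahead RV forecast:", round(f1, 4))
```

Multistep forecasts, the subject of this paper, iterate this regression forward, which is where the choice of what to model conditionally heteroscedastic (return versus regression error) starts to matter for the forecast standard errors.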

8.
This study investigates the forecasting performance of the GARCH(1,1) model when augmented with an effective covariate. Based on the assumption that many volatility predictors are available to help forecast the volatility of a target variable, this study shows how to construct a covariate from these predictors and plug it into the GARCH(1,1) model. We present a method of building a covariate that captures as much of the predictors' information for forecasting volatility as possible. The loading of the covariate constructed by the proposed method is simply the eigenvector of a matrix. The proposed method enjoys the advantages of easy implementation and interpretation. Simulations and empirical analysis verify that the proposed method outperforms other methods for forecasting volatility, and the results are quite robust to model misspecification. Specifically, the proposed method reduces the mean square error of the GARCH(1,1) model by 30% when forecasting the volatility of the S&P 500 index. The proposed method is also useful for improving the volatility forecasts of several GARCH-family models and for forecasting value-at-risk. Copyright © 2013 John Wiley & Sons, Ltd.
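The abstract says the covariate's loading "is simply the eigenvector of a matrix". As a generic illustration of that idea (not the paper's specific matrix), the leading eigenvector of the predictors' covariance matrix collapses many candidate predictors into one covariate, the first principal component. The toy panel below assumes one common factor plus noise.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy panel: T days of p candidate volatility predictors sharing one
# common factor plus idiosyncratic noise (all values invented).
T, p = 400, 8
factor = np.abs(rng.standard_normal(T))
predictors = factor[:, None] * rng.uniform(0.5, 1.5, p) \
    + 0.3 * rng.standard_normal((T, p))

# Leading eigenvector of the predictor covariance matrix gives the loading.
Z = predictors - predictors.mean(axis=0)
cov = Z.T @ Z / T
eigvals, eigvecs = np.linalg.eigh(cov)
loading = eigvecs[:, -1]             # eigenvector of the largest eigenvalue

# A single covariate to plug into a GARCH(1,1)-X variance equation.
covariate = Z @ loading
corr = np.corrcoef(covariate, factor)[0, 1]
print("|correlation| between covariate and common factor:", round(abs(corr), 3))
```

When the predictors share a strong common component, this one covariate carries most of their joint information, which is why a single eigenvector-based regressor can be enough inside the GARCH variance equation.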

9.
This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and high-density regions, which may be discontinuous and/or asymmetric. Based on a simple J-statistic built from the moments defined by the orthonormal polynomials associated with the binomial distribution, this new approach has many advantages. First, its implementation is extremely easy. Second, it allows separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes our GMM test has good small-sample properties. These results are corroborated by an empirical application to the S&P 500 and Nikkei stock market indexes, which confirms that using this GMM test has major consequences for the ex post evaluation of interval forecasts produced by linear versus nonlinear models. Copyright © 2011 John Wiley & Sons, Ltd.

10.
It is well known that some economic time series can be described by models that allow either for long memory or for occasional level shifts. In this paper we examine the relative merits of these models by introducing a new model that jointly captures the two features. We discuss representation and estimation. Using simulations, we demonstrate its forecasting ability relative to the one-feature models, both in terms of point forecasts and interval forecasts. We illustrate the model with daily S&P 500 volatility. Copyright © 2005 John Wiley & Sons, Ltd.

11.
Tests of forecast encompassing are used to evaluate one-step-ahead forecasts of S&P Composite index returns and volatility. It is found that forecasts over the 1990s made from models that include macroeconomic variables tend to be encompassed by those made from a benchmark model that does not include macroeconomic variables. However, macroeconomic variables are found to add significant information to forecasts of returns and volatility over the 1970s. In empirical research on forecasting stock index returns and volatility, in-sample information criteria are often used to rank potential forecasting models. Here, none of the forecasting models for the 1970s that include macroeconomic variables is, on the basis of information criteria, preferred to the relevant benchmark specification. Thus, had investors used information criteria to choose between the models considered in this paper for forecasting over the 1970s, the predictability that the encompassing tests reveal would not have been exploited. Copyright © 2005 John Wiley & Sons, Ltd.
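One common way to operationalize forecast encompassing is a Fair-Shiller-style regression: regress the benchmark's forecast error on the difference between the competing and benchmark forecasts; a clearly nonzero coefficient means the competitor carries information the benchmark misses. The snippet below does this on simulated forecasts (all variables invented); it is one standard variant, not necessarily the exact test used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated target and two forecasts: the benchmark ignores an extra
# ("macroeconomic") variable that the competitor uses.
T = 250
signal = rng.standard_normal(T)
extra = rng.standard_normal(T)
y = signal + 0.5 * extra + 0.3 * rng.standard_normal(T)
f_bench = signal                    # benchmark forecast
f_macro = signal + 0.5 * extra     # competitor with the extra variable

# Regress benchmark error on the forecast difference (no intercept).
e = y - f_bench
d = f_macro - f_bench
lam = (d @ e) / (d @ d)
resid = e - lam * d
se = np.sqrt(resid @ resid / (T - 1) / (d @ d))
t_stat = lam / se
print("lambda:", round(lam, 3), "t-stat:", round(t_stat, 2))
```

A large t-statistic rejects the hypothesis that the benchmark encompasses the competitor, which is exactly the situation the paper finds for macroeconomic variables in the 1970s sample.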

12.
The leverage effect—the correlation between an asset's return and its volatility—has played a key role in forecasting and understanding volatility and risk. While it is a long-standing consensus that leverage effects exist and improve forecasts, empirical evidence puzzlingly fails to show that this effect exists for many individual stocks, mischaracterizing risk and therefore leading to poor predictive performance. We examine this puzzle, with the goal of improving density forecasts, by relaxing the assumption that the leverage effect is linear. Nonlinear generalizations of the leverage effect are proposed within the Bayesian stochastic volatility framework in order to capture flexible leverage structures. Efficient Bayesian sequential computation is developed and implemented to estimate this effect in a practical, online manner. Examining 615 stocks that constitute the S&P 500 and Nikkei 225, we find that our proposed nonlinear leverage effect model improves predictive performance for 89% of all stocks compared to the conventional stochastic volatility model.

13.
This paper proposes value-at-risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series with the different information supplied by the market's expectation of risk. Forecast-combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models, a procedure that has not previously been considered in the VaR literature. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns. Copyright © 2012 John Wiley & Sons, Ltd.

14.
Financial data often take the form of a collection of curves observed sequentially over time; for example, intraday stock price curves and intraday volatility curves. These curves can be viewed as a time series of functions observed on equally spaced and dense grids. Owing to the so-called curse of dimensionality, the high-dimensional nature of the data poses challenges from a statistical perspective; however, it also provides a rich source of information through which the dynamics of short time intervals can be better understood. In this paper, we consider forecasting a time series of functions and propose a number of statistical methods for forecasting 1-day-ahead intraday stock returns. As new data are observed sequentially, we also consider dynamic updating of point and interval forecasts to achieve improved accuracy. The forecasting methods are validated through an empirical study of 5-minute intraday S&P 500 index returns.

15.
We perform Bayesian model averaging across different regressions selected from a set of predictors that includes lags of realized volatility and financial and macroeconomic variables. In our model average, we entertain different channels of instability by incorporating breaks in the regression coefficients of each individual model, breaks in the conditional error variance, or both. Changes in these parameters are driven by mixture distributions for state innovations (MIA) of linear Gaussian state-space models. This framework allows us to compare models that assume small and frequent changes with models that assume large but rare changes in the conditional mean and variance parameters. Results using S&P 500 monthly and quarterly realized volatility data from 1960 to 2014 suggest that Bayesian model averaging combined with breaks in the regression coefficients and the error variance through MIA dynamics generates statistically significantly more accurate forecasts than the benchmark autoregressive model. However, compared to an MIA autoregression with breaks in the regression coefficients and the error variance, we find no drastic improvement.

16.
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising in turn daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors in the OLS estimations also allows us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperform those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. Overall, the predictive ability of the VARs is very weak. Copyright © 2001 John Wiley & Sons, Ltd.

17.
Testing the validity of value-at-risk (VaR) forecasts, or backtesting, is an integral part of modern market risk management and regulation. This is often done by applying the independence and coverage tests developed by Christoffersen (International Economic Review, 1998; 39(4), 841–862) to so-called hit sequences derived from VaR forecasts and realized losses. However, as pointed out in the literature, these tests suffer from low rejection frequencies, or (empirical) power, when applied to hit sequences derived from simulations matching empirical stylized characteristics of return data. One key observation of those studies is that higher-order dependence in the hit sequences may cause the lower observed power. We propose to generalize the backtest framework for VaR forecasts by extending Christoffersen's original first-order dependence to allow for higher-, kth-order dependence. We provide closed-form expressions for the tests as well as asymptotic theory. Not only do the generalized tests have power against kth-order dependence by definition, but simulations also indicate improved power when replicating the aforementioned studies. Further, simulations show much improved size properties for one of the suggested tests. Copyright © 2017 John Wiley & Sons, Ltd.
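The first-order Christoffersen tests being generalized here can be sketched in a few lines of numpy on a simulated hit sequence; the kth-order extension the paper proposes is not implemented in this sketch, and the hit sequence is simulated rather than derived from real VaR forecasts.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated hit sequence for a 5% VaR (1 = violation). Under a correct,
# i.i.d. model, hits are Bernoulli(0.05).
p, n = 0.05, 1000
hits = (rng.random(n) < p).astype(int)

def term(k, q):
    """k * log(q), with the 0 * log(0) = 0 convention."""
    return k * np.log(q) if k > 0 else 0.0

# Unconditional coverage LR test: observed hit rate vs. nominal level p.
x = int(hits.sum())
pi_hat = x / n
lr_uc = -2 * (term(x, p) + term(n - x, 1 - p)
              - term(x, pi_hat) - term(n - x, 1 - pi_hat))

# First-order independence LR test: a Markov chain for the hits vs. i.i.d.
n00 = int(np.sum((hits[:-1] == 0) & (hits[1:] == 0)))
n01 = int(np.sum((hits[:-1] == 0) & (hits[1:] == 1)))
n10 = int(np.sum((hits[:-1] == 1) & (hits[1:] == 0)))
n11 = int(np.sum((hits[:-1] == 1) & (hits[1:] == 1)))
pi01 = n01 / (n00 + n01)                 # P(hit | no hit yesterday)
pi11 = n11 / (n10 + n11)                 # P(hit | hit yesterday)
pi = (n01 + n11) / (n - 1)               # pooled hit probability
lr_ind = -2 * (term(n01 + n11, pi) + term(n00 + n10, 1 - pi)
               - term(n01, pi01) - term(n00, 1 - pi01)
               - term(n11, pi11) - term(n10, 1 - pi11))

print(f"LR_uc = {lr_uc:.3f}, LR_ind = {lr_ind:.3f}; chi2(1) 5% critical value = 3.841")
```

The independence test only sees dependence between consecutive days, which is precisely the limitation motivating the paper: clustering of violations at lag k > 1 leaves this first-order statistic powerless.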

18.
To forecast realized volatility, this paper introduces a multiplicative error model that incorporates heterogeneous components: weekly and monthly realized volatility measures. While the model captures the long-memory property, estimation proceeds simply by quasi-maximum likelihood. The paper investigates the model's forecasting ability using the realized kernels of 34 different assets provided by the Oxford-Man Institute's Realized Library. The model outperforms benchmark models such as ARFIMA, HAR, Log-HAR and HEAVY-RM in within-sample fitting and in out-of-sample (1-, 10- and 22-step) forecasts. It performs best in both pointwise and cumulative comparisons of multi-step-ahead forecasts, regardless of the loss function (QLIKE or MSE). Copyright © 2015 John Wiley & Sons, Ltd.

19.
This paper presents a simple empirical approach to modeling and forecasting market option prices using localized option regressions (LOR). LOR projects market option prices over localized regions of their state space and is robust to assumptions regarding the underlying asset dynamics (e.g. log-normality) and volatility structure. Our empirical study using 3 years of daily S&P 500 options shows that LOR yields smaller out-of-sample pricing errors (e.g. 32% 1-day-out) relative to an efficient benchmark from the literature and produces option prices free of the volatility smile. In addition to being an efficient and robust option-modeling and valuation tool for large option books, LOR provides a simple-to-implement empirical benchmark for evaluating more complex risk-neutral models. Copyright © 2007 John Wiley & Sons, Ltd.

20.
The aim of this paper is to compare the forecasting performance of competing threshold models designed to capture asymmetric effects in volatility. We focus on the relative out-of-sample forecasting ability of the SETAR-Threshold GARCH (SETAR-TGARCH) and SETAR-Threshold Stochastic Volatility (SETAR-THSV) models compared to the GARCH and Stochastic Volatility (SV) models. A central difficulty in evaluating the predictive ability of volatility models is that the 'true' underlying volatility process is not observable, so a proxy must be defined for it. For the class of nonlinear state space models (SETAR-THSV and SV), a modified version of the SIR algorithm is used to estimate the unknown parameters. The forecasting performance of the competing models is compared for two return time series: IBEX 35 and S&P 500. We explore whether increasing the complexity of a model improves its forecasting ability. Copyright © 2007 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号