Similar Documents
20 similar documents found (search time: 343 ms)
1.
This paper proposes value‐at‐risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series and the different information supplied by the market's expectation of risk. Forecast‐combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models—a procedure that has not been considered in the VaR area so far. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that the implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns. Copyright © 2012 John Wiley & Sons, Ltd.
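As a rough illustration of the forecast-combining step described above (weights estimated by quantile regression), the sketch below combines two hypothetical VaR forecast series at the 1% level; the stand-in forecasts and variable names are assumptions, not the authors' data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily returns and two stand-in VaR forecast series
# (e.g. one CAViaR-based, one implied-volatility-based).
rng = np.random.default_rng(0)
returns = pd.Series(rng.standard_t(df=5, size=1500) * 0.01)
var_caviar = returns.rolling(250).quantile(0.01).shift(1)
var_iv = -2.33 * returns.rolling(60).std().shift(1)

df = pd.DataFrame({"r": returns, "f1": var_caviar, "f2": var_iv}).dropna()

# Quantile regression of returns on the candidate forecasts at the 1% quantile:
# the fitted coefficients act as combination weights.
X = sm.add_constant(df[["f1", "f2"]])
combo = sm.QuantReg(df["r"], X).fit(q=0.01)
df["var_combined"] = combo.predict(X)
print(combo.params)
```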

2.
This paper proposes a parsimonious threshold stochastic volatility (SV) model for financial asset returns. Instead of imposing a threshold value on the dynamics of the latent volatility process of the SV model, we assume that the innovation of the mean equation follows a threshold distribution in which the mean innovation switches between two regimes. In our model, the threshold is treated as an unknown parameter. We show that the proposed threshold SV model can not only capture the time‐varying volatility of returns, but can also accommodate the asymmetric shape of the conditional distribution of the returns. Parameter estimation is carried out by using Markov chain Monte Carlo methods. For model selection and volatility forecast, an auxiliary particle filter technique is employed to approximate the filter and prediction distributions of the returns. Several experiments are conducted to assess the robustness of the proposed model and estimation methods. In the empirical study, we apply our threshold SV model to three return time series. The empirical analysis results show that the threshold parameter has a non‐zero value and the mean innovations belong to two distinct regimes. We also find that the model with an unknown threshold parameter value consistently outperforms the model with a known threshold parameter value. Copyright © 2016 John Wiley & Sons, Ltd.

3.
Volatility models such as GARCH, although misspecified with respect to the data‐generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non‐linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non‐linear transformations of these forecasts will be biased estimators of the same non‐linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non‐linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non‐linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in‐sample and out‐of‐sample forecasts of three non‐linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
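The Jensen's-inequality point is easy to see numerically. The sketch below (a generic illustration, not the paper's approximation or data) shows that variance forecasts that are unbiased for the integrated variance give downward-biased forecasts of the integrated standard deviation, with the bias roughly 0.5 * g''(E[h]) * Var(h) for the concave transform g = sqrt.

```python
import numpy as np

rng = np.random.default_rng(1)

# True integrated variance and an unbiased but noisy variance forecast.
true_var = 0.04
forecast_var = true_var * rng.gamma(shape=5.0, scale=0.2, size=200_000)

print(forecast_var.mean())            # ~0.04: unbiased for the variance
print(np.sqrt(true_var))              # 0.20: the target standard deviation
print(np.sqrt(forecast_var).mean())   # < 0.20: biased by Jensen's inequality

# Second-order approximation of the bias for g(h) = sqrt(h).
g_second = -0.25 * true_var ** (-1.5)
print(0.5 * g_second * forecast_var.var())
```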

4.
This paper presents gamma stochastic volatility models and investigates their distributional and time series properties. The parameter estimators obtained by the method of moments are shown analytically to be consistent and asymptotically normal. The simulation results indicate that the estimators behave well. The in‐sample analysis shows that return models with gamma autoregressive stochastic volatility processes capture the leptokurtic nature of return distributions and the slowly decaying autocorrelation functions of squared stock index returns for the USA and UK. In comparison with GARCH and EGARCH models, the gamma autoregressive model picks up the persistence in volatility for the US and UK index returns but not the volatility persistence for the Canadian and Japanese index returns. The out‐of‐sample analysis indicates that the gamma autoregressive model has a superior volatility forecasting performance compared to GARCH and EGARCH models. Copyright © 2006 John Wiley & Sons, Ltd.

5.
We investigate the realized volatility forecasts of stock indices under structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high‐frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in‐sample tests and out‐of‐sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX‐FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in‐sample tests show that there are multiple breaks in all realized volatility series. The results of the out‐of‐sample point forecasts indicate that the combination forecasts with time‐varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non‐parametric kernel regression and linear combination forecasts with the weights chosen based on the non‐negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non‐parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
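One of the linear combination schemes mentioned above restricts the weights to be non-negative. A minimal sketch of that idea with non-negative least squares is given below; the simulated forecasts, the normalisation of the weights, and the window split are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical realized-volatility target and four competing forecast series
# (e.g. models estimated with different window sizes).
rng = np.random.default_rng(2)
rv_true = np.abs(rng.normal(0.02, 0.005, size=500))
forecasts = rv_true[:, None] + rng.normal(0, 0.004, size=(500, 4))

# Estimate non-negative combination weights on a training window,
# then combine the out-of-sample forecasts.
train, test = slice(0, 400), slice(400, 500)
weights, _ = nnls(forecasts[train], rv_true[train])
weights = weights / weights.sum()
combined = forecasts[test] @ weights
print(weights)
print(np.mean((combined - rv_true[test]) ** 2))
```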

6.
In this paper we forecast daily returns of crypto‐currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non‐normality of the measurement errors and sharply increasing trends, we develop a time‐varying parameter VAR with t‐distributed measurement errors and stochastic volatility. To control for overparametrization, we rely on the Bayesian literature on shrinkage priors, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real‐time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we, moreover, run a simple trading exercise.

7.
ARCH and GARCH models are widely used for modelling the volatility of time series data. Many studies have shown that, if variables are significantly skewed, linear versions of these models are not sufficient for both explaining past volatility and forecasting future volatility. In this paper, we compare the linear (GARCH(1,1)) and non‐linear (EGARCH) versions of the GARCH model by using the monthly stock market returns of seven emerging countries from February 1988 to December 1996. We find that for emerging stock markets the GARCH(1,1) model performs better than the EGARCH model, even if the stock market return series display skewed distributions. Copyright © 2000 John Wiley & Sons, Ltd.
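A comparison of this kind can be set up in a few lines with the Python arch package; the sketch below uses simulated monthly returns as a placeholder for the emerging-market series and compares in-sample fit only, so it should be read as an outline rather than a replication.

```python
import numpy as np
import pandas as pd
from arch import arch_model

# Placeholder monthly returns (in percent); the paper uses seven emerging
# stock markets, February 1988 to December 1996.
rng = np.random.default_rng(3)
returns = pd.Series(rng.standard_t(df=4, size=360) * 5.0)

# Symmetric GARCH(1,1) versus asymmetric EGARCH with the same mean model.
garch = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
egarch = arch_model(returns, mean="Constant", vol="EGARCH", p=1, o=1, q=1).fit(disp="off")

print("GARCH(1,1) AIC:", round(garch.aic, 2))
print("EGARCH     AIC:", round(egarch.aic, 2))
print(garch.forecast(horizon=1).variance.iloc[-1])  # one-step variance forecast
```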

8.
This study is the first to examine the impact of overnight and intraday oil futures cross-market information on predicting US stock market volatility using high-frequency data. In-sample estimations show that high overnight oil futures RV can lead to high RV of the S&P 500. Moreover, negative overnight returns are more powerful predictors than positive components, implying the existence of a leverage effect. From statistical and economic perspectives, out-of-sample results indicate that decomposing overnight oil futures and intraday RVs based on signed intraday returns can significantly increase the models' predictive ability. Finally, when considering the US stock market overnight effect, the decompositions remain useful for predicting volatility, especially during high US stock market fluctuations and high and low EPU (economic policy uncertainty) states.
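The decomposition by the sign of intraday returns referred to above amounts to splitting realized variance into positive and negative semivariances; a minimal sketch (with made-up 5-minute returns) is shown below.

```python
import numpy as np
import pandas as pd

# Hypothetical 5-minute intraday returns for one trading session.
rng = np.random.default_rng(4)
intraday = pd.Series(rng.normal(0.0, 0.001, size=78))

rv = (intraday ** 2).sum()                      # total realized variance
rv_pos = (intraday[intraday > 0] ** 2).sum()    # positive semivariance
rv_neg = (intraday[intraday < 0] ** 2).sum()    # negative semivariance

assert np.isclose(rv, rv_pos + rv_neg)
print(rv, rv_pos, rv_neg)
```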

9.
This paper uses high‐frequency continuous intraday electricity price data from the EPEX market to estimate and forecast realized volatility. Three different jump tests are used to break down the variation into jump and continuous components using quadratic variation theory. Several heterogeneous autoregressive models are then estimated for the logarithmic and standard deviation transformations. Generalized autoregressive conditional heteroskedasticity (GARCH) structures are included in the error terms of the models when evidence of conditional heteroskedasticity is found. Model selection is based on various out‐of‐sample criteria. Results show that decomposition of realized volatility is important for forecasting and that the decision whether to include GARCH‐type innovations might depend on the transformation selected. Finally, results are sensitive to the jump test used in the case of the standard deviation transformation.
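The heterogeneous autoregressive (HAR) specification used here, and in several of the other abstracts, regresses realized volatility on its daily, weekly and monthly averages. The sketch below shows a bare-bones version on simulated data and omits the jump/continuous decomposition and GARCH error structure of the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily realized-volatility series.
rng = np.random.default_rng(5)
rv = pd.Series(np.exp(rng.normal(-4.5, 0.5, size=1500)), name="rv")

# HAR regressors: lagged daily, weekly (5-day) and monthly (22-day) averages.
X = pd.DataFrame({
    "rv_d": rv.shift(1),
    "rv_w": rv.rolling(5).mean().shift(1),
    "rv_m": rv.rolling(22).mean().shift(1),
})
data = pd.concat([rv, X], axis=1).dropna()

# Logarithmic transformation, one of the two considered in the paper.
har = sm.OLS(np.log(data["rv"]),
             sm.add_constant(np.log(data[["rv_d", "rv_w", "rv_m"]]))).fit()
print(har.params)
```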

10.
This article proposes intraday high‐frequency risk (HFR) measures for market risk in the case of irregularly spaced high‐frequency data. In this context, we distinguish three concepts of value‐at‐risk (VaR): the total VaR, the marginal (or per‐time‐unit) VaR and the instantaneous VaR. Since the market risk is obviously related to the duration between two consecutive trades, these measures are completed with a duration risk measure, i.e. the time‐at‐risk (TaR). We propose a forecasting procedure for VaR and TaR for each trade or other market microstructure event. Subsequently, we perform a backtesting procedure specifically designed to assess the validity of the VaR and TaR forecasts on irregularly spaced data. The performance of the HFR measure is illustrated in an empirical application for two stocks (Bank of America and Microsoft) and an exchange‐traded fund based on Standard & Poor's 500 index. We show that the intraday HFR forecasts capture accurately the volatility and duration dynamics for these three assets. Copyright © 2015 John Wiley & Sons, Ltd.

11.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.

12.
This paper investigates the profitability of a trading strategy, based on recurrent neural networks, that attempts to predict the direction‐of‐change of the market in the case of the NASDAQ composite index. The sample extends over the period 8 February 1971 to 7 April 1998, while the sub‐period 8 April 1998 to 5 February 2002 has been reserved for out‐of‐sample testing purposes. We demonstrate that incorporating estimates of conditional volatility changes into the trading rule strongly enhances its profitability, after the inclusion of transaction costs, during bear market periods. This improvement is measured with respect to a nested model that does not include the volatility variable as well as to a buy‐and‐hold strategy. We suggest that our findings can be justified by invoking either the ‘volatility feedback’ theory or the existence of portfolio insurance schemes in the equity markets. Our results are also consistent with the view that volatility dependence produces sign dependence. Copyright © 2008 John Wiley & Sons, Ltd.

13.
I examine the information content of the option‐implied covariance between jumps and diffusive risk for the cross‐sectional variation in future returns. This paper documents that the difference between realized volatility and implied covariance (RV‐ICov) can predict future returns. The results show a significant and negative association between expected return and the realized volatility–implied covariance spread in both the portfolio‐level analysis and the cross‐sectional regression study. A trading strategy of buying the lowest RV‐ICov quintile portfolio and selling the highest one generates positive and significant returns. This RV‐ICov anomaly is robust to controlling for size, book‐to‐market value, liquidity and systematic risk proportion. Copyright © 2015 John Wiley & Sons, Ltd.
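The portfolio exercise described above can be sketched as a monthly quintile sort on the RV-ICov signal followed by a long-short spread; the panel below is simulated and equal weighting within quintiles is an assumption.

```python
import numpy as np
import pandas as pd

# Hypothetical stock-month panel with the RV-ICov signal and next-month return.
rng = np.random.default_rng(6)
months = pd.period_range("2010-01", periods=60, freq="M").repeat(100)
panel = pd.DataFrame({
    "month": months,
    "rv_icov": rng.normal(size=len(months)),
    "ret_next": rng.normal(0.01, 0.05, size=len(months)),
})

# Sort stocks into quintiles by RV-ICov each month; long the lowest quintile,
# short the highest.
panel["q"] = panel.groupby("month")["rv_icov"].transform(
    lambda s: pd.qcut(s, 5, labels=False))
port = panel.groupby(["month", "q"])["ret_next"].mean().unstack()
long_short = port[0] - port[4]
print(long_short.mean(), long_short.std())
```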

14.
Recent advances in the measurement of beta (systematic return risk) and volatility (total return risk) demonstrate substantial advantages in utilizing high‐frequency return data in a variety of settings. These advances in the measurement of beta and volatility have resulted in improvements in the evaluation of alternative beta and volatility forecasting approaches. In addition, more precise measurement has also led to direct modeling of the time variation of beta and volatility. Both realized beta and realized volatility have most commonly been modeled with an autoregressive process. In this paper we evaluate constant beta models against autoregressive models of time‐varying realized beta. We find that a constant beta model computed from daily returns over the last 12 months generates the most accurate quarterly forecast of beta and dominates the autoregressive time series forecasts. It also dominates (dramatically) the popular Fama–MacBeth constant beta model, which uses 5 years of monthly returns. Copyright © 2011 John Wiley & Sons, Ltd.
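The constant-beta benchmark that wins in this comparison is just the covariance-to-variance ratio computed over the trailing year of daily returns. The sketch below uses simulated returns and a 252-day window as stand-ins.

```python
import numpy as np
import pandas as pd

# Hypothetical daily returns for a stock and the market.
rng = np.random.default_rng(7)
market = pd.Series(rng.normal(0.0, 0.010, size=1000))
stock = 1.2 * market + pd.Series(rng.normal(0.0, 0.008, size=1000))

# Constant beta from the last ~12 months of daily returns, used as the
# forecast of next quarter's beta.
window = 252
beta_constant = (stock.tail(window).cov(market.tail(window))
                 / market.tail(window).var())
print(beta_constant)

# A rolling (e.g. quarterly) realized-beta series, to which an autoregressive
# model could alternatively be fitted.
rolling_beta = stock.rolling(63).cov(market) / market.rolling(63).var()
print(rolling_beta.dropna().tail())
```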

15.
In multivariate volatility prediction, identifying the optimal forecasting model is not always a feasible task. This is mainly due to the curse of dimensionality typically affecting multivariate volatility models. In practice only a subset of the potentially available models can be effectively estimated, after imposing severe constraints on the dynamic structure of the volatility process. It follows that in most applications the working forecasting model can be severely misspecified. This situation leaves scope for the application of forecast combination strategies as a tool for improving the predictive accuracy. The aim of the paper is to propose some alternative combination strategies and compare their performances in forecasting high‐dimensional multivariate conditional covariance matrices for a portfolio of US stock returns. In particular, we will consider the combination of volatility predictions generated by multivariate GARCH models, based on daily returns, and dynamic models for realized covariance matrices, built from intra‐daily returns. Copyright © 2015 John Wiley & Sons, Ltd.

16.
Multifractal models have recently been introduced as a new type of data‐generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via the generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out‐of‐sample performance is compared against other popular time series specifications. Using an intra‐day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV‐ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and evaluation criteria), the new model often performs significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
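The "best linear forecast" step amounts to solving a Toeplitz system in the autocovariances of realized volatility, which is what the Levinson–Durbin recursion does efficiently. The sketch below illustrates that step on a simulated persistent volatility series; the autocovariances here are empirical, not the model-implied ones the paper derives from the multifractal specification.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Simulated persistent (long-memory-like) realized-volatility series.
rng = np.random.default_rng(8)
x = np.zeros(3000)
for t in range(1, 3000):
    x[t] = 0.97 * x[t - 1] + rng.normal(0.0, 0.1)
rv = np.exp(x)

# Empirical autocovariances up to lag p.
p = 20
rv_c = rv - rv.mean()
acov = np.array([rv_c[: len(rv_c) - k] @ rv_c[k:] / len(rv_c) for k in range(p + 1)])

# Best linear predictor weights: solve the Toeplitz system Gamma * phi = gamma
# (solve_toeplitz uses the Levinson-Durbin recursion internally).
phi = solve_toeplitz(acov[:p], acov[1:p + 1])
one_step = rv.mean() + phi @ (rv[-1:-p - 1:-1] - rv.mean())
print(one_step)
```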

17.
In this paper, we introduce functional coefficients into heterogeneous autoregressive realized volatility (HAR‐RV) models to allow the parameters to change over time. A nonparametric statistic is developed to perform a specification test. The simulation results show that our test displays reliable size and good power. Using the proposed test, we find significant time variation in the coefficients of the HAR‐RV models. Time‐varying parameter (TVP) models can significantly outperform their constant‐coefficient counterparts for longer forecasting horizons. The predictive ability of TVP models can be improved by accounting for VIX information. Copyright © 2016 John Wiley & Sons, Ltd.

18.
This paper examines the long‐run relationship between implied and realised volatility for a sample of 16 FTSE‐100 stocks. We find strong evidence of long‐memory, fractional integration in equity volatility and show that this long‐memory characteristic is not an outcome of structural breaks experienced during the sample period. Fractional cointegration between the implied and realised volatility is shown using recently developed rank cointegration tests by Robinson and Yajima (2002). The predictive ability of individual equity options is also examined and composite implied volatility estimates are shown to contain information on future idiosyncratic or stock‐specific risk that is not captured using popular statistical approaches. Implied volatilities on individual UK equities are thus closely related to realised volatility and are an effective forecasting method particularly over medium forecasting horizons. Copyright © 2011 John Wiley & Sons, Ltd.

19.
Inspired by the commonly held view that international stock market volatility is equivalent to cross-market information flow, we propose various ways of constructing two types of information flow, based on realized volatility (RV) and implied volatility (IV), in multiple international markets. We focus on the RVs derived from the intraday prices of eight international stock markets and use a heterogeneous autoregressive framework to forecast the future volatility of each market for 1 day to 22 days ahead. Our Diebold-Mariano tests provide strong evidence that information flow with IV enhances the accuracy of forecasting international RVs over all of the prediction horizons. The results of a model confidence set test show that a market's own IV and the first principal component of the international IVs exhibit the strongest predictive ability. In addition, the use of information flows with IV can further increase economic returns. Our results are supported by the findings of a wide range of robustness checks.
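The "first principal component of the international IVs" predictor can be built with a standard PCA and then added to a HAR-type regression for the target market's realized volatility. The sketch below is generic: the data are simulated and the eight markets, the lag structure, and the variable names are placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA

# Hypothetical implied-volatility indices for eight markets and the target
# market's realized volatility.
rng = np.random.default_rng(9)
iv = pd.DataFrame(rng.lognormal(-1.5, 0.3, size=(1000, 8)),
                  columns=[f"iv_{i}" for i in range(8)])
rv = pd.Series(rng.lognormal(-4.5, 0.5, size=1000), name="rv")

# First principal component of the standardised international IVs.
iv_std = (iv - iv.mean()) / iv.std()
pc1 = pd.Series(PCA(n_components=1).fit_transform(iv_std).ravel(), name="pc1")

# HAR regressors plus the lagged IV information-flow term.
X = pd.DataFrame({
    "rv_d": rv.shift(1),
    "rv_w": rv.rolling(5).mean().shift(1),
    "rv_m": rv.rolling(22).mean().shift(1),
    "pc1": pc1.shift(1),
})
data = pd.concat([rv, X], axis=1).dropna()
fit = sm.OLS(data["rv"], sm.add_constant(data[["rv_d", "rv_w", "rv_m", "pc1"]])).fit()
print(fit.params)
```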

20.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
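Ordinary (non-functional) partial least squares, the building block behind the FPLS approach, is available in scikit-learn, and the benchmark PCR and LASSO models can be set up alongside it. The sketch below is a generic illustration on simulated predictors, not the paper's FPLS procedure or data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso, LinearRegression

# Hypothetical high-dimensional predictors (e.g. exchange-rate features)
# and a target such as the crude oil log return.
rng = np.random.default_rng(10)
X = rng.normal(size=(500, 40))
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + rng.normal(0.0, 0.5, size=500)
X_tr, X_te, y_tr, y_te = X[:400], X[400:], y[:400], y[400:]

# PLS extracts components that maximise covariance with the target,
# which mitigates the multicollinearity problem mentioned in the abstract.
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)

# Benchmarks from the abstract: principal component regression and LASSO.
pca = PCA(n_components=3).fit(X_tr)
pcr = LinearRegression().fit(pca.transform(X_tr), y_tr)
lasso = Lasso(alpha=0.05).fit(X_tr, y_tr)

for name, pred in [("PLS", pls.predict(X_te).ravel()),
                   ("PCR", pcr.predict(pca.transform(X_te))),
                   ("LASSO", lasso.predict(X_te))]:
    print(name, np.mean((pred - y_te) ** 2))
```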
