Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper proposes value-at-risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series with the different information supplied by the market's expectation of risk. Forecast-combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models, a procedure that has not previously been considered in the VaR literature. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns. Copyright © 2012 John Wiley & Sons, Ltd.
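The forecast-combining idea above can be illustrated with a minimal numpy sketch. This is not the paper's quantile-regression estimator: as a hedged stand-in, it searches a grid for the single convex weight that minimizes the in-sample quantile (pinball) loss; all function names are illustrative.

```python
import numpy as np

def pinball_loss(r, var, alpha):
    # Quantile ("tick"/pinball) loss used to evaluate a VaR forecast
    # at probability level alpha; lower is better
    u = r - var
    return np.mean(u * (alpha - (u < 0)))

def combine_var_forecasts(r, var_a, var_b, alpha,
                          grid=np.linspace(0.0, 1.0, 101)):
    # One-parameter stand-in for quantile-regression combining:
    # pick the convex weight w on var_a minimizing in-sample pinball loss
    losses = [pinball_loss(r, w * var_a + (1.0 - w) * var_b, alpha)
              for w in grid]
    return float(grid[int(np.argmin(losses))])
```

Given returns and two candidate VaR series (say, a CAViaR forecast and an implied-volatility-based quantile), the returned weight indicates which source the loss favours.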

2.
Value-at-risk (VaR) forecasting via a computational Bayesian framework is considered. A range of parametric models is compared, including standard, threshold nonlinear and Markov switching generalized autoregressive conditional heteroskedasticity (GARCH) specifications, plus standard and nonlinear stochastic volatility models, most considering four error probability distributions: Gaussian, Student-t, skewed-t and generalized error distribution. Adaptive Markov chain Monte Carlo methods are employed in estimation and forecasting. A portfolio of four Asia-Pacific stock markets is considered. Two forecasting periods are evaluated in light of the recent global financial crisis. Results reveal that: (i) GARCH models outperformed stochastic volatility models in almost all cases; (ii) asymmetric volatility models were clearly favoured pre-crisis, while during and post-crisis, for a 1-day horizon, models with skewed-t errors ranked best at the 1% level and integrated GARCH models were favoured at the 5% level; (iii) all models forecast VaR less accurately and anti-conservatively post-crisis. Copyright © 2011 John Wiley & Sons, Ltd.

3.
Accurate modelling of volatility (or risk) is important in finance, particularly as it relates to the modelling and forecasting of value-at-risk (VaR) thresholds. As financial applications typically deal with a portfolio of assets and risk, there are several multivariate GARCH models which specify the risk of one asset as depending on its own past as well as the past behaviour of other assets. Multivariate effects, whereby the risk of a given asset depends on the previous risk of any other asset, are termed spillover effects. In this paper we analyse the importance of considering spillover effects when forecasting financial volatility. The forecasting performance of the VARMA-GARCH model of Ling and McAleer (2003), which includes spillover effects from all assets, the CCC model of Bollerslev (1990), which includes no spillovers, and a new Portfolio Spillover GARCH (PS-GARCH) model, which accommodates aggregate spillovers parsimoniously and hence avoids the so-called curse of dimensionality, are compared using a VaR example for a portfolio containing four international stock market indices. The empirical results suggest that spillover effects are statistically significant. However, the VaR threshold forecasts are generally found to be insensitive to the inclusion of spillover effects in any of the multivariate models considered. Copyright © 2008 John Wiley & Sons, Ltd.
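The no-spillover benchmark in this abstract, Bollerslev's CCC model, has a particularly simple covariance structure: a constant correlation matrix scaled by the univariate conditional standard deviations. A minimal sketch of that building block (function names are illustrative, not from the paper):

```python
import numpy as np

def ccc_covariance(sigmas, R):
    # CCC (Bollerslev, 1990): H_t = D_t R D_t, where D_t is the diagonal
    # matrix of univariate conditional standard deviations and R is a
    # constant correlation matrix -- no volatility spillovers across assets
    D = np.diag(sigmas)
    return D @ R @ D

def portfolio_variance(w, H):
    # Portfolio variance for weights w under conditional covariance H,
    # the quantity from which a portfolio VaR threshold is built
    return float(w @ H @ w)
```

Spillover models such as VARMA-GARCH instead let each asset's conditional variance depend on the lagged squared shocks and variances of the other assets.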

4.
We investigate the predictive performance of various classes of value-at-risk (VaR) models in several dimensions: unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. By using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997-1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)-based models do better in the crisis period. Filtering often appears to be useful for some models, particularly for the EVT models, though it could be harmful for some other models. The CAViaR quantile regression models of Engle and Manganelli (2004) have shown some success in predicting the VaR risk measure for various periods, generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
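The "filtered" approach contrasted above can be sketched as filtered historical simulation: filter the returns with a volatility model, take an empirical quantile of the standardized residuals, and rescale by the current volatility. The sketch below uses a RiskMetrics-style EWMA filter with the conventional lambda of 0.94 as a stand-in for the paper's GARCH filters; the function names are illustrative.

```python
import numpy as np

def ewma_vol(r, lam=0.94):
    # RiskMetrics-style EWMA variance filter (lambda = 0.94 is the
    # conventional daily value, an assumption here)
    s2 = np.empty(len(r))
    s2[0] = np.var(r)
    for t in range(1, len(r)):
        s2[t] = lam * s2[t - 1] + (1.0 - lam) * r[t - 1] ** 2
    return np.sqrt(s2)

def filtered_hs_var(r, alpha=0.05):
    # Filtered historical simulation: scale the empirical alpha-quantile
    # of the standardized residuals by the latest volatility estimate
    sig = ewma_vol(r)
    z = r / sig
    return float(sig[-1] * np.quantile(z, alpha))
```

The unfiltered analogue would simply take `np.quantile(r, alpha)` over a rolling window, ignoring volatility dynamics.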

5.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from the high‐dimensional market data. By using its well‐known ability, we can incorporate auxiliary variables that improve the predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, they are also compared with the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.  相似文献   

6.
The variance of a portfolio can be forecast using a single-index model or the covariance matrix of the portfolio. Using univariate and multivariate conditional volatility models, this paper evaluates the performance of the single-index and portfolio models in forecasting value-at-risk (VaR) thresholds of a portfolio. Likelihood ratio tests of unconditional coverage, independence and conditional coverage of the VaR forecasts suggest that the single-index model leads to excessive and often serially dependent violations, while the portfolio model leads to too few violations. The single-index model also leads to lower daily Basel Accord capital charges. The univariate models which display correct conditional coverage lead to higher capital charges than models which lead to too many violations. Overall, the Basel Accord penalties appear to be too lenient and favour models which have too many violations. Copyright © 2008 John Wiley & Sons, Ltd.
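The unconditional-coverage likelihood ratio test mentioned above (due to Kupiec) compares the observed violation rate to the nominal level. A minimal sketch, assuming `hits` is a 0/1 series of VaR violations (the independence and conditional-coverage tests extend this to transition counts of consecutive hits):

```python
import numpy as np

def _bin_ll(x, n, p):
    # Bernoulli log-likelihood for x successes in n trials,
    # skipping 0 * log(0) terms
    ll = 0.0
    if x > 0:
        ll += x * np.log(p)
    if n - x > 0:
        ll += (n - x) * np.log(1.0 - p)
    return ll

def kupiec_lr(hits, alpha):
    # Unconditional-coverage LR statistic: under H0 the violation
    # indicator is Bernoulli(alpha); compare to a chi-squared(1)
    # critical value (3.84 at the 5% significance level)
    n = len(hits)
    x = int(np.sum(hits))
    pi = x / n
    return -2.0 * (_bin_ll(x, n, alpha) - _bin_ll(x, n, pi))
```

A correctly sized 1% VaR model over 1000 days should produce about 10 violations and a statistic near zero; a model with 50 violations is decisively rejected.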

7.
Using the generalized dynamic factor model, this study constructs three predictors of crude oil price volatility: a fundamental (physical) predictor, a financial predictor, and a macroeconomic uncertainty predictor. Moreover, an event‐triggered predictor is constructed using data extracted from Google Trends. We construct GARCH‐MIDAS (generalized autoregressive conditional heteroskedasticity–mixed‐data sampling) models combining realized volatility with the predictors to predict oil price volatility at different forecasting horizons. We then identify the predictive power of the realized volatility and the predictors by the model confidence set (MCS) test. The findings show that, among the four indexes, the financial predictor has the most predictive power for crude oil volatility, which provides strong evidence that financialization has been the key determinant of crude oil price behavior since the 2008 global financial crisis. In addition, the fundamental predictor, followed by the financial predictor, effectively forecasts crude oil price volatility in the long‐run forecasting horizons. Our findings indicate that the different predictors can provide distinct predictive information at the different horizons given the specific market situation. These findings have useful implications for market traders in terms of managing crude oil price risk.  相似文献   

8.
This paper adopts the backtesting criteria of the Basle Committee to compare the performance of a number of simple Value-at-Risk (VaR) models. These criteria provide a new standard for forecasting accuracy. Currently central banks in major money centres, under the auspices of the Basle Committee of the Bank for International Settlements, adopt the VaR system to evaluate the market risk of their supervised banks. Banks are required to report VaRs to bank regulators using their internal models. These models must comply with the Basle backtesting criteria. If a bank fails the VaR backtesting, higher capital requirements will be imposed. VaR is a function of volatility forecasts. Past studies mostly conclude that ARCH and GARCH models provide better volatility forecasts. However, this paper finds that ARCH- and GARCH-based VaR models consistently fail to meet the Basle backtesting criteria. These findings suggest that using ARCH- and GARCH-based models to forecast VaR is not a reliable way to manage a bank's market risk. Copyright © 2002 John Wiley & Sons, Ltd.
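The backtesting criteria referred to are the Basle "traffic-light" zones: count the exceptions (days the loss exceeds the 99% VaR) over a 250-day window and map the count to a zone and a capital-multiplier add-on. The sketch below encodes the zone boundaries and yellow-zone add-ons from the 1996 supervisory framework; function names are illustrative.

```python
def basle_zone(exceptions):
    # Traffic-light zones for a 250-day backtest of 99% VaR
    if exceptions <= 4:
        return "green"
    if exceptions <= 9:
        return "yellow"
    return "red"

# Yellow-zone add-ons to the baseline multiplier of 3 (1996 framework)
YELLOW_ADDON = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}

def capital_multiplier(exceptions):
    # Scaling factor applied to the 60-day average VaR in the
    # market-risk capital charge; red zone forces the maximum of 4
    if exceptions <= 4:
        return 3.0
    if exceptions <= 9:
        return 3.0 + YELLOW_ADDON[exceptions]
    return 4.0
```

A model that "consistently fails" the criteria, in the paper's sense, is one whose exception counts repeatedly land in the yellow or red zone.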

9.
Value-at-risk (VaR) is a standard measure of market risk in financial markets. This paper proposes a novel, adaptive and efficient method to forecast both volatility and VaR. Extending existing exponential smoothing as well as GARCH formulations, the method is motivated from an asymmetric Laplace distribution, where skewness and heavy tails in return distributions, and their potentially time-varying nature, are taken into account. The proposed volatility equation also involves novel time-varying dynamics. Back-testing results illustrate that the proposed method offers a viable and more accurate, though conservative, improvement in forecasting VaR compared to a range of popular alternatives. Copyright © 2013 John Wiley & Sons, Ltd.

10.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange-traded funds (ETF) volatility index, which in recent years has been used as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well-known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized-AutoContouR (G-ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
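The baseline HAR specification used above regresses volatility on its daily, weekly (5-day) and monthly (22-day) averages, capturing long memory with a simple OLS regression. A minimal sketch (the extensions with leverage and scaled oil-price-change predictors would add further regressors to the design matrix; function names are illustrative):

```python
import numpy as np

def har_design(rv):
    # HAR-RV regressors: lagged daily value, 5-day average, 22-day average
    n = len(rv)
    X, y = [], []
    for t in range(22, n):
        d = rv[t - 1]
        w = rv[t - 5:t].mean()
        m = rv[t - 22:t].mean()
        X.append([1.0, d, w, m])
        y.append(rv[t])
    return np.array(X), np.array(y)

def har_fit(rv):
    # OLS fit; returns [const, beta_daily, beta_weekly, beta_monthly]
    X, y = har_design(rv)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

A one-step forecast is then the fitted coefficients applied to the latest daily, weekly and monthly averages.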

11.
In recent years, considerable attention has focused on modelling and forecasting stock market volatility. Stock market volatility matters because stock markets are an integral part of the financial architecture in market economies and play a key role in channelling funds from savers to investors. The focus of this paper is on forecasting stock market volatility in Central and East European (CEE) countries. The obvious question to pose, therefore, is how volatility can be forecast and whether one technique consistently outperforms other techniques. Over the years a variety of techniques have been developed, ranging from the relatively simple to the more complex conditional heteroscedastic models of the GARCH family. In this paper we test the predictive power of 12 models to forecast volatility in the CEE countries. Our results confirm that models which allow for asymmetric volatility consistently outperform all other models considered. Copyright © 2011 John Wiley & Sons, Ltd.
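The asymmetric-volatility models that win in this study let negative shocks raise volatility more than positive ones. One common member of that family, the GJR-GARCH recursion, makes the mechanism concrete (a sketch of the recursion only, not the paper's estimation; the function name is illustrative):

```python
import numpy as np

def gjr_sigma2(r, omega, alpha, gamma, beta):
    # GJR-GARCH(1,1): negative shocks add gamma to the ARCH coefficient,
    # so bad news raises next-day variance more than equally sized good news
    s2 = np.empty(len(r))
    s2[0] = np.var(r)
    for t in range(1, len(r)):
        neg = 1.0 if r[t - 1] < 0 else 0.0
        s2[t] = omega + (alpha + gamma * neg) * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2
```

With gamma = 0 the recursion collapses to standard GARCH(1,1); EGARCH achieves the same asymmetry in log-variance form.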

12.
This paper provides clear-cut evidence that the out-of-sample VaR (value-at-risk) forecasting performance of alternative parametric volatility models, like EGARCH (exponential generalized autoregressive conditional heteroskedasticity) or GARCH, and Markov regime-switching models, can be considerably improved if they are combined with skewed distributions of asset return innovations. The performance of these models is found to be similar to that of the EVT (extreme value theory) approach. The performance of the latter approach can also be improved if asset return innovations are assumed to follow a skewed distribution. The performance of the Markov regime-switching model is considerably improved if this model allows for EGARCH effects, for all of the volatility regimes considered. Copyright © 2014 John Wiley & Sons, Ltd.

13.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are implemented on returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
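One of the simplest performance-based combination schemes of the kind compared above weights each model inversely to its past mean squared error; equal weights and the cross-model median are the usual simple benchmarks. A minimal sketch (names illustrative, not from the paper):

```python
import numpy as np

def inverse_mse_weights(forecasts, realized):
    # forecasts: array of shape (n_models, T); weight each model
    # in inverse proportion to its past MSE against the realized series
    mse = np.mean((forecasts - realized) ** 2, axis=1)
    w = 1.0 / mse
    return w / w.sum()

def combine(forecasts, weights):
    # Weighted combination forecast, one value per period
    return weights @ forecasts
```

Time-varying schemes recompute the weights on a rolling window so that recently accurate models gain influence.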

14.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.

15.
A risk management strategy designed to be robust to the global financial crisis (GFC), in the sense of selecting a value-at-risk (VaR) forecast that combines the forecasts of different VaR models, was proposed by McAleer and coworkers in 2010. The robust forecast is based on the median of the point VaR forecasts of a set of conditional volatility models. Such a risk management strategy is robust to the GFC in the sense that, while maintaining the same risk management strategy before, during and after a financial crisis, it will lead to comparatively low daily capital charges and violation penalties for the entire period. This paper presents evidence to support the claim that the median point forecast of VaR is generally GFC robust. We investigate the performance of a variety of single and combined VaR forecasts in terms of daily capital requirements and violation penalties under the Basel II Accord, as well as other criteria. In the empirical analysis we choose several major indexes, namely French CAC, German DAX, US Dow Jones, UK FTSE100, Hong Kong Hang Seng, Spanish Ibex 35, Japanese Nikkei, Swiss SMI and US S&P 500. The GARCH, EGARCH, GJR and RiskMetrics models, as well as several other strategies, are used in the comparison. Backtesting is performed on each of these indexes using the Basel II Accord regulations for 2008-10 to examine the performance of the median strategy in terms of the number of violations and daily capital charges, among other criteria. The median is shown to be a profitable and safe strategy for risk management, both in calm and turbulent periods, as it provides a reasonable number of violations and daily capital charges. The median also performs well when both total losses and the asymmetric linear tick loss function are considered. Copyright © 2012 John Wiley & Sons, Ltd.
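The median strategy itself is a one-liner per day, and the Basel II daily capital charge it is evaluated against compares the latest VaR with a multiple of the recent average. A minimal sketch, with VaR quoted as a positive loss number and `k = 3.0` standing in for the supervisory multiplier (names illustrative):

```python
import numpy as np

def median_var(model_vars):
    # The GFC-robust strategy: the median, taken day by day, of the
    # individual models' VaR point forecasts (rows = models)
    return np.median(np.asarray(model_vars), axis=0)

def daily_capital_charge(var_today, var_last60, k=3.0):
    # Basel II-style daily charge: the larger of today's VaR and
    # k times the average VaR over the previous 60 business days
    return max(float(var_today), k * float(np.mean(var_last60)))
```

Because the median discards the most extreme model forecasts each day, it tends to avoid both excessive violations and inflated capital charges.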

16.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility are compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.

17.
This paper assesses the informational content of alternative realized volatility estimators, daily range and implied volatility in multi-period out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovations process and a Monte Carlo simulation approach in order to produce the multi-period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce statistically and regulatory precise VaR forecasts across forecasting horizons, with the implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune against microstructure noise bias or price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.

18.
We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad news on volatility (risk) transmission. We introduce a coherent framework for testing asymmetries in the conditional mean and the conditional variance, separately or jointly. To this end we derive both a Wald and a Lagrange multiplier test. Some of the new asymmetric model's moment properties are investigated. Detailed empirical results are given for the daily returns of the composite index of the New York Stock Exchange. There is strong evidence of asymmetry in both the conditional mean and the conditional variance functions. In a genuine out-of-sample forecasting experiment the performance of the best fitted asymmetric model, having asymmetries in both conditional mean and conditional variance, is compared with an asymmetric model for the conditional mean, and with no-change forecasts. This is done both in terms of conditional mean forecasting as well as in terms of risk forecasting. Finally, the paper presents some evidence of asymmetries in the index stock returns of the Group of Seven (G7) industrialized countries. Copyright © 2004 John Wiley & Sons, Ltd.

19.
Estimation of the value at risk (VaR) requires prediction of the future volatility. Whereas this is a simple task in ARCH and related models, it becomes much more complicated in stochastic volatility (SV) processes, where the volatility is a function of a latent variable that is not observable. In-sample (present and past values) and out-of-sample (future values) predictions of that unobservable variable are thus necessary. This paper proposes singular spectrum analysis (SSA), a fully nonparametric technique that can be used for both purposes. A combination of traditional forecasting techniques and SSA is also considered to estimate the VaR. Their performance is assessed in an extensive Monte Carlo study and with an application to a daily series of S&P 500 returns.
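The core SSA step referred to above embeds the series into a trajectory matrix, truncates its singular value decomposition, and diagonal-averages back to a smoothed (in-sample) reconstruction of the latent component. A minimal sketch, assuming window length `L` and `k` retained components are chosen by the user (names illustrative; SSA forecasting adds a linear-recurrence step on top of this):

```python
import numpy as np

def ssa_reconstruct(x, L, k):
    # Basic SSA: build the L x K trajectory matrix, keep the top-k
    # singular components, then diagonal-average (Hankelize) back
    # to a smoothed series of the original length
    n = len(x)
    K = n - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]
    rec = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(K):
        rec[j:j + L] += Xk[:, j]
        cnt[j:j + L] += 1
    return rec / cnt
```

A noisy sinusoid (a rank-2 signal) is a standard check: the rank-2 reconstruction should sit much closer to the clean signal than the noisy input does.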

20.
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility and, undoubtedly, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for doing so. It appears, however, from some empirical studies that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this forecasting limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures. As the magnitude of the outliers increases, the one-step-ahead forecasting performance of the RGARCH model improves more markedly, on two forecast evaluation criteria, relative to both the standard GARCH and random walk models. An empirical application provides strong evidence in favour of the RGARCH model over other competing models. Using a sample of two daily exchange rate series, we find that the out-of-sample volatility forecasts of the RGARCH model are clearly superior to those of the competing models. Copyright © 2002 John Wiley & Sons, Ltd.
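The robustness idea can be illustrated with a least-absolute-deviation criterion for GARCH on the log scale, where large additive outliers enter through log(r^2) and are therefore damped relative to Gaussian quasi-maximum likelihood. This is a hedged stand-in for the paper's RGARCH estimator, not its exact criterion; all names are illustrative, and a real implementation would minimize the criterion over the parameters with a numerical optimizer.

```python
import numpy as np

def garch_sigma2(r, omega, alpha, beta):
    # Standard GARCH(1,1) conditional variance recursion
    s2 = np.empty(len(r))
    s2[0] = np.var(r)
    for t in range(1, len(r)):
        s2[t] = omega + alpha * r[t - 1] ** 2 + beta * s2[t - 1]
    return s2

def lad_criterion(r, omega, alpha, beta):
    # Least-absolute-deviation fit criterion on the log scale:
    # mean |log(r_t^2) - log(sigma_t^2)|; outliers are damped by the log
    s2 = garch_sigma2(r, omega, alpha, beta)
    return float(np.mean(np.abs(np.log(r ** 2 + 1e-12) - np.log(s2))))
```

On data simulated from a GARCH(1,1), the criterion evaluated at the true parameters should beat a badly misspecified constant-variance alternative.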
