Similar Documents
20 similar documents found.
1.
This paper assesses the informational content of alternative realized volatility estimators, the daily range and implied volatility in multi-period out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovation process and a Monte Carlo simulation approach to produce the multi-period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce statistically accurate and regulatorily precise VaR forecasts across forecasting horizons, with implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune to microstructure noise bias and price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.
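As a rough illustration of the Monte Carlo multi-period VaR idea described above, the Python sketch below simulates cumulative returns under a plain GARCH(1,1) with symmetric Student-t innovations; the paper itself uses a Realized GARCH with a skewed Student's t, and all parameter values here are illustrative assumptions, not estimates from the paper.

```python
import numpy as np

def mc_multiperiod_var(sigma2_0, r_last, omega, alpha, beta, nu,
                       horizon=10, n_paths=100_000, level=0.01, seed=0):
    """Monte Carlo h-day VaR under a GARCH(1,1) with Student-t innovations
    (a simplified stand-in for the paper's Realized GARCH / skewed-t setup):
    simulate return paths, cumulate them, take the empirical tail quantile."""
    rng = np.random.default_rng(seed)
    scale = np.sqrt((nu - 2) / nu)          # rescale t draws to unit variance
    sigma2 = np.full(n_paths, omega + alpha * r_last**2 + beta * sigma2_0)
    cum_ret = np.zeros(n_paths)
    for _ in range(horizon):
        r = np.sqrt(sigma2) * rng.standard_t(nu, n_paths) * scale
        cum_ret += r
        sigma2 = omega + alpha * r**2 + beta * sigma2   # GARCH(1,1) recursion
    return -np.quantile(cum_ret, level)     # VaR reported as a positive number

# Illustrative parameter values (assumptions, not estimates from the paper)
print(mc_multiperiod_var(sigma2_0=1e-4, r_last=0.01,
                         omega=1e-6, alpha=0.08, beta=0.90, nu=8))
```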

2.
We propose a new approach to estimating value-at-risk. We use six international stock price indices and three hypothetical portfolios formed from these indices, observed daily from 1 January 1996 to 31 December 2006. Confirmed by the failure rates and the backtesting procedures developed by Kupiec (Technique for verifying the accuracy of risk measurement models. Journal of Derivatives 1995; 3: 73–84) and Christoffersen (Evaluating interval forecasts. International Economic Review 1998; 39: 841–862), the empirical results show that our method can considerably improve the estimation accuracy of value-at-risk. The study thus establishes an effective alternative model for risk prediction and hence provides a reliable tool for portfolio management. Copyright © 2011 John Wiley & Sons, Ltd.
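The Kupiec (1995) and Christoffersen (1998) backtests cited above are standard, and their likelihood ratio statistics are well documented. A minimal sketch, assuming a 0/1 series of VaR violations (degenerate cases, e.g. a series with no violations at all, would need extra guarding in practice):

```python
import numpy as np
from scipy.stats import chi2
from scipy.special import xlogy   # x*log(y) with the 0*log(0)=0 convention

def kupiec_pof(hits, p):
    """Kupiec (1995) unconditional coverage LR test on a 0/1 violation series;
    returns the LR statistic and its chi-squared(1) p-value."""
    hits = np.asarray(hits)
    T, x = len(hits), hits.sum()
    pi = x / T                                  # observed violation rate
    ll0 = xlogy(T - x, 1 - p) + xlogy(x, p)     # log-likelihood at nominal p
    ll1 = xlogy(T - x, 1 - pi) + xlogy(x, pi)   # log-likelihood at observed rate
    lr = -2 * (ll0 - ll1)
    return lr, chi2.sf(lr, df=1)

def christoffersen_ind(hits):
    """Christoffersen (1998) independence LR test based on first-order
    transition counts of the violation series (assumes both states occur)."""
    h = np.asarray(hits)
    prev, curr = h[:-1], h[1:]
    n00 = np.sum((prev == 0) & (curr == 0)); n01 = np.sum((prev == 0) & (curr == 1))
    n10 = np.sum((prev == 1) & (curr == 0)); n11 = np.sum((prev == 1) & (curr == 1))
    p01, p11 = n01 / (n00 + n01), n11 / (n10 + n11)
    p1 = (n01 + n11) / len(prev)                # pooled violation probability
    ll1 = xlogy(n00, 1 - p01) + xlogy(n01, p01) + xlogy(n10, 1 - p11) + xlogy(n11, p11)
    ll0 = xlogy(n00 + n10, 1 - p1) + xlogy(n01 + n11, p1)
    lr = -2 * (ll0 - ll1)
    return lr, chi2.sf(lr, df=1)
```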

3.
Value-at-Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts, so every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE-50 index, at the 95% and 99% levels. We employ a two-stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive these tests, we perform a second-stage filtering of the surviving models using subjective loss functions. This two-stage procedure proves useful in choosing a VaR model, although it addresses the problem only partially. These case studies offer some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
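A minimal sketch of the two-stage idea: filter models by a statistical backtest (e.g. the Kupiec test sketched earlier, passed in as a callable), then rank the survivors by a loss function. The quantile ("tick") loss used here is one common subjective choice, an assumption rather than the paper's specific loss functions.

```python
import numpy as np

def tick_loss(returns, var, alpha):
    """Quantile ('tick') loss of a positive-valued VaR series at tail
    probability alpha; lower average loss means a better quantile forecast."""
    e = returns + var                      # error relative to the -VaR quantile
    return np.mean((alpha - (e < 0)) * e)

def two_stage_select(returns, forecasts_by_model, alpha, pof_test):
    """Stage 1: keep models whose violation series passes a coverage backtest
    (pof_test returns (stat, p-value), e.g. the Kupiec test sketched earlier).
    Stage 2: rank the survivors by tick loss and return the best model name."""
    survivors = {m: v for m, v in forecasts_by_model.items()
                 if pof_test((returns < -v).astype(int), alpha)[1] > 0.05}
    if not survivors:
        return None                        # no model survives stage 1
    return min(survivors, key=lambda m: tick_loss(returns, survivors[m], alpha))
```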

4.
This article proposes intraday high-frequency risk (HFR) measures for market risk in the case of irregularly spaced high-frequency data. In this context, we distinguish three concepts of value-at-risk (VaR): the total VaR, the marginal (or per-time-unit) VaR and the instantaneous VaR. Since market risk is closely related to the duration between two consecutive trades, these measures are complemented by a duration risk measure, the time-at-risk (TaR). We propose a forecasting procedure for VaR and TaR for each trade or other market microstructure event. Subsequently, we perform a backtesting procedure specifically designed to assess the validity of the VaR and TaR forecasts on irregularly spaced data. The performance of the HFR measures is illustrated in an empirical application for two stocks (Bank of America and Microsoft) and an exchange-traded fund based on the Standard & Poor's 500 index. We show that the intraday HFR forecasts accurately capture the volatility and duration dynamics of these three assets. Copyright © 2015 John Wiley & Sons, Ltd.

5.
This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and high-density regions, which may be discontinuous and/or asymmetric. Based on a simple J-statistic built from the moments defined by the orthonormal polynomials associated with the binomial distribution, this new approach has several advantages. First, its implementation is extremely easy. Second, it allows separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes our GMM test has good small-sample properties. These results are corroborated by an empirical application to the S&P 500 and Nikkei stock market indexes, which confirms that using this GMM test has major consequences for the ex post evaluation of interval forecasts produced by linear versus nonlinear models. Copyright © 2011 John Wiley & Sons, Ltd.

6.
The vector multiplicative error model (vector MEM) can analyze and forecast multidimensional non-negative valued processes. Its parameters are usually estimated by generalized method of moments (GMM) or maximum likelihood (ML) methods, but these estimates can be heavily affected by outliers. To overcome this problem, this paper proposes an alternative approach, the weighted empirical likelihood (WEL) method. This method uses moment conditions as constraints, and outliers are detected automatically by performing k-means clustering on the Oja depth values of the innovations. The performance of WEL is evaluated against the GMM and ML methods through extensive simulations in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10-minute returns of the S&P 500 index. The results of both the simulations and the S&P 500 volatility forecasts favour the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.

7.
Bayesian methods for assessing the accuracy of dynamic financial value-at-risk (VaR) forecasts have not previously been considered in the literature; such methods are proposed in this paper. Specifically, we develop Bayes factor analogues of popular frequentist tests for the independence of violations and for the correct coverage of a time series of dynamic quantile forecasts. To evaluate the relevant marginal likelihoods, analytic integration methods are used where possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, alongside standard tests, to assess the adequacy of a range of VaR forecasting models in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
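The abstract gives no formulas, but one simple Bayes factor analogue of the unconditional coverage test can be written in closed form: a point null p = p0 against a Beta-prior alternative, via the beta-binomial marginal likelihood. This is only a sketch of the general idea, not the paper's exact procedure, and the prior parameters are assumptions.

```python
import numpy as np
from scipy.special import betaln

def coverage_bayes_factor(x, T, p0, a=1.0, b=1.0):
    """Bayes factor for H1 (violation rate p ~ Beta(a, b)) against H0 (p = p0),
    given x violations in T forecasts; the binomial coefficient cancels.
    The Beta(1, 1) default prior is an assumption made for illustration."""
    log_m1 = betaln(a + x, b + T - x) - betaln(a, b)    # beta-binomial marginal
    log_m0 = x * np.log(p0) + (T - x) * np.log(1 - p0)  # likelihood at p0
    return np.exp(log_m1 - log_m0)

# Example: 9 violations in 250 days of 99% VaR forecasts
print(coverage_bayes_factor(x=9, T=250, p0=0.01))
```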

8.
The variance of a portfolio can be forecast using either a single-index model or the covariance matrix of the portfolio. Using univariate and multivariate conditional volatility models, this paper evaluates the performance of the single-index and portfolio models in forecasting value-at-risk (VaR) thresholds of a portfolio. Likelihood ratio tests of unconditional coverage, independence and conditional coverage of the VaR forecasts suggest that the single-index model leads to excessive and often serially dependent violations, while the portfolio model leads to too few violations. The single-index model also leads to lower daily Basel Accord capital charges. The univariate models which display correct conditional coverage lead to higher capital charges than models which produce too many violations. Overall, the Basel Accord penalties appear to be too lenient and to favour models that have too many violations. Copyright © 2008 John Wiley & Sons, Ltd.
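A minimal sketch of the two variance forecasts being compared, on simulated data: the full-covariance portfolio variance w'Σw versus the single-index variance (w'β)²σ_m² + w'Dw, where D is a diagonal matrix of idiosyncratic variances. The data-generating process and the weights are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
market = rng.normal(0.0, 0.01, 1000)           # simulated daily market returns
betas_true = np.array([0.8, 1.0, 1.2])
idio = rng.normal(0.0, 0.005, (1000, 3))       # idiosyncratic noise
returns = market[:, None] * betas_true + idio  # single-index data-generating process
w = np.array([0.5, 0.3, 0.2])                  # portfolio weights

# Full-covariance ("portfolio model") variance forecast: w' Sigma w
var_full = w @ np.cov(returns, rowvar=False) @ w

# Single-index variance forecast: (w'beta)^2 var(m) + w' D w, D diagonal
beta_hat = np.array([np.polyfit(market, returns[:, i], 1)[0] for i in range(3)])
resid = returns - market[:, None] * beta_hat
var_si = (w @ beta_hat) ** 2 * market.var() + w**2 @ resid.var(axis=0)

# 99% one-day VaR thresholds (as returns) implied by each variance forecast
print(norm.ppf(0.01) * np.sqrt(var_full), norm.ppf(0.01) * np.sqrt(var_si))
```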

9.
We investigate the predictive performance of various classes of value-at-risk (VaR) models along several dimensions: unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. Using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997–1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities in risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)-based models do better in the crisis period. Filtering often appears to be useful for some models, particularly the EVT models, though it can be harmful for others. The CaViaR quantile regression models of Engle and Manganelli (2004) show some success in predicting the VaR risk measure for various periods, and are generally more stable than models that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.

10.
This paper adopts the backtesting criteria of the Basle Committee to compare the performance of a number of simple value-at-risk (VaR) models. These criteria provide a new standard of forecasting accuracy. Central banks in major money centres, under the auspices of the Basle Committee of the Bank for International Settlements, currently adopt the VaR system to evaluate the market risk of the banks they supervise. Banks are required to report the VaRs produced by their internal models to bank regulators, and these models must comply with the Basle backtesting criteria; if a bank fails the VaR backtest, higher capital requirements are imposed. VaR is a function of volatility forecasts, and past studies mostly conclude that ARCH and GARCH models provide better volatility forecasts. However, this paper finds that ARCH- and GARCH-based VaR models consistently fail to meet the Basle backtesting criteria. These findings suggest that using ARCH- and GARCH-based models to forecast VaR is not a reliable way to manage a bank's market risk. Copyright © 2002 John Wiley & Sons, Ltd.
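The Basle backtesting criteria referred to here are the well-known three-zone "traffic light" rule on the count of 99% VaR violations over 250 trading days. A sketch of the zones and the associated capital multipliers (3.0 plus the zone-dependent "plus factor"):

```python
def basel_traffic_light(n_violations):
    """Basle three-zone backtest on 250 trading days of 99% VaR forecasts:
    returns the zone and the scaling multiplier applied to market-risk
    capital (3.0 plus the zone-dependent 'plus factor')."""
    plus = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}
    if n_violations <= 4:
        return "green", 3.00
    if n_violations <= 9:
        return "yellow", 3.00 + plus[n_violations]
    return "red", 4.00

for n in (2, 6, 11):
    print(n, *basel_traffic_light(n))
```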

11.
Accurate modelling of volatility (or risk) is important in finance, particularly as it relates to the modelling and forecasting of value‐at‐risk (VaR) thresholds. As financial applications typically deal with a portfolio of assets and risk, there are several multivariate GARCH models which specify the risk of one asset as depending on its own past as well as the past behaviour of other assets. Multivariate effects, whereby the risk of a given asset depends on the previous risk of any other asset, are termed spillover effects. In this paper we analyse the importance of considering spillover effects when forecasting financial volatility. The forecasting performance of the VARMA‐GARCH model of Ling and McAleer (2003), which includes spillover effects from all assets, the CCC model of Bollerslev (1990), which includes no spillovers, and a new Portfolio Spillover GARCH (PS‐GARCH) model, which accommodates aggregate spillovers parsimoniously and hence avoids the so‐called curse of dimensionality, are compared using a VaR example for a portfolio containing four international stock market indices. The empirical results suggest that spillover effects are statistically significant. However, the VaR threshold forecasts are generally found to be insensitive to the inclusion of spillover effects in any of the multivariate models considered. Copyright © 2008 John Wiley & Sons, Ltd.

12.
Value-at-risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting a portfolio's VaR. We employ the Gram-Charlier expansion (GCE), which augments the standard normal distribution with the first four moments, allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other VaR forecasting models and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms estimates generated by both constant and time-varying higher-moment models. Copyright © 2009 John Wiley & Sons, Ltd.
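A sketch of a static version of the GCE quantile: the Gram-Charlier density φ(z)[1 + (s/6)(z³-3z) + ((k-3)/24)(z⁴-6z²+3)] integrates to the distribution function inverted below. The paper lets skewness and kurtosis vary over time; here they are fixed, illustrative values. Note also that for some moment combinations the GCE density can turn negative, so this inversion is only a heuristic.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def gce_quantile(alpha, skew, kurt):
    """Quantile of the Gram-Charlier expanded density with fixed skewness and
    kurtosis, found by root-searching the GCE distribution function
        F(z) = Phi(z) - phi(z) * [ s/6*(z^2-1) + (k-3)/24*(z^3-3z) ]."""
    def cdf(z):
        g = skew / 6 * (z**2 - 1) + (kurt - 3) / 24 * (z**3 - 3 * z)
        return norm.cdf(z) - norm.pdf(z) * g
    return brentq(lambda z: cdf(z) - alpha, -10.0, 10.0)

# Illustrative moments and volatility forecast (assumed values)
z_star = gce_quantile(0.01, skew=-0.3, kurt=4.5)
sigma = 0.012                                   # one-day volatility forecast
print("99% VaR:", -sigma * z_star)
```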

13.
This paper shows that out-of-sample forecast comparisons can help prevent data-mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real-data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared with forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and for encompassing. The simulations indicate that most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application to modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
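A sketch of the two forecast comparisons mentioned, in simplified form: a Diebold-Mariano-style test of equal MSE and a Harvey-Leybourne-Newbold-style encompassing test. Both use a plain variance estimate of the loss differential; since the paper's setting involves nested models, the standard normal limit does not strictly apply, so treat these as illustrations rather than the paper's exact test statistics.

```python
import numpy as np
from scipy.stats import norm

def equal_mse_test(e_bench, e_model):
    """t-test of equal MSE via the loss differential d_t = e_b^2 - e_m^2
    (Diebold-Mariano style, plain variance estimate)."""
    d = np.asarray(e_bench) ** 2 - np.asarray(e_model) ** 2
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return t, 2 * norm.sf(abs(t))

def encompassing_test(e_bench, e_model):
    """Encompassing test via d_t = e_b * (e_b - e_m): rejecting d = 0 means
    the candidate's forecast carries information the benchmark lacks."""
    e_b, e_m = np.asarray(e_bench), np.asarray(e_model)
    d = e_b * (e_b - e_m)
    t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    return t, 2 * norm.sf(abs(t))
```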

14.
In this paper, we investigate the performance of a class of M-estimators for both symmetric and asymmetric conditional heteroscedastic models in the prediction of value-at-risk. The class includes the least absolute deviation (LAD), Huber's, Cauchy and B-estimators, as well as the well-known quasi-maximum likelihood estimator (QMLE). We use a wide range of summary statistics to compare both the in-sample and out-of-sample VaR estimates for three well-known stock indices. Our empirical study suggests that the Cauchy, Huber and B-estimators generally perform better in predicting one-step-ahead VaR than the commonly used QMLE. Copyright © 2011 John Wiley & Sons, Ltd.

15.
A risk management strategy designed to be robust to the global financial crisis (GFC), in the sense of selecting a value-at-risk (VaR) forecast that combines the forecasts of different VaR models, was proposed by McAleer and coworkers in 2010. The robust forecast is based on the median of the point VaR forecasts of a set of conditional volatility models. Such a risk management strategy is robust to the GFC in the sense that, while maintaining the same strategy before, during and after a financial crisis, it leads to comparatively low daily capital charges and violation penalties for the entire period. This paper presents evidence to support the claim that the median point forecast of VaR is generally GFC-robust. We investigate the performance of a variety of single and combined VaR forecasts in terms of daily capital requirements and violation penalties under the Basel II Accord, as well as other criteria. In the empirical analysis we consider several major indexes: the French CAC, German DAX, US Dow Jones, UK FTSE 100, Hong Kong Hang Seng, Spanish Ibex 35, Japanese Nikkei, Swiss SMI and US S&P 500. The GARCH, EGARCH, GJR and RiskMetrics models, as well as several other strategies, are used in the comparison. Backtesting is performed on each of these indexes using the Basel II Accord regulations for 2008–10 to examine the performance of the median strategy in terms of the number of violations and daily capital charges, among other criteria. The median is shown to be a profitable and safe strategy for risk management, in both calm and turbulent periods, as it provides a reasonable number of violations and daily capital charges. The median also performs well when both total losses and the asymmetric linear tick loss function are considered. Copyright © 2012 John Wiley & Sons, Ltd.
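The median combination itself is a one-line computation; a minimal sketch that combines a panel of model forecasts and counts the resulting violations (all numbers below are made up for illustration):

```python
import numpy as np

def median_var_strategy(var_forecasts, returns):
    """GFC-robust combination: take the cross-model median of the point VaR
    forecasts each day, then count violations against realized returns.

    var_forecasts: array (T, n_models) of positive VaR numbers
    returns:       array (T,) of realized daily returns
    """
    median_var = np.median(var_forecasts, axis=1)
    violations = returns < -median_var
    return median_var, int(violations.sum())

# Illustrative: three models' forecasts on five days (made-up numbers)
v = np.array([[0.021, 0.025, 0.030],
              [0.022, 0.026, 0.031],
              [0.020, 0.024, 0.029],
              [0.023, 0.027, 0.032],
              [0.021, 0.025, 0.030]])
r = np.array([-0.010, -0.027, 0.005, -0.001, -0.024])
print(median_var_strategy(v, r))
```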

16.
This study establishes a benchmark for short-term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by the k-nearest neighbors method for one week ahead; by a vector error correction model estimated using elastic net regularization for two and three weeks ahead; and by futures prices for four and five weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable: a simple trading strategy that times sales based on the price forecasts could increase a salmon farmer's net profit by around 7%.
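A sketch of the k-nearest-neighbors forecast in this spirit: embed the price series in lag vectors, find the k historical patterns closest to the most recent one, and average the prices that followed them. The lag length, k, and the simulated series are assumptions, not the study's tuned configuration.

```python
import numpy as np

def knn_forecast(prices, n_lags=4, k=5):
    """One-step-ahead k-NN forecast: embed the series in lag vectors, find
    the k historical patterns closest to the latest one (Euclidean distance),
    and average the prices that followed those patterns."""
    p = np.asarray(prices, dtype=float)
    # Rows of X are consecutive windows of n_lags prices; last row is current
    X = np.column_stack([p[i:len(p) - n_lags + i + 1] for i in range(n_lags)])
    patterns, targets = X[:-1], p[n_lags:]   # each pattern's next-period price
    dist = np.linalg.norm(patterns - X[-1], axis=1)
    nearest = np.argsort(dist)[:k]
    return targets[nearest].mean()

# Illustrative series: seasonal pattern plus noise
rng = np.random.default_rng(0)
series = 50 + 5 * np.sin(np.arange(300) / 10) + rng.normal(0, 0.5, 300)
print(knn_forecast(series))
```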

17.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

18.
We compare linear autoregressive (AR) models and self‐exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two‐regime SETAR process is used as the data‐generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non‐linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
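A sketch of the kind of two-regime SETAR data-generating process described, with a linear AR(1) fitted by OLS for comparison; all coefficients are illustrative.

```python
import numpy as np

def simulate_setar(T, phi_low, phi_high, threshold=0.0, sigma=1.0, seed=0):
    """Two-regime SETAR(1): the AR coefficient switches according to whether
    the previous observation lies below or above the threshold."""
    rng = np.random.default_rng(seed)
    y = np.zeros(T)
    for t in range(1, T):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + rng.normal(0.0, sigma)
    return y

y = simulate_setar(500, phi_low=0.9, phi_high=0.3)

# Linear AR(1) coefficient fitted by OLS, ignoring the regime switch
phi_hat = np.polyfit(y[:-1], y[1:], 1)[0]
print("linear AR(1) estimate:", round(phi_hat, 3))
```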

19.
We extend the analysis of Christoffersen and Diebold (1998) on long‐run forecasting in cointegrated systems to multicointegrated systems. For the forecast evaluation we consider several loss functions, each of which has a particular interpretation in the context of stock‐flow models where multicointegration typically occurs. A loss function based on a standard mean square forecast error (MSFE) criterion focuses on the forecast errors of the flow variables alone. Likewise, a loss function based on the triangular representation of cointegrated systems (suggested by Christoffersen and Diebold) considers forecast errors associated with changes in both stock (modelled through the cointegrating restrictions) and flow variables. We suggest a new loss function based on the triangular representation of multicointegrated systems which further penalizes deviations from the long‐run relationship between the levels of stock and flow variables as well as changes in the flow variables. Among other things, we show that if one is concerned with all possible long‐run relations between stock and flow variables, this new loss function entails high and increasing forecasting gains compared to both the standard MSFE criterion and Christoffersen and Diebold's criterion. This paper demonstrates the importance of carefully selecting loss functions in forecast evaluation of models involving stock and flow variables. Copyright © 2004 John Wiley & Sons, Ltd.

20.
This paper examines the problem of how to validate multiple-period density forecasting models. Such models are more difficult to validate than their single-period equivalents, because consecutive observations are subject to common shocks that undermine the i.i.d. assumption underlying standard tests. The paper examines various solutions to this problem and proposes a new one based on applying standard tests to a resample that is constructed to be i.i.d. It argues that this solution is superior to the alternatives, and presents results indicating that tests based on the i.i.d. resample approach have good power. Copyright © 2007 John Wiley & Sons, Ltd.
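One plausible reading of the i.i.d.-resample idea, offered only as a sketch: with h-step-ahead density forecasts, probability integral transforms (PITs) spaced h periods apart do not share forecast-period shocks, so keeping every h-th PIT gives an approximately i.i.d. subsample to which a standard uniformity test can be applied. The Kolmogorov-Smirnov test used below is one standard choice, not necessarily the paper's.

```python
import numpy as np
from scipy.stats import kstest

def iid_resample_test(pits, horizon):
    """Validate multi-period density forecasts: PITs of overlapping h-step
    forecasts share shocks, so keep only every h-th PIT to obtain an
    approximately i.i.d. subsample, then test it for U(0,1) uniformity."""
    sub = np.asarray(pits)[::horizon]
    return kstest(sub, "uniform")

# Illustrative check: well-calibrated PITs should not be rejected
rng = np.random.default_rng(2)
print(iid_resample_test(rng.uniform(size=1000), horizon=5))
```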
