Similar Literature
20 similar records retrieved.
1.
This paper proposes value-at-risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series with the different information supplied by the market's expectation of risk. Forecast-combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models—a procedure that has not been considered in the VaR area so far. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns. Copyright © 2012 John Wiley & Sons, Ltd.
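As a point of reference for the plug-in idea, the sketch below shows a symmetric absolute value CAViaR recursion with an added implied-volatility term, written in Python. The recursion form, the tick (quantile) loss, the coefficient values, and the simulated data are generic illustrations under my own assumptions, not the authors' exact specification or estimation procedure.

import numpy as np

def caviar_sav_iv(returns, implied_vol, beta, q0):
    # Symmetric absolute value CAViaR recursion augmented with implied volatility:
    #   q_t = b0 + b1 * q_{t-1} + b2 * |r_{t-1}| + b3 * IV_{t-1}
    # In practice the coefficients are estimated by minimizing the tick loss below.
    b0, b1, b2, b3 = beta
    q = np.empty(len(returns))
    q[0] = q0
    for t in range(1, len(returns)):
        q[t] = b0 + b1 * q[t - 1] + b2 * abs(returns[t - 1]) + b3 * implied_vol[t - 1]
    return q

def tick_loss(returns, quantiles, theta=0.01):
    # Quantile (pinball) loss used to fit and evaluate VaR forecasts at level theta.
    u = returns - quantiles
    return np.mean((theta - (u < 0)) * u)

# Toy example on simulated data; coefficients are placeholders, not estimates.
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(500)
iv = np.full(500, 0.01)
q = caviar_sav_iv(r, iv, beta=(-0.002, 0.9, -0.1, -0.2), q0=-0.02)
print(tick_loss(r, q))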

2.
We study intraday return volatility dynamics using a time-varying components approach, applied to IBM intraday returns. Empirical evidence indicates that three additive components—a time-varying mean of absolute returns and two cosine components with time-varying amplitudes—together capture very well the pronounced periodicity and persistence exhibited in the empirical autocorrelation pattern of IBM returns. We find that long-run volatility persistence is driven predominantly by daily level shifts in mean absolute returns. After adjusting for these intradaily components, the filtered returns behave much like Gaussian noise, suggesting that the three-component structure is adequately specified. Furthermore, a new volatility measure (TCV) can be constructed from these components. Results from extensive out-of-sample rolling forecast experiments suggest that TCV fares well in predicting future volatility against alternative methods, including the GARCH model, realized volatility and realized absolute value. Copyright © 2009 John Wiley & Sons, Ltd.

3.
In this study we propose several new variables, such as continuous realized semi-variance and signed jump variations incorporating jump tests, and construct a new heterogeneous autoregressive model for realized volatility to investigate the impact these new variables have on forecasting oil price volatility. In-sample results indicate that past negative returns have a greater effect on future volatility than positive returns do, and that our new signed jump variations have a significantly negative influence on future volatility. Out-of-sample results, with several robustness checks, demonstrate that our proposed models not only forecast volatility more accurately but also generate larger economic value than the existing models discussed in this paper.
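For readers unfamiliar with the heterogeneous autoregressive setup, the following minimal Python sketch builds an HAR-type regression for next-day realized volatility augmented with a signed jump term and fits it by ordinary least squares. The variable construction, window lengths, and OLS fit are a generic HAR-RV illustration under my own assumptions, not the exact model or jump-test construction proposed in the paper.

import numpy as np

def har_rv_design(rv, signed_jump):
    # Build HAR regressors: daily RV, weekly (5-day) and monthly (22-day) average RV,
    # plus a signed jump variation term, to predict next-day realized volatility.
    T = len(rv)
    X, y = [], []
    for t in range(21, T - 1):
        X.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean(), signed_jump[t]])
        y.append(rv[t + 1])
    return np.array(X), np.array(y)

# Illustrative OLS fit on simulated data (not estimates from real oil futures).
rng = np.random.default_rng(1)
rv = 1e-4 * np.abs(rng.standard_normal(300))
sj = 1e-5 * rng.standard_normal(300)            # stand-in for the signed jump variation
X, y = har_rv_design(rv, sj)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # intercept, daily, weekly, monthly and signed-jump coefficients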

4.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.

5.
This paper proposes Markov chain Monte Carlo methods to estimate the parameters and log durations of the correlated or asymmetric stochastic conditional duration models. Following the literature, instead of fitting the models directly, the observation equation of the models is first subjected to a logarithmic transformation. A correlation is then introduced between the transformed innovation and the latent process in an attempt to improve the statistical fits of the models. In order to perform one-step-ahead in-sample and out-of-sample duration forecasts, an auxiliary particle filter is used to approximate the filter distributions of the latent states. Simulation studies and application to the IBM transaction dataset illustrate that our proposed estimation methods work well in terms of parameter and log duration estimation. Copyright © 2014 John Wiley & Sons, Ltd.

6.
The aim of this paper is to compare the forecasting performance of competing threshold models that capture the asymmetric effect in volatility. We focus on the relative out-of-sample forecasting ability of the SETAR-Threshold GARCH (SETAR-TGARCH) and SETAR-Threshold Stochastic Volatility (SETAR-THSV) models compared to the GARCH and stochastic volatility (SV) models. The main problem in evaluating the predictive ability of volatility models is that the 'true' underlying volatility process is not observable, so a proxy must be defined for the unobservable volatility. For the class of nonlinear state space models (SETAR-THSV and SV), a modified version of the SIR algorithm is used to estimate the unknown parameters. The forecasting performance of the competing models is compared for two return time series: IBEX 35 and S&P 500. We explore whether increasing the complexity of the model improves its forecasting ability. Copyright © 2007 John Wiley & Sons, Ltd.
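As background on the asymmetric (threshold) effect these models target, here is a minimal GJR-type threshold GARCH(1,1) filter in Python, in which a negative lagged return adds an extra term to next-period variance. This is a generic textbook variant with placeholder parameters, not the paper's SETAR-TGARCH or SETAR-THSV specification.

import numpy as np

def tgarch_filter(returns, omega, alpha, gamma, beta):
    # GJR/threshold GARCH(1,1): a negative lagged return adds an extra
    # gamma * r_{t-1}^2 to the conditional variance (the leverage effect).
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        neg = 1.0 if returns[t - 1] < 0 else 0.0
        sigma2[t] = omega + (alpha + gamma * neg) * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(2)
r = 0.01 * rng.standard_normal(1000)
s2 = tgarch_filter(r, omega=1e-6, alpha=0.03, gamma=0.08, beta=0.90)
print(np.sqrt(s2[-1]))   # last conditional volatility implied by the placeholder parameters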

7.
This study investigates the forecasting performance of the GARCH(1,1) model when an effective covariate is added. Based on the assumption that many volatility predictors are available to help forecast the volatility of a target variable, this study shows how to construct a covariate from these predictors and plug it into the GARCH(1,1) model. We present a method of building the covariate so that it contains the maximum possible amount of information from the predictors for forecasting volatility. The loading of the covariate constructed by the proposed method is simply the eigenvector of a matrix, so the method enjoys the advantages of easy implementation and interpretation. Simulations and empirical analysis verify that the proposed method outperforms other methods for forecasting volatility, and the results are quite robust to model misspecification. Specifically, the proposed method reduces the mean square error of the GARCH(1,1) model by 30% when forecasting the volatility of the S&P 500 index. The proposed method is also useful for improving the volatility forecasts of several GARCH-family models and for forecasting value-at-risk. Copyright © 2013 John Wiley & Sons, Ltd.
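A hedged sketch of the general idea follows: several volatility predictors are combined into one covariate using the leading eigenvector of a matrix built from the predictors (here, simply their sample covariance matrix, which is my own stand-in for the matrix used in the paper), and the covariate is then plugged into a GARCH(1,1)-X variance recursion. Parameter values and data are illustrative.

import numpy as np

def leading_eigvec_covariate(predictors):
    # Combine several volatility predictors into one covariate using the leading
    # eigenvector of their sample covariance matrix (an illustrative stand-in for
    # the loading matrix derived in the paper).
    eigvals, eigvecs = np.linalg.eigh(np.cov(predictors, rowvar=False))
    w = eigvecs[:, -1]
    if w.sum() < 0:          # fix the sign so the covariate stays positive here
        w = -w
    return predictors @ w

def garch_x_filter(returns, x, omega, alpha, beta, gamma):
    # GARCH(1,1) variance recursion with an exogenous covariate x_{t-1}; in practice
    # positivity of the variance must be ensured when gamma or x can be negative.
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1] + gamma * x[t - 1]
    return sigma2

rng = np.random.default_rng(3)
r = 0.01 * rng.standard_normal(500)
predictors = 1e-4 * np.abs(rng.standard_normal((500, 4)))   # e.g. RV, IV, range measures
x = leading_eigvec_covariate(predictors)
s2 = garch_x_filter(r, x, omega=1e-6, alpha=0.05, beta=0.90, gamma=0.1)
print(s2[-1])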

8.
This paper is concerned with model averaging estimation for conditional volatility models. Given a set of candidate models with different functional forms, we propose a model averaging estimator and forecast for conditional volatility and construct the corresponding weight-choosing criterion. Under some regularity conditions, we show that the weight selected by the criterion asymptotically minimizes both the true Kullback–Leibler divergence, which is the distributional approximation error, and the Itakura–Saito distance, which is the distance between the true and the estimated or forecast conditional volatility. Monte Carlo experiments support our newly proposed method. In the empirical applications, we investigate nine major stock market indices and make a 1-day-ahead volatility forecast for each data set. The empirical results show that the model averaging forecast achieves the highest accuracy under all types of loss functions in most cases and captures the movement of the unknown true conditional volatility.
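To illustrate the flavour of volatility forecast combination, the sketch below chooses a combination weight for two candidate variance forecasts by minimizing the QLIKE loss, a sample analogue of the Itakura–Saito distance, against a squared-return proxy on a validation window. The grid search, the proxy, and the candidate forecasts are my own illustrative choices and do not reproduce the paper's asymptotically optimal weight-choosing criterion.

import numpy as np

def qlike(proxy, forecast):
    # QLIKE loss: sample analogue of the Itakura-Saito distance between a noisy
    # variance proxy (e.g. squared returns) and a variance forecast.
    ratio = proxy / forecast
    return np.mean(ratio - np.log(ratio) - 1.0)

def combine_two(f1, f2, proxy):
    # Choose the weight on forecast f1 (1 - w on f2) that minimizes QLIKE against
    # the proxy on a validation window; return the weight and the combined forecast.
    grid = np.linspace(0.0, 1.0, 101)
    losses = [qlike(proxy, w * f1 + (1.0 - w) * f2) for w in grid]
    w_star = float(grid[int(np.argmin(losses))])
    return w_star, w_star * f1 + (1.0 - w_star) * f2

# Toy validation data: a slowly moving "true" variance and two candidate forecasts.
rng = np.random.default_rng(4)
true_var = 1e-4 * (1.0 + 0.5 * np.sin(np.arange(250) / 10.0))
r2_proxy = true_var * rng.chisquare(1, 250)    # squared returns as a variance proxy
f1 = 1.1 * true_var                            # informative but biased candidate
f2 = np.full(250, 1e-4)                        # constant benchmark candidate
w, combined = combine_two(f1, f2, r2_proxy)
print(w, qlike(r2_proxy, combined))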

9.
We propose a threshold nonlinearity test for financial time series. Our approach adopts reversible-jump Markov chain Monte Carlo methods to calculate the posterior probabilities of two competing models, namely the GARCH and threshold GARCH models. Posterior evidence favouring the threshold GARCH model indicates threshold nonlinearity or volatility asymmetry. Simulation experiments demonstrate that our method works very well in distinguishing between GARCH and threshold GARCH models. Sensitivity analysis shows that our method is robust to misspecification of the error distribution. In an application to 10 market indices, clear evidence of threshold nonlinearity is found, supporting volatility asymmetry. Copyright © 2005 John Wiley & Sons, Ltd.

10.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high-dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology to predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from the crude oil returns and auxiliary variables based on the exchange rates of major currencies. For forecast evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces against that of the same forecasting models with the observed crude oil returns, as well as principal component regression (PCR) and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

11.
This paper presents gamma stochastic volatility models and investigates their distributional and time series properties. The parameter estimators obtained by the method of moments are shown analytically to be consistent and asymptotically normal. Simulation results indicate that the estimators behave well. The in-sample analysis shows that return models with gamma autoregressive stochastic volatility processes capture the leptokurtic nature of return distributions and the slowly decaying autocorrelation functions of squared stock index returns for the USA and UK. In comparison with GARCH and EGARCH models, the gamma autoregressive model picks up the persistence in volatility for the US and UK index returns but not for the Canadian and Japanese index returns. The out-of-sample analysis indicates that the gamma autoregressive model has superior volatility forecasting performance compared to the GARCH and EGARCH models. Copyright © 2006 John Wiley & Sons, Ltd.

12.
Risk managers are often concerned about tail probabilities of asset return distributions, in particular the frequency and severity of extreme returns. In this article, we propose a model that integrates extreme value theory and point processes to model the frequency and severity of exchange rate returns. The proposed model is applied to daily spot exchange rate series, and the parameters of interest, such as the tail index and the mean size and rate of occurrence of extreme returns, are estimated by maximum likelihood. We study the impact of recent currency crises on the frequency and severity of the series and find that, during 1995–9, the frequency of extreme daily Japanese yen–US dollar spot exchange rate returns increased twofold, and that high-volatility episodes persisted longer for the Japanese yen series than for the Swiss franc and Danish krone series. Copyright © 2001 John Wiley & Sons, Ltd.
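As a simplified illustration of modelling the frequency and severity of extreme returns, the Python sketch below applies a peaks-over-threshold approach: exceedances of losses over a high quantile are fitted with a generalized Pareto distribution (severity, with the shape parameter playing the role of a tail index), and the exceedance rate gives a crude frequency estimate. The threshold choice and simulated data are assumptions of mine; the paper's joint point-process likelihood is not reproduced.

import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(5)
returns = 0.006 * rng.standard_t(df=4, size=5000)     # fat-tailed toy return series

# Peaks over threshold: work with losses beyond a high empirical quantile.
losses = -returns
u = np.quantile(losses, 0.95)                         # ad hoc threshold choice
exceedances = losses[losses > u] - u

# Severity: generalized Pareto fit to the exceedances (shape acts as a tail index).
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)

# Frequency: crude estimate of the exceedance rate per observation.
rate = exceedances.size / losses.size

print(f"tail shape: {shape:.3f}, scale: {scale:.5f}, exceedance rate: {rate:.4f}")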

13.
We investigate the dynamic properties of the realized volatility of five agricultural commodity futures using high-frequency data from Chinese markets and find that the realized volatility exhibits both long memory and regime switching. To capture these properties simultaneously, we utilize a Markov switching autoregressive fractionally integrated moving average (MS-ARFIMA) model, which combines a long memory process with a regime-switching component, to forecast the realized volatility, and compare its forecasting performance with competing models at various horizons. The full-sample estimation results show that the dynamics of the realized volatility of agricultural commodity futures are characterized by two levels of long memory, one associated with the low-volatility regime and the other with the high-volatility regime, and that the probability of staying in the low-volatility regime is higher than that of staying in the high-volatility regime. The out-of-sample results show that combining long memory with switching regimes improves realized volatility forecasts, and the proposed model outperforms the competing models out of sample. Copyright © 2016 John Wiley & Sons, Ltd.

14.
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility, and the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for this purpose. It appears from some empirical studies, however, that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures: as the magnitude of the outliers increases, the one-step-ahead forecasting gains of the RGARCH model, under two forecast evaluation criteria, become more pronounced relative to both the standard GARCH and random walk models. An empirical application provides strong evidence in favour of the RGARCH model over the competing models: using two daily exchange rate series, we find that the out-of-sample volatility forecasts of the RGARCH model are clearly superior to those of the competing models. Copyright © 2002 John Wiley & Sons, Ltd.

15.
We develop a novel quantile double autoregressive model for modelling financial time series. This is done by specifying a generalized lambda distribution for the quantile function of the location-scale double autoregressive model developed by Ling (2004, 2007). Parameter estimation uses Markov chain Monte Carlo Bayesian methods. A simulation technique is introduced for forecasting the conditional distribution of financial returns m periods ahead, and hence for any predictive quantities of interest. An application to forecasting value-at-risk at different time horizons and coverage probabilities for the Dow Jones Industrial Average shows that our method works very well in practice. Copyright © 2013 John Wiley & Sons, Ltd.

16.
This article introduces a new model to capture simultaneously the mean and variance asymmetries in time series. Threshold non-linearity is incorporated into the mean and variance specifications of a stochastic volatility model. Bayesian methods are adopted for parameter estimation. Forecasts of volatility and value-at-risk can also be obtained by sampling from suitable predictive distributions. Simulations demonstrate that the apparent variance asymmetry documented in the literature can be due to the neglect of mean asymmetry. Strong evidence of the mean and variance asymmetries was detected in US and Hong Kong data. Asymmetry in the variance persistence was also discovered in the Hong Kong stock market. Copyright © 2002 John Wiley & Sons, Ltd.

17.
We propose an economically motivated forecast combination strategy in which model weights are related to the portfolio returns obtained with a given forecast model. An empirical application based on an optimal mean–variance bond portfolio problem is used to highlight the advantages of the proposed approach relative to combination methods based on statistical measures of forecast accuracy. We compute the average net excess returns, standard deviation, and Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return-based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return-based approaches that dynamically select only the model with the highest weight each period and discard all other models delivered even better results, demonstrating not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect the best-performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.

18.
Volatility models such as GARCH, although misspecified with respect to the data-generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non-linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non-linear transformations of these forecasts will be biased estimators of the same non-linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non-linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non-linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in-sample and out-of-sample forecasts of three non-linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
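A small Monte Carlo illustration of the Jensen's-inequality point is given below: two variance series with identical unconditional means (a dispersed "integrated variance" and a smoother "forecast") produce different means once the concave square-root transformation is applied, so an unconditionally unbiased variance forecast yields a biased standard-deviation forecast. The lognormal distributions and their parameters are purely illustrative and are not the paper's analytical approximation.

import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000

# A dispersed "integrated variance" and a smoother "variance forecast", rescaled so
# that both have exactly the same unconditional mean (the forecast is unbiased).
iv = 1e-4 * rng.lognormal(mean=0.0, sigma=1.0, size=n)
iv *= 1e-4 / iv.mean()
fc = 1e-4 * rng.lognormal(mean=0.0, sigma=0.3, size=n)
fc *= 1e-4 / fc.mean()

print("mean variance :", iv.mean(), fc.mean())                  # identical by construction
print("mean std. dev.:", np.sqrt(iv).mean(), np.sqrt(fc).mean())
# The square root is concave, so Jensen's inequality penalizes the more dispersed
# series more: E[sqrt(forecast)] > E[sqrt(IV)] despite the unbiased variance forecast.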

19.
The vector multiplicative error model (vector MEM) is capable of analyzing and forecasting multidimensional non-negative valued processes. Its parameters are usually estimated by generalized method of moments (GMM) or maximum likelihood (ML) methods; however, these estimates can be heavily affected by outliers. To overcome this problem, this paper proposes an alternative approach, the weighted empirical likelihood (WEL) method. This method uses moment conditions as constraints, and outliers are detected automatically by performing k-means clustering on the Oja depth values of the innovations. The performance of WEL is evaluated against the GMM and ML methods through extensive simulations in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10-minute returns of the S&P 500 index. Results from both the simulations and the S&P 500 volatility forecasts favour the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.

20.
Tests of forecast encompassing are used to evaluate one-step-ahead forecasts of S&P Composite index returns and volatility. It is found that forecasts over the 1990s made from models that include macroeconomic variables tend to be encompassed by those made from a benchmark model which does not include macroeconomic variables. However, macroeconomic variables are found to add significant information to forecasts of returns and volatility over the 1970s. Often in empirical research on forecasting stock index returns and volatility, in-sample information criteria are used to rank potential forecasting models. Here, none of the forecasting models for the 1970s that include macroeconomic variables are, on the basis of information criteria, preferred to the relevant benchmark specification. Thus, had investors used information criteria to choose between the models used for forecasting over the 1970s considered in this paper, the predictability that tests of encompassing reveal would not have been exploited. Copyright © 2005 John Wiley & Sons, Ltd.
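For concreteness, here is a minimal sketch of a forecast-encompassing regression in the spirit of such tests: the forecast error of a benchmark model is regressed on the difference between a rival forecast and the benchmark, and a slope near zero indicates that the benchmark encompasses the rival. Plain OLS standard errors and the toy data are simplifying assumptions of mine; the exact test statistics used in the paper are not reproduced.

import numpy as np

def encompassing_regression(y, f_bench, f_rival):
    # Regress y - f_bench on a constant and (f_rival - f_bench). A slope near zero
    # means the benchmark forecast encompasses the rival (the rival adds nothing).
    # Plain OLS standard errors are used; HAC errors would be preferred in practice.
    d = f_rival - f_bench
    X = np.column_stack([np.ones_like(d), d])
    e = y - f_bench
    coef, *_ = np.linalg.lstsq(X, e, rcond=None)
    resid = e - X @ coef
    cov = (resid @ resid) / (len(e) - X.shape[1]) * np.linalg.inv(X.T @ X)
    return coef[1], coef[1] / np.sqrt(cov[1, 1])     # slope and its t-statistic

# Toy data: a benchmark forecast with some signal and an uninformative rival.
rng = np.random.default_rng(7)
y = 0.01 * rng.standard_normal(300)
f_bench = 0.3 * y + 0.005 * rng.standard_normal(300)
f_rival = 0.01 * rng.standard_normal(300)
print(encompassing_regression(y, f_bench, f_rival))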
