Similar Literature
20 similar documents were retrieved.
1.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat‐tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first‐order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value‐at‐risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.
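The switching structure described in this abstract can be conveyed with a short simulation. The sketch below is only illustrative, assuming a two-regime first-order Markov chain that shifts both the conditional mean and the GARCH intercept, with Student-t shocks; the parameter values are hypothetical and the paper's Bayesian MCMC estimation is not reproduced.

```python
# Hedged simulation sketch of a two-regime Markov switching GARCH(1,1) process
# with fat-tailed (Student-t) innovations. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
T = 2000
P = np.array([[0.98, 0.02],           # regime transition probabilities
              [0.03, 0.97]])
mu    = np.array([0.05, -0.10])       # regime-dependent conditional means
omega = np.array([0.02, 0.10])        # regime-dependent GARCH intercepts
alpha, beta, nu = 0.08, 0.90, 7.0     # ARCH/GARCH coefficients, t degrees of freedom

s = np.zeros(T, dtype=int)            # latent first-order Markov regime path
h = np.zeros(T)                       # conditional variances
r = np.zeros(T)                       # simulated returns
h[0] = omega[0] / (1 - alpha - beta)
for t in range(1, T):
    s[t] = rng.choice(2, p=P[s[t - 1]])
    h[t] = omega[s[t]] + alpha * (r[t - 1] - mu[s[t - 1]]) ** 2 + beta * h[t - 1]
    z = rng.standard_t(nu) * np.sqrt((nu - 2) / nu)      # unit-variance fat-tailed shock
    r[t] = mu[s[t]] + np.sqrt(h[t]) * z

print("return std in regime 0 vs regime 1:", r[s == 0].std(), r[s == 1].std())
```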

2.
This paper evaluates the performance of conditional variance models using high‐frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as a benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH(1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is weakened, however, by the non‐normality of mean‐adjusted returns, normality being an assumption of the conditional variance models. Nevertheless, the optimum frequency remains the same for different models (EGARCH and PARCH) and a different error distribution (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations, or EGARCH and PARCH models, would give better estimates of volatility with lower forecast errors. Copyright © 2008 John Wiley & Sons, Ltd.
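The realized-volatility benchmark described here is simply the sum of squared intraday log returns sampled at a chosen frequency. The sketch below assumes a hypothetical one-minute price path as a stand-in for the NIFTY data; a linear combination of two such measures would play the role of the benchmark.

```python
# Hedged sketch: realized variance from intraday log prices sampled every
# `step` base-grid observations (e.g. 30 one-minute marks = 30-minute bars).
# The simulated price path is a placeholder, not actual index data.
import numpy as np

def realized_variance(intraday_prices, step):
    """Sum of squared log returns sampled every `step` observations."""
    p = np.log(np.asarray(intraday_prices, dtype=float))[::step]
    return float(np.sum(np.diff(p) ** 2))

rng = np.random.default_rng(1)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.0005, size=375)))  # one day of 1-minute marks
rv_5, rv_30 = realized_variance(prices, 5), realized_variance(prices, 30)
# A linear combination of measures at two frequencies serves as the benchmark.
print(rv_5, rv_30)
```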

3.
In recent years there has been a considerable development in modelling non‐linearities and asymmetries in economic and financial variables. The aim of the current paper is to compare the forecasting performance of different models for the returns of three of the most traded exchange rates in terms of the US dollar, namely the French franc (FF/$), the German mark (DM/$) and the Japanese yen (Y/$). The relative performance of non‐linear models of the SETAR, STAR and GARCH types is contrasted with their linear counterparts. The results show that if attention is restricted to mean square forecast errors, the performance of the models, when distinguishable, tends to favour the linear models. The forecast performance of the models is also evaluated conditional on the regime at the forecast origin and using density forecasts. This analysis produces more evidence of forecasting gains from non‐linear models. Copyright © 2002 John Wiley & Sons, Ltd.
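As a concrete illustration of regime-dependent forecasting at the forecast origin, here is a minimal SETAR(2; 1, 1) sketch with hypothetical coefficients; the regime is selected by comparing the lagged value with a threshold, and the one-step forecast inherits that regime's AR(1) dynamics.

```python
# Minimal SETAR(2;1,1) sketch: AR(1) dynamics that switch when the lagged value
# crosses a threshold (here zero). All coefficients are hypothetical.
import numpy as np

LOW, HIGH, THRESHOLD = (0.10, 0.60), (-0.10, 0.30), 0.0

def setar_forecast(y_last):
    a, b = LOW if y_last <= THRESHOLD else HIGH
    return a + b * y_last                      # one-step-ahead conditional mean

rng = np.random.default_rng(2)
y = np.zeros(500)
for t in range(1, 500):
    a, b = LOW if y[t - 1] <= THRESHOLD else HIGH
    y[t] = a + b * y[t - 1] + rng.normal()

regime = "lower" if y[-1] <= THRESHOLD else "upper"
print("regime at forecast origin:", regime, "| one-step forecast:", setar_forecast(y[-1]))
```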

4.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
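The HAR-with-leverage idea can be sketched as an OLS regression of tomorrow's volatility on daily, weekly and monthly volatility averages plus the negative part of the daily return. The data, the particular leverage term and all variable names below are hypothetical; the paper's extensions based on net and scaled oil price changes are not shown.

```python
# Hedged HAR-RV sketch with a simple leverage term in the conditional mean,
# estimated by OLS on simulated placeholder data.
import numpy as np

def har_design(rv, ret):
    """Daily, weekly, monthly RV averages plus a negative-return (leverage) term."""
    rows, y = [], []
    for t in range(22, len(rv) - 1):
        rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean(),
                     min(ret[t], 0.0)])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

rng = np.random.default_rng(3)
rv  = np.abs(rng.normal(1.0, 0.3, 500))   # stand-in for a volatility-index series
ret = rng.normal(0.0, 1.0, 500)           # stand-in for daily oil returns
X, y = har_design(rv, ret)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

t = len(rv) - 1                           # build the out-of-sample forecast row
x_new = np.array([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean(), min(ret[t], 0.0)])
print("coefficients:", beta, "| one-step-ahead volatility forecast:", x_new @ beta)
```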

5.
This study compares the volatility and density prediction performance of alternative GARCH models with different conditional distribution specifications. The conditional residuals are specified as normal, skewed-t or compound Poisson (jump) distributions within a nonlinear and asymmetric GARCH (NGARCH) model framework. The empirical results for the S&P 500 and FTSE 100 index returns suggest that the jump model outperforms all other models in terms of both volatility forecasting and density prediction. Nevertheless, the superiority of the non-normal models is not always significant and diminishes on those occasions during the sample period when volatility experiences an obvious structural change. Copyright © 2011 John Wiley & Sons, Ltd.
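The NGARCH variance recursion underlying these specifications can be written down directly: h_{t+1} = omega + beta*h_t + alpha*h_t*(z_t - theta)^2, where theta > 0 generates the asymmetric response. The sketch below filters a simulated return series with hypothetical parameters; the skewed-t and compound Poisson jump variants are not shown.

```python
# Hedged sketch of the NGARCH(1,1) variance filter; parameters are hypothetical
# and the jump component of the paper's preferred model is omitted.
import numpy as np

def ngarch_filter(returns, omega=0.02, alpha=0.06, beta=0.90, theta=0.7, mu=0.0):
    h = np.empty(len(returns))
    h[0] = np.var(returns)
    for t in range(len(returns) - 1):
        z = (returns[t] - mu) / np.sqrt(h[t])              # standardized residual
        h[t + 1] = omega + beta * h[t] + alpha * h[t] * (z - theta) ** 2
    return h

rng = np.random.default_rng(4)
r = rng.normal(0, 1, 1000)                                  # placeholder daily returns
h = ngarch_filter(r)
print("filtered conditional variance on the last day:", h[-1])
```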

6.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi‐step‐ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m‐step‐ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well, both in terms of the accuracy of the results and in its ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
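One way to make a numerical multi-step procedure concrete is the grid-based Chapman-Kolmogorov recursion below for a nonlinear AR(1) with Gaussian innovations; the map g, the grid and the quadrature rule are my own illustrative choices and may differ from the paper's algorithm.

```python
# Numerical sketch of m-step-ahead predictive densities for a nonlinear AR(1)
# model y_t = g(y_{t-1}) + e_t, e_t ~ N(0, sigma^2), obtained by iterating the
# Chapman-Kolmogorov integral on a grid. The model g is hypothetical.
import numpy as np

def predictive_densities(y0, g, sigma, m, grid):
    dx = grid[1] - grid[0]
    norm = lambda y, mean: np.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    p = norm(grid, g(y0))                              # 1-step-ahead predictive density
    densities = [p]
    for _ in range(m - 1):
        # p_{k+1}(y) = integral of N(y; g(x), sigma^2) * p_k(x) dx
        trans = norm(grid[:, None], g(grid)[None, :])  # trans[i, j] = f(grid_i | grid_j)
        p = trans @ p * dx
        densities.append(p)
    return densities

grid = np.linspace(-6, 6, 601)
g = lambda x: 0.8 * x * np.exp(-0.1 * x ** 2)          # an exponential AR-type map
dens = predictive_densities(y0=1.0, g=g, sigma=1.0, m=3, grid=grid)
print([np.trapz(d, grid) for d in dens])               # each density should integrate to ~1
```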

7.
This article introduces a new model to capture simultaneously the mean and variance asymmetries in time series. Threshold non‐linearity is incorporated into the mean and variance specifications of a stochastic volatility model. Bayesian methods are adopted for parameter estimation. Forecasts of volatility and Value‐at‐Risk can also be obtained by sampling from suitable predictive distributions. Simulations demonstrate that the apparent variance asymmetry documented in the literature can be due to the neglect of mean asymmetry. Strong evidence of the mean and variance asymmetries was detected in US and Hong Kong data. Asymmetry in the variance persistence was also discovered in the Hong Kong stock market. Copyright © 2002 John Wiley & Sons, Ltd.
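A simulation sketch can convey how mean and variance asymmetry enter jointly: below, both the return mean and the log-volatility dynamics switch on the sign of the previous return. The parameter values are hypothetical and the Bayesian estimation used in the article is not reproduced.

```python
# Hedged simulation of a threshold stochastic volatility process: regime k is
# selected by the sign of the lagged return and shifts both the mean of returns
# and the level/persistence of log-volatility. Parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(5)
T = 1500
mu_r  = ( 0.05, -0.05)      # regime-dependent return means
level = (-0.40,  0.10)      # regime-dependent long-run log-variance
phi   = ( 0.95,  0.97)      # regime-dependent volatility persistence
sig_eta = 0.15

r, h = np.zeros(T), np.zeros(T)
for t in range(1, T):
    k = 0 if r[t - 1] >= 0 else 1
    h[t] = level[k] + phi[k] * (h[t - 1] - level[k]) + sig_eta * rng.normal()
    r[t] = mu_r[k] + np.exp(h[t] / 2) * rng.normal()

print("return std after gains vs after losses:",
      r[1:][r[:-1] >= 0].std(), r[1:][r[:-1] < 0].std())
```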

8.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy‐to‐compute daily return intervals during the modeling, estimation and forecasting stage. They have to stand up to comparable point‐data models of the well‐known capital asset pricing model type—which employ single daily returns based on successive closing prices and might allow for GARCH effects—in a comprehensive out‐of‐sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point‐data competitors in the great majority of cases. This result does not only hold when one‐day‐ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point‐data volatility forecasting tools. Copyright © 2015 John Wiley & Sons, Ltd.
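A very simple interval autoregression in the center/range style illustrates the flavor of interval-data modeling: tomorrow's return-interval midpoint and half-width are each regressed on today's. This is only an illustrative sketch with simulated intervals, not necessarily the exact specification used in the paper.

```python
# Hedged center/range sketch of an interval autoregression for daily return
# intervals. The low/high return intervals below are simulated placeholders.
import numpy as np

rng = np.random.default_rng(6)
low  = rng.normal(-1.0, 0.3, 300)                    # hypothetical daily low returns (%)
high = low + np.abs(rng.normal(1.5, 0.4, 300))       # hypothetical daily high returns (%)
center, radius = (high + low) / 2, (high - low) / 2

def ar1_ols(x):
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    b, *_ = np.linalg.lstsq(X, x[1:], rcond=None)
    return b

bc, br = ar1_ols(center), ar1_ols(radius)
center_fc = bc[0] + bc[1] * center[-1]               # one-day-ahead midpoint forecast
radius_fc = br[0] + br[1] * radius[-1]               # one-day-ahead half-width forecast
print("forecast interval:", (center_fc - radius_fc, center_fc + radius_fc))
```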

9.
In this paper we compare several multi‐period volatility forecasting models, specifically from MIDAS and HAR families. We perform our comparisons in terms of out‐of‐sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra‐daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.
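The jump separation used for the regressors can be sketched directly: realized variance, realized bipower variation, and the truncated jump component J_t = max(RV_t - BV_t, 0). The 5-minute returns below are simulated placeholders rather than BOVESPA data.

```python
# Sketch of the jump decomposition used as regressors: realized variance,
# realized bipower variation, and the non-negative jump component.
import numpy as np

def realized_measures(intraday_returns):
    r = np.asarray(intraday_returns)
    rv = np.sum(r ** 2)                                  # realized variance
    mu1 = np.sqrt(2.0 / np.pi)                           # E|Z| for a standard normal
    bv = (1.0 / mu1 ** 2) * np.sum(np.abs(r[1:]) * np.abs(r[:-1]))   # bipower variation
    jump = max(rv - bv, 0.0)                             # discontinuous (jump) part
    return rv, bv, jump

rng = np.random.default_rng(7)
r = rng.normal(0, 0.001, 288)                            # simulated 5-minute returns, one day
r[100] += 0.01                                           # add an artificial jump
print(realized_measures(r))
```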

10.
In this paper we forecast daily returns of crypto‐currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non‐normality of the measurement errors and sharply increasing trends, we develop a time‐varying parameter VAR with t‐distributed measurement errors and stochastic volatility. To control for overparametrization, we rely on the Bayesian literature on shrinkage priors, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real‐time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models, we moreover run a simple trading exercise.
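To give a flavor of the shrinkage idea in much simplified form, the sketch below estimates a constant-parameter VAR(1) for several simulated crypto-currency return series as the posterior mean under an independent normal shrinkage prior (equivalent to ridge regression) and compares its one-step forecast with a naive no-change benchmark. The time-varying parameters, t-distributed errors and stochastic volatility of the paper's model are deliberately not shown.

```python
# Heavily simplified stand-in for Bayesian shrinkage in a VAR: posterior mean
# under a normal prior (= ridge) for a VAR(1) on placeholder crypto returns.
import numpy as np

rng = np.random.default_rng(8)
T, N = 365, 3
Y = rng.standard_t(4, size=(T, N)) * 0.03            # placeholder daily crypto returns

X = np.hstack([np.ones((T - 1, 1)), Y[:-1]])          # intercept + lagged returns
Z = Y[1:]
lam = 10.0                                            # prior precision (shrinkage strength)
B = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Z)

var_forecast = np.hstack([1.0, Y[-1]]) @ B            # one-step-ahead shrinkage VAR forecast
rw_forecast = np.zeros(N)                             # random walk in prices => zero return forecast
print("shrinkage VAR forecast:", var_forecast, "| naive benchmark:", rw_forecast)
```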

11.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
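A rough stand-in for the FPLS step is to extract a small number of ordinary partial least squares components from a block of auxiliary predictors and treat the fitted low-dimensional signal as the "trace" fed to a downstream forecasting model. The sketch below uses scikit-learn's PLSRegression on simulated exchange-rate data; it is not the functional PLS estimator of the paper, and the toy data-generating process is my assumption.

```python
# Hedged stand-in for the FPLS idea using ordinary PLS (scikit-learn), applied
# to simulated auxiliary exchange-rate returns and a toy crude-oil return series.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(9)
T, K = 400, 8
fx = rng.normal(0, 1, size=(T, K))                    # auxiliary exchange-rate returns (toy)
oil = np.zeros(T)
oil[1:] = 0.3 * fx[:-1, 0] - 0.2 * fx[:-1, 3] + rng.normal(0, 1, T - 1)  # toy predictive DGP

pls = PLSRegression(n_components=2).fit(fx[:-1], oil[1:])   # predict next-day oil return
trace = np.ravel(pls.predict(fx))                            # low-dimensional fitted signal
print("corr(trace_t, oil_{t+1}):", np.corrcoef(trace[:-1], oil[1:])[0, 1])
```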

12.
The directional news impact curve (DNIC) is a relationship between returns and the probability of next period's return exceeding a certain threshold—zero in particular. Using long series of S&P500 index returns and a number of parametric models suggested in the literature, as well as flexible semiparametric models, we investigate the shape of the DNIC and the forecasting abilities of these models. The semiparametric approach reveals that the DNIC has complicated shapes characterized by non-symmetry with respect to past returns and their signs, heterogeneity across thresholds, and changes over time. Simple parametric models often miss some important features of the DNIC, but some nevertheless exhibit superior out‐of‐sample performance.
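A toy parametric DNIC makes the object concrete: model P(r_{t+1} > 0 | r_t) as a logistic function of the lagged return and its absolute value and fit it by maximum likelihood. The functional form, the simulated returns and the evaluation grid are all illustrative assumptions, not one of the paper's specifications.

```python
# Toy parametric directional news impact curve fitted by maximum likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
r = rng.standard_t(5, 3000) * 0.01
y = (r[1:] > 0).astype(float)                          # next-period direction indicator
X = np.column_stack([np.ones(len(r) - 1), r[:-1], np.abs(r[:-1])])

def negloglik(beta):
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(negloglik, np.zeros(3), method="BFGS")
past = np.linspace(-0.05, 0.05, 5)                     # evaluate the estimated DNIC at a few points
dnic = 1.0 / (1.0 + np.exp(-np.column_stack([np.ones(5), past, np.abs(past)]) @ res.x))
print(dict(zip(np.round(past, 3), np.round(dnic, 3))))
```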

13.
This paper proposes a parsimonious threshold stochastic volatility (SV) model for financial asset returns. Instead of imposing a threshold value on the dynamics of the latent volatility process of the SV model, we assume that the innovation of the mean equation follows a threshold distribution in which the mean innovation switches between two regimes. In our model, the threshold is treated as an unknown parameter. We show that the proposed threshold SV model can not only capture the time‐varying volatility of returns, but can also accommodate the asymmetric shape of the conditional distribution of the returns. Parameter estimation is carried out by using Markov chain Monte Carlo methods. For model selection and volatility forecast, an auxiliary particle filter technique is employed to approximate the filter and prediction distributions of the returns. Several experiments are conducted to assess the robustness of the proposed model and estimation methods. In the empirical study, we apply our threshold SV model to three return time series. The empirical analysis results show that the threshold parameter has a non‐zero value and the mean innovations belong to two distinct regimes. We also find that the model with an unknown threshold parameter value consistently outperforms the model with a known threshold parameter value. Copyright © 2016 John Wiley & Sons, Ltd.
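To illustrate the filtering machinery that particle methods provide for SV-type models, here is a bootstrap particle filter for a plain stochastic volatility model. The auxiliary particle filter refinement and the threshold innovation distribution of the paper are not included, and all parameter values are hypothetical.

```python
# Hedged sketch: bootstrap particle filter for a basic SV model
# h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,  y_t = exp(h_t/2)*eps_t.
import numpy as np

def sv_particle_filter(y, mu=-0.5, phi=0.97, sigma=0.2, n_particles=5000, seed=0):
    rng = np.random.default_rng(seed)
    h = rng.normal(mu, sigma / np.sqrt(1 - phi ** 2), n_particles)   # stationary start
    loglik, h_filtered = 0.0, []
    for yt in y:
        h = mu + phi * (h - mu) + sigma * rng.standard_normal(n_particles)  # propagate
        logw = -0.5 * (np.log(2 * np.pi) + h + yt ** 2 * np.exp(-h))        # N(y; 0, e^h)
        m = logw.max()
        loglik += m + np.log(np.mean(np.exp(logw - m)))                     # predictive likelihood
        w = np.exp(logw - m); w /= w.sum()
        h_filtered.append(np.sum(w * h))                                    # filtered log-volatility
        h = h[rng.choice(n_particles, n_particles, p=w)]                    # multinomial resampling
    return np.array(h_filtered), loglik

rng = np.random.default_rng(11)
T, h_true = 300, np.zeros(300)
for t in range(1, T):
    h_true[t] = -0.5 + 0.97 * (h_true[t - 1] + 0.5) + 0.2 * rng.standard_normal()
y = np.exp(h_true / 2) * rng.standard_normal(T)

h_filt, ll = sv_particle_filter(y)
print("log-likelihood:", round(ll, 2),
      "| corr(filtered, true log-vol):", round(float(np.corrcoef(h_filt, h_true)[0, 1]), 3))
```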

14.
An implied assumption in the asymmetric conditional autoregressive range (ACARR) model is that the upward range is independent of the downward range. This paper scrutinizes this assumption on a broad variety of stock indices. Instead of independence, we find significant cross‐interdependence between the upward range and the downward range. Regression tests show that the cross‐interdependence cannot be explained by the leverage effect. To accommodate the cross‐interdependence, a feedback asymmetric conditional autoregressive range (FACARR) model is proposed. Empirical studies are performed on a variety of stock indices, and the results show that the FACARR model outperforms the ACARR model with high significance for both in‐sample and out‐of‐sample forecasting.
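The feedback structure can be conveyed through the conditional-mean recursions of the upward and downward ranges, where a cross term (gamma below) lets each range depend on the other's lag; setting gamma to zero recovers ACARR-style dynamics. Parameter values are hypothetical and estimation, typically QMLE with exponential or Weibull errors, is omitted.

```python
# Hedged sketch of FACARR-style conditional-mean recursions for the upward and
# downward daily ranges, with a cross-interdependence (feedback) term gamma.
import numpy as np

def facarr_filter(r_up, r_dn, omega=0.05, alpha=0.20, beta=0.70, gamma=0.05):
    lam_u, lam_d = np.empty(len(r_up)), np.empty(len(r_dn))
    lam_u[0], lam_d[0] = r_up.mean(), r_dn.mean()
    for t in range(1, len(r_up)):
        lam_u[t] = omega + alpha * r_up[t - 1] + beta * lam_u[t - 1] + gamma * r_dn[t - 1]
        lam_d[t] = omega + alpha * r_dn[t - 1] + beta * lam_d[t - 1] + gamma * r_up[t - 1]
    return lam_u, lam_d

rng = np.random.default_rng(12)
r_up = np.abs(rng.normal(1.0, 0.3, 500))   # daily upward ranges (high minus open), placeholder
r_dn = np.abs(rng.normal(1.0, 0.3, 500))   # daily downward ranges (open minus low), placeholder
lam_u, lam_d = facarr_filter(r_up, r_dn)
print("next-day expected upward/downward ranges:", lam_u[-1], lam_d[-1])
```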

15.
The variance of a portfolio can be forecast using a single index model or the covariance matrix of the portfolio. Using univariate and multivariate conditional volatility models, this paper evaluates the performance of the single index and portfolio models in forecasting value‐at‐risk (VaR) thresholds of a portfolio. Likelihood ratio tests of unconditional coverage, independence and conditional coverage of the VaR forecasts suggest that the single‐index model leads to excessive and often serially dependent violations, while the portfolio model leads to too few violations. The single‐index model also leads to lower daily Basel Accord capital charges. The univariate models which display correct conditional coverage lead to higher capital charges than models which lead to too many violations. Overall, the Basel Accord penalties appear to be too lenient and favour models which have too many violations. Copyright © 2008 John Wiley & Sons, Ltd.
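The coverage and independence backtests referenced above can be sketched directly as likelihood ratio statistics on a 0/1 series of VaR violations. The violation series below is simulated rather than produced by the single-index or portfolio models.

```python
# Sketch of two standard VaR backtests: Kupiec's unconditional coverage LR test
# and Christoffersen's independence LR test, applied to a 0/1 violation series.
import numpy as np
from scipy.special import xlogy
from scipy.stats import chi2

def lr_unconditional(hits, p):
    T, x = len(hits), int(hits.sum())
    pi_hat = x / T
    lr = -2 * (xlogy(T - x, 1 - p) + xlogy(x, p)
               - xlogy(T - x, 1 - pi_hat) - xlogy(x, pi_hat))
    return lr, chi2.sf(lr, df=1)

def lr_independence(hits):
    h0, h1 = hits[:-1], hits[1:]
    n00 = np.sum((h0 == 0) & (h1 == 0)); n01 = np.sum((h0 == 0) & (h1 == 1))
    n10 = np.sum((h0 == 1) & (h1 == 0)); n11 = np.sum((h0 == 1) & (h1 == 1))
    p01, p11 = n01 / (n00 + n01), n11 / (n10 + n11)
    p1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll_alt = xlogy(n00, 1 - p01) + xlogy(n01, p01) + xlogy(n10, 1 - p11) + xlogy(n11, p11)
    ll_null = xlogy(n00 + n10, 1 - p1) + xlogy(n01 + n11, p1)
    lr = -2 * (ll_null - ll_alt)
    return lr, chi2.sf(lr, df=1)

rng = np.random.default_rng(13)
hits = (rng.random(1000) < 0.04).astype(int)          # violations of a nominal 5% VaR
print("unconditional coverage:", lr_unconditional(hits, 0.05))
print("independence:", lr_independence(hits))
```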

16.
We develop a novel quantile double autoregressive model for modelling financial time series. This is done by specifying a generalized lambda distribution for the quantile function of the location‐scale double autoregressive model developed by Ling (2004, 2007). Parameter estimation uses Markov chain Monte Carlo Bayesian methods. A simulation technique is introduced for forecasting the conditional distribution of financial returns m periods ahead, and hence for any predictive quantities of interest. The application to forecasting value‐at‐risk at different time horizons and coverage probabilities for the Dow Jones Industrial Average shows that our method works very well in practice. Copyright © 2013 John Wiley & Sons, Ltd.
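The simulation approach to m-step-ahead predictive quantiles can be sketched for the underlying double autoregressive model y_t = phi*y_{t-1} + e_t*sqrt(omega + a*y_{t-1}^2): simulate many paths forward and read off the quantiles. Parameters are hypothetical and the innovations are drawn as standard normal rather than from a fitted generalized lambda distribution.

```python
# Hedged sketch of simulation-based m-step-ahead predictive quantiles for a
# double autoregressive (DAR) model in the sense of Ling (2004).
import numpy as np

def dar_predictive_quantiles(y0, phi, omega, a, m, probs, n_paths=100_000, seed=14):
    rng = np.random.default_rng(seed)
    y = np.full(n_paths, y0, dtype=float)
    for _ in range(m):
        y = phi * y + rng.standard_normal(n_paths) * np.sqrt(omega + a * y ** 2)
    return np.quantile(y, probs)

q = dar_predictive_quantiles(y0=-1.2, phi=0.1, omega=0.5, a=0.3, m=5,
                             probs=[0.01, 0.05, 0.95, 0.99])
print("5-step-ahead predictive quantiles (VaR-style):", q)
```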

17.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are implemented on returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
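Two of the simplest schemes appearing in such comparisons can be sketched quickly: the equal-weight average and weights inversely proportional to each model's past mean squared error over a training window. The single-model forecasts below are simulated placeholders, not actual S&P 500 volatility forecasts.

```python
# Sketch of two simple forecast combination schemes on placeholder data:
# equal weights and inverse-MSE weights estimated on a training window.
import numpy as np

rng = np.random.default_rng(15)
true_vol = np.abs(rng.normal(1.0, 0.2, 250))
forecasts = true_vol[:, None] + rng.normal(0, [0.10, 0.20, 0.30], size=(250, 3))  # 3 models

train, test = slice(0, 200), slice(200, 250)
mse = np.mean((forecasts[train] - true_vol[train, None]) ** 2, axis=0)

w_equal = np.full(3, 1.0 / 3.0)
w_invmse = (1.0 / mse) / np.sum(1.0 / mse)

for name, w in [("equal", w_equal), ("inverse-MSE", w_invmse)]:
    combo = forecasts[test] @ w
    print(name, "out-of-sample MSE:", np.mean((combo - true_vol[test]) ** 2))
```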

18.
A large literature has investigated predictability of the conditional mean of low‐frequency stock returns by macroeconomic and financial variables; however, little is known about predictability of the conditional distribution. We look at one‐step‐ahead out‐of‐sample predictability of the conditional distribution of monthly US stock returns in relation to the macroeconomic and financial environment. Our methodological approach is innovative: we consider several specifications for the conditional density and combination schemes. Our results are as follows: the entire density is predicted under combination schemes as applied to univariate GARCH models with Gaussian innovations; the Bayesian winner in relation to GARCH‐skewed‐t models is informative about the 5% value at risk; the average realised utility of a mean–variance investor is maximised under the Bayesian winner as applied to GARCH models with symmetric Student t innovations. Our results have two implications: the best prediction model depends on the evaluation criterion; and combination schemes outperform individual models. Copyright © 2015 John Wiley & Sons, Ltd.
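A stylized pooling exercise gives a feel for density-forecast combination: two fixed candidate predictive densities are weighted in proportion to their exponentiated average past log scores, loosely in the spirit of a "Bayesian winner" weighting. The densities and data below are placeholders; the paper's combination schemes and re-estimated GARCH predictive densities are richer.

```python
# Stylized density-forecast pooling with log-score-based weights on toy data.
import numpy as np
from scipy.stats import norm, t as student_t

rng = np.random.default_rng(16)
returns = rng.standard_t(6, 300) * 0.01

# Two fixed candidate densities for illustration (in practice these would be
# one-step-ahead predictive densities re-estimated each period).
cand = [norm(loc=0.0, scale=returns.std()),
        student_t(df=6, loc=0.0, scale=returns.std() * np.sqrt(4 / 6))]

train, test = returns[:250], returns[250:]
avg_logscore = np.array([np.mean(c.logpdf(train)) for c in cand])
w = np.exp(avg_logscore - avg_logscore.max())
w /= w.sum()

pooled_logscore = np.mean(np.log(w[0] * cand[0].pdf(test) + w[1] * cand[1].pdf(test)))
print("weights:", w, "| pooled out-of-sample log score:", pooled_logscore)
```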

19.
Financial data often take the form of a collection of curves that can be observed sequentially over time; for example, intraday stock price curves and intraday volatility curves. These curves can be viewed as a time series of functions that can be observed on equally spaced and dense grids. Owing to the so‐called curse of dimensionality, the nature of high‐dimensional data poses challenges from a statistical perspective; however, it also provides opportunities to analyze a rich source of information, so that the dynamic changes of short time intervals can be better understood. In this paper, we consider forecasting a time series of functions and propose a number of statistical methods that can be used to forecast 1‐day‐ahead intraday stock returns. As we sequentially observe new data, we also consider the use of dynamic updating in updating point and interval forecasts for achieving improved accuracy. The forecasting methods were validated through an empirical study of 5‐minute intraday S&P 500 index returns.
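One standard recipe consistent with this description is to project the daily intraday curves onto a few principal components, forecast the component scores with simple time series models, and rebuild the next day's curve. The sketch below uses simulated 5-minute curves and AR(1) score forecasts; the paper's specific methods and dynamic updating step are not reproduced.

```python
# Hedged functional-PCA-style sketch: forecast tomorrow's intraday return curve
# by forecasting principal component scores of the daily curves (simulated data).
import numpy as np

rng = np.random.default_rng(17)
days, grid = 200, 78                                   # 78 five-minute intervals per day
curves = rng.normal(0, 0.001, size=(days, grid)) + 0.0005 * np.sin(np.linspace(0, np.pi, grid))

mean_curve = curves.mean(axis=0)
U, s, Vt = np.linalg.svd(curves - mean_curve, full_matrices=False)
k = 3
scores = U[:, :k] * s[:k]                              # day-by-day principal component scores

def ar1_forecast(x):
    b, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(x) - 1), x[:-1]]), x[1:], rcond=None)
    return b[0] + b[1] * x[-1]

next_scores = np.array([ar1_forecast(scores[:, j]) for j in range(k)])
next_curve = mean_curve + next_scores @ Vt[:k]         # one-day-ahead intraday curve forecast
print(next_curve[:5])
```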
