Similar Literature
A total of 20 similar documents were retrieved.
1.
    
We use real-time macroeconomic variables and combination forecasts with both time-varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time-varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time-varying weights. We also find that variable selection, the allowance of time-varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
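As a rough illustration of the combination schemes discussed above, the sketch below contrasts an equal-weight combination with a time-varying combination whose weights are inverse recent mean squared errors; the data, window length and two component forecasts are purely hypothetical and not taken from the paper.

```python
# Minimal sketch (not the authors' code): combining two inflation forecasts with
# equal weights versus time-varying weights based on recent squared errors.
import numpy as np

rng = np.random.default_rng(0)
T = 200
actual = rng.normal(2.0, 0.5, T)                      # hypothetical inflation series
f1 = actual + rng.normal(0, 0.3, T)                   # forecast from model 1
f2 = actual + rng.normal(0, 0.5, T)                   # forecast from model 2

equal_comb = 0.5 * f1 + 0.5 * f2                      # equal-weight combination

window = 20                                           # rolling window for the weights (assumed)
tv_comb = np.full(T, np.nan)
for t in range(window, T):
    mse1 = np.mean((actual[t - window:t] - f1[t - window:t]) ** 2)
    mse2 = np.mean((actual[t - window:t] - f2[t - window:t]) ** 2)
    w1 = (1 / mse1) / (1 / mse1 + 1 / mse2)           # inverse-MSE weight on model 1
    tv_comb[t] = w1 * f1[t] + (1 - w1) * f2[t]

print("equal-weight MSE: ", np.mean((actual[window:] - equal_comb[window:]) ** 2))
print("time-varying MSE: ", np.mean((actual[window:] - tv_comb[window:]) ** 2))
```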

2.
    
This paper investigates the forecasting performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard & Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of volatility from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
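A minimal sketch of the comparison described above, using the `arch` Python package (not the paper's own code or data): a GARCH(1,1) model is fitted with normal and with leptokurtic Student-t errors, and one-step-ahead variance forecasts are produced. The simulated return series is an assumption made purely for illustration.

```python
# Hedged sketch: GARCH(1,1) under a normal versus a Student-t error distribution.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=2000)             # hypothetical daily returns (in %)

for dist in ("normal", "t"):
    model = arch_model(returns, vol="GARCH", p=1, q=1, dist=dist)
    res = model.fit(disp="off")
    fcast = res.forecast(horizon=1)
    print(dist, "1-day variance forecast:", float(fcast.variance.values[-1, 0]))
```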

3.
    
In this paper, we introduce functional coefficients into heterogeneous autoregressive realized volatility (HAR-RV) models so that the parameters can change over time. A nonparametric statistic is developed to perform a specification test. The simulation results show that our test displays reliable size and good power. Using the proposed test, we find significant time variation in the coefficients of the HAR-RV models. Time-varying parameter (TVP) models can significantly outperform their constant-coefficient counterparts at longer forecasting horizons. The predictive ability of TVP models can be improved by accounting for VIX information. Copyright © 2016 John Wiley & Sons, Ltd.
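For reference, a constant-coefficient HAR-RV regression of the kind the paper generalizes can be sketched as below; the simulated realized variance series, the window lengths (5 and 22 days) and plain OLS estimation are standard choices, not the paper's functional-coefficient estimator.

```python
# Baseline HAR-RV sketch: next-day RV regressed on daily, weekly and monthly averages.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
rv = np.abs(rng.normal(1.0, 0.3, 1000))          # hypothetical daily realized variance

def lagged_mean(x, window, t):
    return x[t - window + 1:t + 1].mean()

rows, y = [], []
for t in range(21, len(rv) - 1):
    rows.append([rv[t],                           # daily component
                 lagged_mean(rv, 5, t),           # weekly component
                 lagged_mean(rv, 22, t)])         # monthly component
    y.append(rv[t + 1])

X = sm.add_constant(np.array(rows))
res = sm.OLS(np.array(y), X).fit()
print(res.params)                                 # [const, beta_d, beta_w, beta_m]
```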

4.
    
Widely used volatility forecasting methods are usually based on low-frequency time series models. Although some of them employ high-frequency observations, these intraday data are often summarized into low-frequency point statistics, for example daily realized measures, before being incorporated into a forecasting model. This paper contributes to the volatility forecasting literature by instead predicting the next-period intraday volatility curve via a functional time series forecasting approach. Asymptotic theory related to the estimation of latent volatility curves via functional principal component analysis is formally established, laying a solid theoretical foundation for the proposed forecasting method. In contrast with nonfunctional methods, the proposed functional approach fully exploits the rich intraday information and hence leads to more accurate volatility forecasts. This is confirmed by extensive comparisons between the proposed method and widely used nonfunctional methods, in both Monte Carlo simulations and an empirical study on a number of stocks and equity indices from the Chinese market.
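The sketch below illustrates the functional idea in a very simplified form: each day's intraday volatility curve is treated as a vector, principal components are extracted, the component scores are forecast with an AR(1), and the next day's curve is reconstructed. The simulated curves, the number of components and the AR(1) score dynamics are assumptions for illustration only, not the paper's estimator or asymptotic framework.

```python
# Simplified functional-PCA sketch for forecasting an intraday volatility curve.
import numpy as np

rng = np.random.default_rng(3)
n_days, n_points = 250, 78                       # e.g. a 5-minute grid over the trading day
grid = np.linspace(0, 1, n_points)
base = 0.8 + 0.4 * (grid - 0.5) ** 2             # U-shaped intraday pattern (assumed)

level = np.zeros(n_days)
for d in range(1, n_days):
    level[d] = 0.9 * level[d - 1] + rng.normal(0, 0.05)   # persistent day-to-day level
curves = base + level[:, None] + 0.02 * rng.normal(size=(n_days, n_points))

mean_curve = curves.mean(axis=0)
centered = curves - mean_curve
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = 3                                            # number of principal components (assumed)
scores = centered @ Vt[:k].T                     # day-by-day component scores

next_scores = np.empty(k)
for j in range(k):                               # AR(1) forecast of each score series
    x, y = scores[:-1, j], scores[1:, j]
    phi = (x @ y) / (x @ x)
    next_scores[j] = phi * scores[-1, j]

next_curve = mean_curve + next_scores @ Vt[:k]   # forecast of tomorrow's volatility curve
print(next_curve[:5])
```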

5.
    
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models, in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in-sample, they appear to provide relatively poor out-of-sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but from a failure to specify correctly the 'true volatility' measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of 'true volatility' includes a large noisy component. An alternative measure for 'true volatility' has therefore been suggested, based upon the cumulative squared returns from intra-day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have previously been identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
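The two 'true volatility' measures contrasted above can be illustrated with a short sketch: the daily squared return as a noisy ex post proxy versus realized variance built from cumulative squared intra-day returns. The simulated 5-minute returns are hypothetical.

```python
# Sketch: squared daily return versus realized variance from intra-day returns.
import numpy as np

rng = np.random.default_rng(4)
n_intraday = 288                                  # hypothetical number of 5-minute returns per day
true_daily_vol = 0.01
intraday = rng.normal(0, true_daily_vol / np.sqrt(n_intraday), n_intraday)

daily_return = intraday.sum()
squared_return_proxy = daily_return ** 2          # noisy ex post proxy
realized_variance = np.sum(intraday ** 2)         # cumulative squared intra-day returns

print("true variance:        ", true_daily_vol ** 2)
print("squared daily return: ", squared_return_proxy)
print("realized variance:    ", realized_variance)
```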

6.
    
This study examines the intraday S&P 500 implied volatility index (VIX) to determine when the index contains the most information for volatility forecasting. The findings indicate that, in general, VIX levels around noon are most informative for predicting realized volatility. We posit that the VIX performs better during this time period because trading motivation around noon is less complex, and therefore trades contain more information on the market expectation of future volatility. Further investigation on the 2008 financial crisis period suggests that market participants become more cautious, and thus the forecasting performance is sustained until the market's close. Copyright © 2015 John Wiley & Sons, Ltd.

7.
Volatility forecasting remains an active area of research with no current consensus as to the model that provides the most accurate forecasts, though Hansen and Lunde (2005) have argued that in the context of daily exchange rate returns nothing can beat a GARCH(1,1) model. This paper extends that line of research by utilizing intra-day data and obtaining daily volatility forecasts from a range of models based upon the higher-frequency data. The volatility forecasts are appraised using four different measures of 'true' volatility and further evaluated using regression tests of predictive power, forecast encompassing and forecast combination. Our results show that the daily GARCH(1,1) model is largely inferior to all other models, whereas the intra-day unadjusted-data GARCH(1,1) model generally provides superior forecasts compared to all other models. Hence, while it appears that a daily GARCH(1,1) model can be beaten in obtaining accurate daily volatility forecasts, an intra-day GARCH(1,1) model cannot be. Copyright © 2011 John Wiley & Sons, Ltd.
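One of the regression-based evaluation tools mentioned above can be sketched as a Mincer-Zarnowitz style regression of 'true' volatility on the forecast, with a joint test that the intercept is zero and the slope is one; the volatility series and forecasts below are synthetic stand-ins.

```python
# Sketch of a regression test of predictive power (Mincer-Zarnowitz style).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
true_vol = np.abs(rng.normal(1.0, 0.2, 500))        # hypothetical 'true' volatility
forecast = true_vol + rng.normal(0, 0.1, 500)       # hypothetical volatility forecasts

X = sm.add_constant(forecast)
res = sm.OLS(true_vol, X).fit()
print(res.params)                                   # unbiased forecasts imply (0, 1)
print(res.f_test("const = 0, x1 = 1"))              # joint test of unbiasedness
```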

8.
    
The intention of this paper is to forecast empirically the daily betas of a few European banks by means of four generalized autoregressive conditional heteroscedasticity (GARCH) models and the Kalman filter method during the pre-global financial crisis period and the crisis period. The four GARCH models employed are BEKK GARCH, DCC GARCH, DCC-MIDAS GARCH and Gaussian-copula GARCH. The data consist of daily stock prices from 2001 to 2013 from two large banks each from Austria, Belgium, Greece, Holland, Ireland, Italy, Portugal and Spain. We apply the rolling forecasting method and model confidence sets (MCS) to compare the daily forecasting ability of the five models during one month of the pre-crisis (January 2007) and crisis (January 2013) periods. Based on the MCS results, the BEKK proves the best model in the January 2007 period, and the Kalman filter clearly outperforms the other models during the January 2013 period. The results have implications regarding the choice of model during different periods by practitioners and academics. Copyright © 2016 John Wiley & Sons, Ltd.
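A minimal random-walk-coefficient Kalman filter for a time-varying beta is sketched below; the noise variances, initial values and simulated market and stock returns are assumptions for illustration and do not reproduce the paper's specification or data.

```python
# Sketch: Kalman filter with a random-walk state beta_t in stock_t = beta_t * market_t + noise.
import numpy as np

rng = np.random.default_rng(6)
T = 1000
market = rng.normal(0, 0.01, T)
true_beta = 1.0 + 0.3 * np.sin(np.linspace(0, 6, T))
stock = true_beta * market + rng.normal(0, 0.005, T)

q, r = 1e-5, 0.005 ** 2                 # state and observation noise variances (assumed known)
beta, P = 1.0, 1.0                      # initial state mean and variance
filtered = np.empty(T)
for t in range(T):
    P = P + q                           # predict step: random-walk state
    H = market[t]                       # time-varying observation loading
    S = H * P * H + r                   # innovation variance
    K = P * H / S                       # Kalman gain
    beta = beta + K * (stock[t] - H * beta)
    P = (1 - K * H) * P
    filtered[t] = beta

print("last filtered beta:", round(filtered[-1], 3), "true:", round(true_beta[-1], 3))
```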

9.
    
We propose a quantile regression approach to equity premium forecasting. Robust point forecasts are generated from a set of quantile forecasts using both fixed and time-varying weighting schemes, thereby exploiting the entire distributional information associated with each predictor. Further gains are achieved by incorporating the forecast combination methodology into our quantile regression setting. Our approach using a time-varying weighting scheme delivers statistically and economically significant out-of-sample forecasts relative to both the historical average benchmark and the combined predictive mean regression modeling approach. Copyright © 2014 John Wiley & Sons, Ltd.
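The quantile-based point forecasting idea can be sketched as follows: several conditional quantiles of the equity premium are estimated by quantile regression and then averaged into a robust point forecast with fixed equal weights; the predictor, data and forecast point are hypothetical, and the paper's time-varying weighting scheme is not shown.

```python
# Sketch: point forecast built from a set of quantile regression forecasts.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(7)
T = 400
predictor = rng.normal(size=T)
premium = 0.3 * predictor + rng.standard_t(df=4, size=T)   # heavy-tailed noise (assumed)

X = sm.add_constant(predictor)
quantiles = [0.1, 0.3, 0.5, 0.7, 0.9]
x_new = np.array([1.0, 0.5])                               # forecast point: [const, predictor]
q_forecasts = [QuantReg(premium, X).fit(q=q).params @ x_new for q in quantiles]

point_forecast = np.mean(q_forecasts)                      # fixed equal-weight combination
print("quantile forecasts:", np.round(q_forecasts, 3))
print("combined point forecast:", round(point_forecast, 3))
```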

10.
    
This paper addresses the issue of freight rate risk measurement via value at risk (VaR) and forecast combination methodologies while focusing on detailed performance evaluation. We contribute to the literature in three ways: First, we reevaluate the performance of popular VaR estimation methods on freight rates amid the adverse economic consequences of the recent financial and sovereign debt crisis. Second, we provide a detailed and extensive backtesting and evaluation methodology. Last, we propose a forecast combination approach for estimating VaR. Our findings suggest that our combination methods produce more accurate estimates for all the sectors under scrutiny, while in some cases they may be viewed as conservative since they tend to overestimate nominal VaR.
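A stylized sketch of VaR estimation and combination is given below: historical simulation and normal VaR are computed on a rolling window, combined with equal weights, and backtested via their violation rates. The simulated freight-rate returns, window length and equal combination weights are assumptions, not the paper's methodology.

```python
# Sketch: two common VaR estimators, a simple combination, and a violation-rate backtest.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
returns = rng.standard_t(df=4, size=1500) * 0.02            # hypothetical freight-rate returns
alpha, window = 0.05, 500

hits_hist, hits_norm, hits_comb = [], [], []
for t in range(window, len(returns)):
    sample = returns[t - window:t]
    var_hist = np.quantile(sample, alpha)                    # historical simulation VaR
    var_norm = sample.mean() + sample.std() * stats.norm.ppf(alpha)  # normal VaR
    var_comb = 0.5 * var_hist + 0.5 * var_norm               # equal-weight combination (assumed)
    hits_hist.append(returns[t] < var_hist)
    hits_norm.append(returns[t] < var_norm)
    hits_comb.append(returns[t] < var_comb)

for name, hits in [("historical", hits_hist), ("normal", hits_norm), ("combined", hits_comb)]:
    print(name, "violation rate:", round(np.mean(hits), 3), "nominal:", alpha)
```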

11.
    
In this paper, we investigate the time series properties of S&P 100 volatility and the forecasting performance of different volatility models. We consider several nonparametric and parametric volatility measures, such as implied, realized and model-based volatility, and show that these volatility processes exhibit an extremely slow mean-reverting behavior and possible long memory. For this reason, we explicitly model the near-unit root behavior of volatility and construct median unbiased forecasts by approximating the finite-sample forecast distribution using bootstrap methods. Furthermore, we produce prediction intervals for the next-period implied volatility that provide important information about the uncertainty surrounding the point forecasts. Finally, we apply intercept corrections to forecasts from misspecified models which dramatically improve the accuracy of the volatility forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
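The bootstrap idea described above can be sketched roughly as follows for a highly persistent AR(1) volatility proxy: the finite-sample one-step forecast distribution is approximated by re-estimating the autoregressive coefficient on residual-bootstrap samples, from which a median forecast and a prediction interval are read off. The data-generating process and bootstrap design are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: bootstrap approximation of the one-step forecast distribution for a near-unit-root AR(1).
import numpy as np

rng = np.random.default_rng(9)
T = 500
x = np.zeros(T)
for t in range(1, T):                              # highly persistent log-volatility proxy
    x[t] = 0.98 * x[t - 1] + rng.normal(0, 0.1)

phi_hat = (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])     # OLS AR(1) coefficient
resid = x[1:] - phi_hat * x[:-1]

B, forecasts = 1000, []
for _ in range(B):
    # rebuild a bootstrap sample to capture estimation uncertainty in phi
    e_star = rng.choice(resid, size=T - 1, replace=True)
    x_star = np.zeros(T)
    x_star[0] = x[0]
    for t in range(1, T):
        x_star[t] = phi_hat * x_star[t - 1] + e_star[t - 1]
    phi_star = (x_star[:-1] @ x_star[1:]) / (x_star[:-1] @ x_star[:-1])
    forecasts.append(phi_star * x[-1] + rng.choice(resid))

lo, med, hi = np.percentile(forecasts, [5, 50, 95])
print("median forecast:", round(med, 3), "90% interval:", (round(lo, 3), round(hi, 3)))
```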

12.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy-to-compute daily return intervals during the modeling, estimation and forecasting stage. They have to stand up to comparable point-data models of the well-known capital asset pricing model type (which employ single daily returns based on successive closing prices and might allow for GARCH effects) in a comprehensive out-of-sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point-data competitors in the great majority of cases. This result does not only hold when one-day-ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point-data volatility forecasting tools. Copyright © 2015 John Wiley & Sons, Ltd.
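A very reduced sketch of working with daily return intervals is shown below: the interval is parameterized by its center and half-width, each of which is forecast with a simple lagged regression and then recombined into a next-day interval. This is an illustration only and does not implement the interval-arithmetic regression models of the paper.

```python
# Sketch: forecast the next day's return interval via its center and half-width (radius).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(16)
T = 500
low = rng.normal(-0.01, 0.004, T)
high = low + np.abs(rng.normal(0.02, 0.005, T))    # hypothetical daily return intervals [low, high]

center, radius = (low + high) / 2, (high - low) / 2

def ar1_forecast(x):
    # simple lag-1 regression forecast of the next value of a series
    X = sm.add_constant(x[:-1])
    res = sm.OLS(x[1:], X).fit()
    return float(res.params @ np.r_[1.0, x[-1]])

next_center, next_radius = ar1_forecast(center), ar1_forecast(radius)
print("forecast interval:", (round(next_center - next_radius, 4),
                             round(next_center + next_radius, 4)))
```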

13.
    
In a conditional predictive ability test framework, we investigate whether market factors influence the relative conditional predictive ability of realized measures (RMs) and implied volatility (IV). This framework makes it possible to examine the asynchronism in their forecasting accuracy, and we further analyze their unconditional performance in volatility forecasting. Our results show that the asynchronism can be detected significantly and is strongly related to certain market factors, and that the comparison between RMs and IV in terms of average forecast performance is more efficient than in previous studies. Finally, we use these factors to extend the empirical similarity (ES) approach for combining forecasts derived from RMs and IV.

14.
    
This study is the first to examine the impacts of overnight and intraday oil futures cross-market information on predicting US stock market volatility using high-frequency data. In-sample estimations show that high overnight oil futures RV can lead to high RV of the S&P 500. Moreover, negative overnight returns are more powerful predictors than positive components, implying the existence of a leverage effect. From statistical and economic perspectives, out-of-sample results indicate that decomposing overnight oil futures and intraday RVs on the basis of signed intraday returns can significantly increase the models' predictive ability. Finally, when the US stock market overnight effect is considered, the decompositions remain useful for predicting volatility, especially during high US stock market fluctuations and in high and low EPU states.
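The decomposition by signed intraday returns referred to above amounts to splitting realized variance into positive and negative semivariances, as in the short sketch below; the intraday returns are simulated for illustration.

```python
# Sketch: decompose a day's realized variance into signed (positive/negative) semivariances.
import numpy as np

rng = np.random.default_rng(10)
intraday = rng.normal(0, 0.001, 288)                # hypothetical 5-minute returns

rv = np.sum(intraday ** 2)                          # total realized variance
rv_pos = np.sum(intraday[intraday > 0] ** 2)        # positive semivariance
rv_neg = np.sum(intraday[intraday < 0] ** 2)        # negative semivariance

print("RV:", rv, "RV+:", rv_pos, "RV-:", rv_neg,
      "sum check:", np.isclose(rv, rv_pos + rv_neg))
```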

15.
    
This paper examines the benefits to forecasters of decomposing close-to-close return volatility into close-to-open (nighttime) and open-to-close (daytime) return volatility. Specifically, we consider whether close-to-close volatility forecasts based on the former type of (temporally aggregated) data are less accurate than corresponding forecasts based on the latter (temporally disaggregated) data. Results obtained from seven different US index futures markets reveal that significant increases in forecast accuracy are possible when using temporally disaggregated volatility data. This result is primarily driven by the fact that forecasts based on such data can be updated as more information becomes available (e.g., information flow from the preceding close-to-open/nighttime trading session). Finally, we demonstrate that the main findings of this paper are robust to the index futures market considered, the way in which return volatility is constructed, and the method used to assess forecast accuracy. Copyright © 2008 John Wiley & Sons, Ltd.

16.
    
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time-series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements upon standard time-series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
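A compressed sketch of the factor idea follows: principal-component factors are extracted from a large standardized (here synthetic) panel, the share of variance they explain is reported, and the factors are used in a simple factor-augmented forecasting regression. The panel, the number of factors and the target variable are assumptions for illustration.

```python
# Sketch: principal-component factors from a large panel and a factor-augmented forecast.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
T, N, k = 200, 80, 6                                # periods, variables, factors (assumed)
true_factors = rng.normal(size=(T, k))
loadings = rng.normal(size=(k, N))
panel = true_factors @ loadings + rng.normal(size=(T, N))

z = (panel - panel.mean(0)) / panel.std(0)          # standardize the panel
U, s, Vt = np.linalg.svd(z, full_matrices=False)
factors = U[:, :k] * s[:k]                          # estimated factors
explained = (s[:k] ** 2).sum() / (s ** 2).sum()
print("share of variance explained by", k, "factors:", round(explained, 2))

target = panel[:, 0]                                # variable to forecast (assumed)
X = sm.add_constant(factors[:-1])                   # factors dated t predict the target at t+1
res = sm.OLS(target[1:], X).fit()
print("one-step factor-augmented forecast:", float(res.params @ np.r_[1.0, factors[-1]]))
```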

17.
    
We investigate the realized volatility forecast of stock indices under the structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high-frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in-sample tests and out-of-sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX-FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in-sample tests show that there are multiple breaks in all realized volatility series. The results of the out-of-sample point forecasts indicate that the combination forecasts with time-varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non-parametric kernel regression and linear combination forecasts with the weights chosen based on the non-negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non-parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
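One of the combination schemes mentioned above, weights chosen by non-negative restricted least squares, can be sketched with `scipy.optimize.nnls`; the realized volatility series and the individual forecasts below are synthetic, and normalizing the weights to sum to one is an added simplification.

```python
# Sketch: combination weights for volatility forecasts via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(12)
T = 300
actual_rv = np.abs(rng.normal(1.0, 0.3, T))
# hypothetical forecasts from models estimated with different window sizes
forecasts = np.column_stack([actual_rv + rng.normal(0, s, T) for s in (0.1, 0.2, 0.3)])

weights, _ = nnls(forecasts, actual_rv)             # non-negative least squares weights
weights = weights / weights.sum()                   # normalize to sum to one (a simplification)
combined = forecasts @ weights
print("weights:", np.round(weights, 3))
print("combined MSE:", np.mean((actual_rv - combined) ** 2))
```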

18.
    
In this paper we propose Granger (non-)causality tests based on a VAR model allowing for time-varying coefficients. The functional form of the time-varying coefficients is a logistic smooth transition autoregressive (LSTAR) model using time as the transition variable. The model allows for testing Granger non-causality when the VAR is subject to a smooth break in the coefficients of the Granger causal variables. The proposed test is then applied to the money–output relationship using quarterly US data for the period 1952:2–2002:4. We find that causality from money to output becomes stronger after 1978:4, and the model is shown to have a good out-of-sample forecasting performance for output relative to a linear VAR model. Copyright © 2008 John Wiley & Sons, Ltd.
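As a constant-coefficient baseline for the test described above, a standard Granger causality test from money to output can be run with `statsmodels`; the LSTAR smooth-transition extension in time is not shown here, and the money and output series below are simulated.

```python
# Baseline sketch only: constant-coefficient Granger causality test (money -> output).
# The paper's version additionally lets coefficients shift via a logistic transition in time,
# G(t) = 1 / (1 + exp(-gamma * (t - c))), which is not implemented here.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(13)
T = 200
money = rng.normal(size=T)
output = np.zeros(T)
for t in range(2, T):                               # output responds to lagged money growth
    output[t] = 0.3 * output[t - 1] + 0.4 * money[t - 1] + rng.normal(scale=0.5)

data = np.column_stack([output, money])             # column order: caused variable, causing variable
results = grangercausalitytests(data, maxlag=2)     # prints F-tests for lag orders 1 and 2
```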

19.
This paper makes use of simple graphical techniques, a seasonal unit root test and a structural time-series model to obtain information on the time series properties of UK crude steel consumption. It shows that steel consumption has, after the removal of some quite substantial outliers, a fairly constant seasonal pattern, and a well-defined but stochastic business cycle. The long-run movement in steel consumption also appears to be stochastic in nature. These characteristics were used to identify a structural time-series model and the ex-post forecasts obtained from it performed reasonably well. Finally, this paper presents some ex-ante quarterly forecasts for crude steel consumption to the year 1999. © 1997 by John Wiley & Sons, Ltd.
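A structural (unobserved components) time-series model of the kind referred to above can be sketched with `statsmodels`: a stochastic level, a fixed quarterly seasonal and a stochastic cycle are fitted to synthetic quarterly data standing in for crude steel consumption, and ex-ante forecasts are produced. The component choices and data are illustrative assumptions.

```python
# Sketch: structural time-series (unobserved components) model with level, seasonal and cycle.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(14)
T = 120                                             # 30 years of quarterly data (assumed)
trend = np.cumsum(rng.normal(0.1, 0.3, T))
seasonal = np.tile([2.0, -1.0, -1.5, 0.5], T // 4)  # fairly constant seasonal pattern
series = 100 + trend + seasonal + rng.normal(0, 1.0, T)

model = sm.tsa.UnobservedComponents(series,
                                    level="local level",
                                    seasonal=4,
                                    stochastic_seasonal=False,
                                    cycle=True,
                                    stochastic_cycle=True)
res = model.fit(disp=False)
print(res.summary().tables[0])
print("8-quarter-ahead forecasts:", np.round(res.forecast(8), 1))
```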

20.
    
Standard statistical loss functions, such as mean-squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user's decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results. Copyright © 2001 John Wiley & Sons, Ltd.
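The evaluation framework described above can be illustrated with a short sketch: a volatility forecast is converted into a probability forecast of an economic event (here, the absolute return exceeding a threshold, under an assumed normal distribution) and scored with the Brier score; the event definition, threshold and data are hypothetical.

```python
# Sketch: turn volatility forecasts into event probability forecasts and apply a scoring rule.
import numpy as np
from scipy import stats

rng = np.random.default_rng(15)
T, threshold = 1000, 0.01
sigma_forecast = np.abs(rng.normal(0.01, 0.002, T))       # hypothetical volatility forecasts
returns = rng.normal(0, 0.01, T)                          # hypothetical exchange rate returns

# probability that |return| > threshold implied by a normal with the forecast volatility
prob_event = 2 * (1 - stats.norm.cdf(threshold / sigma_forecast))
event = (np.abs(returns) > threshold).astype(float)

brier = np.mean((prob_event - event) ** 2)                # Brier score (lower is better)
print("Brier score:", round(brier, 4))
```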
