Similar Documents
20 similar documents found.
1.
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate, and we introduce a more suitable procedure based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression-based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non-parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
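The quantile evaluation ideas above can be made concrete with the standard pinball (tick) loss and an unconditional coverage check. This is a minimal generic sketch, not the authors' regression-based procedure; the function names are our own.

```python
def pinball_loss(y, q_hat, tau):
    """Tick/pinball loss for a tau-quantile forecast:
    tau * u if u >= 0, (tau - 1) * u otherwise, where u = y - q_hat."""
    u = y - q_hat
    return (tau - (1 if u < 0 else 0)) * u

def empirical_coverage(ys, q_hats):
    """Fraction of outcomes falling below their quantile forecasts.
    For an unbiased tau-quantile estimator this should be close to tau."""
    hits = sum(1 for y, q in zip(ys, q_hats) if y < q)
    return hits / len(ys)
```

Averaging `pinball_loss` over a sample ranks competing quantile estimators, while `empirical_coverage` gives a crude bias check in the spirit of the evaluation above.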

2.
Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider one- to 10-day-ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate the strong potential for the use of ensemble prediction in temperature density forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
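The two steps described above, reading a quantile off the ensemble scenarios and then debiasing it with a fitted linear map, can be sketched as follows. The coarse grid search stands in for a proper quantile regression fit; all names and the nearest-rank quantile convention are our simplifying assumptions.

```python
def ensemble_quantile(members, tau):
    """Empirical tau-quantile of the ensemble scenarios (nearest-rank rule)."""
    s = sorted(members)
    k = min(len(s) - 1, int(tau * len(s)))
    return s[k]

def pinball(y, q, tau):
    u = y - q
    return (tau - (1 if u < 0 else 0)) * u

def debias(raw_qs, outcomes, tau, grid):
    """Fit q_adj = a + b * q_raw by minimizing total pinball loss over a
    coarse (a, b) grid; a crude stand-in for quantile regression."""
    return min(((a, b) for a in grid for b in grid),
               key=lambda ab: sum(pinball(y, ab[0] + ab[1] * q, tau)
                                  for q, y in zip(raw_qs, outcomes)))
```

Applying the fitted `(a, b)` to a new raw ensemble quantile yields the debiased quantile forecast.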

3.
Using quantile regression this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out-of-sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecasted mean and variance are compared to forecasts based on quantile estimates and, in general, the latter yield the best performance. Copyright © 2015 John Wiley & Sons, Ltd.

4.
The goal of this paper is to use a new modelling approach to extract quantile-based oil and natural gas risk measures using quantile autoregressive distributed lag mixed-frequency data sampling (QADL-MIDAS) regression models. The analysis compares this model to a standard quantile autoregression (QAR) model and shows that it delivers better quantile forecasts at the majority of forecasting horizons. The analysis also uses the QADL-MIDAS model to construct oil and natural gas price risk measures proxying for uncertainty, third-moment dynamics, and the risk of extreme energy realizations. The results document that these risk measures are linked to the future evolution of energy prices, as well as to the future evolution of US economic growth.

5.
We use real-time macroeconomic variables and combination forecasts with both time-varying weights and equal weights to forecast inflation in the USA. The combination forecasts draw on three sets of commonly used time-varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time-varying weights. We also find that variable selection, the allowance of time-varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.

6.
We investigate the predictive performance of various classes of value-at-risk (VaR) models in several dimensions: unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. By using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997-1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)-based models do better in the crisis period. Filtering often appears to be useful for some models, particularly for the EVT models, though it could be harmful for some other models. The CaViaR quantile regression models of Engle and Manganelli (2004) have shown some success in predicting the VaR risk measure for various periods, generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
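The empirical coverage probability mentioned above is commonly tested with Kupiec's unconditional coverage likelihood-ratio statistic. The sketch below shows that statistic, not the reality check test of White (2000) used in the paper; it assumes the violation count is strictly between 0 and T.

```python
import math

def kupiec_lr(violations, T, p):
    """Kupiec LR statistic for unconditional VaR coverage: compares the
    likelihood under the nominal violation rate p with the likelihood under
    the observed rate. Asymptotically chi-squared with 1 df under correct
    coverage. Assumes 0 < violations < T."""
    x = violations
    pi = x / T  # observed violation rate

    def loglik(prob):
        return (T - x) * math.log(1 - prob) + x * math.log(prob)

    return -2.0 * (loglik(p) - loglik(pi))
```

A large statistic (e.g. above the 3.84 chi-squared critical value at the 5% level) signals that the VaR model's coverage is off.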

7.
Hidden Markov models are often used to model daily returns and to infer the hidden state of financial markets. Previous studies have found that the estimated models change over time, but the implications of the time-varying behavior have not been thoroughly examined. This paper presents an adaptive estimation approach that allows for the parameters of the estimated models to be time varying. It is shown that a two-state Gaussian hidden Markov model with time-varying parameters is able to reproduce the long memory of squared daily returns that was previously believed to be the most difficult fact to reproduce with a hidden Markov model. Capturing the time-varying behavior of the parameters also leads to improved one-step density forecasts. Finally, it is shown that the forecasting performance of the estimated models can be further improved using local smoothing to forecast the parameter variations. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Adaptive exponential smoothing methods allow a smoothing parameter to change over time, in order to adapt to changes in the characteristics of the time series. However, these methods have tended to produce unstable forecasts and have performed poorly in empirical studies. This paper presents a new adaptive method, which enables a smoothing parameter to be modelled as a logistic function of a user-specified variable. The approach is analogous to that used to model the time-varying parameter in smooth transition models. Using simulated data, we show that the new approach has the potential to outperform existing adaptive methods and constant parameter methods when the estimation and evaluation samples both contain a level shift or both contain an outlier. An empirical study, using the monthly time series from the M3-Competition, gave encouraging results for the new approach. Copyright © 2004 John Wiley & Sons, Ltd.
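The core mechanism described above, a smoothing parameter driven through a logistic function of a user-specified variable, can be sketched as follows. The coefficients `a` and `b` would normally be estimated; here they are fixed inputs, and all names are our own.

```python
import math

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def adaptive_ses(series, driver, a, b):
    """Simple exponential smoothing in which the smoothing parameter is
    alpha_t = logistic(a + b * driver_t), so it adapts to the user-specified
    driver variable rather than staying constant. Returns the smoothed levels."""
    level = series[0]
    levels = [level]
    for y, v in zip(series[1:], driver[1:]):
        alpha = logistic(a + b * v)          # time-varying smoothing weight
        level = alpha * y + (1 - alpha) * level
        levels.append(level)
    return levels
```

With `b = 0` this collapses to ordinary constant-parameter simple exponential smoothing with `alpha = logistic(a)`.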

9.
We investigate realized volatility forecasts for stock indices in the presence of structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high-frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in-sample tests and out-of-sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX-FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in-sample tests show that there are multiple breaks in all realized volatility series. The results of the out-of-sample point forecasts indicate that the combination forecasts with time-varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non-parametric kernel regression and linear combination forecasts with the weights chosen based on the non-negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non-parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.

10.
This paper examines the relative importance of allowing for time-varying volatility and country interactions in a forecast model of economic activity. Allowing for these issues is done by augmenting autoregressive models of growth with cross-country weighted averages of growth and the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one-period-ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.

11.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short-term forecasting models. Our analysis shows that there is considerable variation of the relative performance of the different models over time. To take that into account we suggest employing performance-based forecast combination methods, in particular one with more weight on the recent forecast performance. We compare such an approach with equal forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time-varying weights also indicate which models' economic interpretations drive the combined forecast at each point in time. We also include a number of benchmark models in our analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre-crisis times, the period after the global financial crisis and the full evaluation period, including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that forecast combination helps hedge against bad forecast performance and that performance-based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.
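One common way to put "more weight on recent forecast performance" is to weight models by the inverse of their discounted squared errors. This is a generic sketch of that idea, not the paper's exact scheme; the discount factor and names are our assumptions.

```python
def discounted_mse_weights(errors_by_model, delta=0.9):
    """Combination weights inversely proportional to each model's discounted
    sum of squared forecast errors. The most recent error gets discount
    delta**0 and the oldest delta**(n-1), so recent performance dominates."""
    scores = []
    for errs in errors_by_model:
        n = len(errs)
        scores.append(sum(delta ** (n - 1 - i) * e * e
                          for i, e in enumerate(errs)))
    inv = [1.0 / s for s in scores]          # better (smaller) score -> larger weight
    total = sum(inv)
    return [w / total for w in inv]
```

Setting `delta = 1` recovers plain inverse-MSE weighting, and equal-weight combination corresponds to ignoring the scores altogether.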

12.
This paper constructs a forecast method that obtains long-horizon forecasts with improved performance through modification of the direct forecast approach. Direct forecasts are more robust to model misspecification compared to iterated forecasts, which makes them preferable in long horizons. However, direct forecast estimates tend to have jagged shapes across horizons. Our forecast method aims to "smooth out" erratic estimates across horizons while maintaining the robust aspect of direct forecasts through ridge regression, which is a restricted regression on the first differences of regression coefficients. The forecasts are compared to the conventional iterated and direct forecasts in two empirical applications: real oil prices and US macroeconomic series. In both applications, our method shows improvement over direct forecasts.
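The penalty on first differences of coefficients across horizons can be illustrated in the smallest possible case: two horizons, one scalar regressor, no intercept. This is a toy version of the idea under those assumptions, not the paper's estimator.

```python
def smoothed_direct(x, y1, y2, lam):
    """Direct-forecast coefficients b1, b2 for horizons 1 and 2, estimated
    jointly with a ridge penalty lam * (b2 - b1)**2 that pulls the two
    horizon-specific coefficients toward each other. Minimizes
    sum (y1 - b1*x)^2 + sum (y2 - b2*x)^2 + lam*(b2 - b1)^2."""
    S = sum(v * v for v in x)
    c1 = sum(v * w for v, w in zip(x, y1))
    c2 = sum(v * w for v, w in zip(x, y2))
    # Normal equations: [[S + lam, -lam], [-lam, S + lam]] @ [b1, b2] = [c1, c2]
    det = (S + lam) ** 2 - lam ** 2
    b1 = ((S + lam) * c1 + lam * c2) / det
    b2 = (lam * c1 + (S + lam) * c2) / det
    return b1, b2
```

With `lam = 0` the two horizons reduce to separate OLS direct regressions; as `lam` grows, `b1` and `b2` are shrunk toward a common value, smoothing the coefficient path across horizons.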

13.
Combining forecasts, we analyse the role of information flow in computing short-term forecasts up to one quarter ahead for the euro area GDP and its main components. A dataset of 114 monthly indicators is set up and simple bridge equations are estimated. The individual forecasts are then pooled, using different weighting schemes. To take into consideration the release calendar of each indicator, six forecasts are compiled successively during the quarter. We find that the sequencing of information determines the weight allocated to each block of indicators, especially when the first month of hard data becomes available. This conclusion extends the findings of the recent literature. Moreover, when combining forecasts, two weighting schemes are found to outperform the equal weighting scheme in almost all cases. Compared to an AR forecast, these schemes improve forecast performance for GDP in the current and next quarter by more than 40%. Copyright © 2010 John Wiley & Sons, Ltd.

14.
Bayesian methods for assessing the accuracy of dynamic financial value-at-risk (VaR) forecasts have not been considered in the literature. Such methods are proposed in this paper. Specifically, Bayes factor analogues of popular frequentist tests for independence of violations from, and for correct coverage of a time series of, dynamic quantile forecasts are developed. To evaluate the relevant marginal likelihoods, analytic integration methods are utilized when possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for VaR in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
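In the simplest coverage setting a Bayes factor of this kind has a closed form: under H0 the violation probability equals the nominal level p0, while under H1 it gets a uniform Beta(1, 1) prior, whose binomial marginal likelihood is 1/(T + 1) for any violation count. The sketch below is this textbook special case, not the paper's full machinery.

```python
import math

def bayes_factor_coverage(x, T, p0):
    """Bayes factor BF01 for correct VaR coverage, given x violations in T
    days. Numerator: binomial likelihood at the nominal rate p0. Denominator:
    marginal likelihood under a uniform prior on the violation probability,
    which integrates to 1 / (T + 1). Values > 1 favour correct coverage."""
    lik0 = math.comb(T, x) * p0 ** x * (1 - p0) ** (T - x)
    return lik0 * (T + 1)
```

For 1% or 5% VaR, plugging in the observed violation count gives an immediate evidence measure that can be read on the usual Bayes factor scales.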

15.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using (possibly misspecified) models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE-dominate SIC combination forecasts less than 25% of the time in most cases, while other 'standard' combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real-time forecasts of the variables, and it is shown via a series of experiments that SIC, t-statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE-dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
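For reference, the SIC underlying the selection approach above can be computed from a model's residual sum of squares; this is the standard Gaussian-likelihood form, with names of our own choosing.

```python
import math

def sic(sse, T, k):
    """Schwarz information criterion for a model with k parameters fitted to
    T observations: T * ln(SSE / T) + k * ln(T). Lower is better."""
    return T * math.log(sse / T) + k * math.log(T)

def select_by_sic(candidates, T):
    """candidates: list of (name, sse, k) tuples; returns the name of the
    model with the lowest SIC."""
    return min(candidates, key=lambda c: sic(c[1], T, c[2]))[0]
```

The `ln(T)` penalty makes SIC favour parsimonious models more strongly than AIC, whose penalty per parameter is a constant 2.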

16.
We consider the problem of forecasting a stationary time series when there is an unknown mean break close to the forecast origin. Based on the intercept-correction methods suggested by Clements and Hendry (1998) and Bewley (2003), a hybrid approach is introduced, where the break and break point are treated in a Bayesian fashion. The hyperparameters of the priors are determined by maximizing the marginal density of the data. The distributions of the proposed forecasts are derived. Different intercept-correction methods are compared using simulation experiments. Our hybrid approach compares favorably with both the uncorrected and the intercept-corrected forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
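The basic intercept correction that the hybrid approach builds on is a one-line adjustment: shift the model forecast by (a fraction of) the most recent in-sample error, "setting the model back on track". The sketch below is that classic correction, not the paper's Bayesian hybrid; the `weight` parameter is our generalization.

```python
def intercept_corrected_forecast(model_forecast, last_actual, last_fitted,
                                 weight=1.0):
    """Intercept correction: add weight * (most recent residual) to the model
    forecast, so a mean break just before the forecast origin is partially
    absorbed. weight=1 is the full Clements-Hendry-style correction."""
    return model_forecast + weight * (last_actual - last_fitted)
```

If the model underpredicted the last observation by one unit, the corrected forecast is raised by one unit (with `weight = 1`).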

17.
Recent models for credit risk management make use of hidden Markov models (HMMs). HMMs are used to forecast quantiles of corporate default rates. Little research has been done on the quality of such forecasts if the underlying HMM is potentially misspecified. In this paper, we focus on misspecification in the dynamics and dimension of the HMM. We consider both discrete- and continuous-state HMMs. The differences are substantial. Underestimating the number of discrete states has an economically significant impact on forecast quality. Generally speaking, discrete models underestimate the high-quantile default rate forecasts. Continuous-state HMMs, however, vastly overestimate high quantiles if the true HMM has a discrete state space. In the reverse setting the biases are much smaller, though still substantial in economic terms. We illustrate the empirical differences using US default data. Copyright © 2008 John Wiley & Sons, Ltd.

18.
To guarantee stable quantile estimation even for noisy data, a novel loss function and novel quantile estimators are developed by introducing the concept of orthogonal loss, which accounts for noise in both the response and the explanatory variables. In particular, the pinball loss used in classical quantile estimators is extended to a novel orthogonal pinball loss (OPL) by replacing the vertical loss with an orthogonal loss. Accordingly, linear quantile regression (QR) and support vector machine quantile regression (SVMQR) can be extended into novel OPL-based QR and OPL-based SVMQR models. The empirical study on 10 publicly available datasets statistically verifies the superiority of the two OPL-based models over their respective original forms in terms of prediction accuracy and quantile property, especially for extreme quantiles. Furthermore, the novel OPL-based SVMQR model, combining OPL with artificial intelligence (AI) techniques, outperforms all benchmark models and can be used as a promising quantile estimator, especially for noisy data.
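For a linear quantile function q(x) = a + b*x, the swap from vertical to orthogonal loss amounts to rescaling the vertical residual by the perpendicular-distance factor 1/sqrt(1 + b^2) before applying the pinball loss. The sketch below is our reading of that construction, not the paper's exact OPL definition.

```python
import math

def pinball(u, tau):
    """Pinball loss on a residual u: tau * u if u >= 0, (tau - 1) * u otherwise."""
    return (tau - (1 if u < 0 else 0)) * u

def orthogonal_pinball(x, y, a, b, tau):
    """Pinball loss applied to the orthogonal (perpendicular) distance from
    the point (x, y) to the line q(x) = a + b*x, rather than to the vertical
    residual, so noise in x as well as in y is accounted for."""
    u = y - (a + b * x)                       # vertical residual
    return pinball(u / math.sqrt(1.0 + b * b), tau)
```

When the slope `b` is zero the orthogonal and vertical losses coincide; the steeper the line, the more the orthogonal version discounts vertical deviations.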

19.
This paper examines whether the disaggregation of consumer sentiment data into its sub-components improves the real-time capacity to forecast GDP and consumption. A Bayesian error correction approach augmented with the consumer sentiment index and permutations of the consumer sentiment sub-indices is used to evaluate forecasting power. The forecasts are benchmarked against both composite forecasts and forecasts from standard error correction models. Using Australian data, we find that consumer sentiment data increase the accuracy of GDP and consumption forecasts, with certain components of consumer sentiment consistently providing better forecasts than aggregate consumer sentiment data. Copyright © 2009 John Wiley & Sons, Ltd.

20.
In this paper, we forecast EU area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: (i) time varying; (ii) constant over time; or (iii) shrunk to zero. The econometric methodology decides automatically to which category each coefficient belongs. Our empirical results indicate the benefits of such an approach. Copyright © 2013 John Wiley & Sons, Ltd.
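The mechanism that can shrink a coefficient exactly to zero, category (iii) above, is the soft-thresholding operator that underlies all lasso-type shrinkage. The paper's MCMC machinery is far richer; this is only the elementary building block, shown for intuition.

```python
def soft_threshold(beta, lam):
    """Soft-thresholding operator: shrinks beta toward zero by lam, and sets
    coefficients with |beta| <= lam exactly to zero, which is how lasso-type
    priors produce sparse coefficient vectors."""
    if beta > lam:
        return beta - lam
    if beta < -lam:
        return beta + lam
    return 0.0
```

Small coefficients are zeroed out (dropping the predictor), while large ones survive with a modest shrink toward zero.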
