Similar Documents
20 similar documents found (search time: 31 ms)
1.
We propose a quantile regression approach to equity premium forecasting. Robust point forecasts are generated from a set of quantile forecasts using both fixed and time-varying weighting schemes, thereby exploiting the entire distributional information associated with each predictor. Further gains are achieved by incorporating the forecast combination methodology into our quantile regression setting. Our approach using a time-varying weighting scheme delivers statistically and economically significant out-of-sample forecasts relative to both the historical average benchmark and the combined predictive mean regression modeling approach. Copyright © 2014 John Wiley & Sons, Ltd.
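The quantile-combination idea can be sketched in a few lines. The following is a minimal illustration, not the authors' exact procedure: it fits one quantile regression per quantile level with statsmodels' QuantReg on a simulated predictor (the variable names and the equal fixed weights are assumptions) and averages the quantile forecasts into a single robust point forecast; a time-varying scheme would instead re-estimate the weights from past forecast performance.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
dividend_yield = rng.normal(size=n)                    # placeholder predictor
premium = 0.3 * dividend_yield + rng.normal(size=n)    # placeholder equity premium

# Lagged predictor -> next-period premium.
X = sm.add_constant(pd.DataFrame({"dy": dividend_yield[:-1]}))
y = premium[1:]
x_new = sm.add_constant(pd.DataFrame({"dy": [dividend_yield[-1]]}), has_constant="add")

quantiles = [0.1, 0.25, 0.5, 0.75, 0.9]

# One quantile regression per quantile; fixed equal weights combine the
# quantile forecasts into a robust point forecast. A time-varying scheme
# would re-estimate these weights from past forecast performance.
q_forecasts = [sm.QuantReg(y, X).fit(q=q).predict(x_new)[0] for q in quantiles]
point_forecast = float(np.mean(q_forecasts))
print(q_forecasts, point_forecast)
```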

2.
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate before introducing a more suitable procedure which is based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression-based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non-parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
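As a point of reference for this kind of regression-based evaluation, the sketch below runs a Mincer-Zarnowitz-style regression of the realized quantity on the forecast and jointly tests intercept = 0 and slope = 1. The data are simulated placeholders rather than the paper's exchange rate series, and the test shown is the standard conditional-mean version, not the authors' modified procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
forecast = rng.uniform(0.5, 2.0, size=250)                        # e.g. volatility forecasts
realized = 0.1 + 0.9 * forecast + rng.normal(scale=0.2, size=250)  # e.g. realized volatility

X = sm.add_constant(forecast)
fit = sm.OLS(realized, X).fit(cov_type="HC1")   # robust standard errors

# Unbiasedness/efficiency: jointly test intercept = 0 and slope = 1.
print(fit.params)
print(fit.wald_test("const = 0, x1 = 1", use_f=True))
```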

3.
This paper compares the in-sample fitting and the out-of-sample forecasting performances of four distinct Nelson-Siegel class models: Nelson-Siegel, Bliss, Svensson, and a five-factor model we propose in order to enhance the fitting flexibility. The introduction of the fifth factor resulted in superior adjustment to the data. For the forecasting exercise the paper contrasts the performances of the term structure models in association with the following econometric methods: quantile autoregression evaluated at the median, VAR, AR, and a random walk. As a general pattern, the quantile procedure delivered the best results for longer forecasting horizons. Copyright © 2011 John Wiley & Sons, Ltd.
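For concreteness, here is a minimal sketch of fitting the basic three-factor Nelson-Siegel curve with scipy's curve_fit; the maturities and yields below are made-up placeholders, and the Svensson and five-factor variants discussed above would add further exponential terms to the same function.

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel yield at maturity tau (in years)."""
    x = tau / lam
    loading1 = (1 - np.exp(-x)) / x
    loading2 = loading1 - np.exp(-x)
    return beta0 + beta1 * loading1 + beta2 * loading2

maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10])            # years (illustrative)
yields = np.array([1.2, 1.4, 1.7, 2.1, 2.4, 2.8, 3.0, 3.2])      # percent (illustrative)

params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[3.0, -2.0, 1.0, 1.5], maxfev=10000)
print(dict(zip(["beta0", "beta1", "beta2", "lambda"], params)))
```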

4.
Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider 1- to 10-day-ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate the strong potential for the use of ensemble prediction in temperature density forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
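A minimal sketch of the debiasing step, under assumed simulated data (a biased, underdispersed ensemble rather than real ensemble output): the raw ensemble quantile serves as the regressor in a quantile regression at the same probability level, and the fitted values are the corrected quantile forecasts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days, n_members = 300, 50
signal = 15 + 8 * np.sin(np.linspace(0, 6 * np.pi, n_days))
obs = signal + rng.normal(0, 2.0, size=n_days)                    # realized temperature
# Biased, underdispersed ensemble scenarios around the same signal.
ensemble = signal[:, None] + rng.normal(-1.0, 1.0, size=(n_days, n_members))

q = 0.9
raw_q = np.quantile(ensemble, q, axis=1)        # raw ensemble 90% quantile
X = sm.add_constant(raw_q)
fit = sm.QuantReg(obs, X).fit(q=q)              # quantile regression at the same level
corrected_q = fit.predict(X)

print("raw coverage:", np.mean(obs <= raw_q))
print("debiased coverage:", np.mean(obs <= corrected_q), "target:", q)
```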

5.
The goal of this paper is to use a new modelling approach to extract quantile-based oil and natural gas risk measures using quantile autoregressive distributed lag mixed-frequency data sampling (QADL-MIDAS) regression models. The analysis compares this model to a standard quantile autoregression (QAR) model and shows that it delivers better quantile forecasts at the majority of forecasting horizons. The analysis also uses the QADL-MIDAS model to construct oil and natural gas price risk measures proxying for uncertainty, third-moment dynamics, and the risk of extreme energy realizations. The results document that these risk measures are linked to the future evolution of energy prices, as well as to the future evolution of US economic growth.
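The QAR benchmark mentioned above is easy to illustrate. The sketch below uses simulated returns rather than the paper's oil and gas data, and shows the plain QAR rather than the QADL-MIDAS model: each quantile of the return is regressed on its own lag, and a simple interquantile-range uncertainty proxy is read off the quantile forecasts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
ret = 2.0 * rng.standard_t(df=5, size=500)      # placeholder energy returns

y, x = ret[1:], ret[:-1]                        # quantile of r_t conditional on r_{t-1}
X = sm.add_constant(x)
x_last = sm.add_constant(np.array([ret[-1]]), has_constant="add")

q_forecast = {q: sm.QuantReg(y, X).fit(q=q).predict(x_last)[0]
              for q in (0.05, 0.5, 0.95)}
uncertainty = q_forecast[0.95] - q_forecast[0.05]   # interquantile-range risk proxy
print(q_forecast, uncertainty)
```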

6.
We investigate the predictive performance of various classes of value-at-risk (VaR) models in several dimensions: unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. By using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997-1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)-based models do better in the crisis period. Filtering often appears to be useful for some models, particularly for the EVT models, though it could be harmful for some other models. The CAViaR quantile regression models of Engle and Manganelli (2004) have shown some success in predicting the VaR risk measure for various periods, and are generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
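The two evaluation criteria named above are straightforward to compute. Here is a minimal sketch with simulated returns and a naive unconditional VaR forecast standing in for the candidate models; the reality check test itself is not reproduced.

```python
import numpy as np

def quantile_loss(returns, var_forecasts, alpha):
    """Average tick (quantile) loss for a lower-tail VaR forecast series."""
    u = returns - var_forecasts
    return np.mean((alpha - (u < 0)) * u)

rng = np.random.default_rng(4)
alpha = 0.05
returns = rng.standard_t(df=6, size=1000) * 0.01
var_forecasts = np.full(1000, np.quantile(returns, alpha))   # naive unconditional VaR

hits = returns < var_forecasts
print("empirical coverage:", hits.mean(), "target:", alpha)
print("quantile loss:", quantile_loss(returns, var_forecasts, alpha))
```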

7.
Using quantile regression this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out-of-sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecasted mean and variance are compared to forecasts based on quantile estimates and, in general, the latter yield the best performance. Copyright © 2015 John Wiley & Sons, Ltd.

8.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi-step-ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m-step-ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well both in terms of the accuracy of the results and in the ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
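To give a flavour of numerical multi-step-ahead prediction for a nonlinear AR model, the sketch below iterates the Chapman-Kolmogorov equation on a grid for an illustrative exponential-AR(1) with Gaussian errors. It is a generic textbook-style recursion under assumed model and grid settings, not the paper's specific procedure.

```python
import numpy as np
from scipy.stats import norm

def cond_mean(x):
    # Illustrative nonlinear AR(1): an exponential (EXPAR-type) conditional mean.
    return 0.8 * x * np.exp(-0.5 * x**2)

sigma = 1.0
grid = np.linspace(-6, 6, 601)
dx = grid[1] - grid[0]

def step_density(prev_density):
    """One Chapman-Kolmogorov step: p_h(y) = integral of f(y|x) p_{h-1}(x) dx."""
    trans = norm.pdf(grid[:, None], loc=cond_mean(grid[None, :]), scale=sigma)
    return trans @ prev_density * dx

x_now = 1.5
density = norm.pdf(grid, loc=cond_mean(x_now), scale=sigma)   # 1-step-ahead density
for _ in range(2, 6):                                         # recurse out to 5 steps ahead
    density = step_density(density)

pred_mean = np.sum(grid * density) * dx
pred_var = np.sum((grid - pred_mean) ** 2 * density) * dx
print(pred_mean, pred_var, np.sum(density) * dx)              # last value should be close to 1
```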

9.
This paper investigates whether the forecasting performance of Bayesian autoregressive and vector autoregressive models can be improved by incorporating prior beliefs on the steady state of the time series in the system. Traditional methodology is compared to the new framework, in which a mean-adjusted form of the models is employed, by estimating the models on Swedish inflation and interest rate data from 1980 to 2004. Results show that the out-of-sample forecasting ability of the models is practically unchanged for inflation but significantly improved for the interest rate when informative prior distributions on the steady state are provided. The findings in this paper imply that this new methodology could be useful since it allows us to sharpen our forecasts in the presence of potential pitfalls such as near unit root processes and structural breaks, in particular when relying on small samples. Copyright © 2008 John Wiley & Sons, Ltd.

10.
This paper proposes value-at-risk (VaR) estimation methods that are a synthesis of conditional autoregressive value at risk (CAViaR) time series models and implied volatility. The appeal of this proposal is that it merges information from the historical time series with the different information supplied by the market's expectation of risk. Forecast-combining methods, with weights estimated using quantile regression, are considered. We also investigate plugging implied volatility into the CAViaR models, a procedure that has not previously been considered in the VaR literature. Results for daily index returns indicate that the newly proposed methods are comparable or superior to individual methods, such as the standard CAViaR models and quantiles constructed from implied volatility and the empirical distribution of standardised residuals. We find that the implied volatility has more explanatory power as the focus moves further out into the left tail of the conditional distribution of S&P 500 daily returns. Copyright © 2012 John Wiley & Sons, Ltd.
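The forecast-combining step with quantile-regression weights can be sketched as follows. The two candidate VaR series are simulated placeholders (one standing in for a CAViaR forecast, one for an implied-volatility-based quantile), so the code illustrates the combination mechanics rather than the paper's actual models.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n, alpha = 1000, 0.05
vol = 0.01 * np.exp(0.3 * np.sin(np.linspace(0, 20, n)))        # slowly varying true volatility
returns = rng.normal(0.0, vol)                                   # placeholder index returns

# Two candidate VaR forecast series, both noisy versions of the true quantile.
var_caviar = norm.ppf(alpha) * vol * rng.uniform(0.9, 1.1, n)    # stand-in for a CAViaR forecast
var_implied = norm.ppf(alpha) * vol * rng.uniform(0.8, 1.2, n)   # stand-in for an implied-vol quantile

# Quantile regression at the target tail probability: the fitted coefficients
# act as combination weights for the two candidate forecasts.
X = sm.add_constant(np.column_stack([var_caviar, var_implied]))
fit = sm.QuantReg(returns, X).fit(q=alpha)
combined_var = fit.predict(X)

print(fit.params)                                                # intercept + combination weights
print("violation rate:", np.mean(returns < combined_var), "target:", alpha)
```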

11.
Bayesian methods for assessing the accuracy of dynamic financial value-at-risk (VaR) forecasts have not been considered in the literature. Such methods are proposed in this paper. Specifically, Bayes factor analogues of popular frequentist tests for independence of violations from, and for correct coverage of, a time series of dynamic quantile forecasts are developed. To evaluate the relevant marginal likelihoods, analytic integration methods are utilized when possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for VaR in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
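As one plausible illustration of a Bayes factor analogue of the unconditional coverage test (a simple special case, not necessarily the construction used in the paper): H0 fixes the violation probability at the nominal level, H1 places a Beta prior on it, and both marginal likelihoods are available in closed form. The prior hyperparameters and the violation count below are illustrative assumptions.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_binom(n, x):
    return gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)

def log_bf01_coverage(n, x, alpha, a=1.0, b=1.0):
    """log Bayes factor for H0: p = alpha versus H1: p ~ Beta(a, b)."""
    log_m0 = log_binom(n, x) + x * np.log(alpha) + (n - x) * np.log1p(-alpha)
    log_m1 = log_binom(n, x) + betaln(x + a, n - x + b) - betaln(a, b)
    return log_m0 - log_m1

n_obs, violations, nominal = 500, 9, 0.01      # e.g. 9 hits in 500 days at 1% VaR
log_bf = log_bf01_coverage(n_obs, violations, nominal)
print("BF01 =", np.exp(log_bf))                # BF01 > 1 favours correct coverage (H0)
```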

12.
Value-at-risk (VaR) forecasting via a computational Bayesian framework is considered. A range of parametric models is compared, including standard, threshold nonlinear and Markov switching generalized autoregressive conditional heteroskedasticity (GARCH) specifications, plus standard and nonlinear stochastic volatility models, most considering four error probability distributions: Gaussian, Student-t, skewed-t and generalized error distribution. Adaptive Markov chain Monte Carlo methods are employed in estimation and forecasting. A portfolio of four Asia-Pacific stock markets is considered. Two forecasting periods are evaluated in light of the recent global financial crisis. Results reveal that: (i) GARCH models outperformed stochastic volatility models in almost all cases; (ii) asymmetric volatility models were clearly favoured pre-crisis, while during and post-crisis, for a 1-day horizon, models with skewed-t errors ranked best at the 1% level and integrated GARCH models were favoured at the 5% level; (iii) all models forecast VaR less accurately and anti-conservatively post-crisis. Copyright © 2011 John Wiley & Sons, Ltd.

13.
Given the evidence that an infinite-order vector autoregression setting is more realistic in time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h-step-ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT, under the assumption that an infinite-order vector autoregression can be approximated, under suitable conditions, by a sequence of truncated models, where nT increases with the sample size. Using finite-order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small-sample distribution of the lag-order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, since they yield the lowest h-step-ahead forecast mean squared errors for the individual components of the held-out pseudo-data to be forecast. Thus estimating the actual autoregressive order as well as selecting the best forecasting model can be achieved with the same selection procedure. Such results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy claimed in the literature for strongly consistent criteria such as the Schwarz information criterion is overstated. Our criteria extend tools previously available in the literature and can be applied in a variety of practical situations. Copyright © 2015 John Wiley & Sons, Ltd.
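The quantity such criteria target, the h-step-ahead forecast MSE as a function of the truncation (lag) order, can also be estimated directly by a rolling pseudo out-of-sample exercise. The sketch below does this for a univariate AR on simulated data; it is only an empirical analogue of, not a substitute for, the analytic criteria proposed in the paper.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(6)
n, h = 400, 4
e = rng.normal(size=n + 50)
y = np.zeros(n + 50)
for t in range(2, n + 50):                      # simulate an AR(2) data-generating process
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
y = y[50:]                                      # discard burn-in

def h_step_mse(series, order, horizon, n_test=100):
    """Rolling pseudo out-of-sample h-step-ahead forecast MSE for an AR(order)."""
    errors = []
    for origin in range(len(series) - n_test - horizon, len(series) - horizon):
        fit = AutoReg(series[: origin + 1], lags=order).fit()
        forecast = fit.forecast(steps=horizon)[-1]          # the h-step-ahead forecast
        errors.append(series[origin + horizon] - forecast)
    return float(np.mean(np.square(errors)))

for p in range(1, 7):
    print("lags =", p, "  h-step MSE =", h_step_mse(y, p, h))
```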

14.
This paper investigates Bayesian forecasts for some cointegrated time series data. Suppose data are generated by a cointegrated model but an unrestricted vector autoregressive model, without the cointegration restrictions, is fitted; the implications of using the incorrect model are investigated from the Bayesian forecasting viewpoint. For some special cointegrated data and under the diffuse prior assumption, it can be analytically proven that the posterior predictive distributions for both the true model and the fitted model are asymptotically the same for any future step. For a more general cointegrated model, examinations are performed via simulations. The simulation results reveal that a reasonably unrestricted model will still provide a rather accurate forecast as long as the sample size is large enough or the forecasting period is not too far in the future. For a small sample size or for long-term forecasting, more accurate forecasts are expected if the correct cointegrated model is actually applied. Copyright © 2002 John Wiley & Sons, Ltd.
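A frequentist counterpart of this comparison is easy to reproduce with statsmodels: fit both a VECM (the restricted specification) and an unrestricted VAR in levels to a simulated cointegrated pair and compare multi-step forecast errors. This is a simple illustrative analogue under assumed simulated data, not the paper's Bayesian predictive analysis.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(7)
n, h = 300, 8
common = np.cumsum(rng.normal(size=n + h))                  # shared stochastic trend
y1 = common + rng.normal(scale=0.5, size=n + h)
y2 = 0.7 * common + rng.normal(scale=0.5, size=n + h)
data = np.column_stack([y1, y2])
train, test = data[:n], data[n:]

# Restricted model: VECM with one cointegrating relation.
vecm_fc = VECM(train, k_ar_diff=1, coint_rank=1).fit().predict(steps=h)
# Unrestricted model: VAR in levels, ignoring the cointegration restriction.
var_res = VAR(train).fit(2)
var_fc = var_res.forecast(train[-var_res.k_ar:], steps=h)

print("VECM h-step RMSE:", np.sqrt(np.mean((vecm_fc - test) ** 2)))
print("VAR  h-step RMSE:", np.sqrt(np.mean((var_fc - test) ** 2)))
```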

15.
Recently, analysts' cash flow forecasts have become widely available through financial information services. Cash flow information enables practitioners to better understand the real operating performance and financial stability of a company, particularly when earnings information is noisy and of low quality. However, research suggests that analysts' cash flow forecasts are less accurate and more dispersed than earnings forecasts. We thus investigate factors influencing cash flow forecast accuracy and build a practical model to distinguish more accurate from less accurate cash flow forecasters, using past cash flow forecast accuracy and analyst characteristics. We find significant power in our cash flow forecast accuracy prediction models. We also find that analysts develop cash flow-specific forecasting expertise and know-how, which are distinct from those that analysts acquire from forecasting earnings. In particular, cash flow-specific information is more useful in identifying accurate cash flow forecasters than earnings-specific information. Copyright © 2011 John Wiley & Sons, Ltd.

16.
This paper presents the writer's experience, over a period of 25 years, in analysing organizational systems and, in particular, concentrates on the overall forecasting activity. The paper first looks at the relationship between forecasting and decision taking, with emphasis on the fact that forecasting is a means to aid decision taking and not an end in itself. It states that there are many types of forecasting problems, each requiring different methods of treatment. The paper then discusses attitudes which are emerging about the relative advantages of different forecasting techniques. It suggests a model building process which requires 'experience' and 'craftsmanship', extensive practical application, frequent interaction between theory and practice, and a methodology that eventually leads to models that contain no detectable inadequacies. Furthermore, it argues that although models which forecast a time series from its past history have a very important role to play, for effective policy making it is necessary to augment the model by introducing policy variables, again in a systematic rather than an ad hoc manner. Finally, the paper discusses how forecasting systems can be introduced into the management process in the first place and how they should be monitored and updated when found wanting.

17.
This paper investigates the forecasting performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard & Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of volatility from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly as well as monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
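A minimal sketch of the distributional comparison using the arch package: fit GARCH(1,1) with normal and Student-t errors on a simulated fat-tailed return series and inspect log-likelihoods and one-step-ahead variance forecasts. The simulation and the two-distribution comparison are simplifying assumptions relative to the paper's nine distributions, S&P 500 futures data, and realized-variance evaluation.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(8)
n = 1500
vol = np.empty(n)
ret = np.empty(n)
vol[0] = 1.0
ret[0] = vol[0] * rng.standard_normal()
for t in range(1, n):                                      # simulate a fat-tailed GARCH(1,1)
    vol[t] = np.sqrt(0.05 + 0.08 * ret[t - 1] ** 2 + 0.90 * vol[t - 1] ** 2)
    ret[t] = vol[t] * rng.standard_t(df=6) * np.sqrt(4 / 6)   # unit-variance t(6) shocks

for dist in ("normal", "t"):
    res = arch_model(ret, vol="GARCH", p=1, q=1, dist=dist).fit(disp="off")
    fc = res.forecast(horizon=1)
    print(dist, "loglik:", res.loglikelihood,
          "1-step variance forecast:", float(fc.variance.iloc[-1, 0]))
```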

18.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.

19.
This paper proposes an adjustment of linear autoregressive conditional mean forecasts that exploits the predictive content of uncorrelated model residuals. The adjustment is motivated by non-Gaussian characteristics of model residuals, and implemented in a semiparametric fashion by means of conditional moments of simulated bivariate distributions. A pseudo ex ante forecasting comparison is conducted for a set of 494 macroeconomic time series recently collected by Dees et al. (Journal of Applied Econometrics 2007; 22: 1-38). In total, 10,374 time series realizations are contrasted against competing short-, medium- and longer-term purely autoregressive and adjusted predictors. With regard to all forecast horizons, the adjusted predictions consistently outperform conditionally Gaussian forecasts according to cross-sectional mean group evaluation of absolute forecast errors and directional accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

20.
Several studies have tested for long-range dependence in macroeconomic and financial time series but very few have assessed the usefulness of long-memory models as forecast-generating mechanisms. This study tests for fractional differencing in the US monetary indices (simple sum and Divisia) and compares the out-of-sample fractional forecasts to benchmark forecasts. The long-memory parameter is estimated using Robinson's Gaussian semi-parametric and multivariate log-periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out-of-sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the Divisia series but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd.
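A minimal sketch of estimating the long-memory parameter with a GPH-style log-periodogram regression (related to, though not the same as, Robinson's Gaussian semiparametric estimator used in the paper), applied to a simulated ARFIMA(0, d, 0) series. For a series with fractional order between one and two, as reported above, the estimator would typically be applied to the first difference and one added to the estimate; the bandwidth choice below is an illustrative assumption.

```python
import numpy as np
import statsmodels.api as sm

def gph_estimate(x, bandwidth_power=0.5):
    """GPH log-periodogram regression estimate of the memory parameter d."""
    n = len(x)
    m = int(n ** bandwidth_power)                        # low Fourier frequencies used
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1 : m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -np.log(4 * np.sin(freqs / 2) ** 2)
    fit = sm.OLS(np.log(periodogram), sm.add_constant(regressor)).fit()
    return fit.params[1]                                 # slope estimates d

rng = np.random.default_rng(9)
n, d_true = 2000, 0.4
eps = rng.normal(size=n)
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d_true) / k           # MA(inf) weights of (1-L)^(-d)
x = np.convolve(eps, psi)[:n]                            # simulated ARFIMA(0, d, 0) sample

print("true d:", d_true, "GPH estimate:", gph_estimate(x))
```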
