Similar Articles
20 similar articles found (search time: 484 ms)
1.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable to, or better than, those of benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.
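To make the latent-factor idea concrete, here is a minimal sketch of the kind of decomposition such models build on: a single mortality index extracted by SVD from a matrix of log death rates (in the spirit of Lee-Carter) and then compared with a health-related covariate. The data, the single-factor choice and the variable names are illustrative assumptions; this is not the paper's Bayesian hierarchical model.

```python
# Illustrative only: one latent mortality factor from log death rates (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n_years, n_ages = 40, 20
log_mx = (-8 + 0.09 * np.arange(n_ages) - 0.02 * np.arange(n_years)[:, None]
          + 0.05 * rng.standard_normal((n_years, n_ages)))    # years x ages

alpha = log_mx.mean(axis=0)                  # age-specific average level
U, s, Vt = np.linalg.svd(log_mx - alpha, full_matrices=False)
kappa = U[:, 0] * s[0]                       # latent period factor (mortality index)
beta = Vt[0]                                 # age loadings of the factor

# Does a health-related covariate track kappa? (synthetic stand-in series)
health_exp = -0.5 * kappa + 0.3 * rng.standard_normal(n_years)
slope, intercept = np.polyfit(health_exp, kappa, 1)
print(f"variance explained by factor 1: {s[0]**2 / (s**2).sum():.2f}")
print(f"kappa on health expenditure: slope = {slope:.2f}")
```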

2.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models because they nicely represent the two opposing forecasting philosophies: the DSGE model has a strong theoretical economic background, whereas the factor model is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially at longer forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used for short‐horizon forecasting and structural models for long‐horizon forecasting. Our paper compares state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

3.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time‐series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements over standard time‐series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
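A hedged sketch of the static principal-component version of this approach, on a synthetic panel: standardize the variables, extract factors from the correlation matrix, report the share of variance explained, and run a simple diffusion-index forecasting regression. The paper's dynamic factor model and UK data set are not reproduced here.

```python
# Static PCA factors and a factor-augmented (diffusion-index) forecast, synthetic panel.
import numpy as np

rng = np.random.default_rng(1)
T, N, n_factors = 120, 80, 6
common = rng.standard_normal((T, n_factors)) @ rng.standard_normal((n_factors, N))
X = common + rng.standard_normal((T, N))            # T periods x N variables

Z = (X - X.mean(0)) / X.std(0)                      # standardize the panel
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
F = Z @ eigvec[:, :n_factors]                       # estimated factors
print("share of variance, first 6 factors:", round(eigval[:n_factors].sum() / eigval.sum(), 2))

# h-step-ahead diffusion-index regression: y_{t+h} = a + b'F_t + e
h, y = 1, Z[:, 0]                                   # pretend column 0 is the target variable
A = np.column_stack([np.ones(T - h), F[:-h]])
coef, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
forecast = np.r_[1.0, F[-1]] @ coef                 # forecast of y_{T+h}
print("1-step-ahead forecast:", round(forecast, 3))
```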

4.
In order to provide short‐run forecasts of headline and core HICP inflation for France, we assess the forecasting performance of a large set of economic indicators, individually and jointly, as well as using dynamic factor models. We run out‐of‐sample forecasts implementing the Stock and Watson (1999) methodology. We find that, according to the usual statistical criteria, the combination of several indicators—in particular those derived from surveys—provides better results than factor models, even after pre‐selection of the variables included in the panel. However, factors included in VAR models exhibit more stable forecasting performance over time. Results for the HICP excluding unprocessed food and energy are very encouraging. Moreover, we show that the aggregation of forecasts of subcomponents performs best for projecting total inflation and that it is robust to data snooping. Copyright © 2007 John Wiley & Sons, Ltd.
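The following is a schematic out-of-sample exercise in the general spirit of such evaluations (not the paper's exact Stock and Watson setup): an AR benchmark and an indicator-augmented direct h-step regression are re-estimated recursively on synthetic data and compared by root mean squared forecast error.

```python
# Recursive out-of-sample comparison of direct h-step forecasting regressions (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
T, h = 200, 3
survey = rng.standard_normal(T)                      # stand-in survey indicator
infl = 0.5 * np.roll(survey, h) + rng.standard_normal(T)   # indicator leads inflation by h
infl[:h] = rng.standard_normal(h)                    # discard the wrap-around values

def rmse_direct(y, x_list, start=100):
    """Direct h-step forecasts y_{t+h} = a + b'x_t, re-estimated each period; returns RMSE."""
    errs = []
    for t in range(start, T - h):
        X = np.column_stack([np.ones(t - h)] + [x[:t - h] for x in x_list])
        coef, *_ = np.linalg.lstsq(X, y[h:t], rcond=None)
        pred = np.r_[1.0, [x[t] for x in x_list]] @ coef
        errs.append(y[t + h] - pred)
    return np.sqrt(np.mean(np.square(errs)))

rmse_ar = rmse_direct(infl, [infl])                  # AR benchmark
rmse_ind = rmse_direct(infl, [infl, survey])         # indicator-augmented regression
print(f"RMSE, AR benchmark: {rmse_ar:.3f}   with survey indicator: {rmse_ind:.3f}")
```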

5.
This paper uses an extension of the Euro‐Sting single‐index dynamic factor model to construct short‐term forecasts of quarterly GDP growth for the euro area by accounting for financial variables as leading indicators. From a simulated real‐time exercise, the model is used to investigate the forecasting accuracy across the different phases of the business cycle. Our extension is also used to evaluate the relative forecasting ability of the two most reliable business cycle surveys for the euro area: the PMI and the ESI. We show that the latter produces more accurate GDP forecasts than the former. Finally, the proposed model is also characterized by its great ability to capture the European business cycle, as well as the probabilities of expansion and/or contraction periods. Copyright © 2014 John Wiley & Sons, Ltd.

6.
This paper focuses on the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models. We compare the forecasts obtained directly from the aggregated series, based on its univariate model, with the aggregation of the forecasts obtained for each component of the aggregate. Within this framework (first obtain the forecasts for the component series and then aggregate the forecasts), we try two different approaches: (i) generate the forecasts from the multivariate dynamic factor model and (ii) generate the forecasts from univariate models for each component of the aggregate. In this regard, we provide analytical conditions for the equality of forecasts. The results are applied to quarterly gross domestic product (GDP) data of several European countries of the euro area and to their aggregated GDP, and compared with the prediction obtained directly from modeling and forecasting the aggregate GDP of these countries. In particular, we check whether long‐run relationships between the levels of the components are useful for improving the forecasting accuracy of the aggregate growth rate. We make forecasts at the country level and then pool them to obtain the forecast of the aggregate. The empirical analysis suggests that forecasts built by aggregating the country‐specific models are more accurate than forecasts constructed using the aggregated data. Copyright © 2014 John Wiley & Sons, Ltd.
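A toy illustration of the design question, using synthetic data and a plain AR(1) in place of the paper's dynamic factor and cointegration machinery: forecast each country's growth rate and aggregate the forecasts, versus forecasting the aggregate series directly.

```python
# Bottom-up aggregation of country forecasts versus a direct forecast of the aggregate (synthetic).
import numpy as np

rng = np.random.default_rng(3)
T, n_countries = 80, 5
growth = 0.5 + rng.standard_normal((T, n_countries))    # synthetic country growth rates
aggregate = growth.sum(axis=1)

def ar1_forecast(y):
    """One-step forecast from an AR(1) with intercept, estimated by OLS."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a + b * y[-1]

bottom_up = sum(ar1_forecast(growth[:, i]) for i in range(n_countries))
direct = ar1_forecast(aggregate)
print(f"bottom-up forecast: {bottom_up:.2f}   direct forecast: {direct:.2f}")
```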

7.
We use state space methods to estimate a large dynamic factor model for the Norwegian economy involving 93 variables for 1978Q2–2005Q4. The model is used to obtain forecasts for 22 key variables that can be derived from the original variables by aggregation. To investigate the potential gain in using such a large information set, we compare the forecasting properties of the dynamic factor model with those of univariate benchmark models. We find that there is an overall gain in using the dynamic factor model, but that the gain is notable only for a few of the key variables. Copyright © 2009 John Wiley & Sons, Ltd.

8.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state‐space models. Out‐of‐sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean‐squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.

9.
A short‐term mixed‐frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one‐factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out‐of‐sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high‐frequency indicators significantly improve the real‐time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

10.
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short‐term interest rates from October 2008. Out‐of‐sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson–Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium‐ to longer‐term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near‐zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson–Siegel models. Copyright © 2014 John Wiley & Sons, Ltd.
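As a minimal illustration of the Nelson-Siegel ingredient (not the authors' full dynamic state-space model), the sketch below fixes the decay parameter, estimates the level, slope and curvature factors by cross-sectional OLS on synthetic yields, and produces a random-walk factor forecast; the value of lambda and the maturities are assumptions.

```python
# Nelson-Siegel cross-sectional fit with a fixed decay parameter (synthetic yields).
import numpy as np

lam = 0.0609                                              # commonly assumed monthly decay value
maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120])    # months

def ns_loadings(tau, lam):
    """Level, slope and curvature loadings of the Nelson-Siegel curve."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau, dtype=float), slope, slope - np.exp(-x)])

rng = np.random.default_rng(4)
true_beta = np.array([4.0, -2.0, 1.5])                    # level, slope, curvature
yields = ns_loadings(maturities, lam) @ true_beta + 0.05 * rng.standard_normal(len(maturities))

L = ns_loadings(maturities, lam)
beta_hat, *_ = np.linalg.lstsq(L, yields, rcond=None)     # cross-sectional OLS fit
print("estimated (level, slope, curvature):", np.round(beta_hat, 2))
# A random-walk factor forecast keeps beta_hat fixed, so the forecast curve is:
print("forecast yields:", np.round(L @ beta_hat, 2))
```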

11.
The use of expert judgement is an important part of demographic forecasting. However, because judgement enters the forecasting process in an informal way, it has been very difficult to assess its role relative to the analysis of past data. The use of targets in demographic forecasts permits us to embed the subjective forecasting process into a simple time-series regression model, in which expert judgement is incorporated via mixed estimation. The strength of expert judgement is defined, and estimated using the official forecasts of cause-specific mortality in the United States. We show that the weight given to judgement varies in an improbable manner by age. Overall, the weight given to judgement appears too high. An alternative approach to combining expert judgement and past data is suggested.
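A bare-bones sketch of mixed estimation in the sense used here: an expert target for the trend slope enters the regression as a weighted pseudo-observation, so the weight on judgement is made explicit. The data, the target value and the weight are invented for illustration and do not come from the paper.

```python
# Mixed estimation: data regression augmented with a weighted expert pseudo-observation (synthetic).
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(30, dtype=float)
log_rate = 1.0 - 0.02 * t + 0.05 * rng.standard_normal(30)   # synthetic cause-specific trend

X = np.column_stack([np.ones(30), t])
expert_slope = -0.035            # expert's target annual decline (assumed)
weight = 4.0                     # strength of judgement relative to the data (assumed)

# Stack the data rows and one weighted prior row that restricts the slope coefficient.
X_aug = np.vstack([X, weight * np.array([[0.0, 1.0]])])
y_aug = np.concatenate([log_rate, [weight * expert_slope]])
beta_mixed, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
beta_data, *_ = np.linalg.lstsq(X, log_rate, rcond=None)
print(f"data-only slope: {beta_data[1]:.3f}   mixed-estimation slope: {beta_mixed[1]:.3f}")
```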

12.
The simplicity of the standard diffusion index model of Stock and Watson has certainly contributed to its success among practitioners, resulting in a growing body of literature on factor‐augmented forecasts. However, as pointed out by Bai and Ng, the ranked factors considered in the forecasting equation depend neither on the variable to be forecast nor on the forecasting horizon. We propose a refinement of the standard approach that retains the computational simplicity while coping with this limitation. Our approach consists of generating a weighted average of all the principal components, with the weights depending both on the eigenvalues of the sample correlation matrix and on the covariance between the estimated factor and the targeted variable at the relevant horizon. This ‘targeted diffusion index’ approach is applied to US data and the results show that it considerably outperforms the standard approach in forecasting several major macroeconomic series. Moreover, the improvement is more significant in the final part of the forecasting evaluation period. Copyright © 2009 John Wiley & Sons, Ltd.
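A rough sketch of the weighting idea as described in the abstract, on synthetic data: all principal components are combined with weights that depend on their eigenvalues and on their covariance with the target led h periods ahead. The specific weighting rule below is an assumption; the paper's exact scheme may differ.

```python
# Targeted weighting of principal components toward a forecast target (synthetic data).
import numpy as np

rng = np.random.default_rng(6)
T, N, h = 150, 40, 6
X = rng.standard_normal((T, N))
y = X[:, :3].mean(axis=1) + rng.standard_normal(T)             # synthetic target variable

Z = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
F = Z @ eigvec                                                  # all N principal components

# Covariance of each component with the target led h periods ahead.
cov_hy = np.array([np.cov(F[:-h, j], y[h:])[0, 1] for j in range(N)])
w = eigval * cov_hy                                             # assumed weighting rule
target_index = F @ (w / np.abs(w).sum())                        # single targeted index

A = np.column_stack([np.ones(T - h), target_index[:-h]])
coef, *_ = np.linalg.lstsq(A, y[h:], rcond=None)
print("h-step forecast from targeted index:", round(np.r_[1.0, target_index[-1]] @ coef, 3))
```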

13.
Using the generalized dynamic factor model, this study constructs three predictors of crude oil price volatility: a fundamental (physical) predictor, a financial predictor, and a macroeconomic uncertainty predictor. Moreover, an event‐triggered predictor is constructed using data extracted from Google Trends. We construct GARCH‐MIDAS (generalized autoregressive conditional heteroskedasticity–mixed‐data sampling) models combining realized volatility with the predictors to predict oil price volatility at different forecasting horizons. We then identify the predictive power of the realized volatility and the predictors by the model confidence set (MCS) test. The findings show that, among the four indexes, the financial predictor has the most predictive power for crude oil volatility, which provides strong evidence that financialization has been the key determinant of crude oil price behavior since the 2008 global financial crisis. In addition, the fundamental predictor, followed by the financial predictor, effectively forecasts crude oil price volatility at long forecasting horizons. Our findings indicate that the different predictors can provide distinct predictive information at different horizons given the specific market situation. These findings have useful implications for market traders in terms of managing crude oil price risk.
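Two of the ingredients named here can be illustrated compactly: monthly realized variance built from daily returns, and a MIDAS beta-polynomial weighting of lagged low-frequency predictor values of the kind that drives the long-run component in GARCH-MIDAS. The sketch below uses synthetic data and assumed parameter values, and is not the full model.

```python
# Realized variance and MIDAS beta-polynomial lag weights (synthetic data, assumed parameters).
import numpy as np

rng = np.random.default_rng(7)
n_months, days_per_month = 60, 21
daily_ret = 0.01 * rng.standard_normal((n_months, days_per_month))
realized_var = (daily_ret ** 2).sum(axis=1)          # monthly realized variance

def beta_weights(K, theta1=1.0, theta2=5.0):
    """MIDAS beta-polynomial lag weights, normalized to sum to one."""
    k = np.arange(1, K + 1) / (K + 1)
    w = k ** (theta1 - 1) * (1 - k) ** (theta2 - 1)
    return w / w.sum()

K = 12                                               # one year of monthly lags (assumed)
predictor = rng.standard_normal(n_months)            # stand-in low-frequency predictor
w = beta_weights(K)
# Long-run variance component for the latest month: weighted sum of the last K lags.
long_run = np.exp(0.1 + 0.4 * (w @ predictor[-K:][::-1]))
print("monthly realized variance (last 3):", np.round(realized_var[-3:], 5))
print("illustrative long-run variance component:", round(long_run, 4))
```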

14.
Reliable correlation forecasts are of paramount importance in modern risk management systems. A plethora of correlation forecasting models have been proposed in the open literature, yet their impact on the accuracy of value‐at‐risk calculations has not been explicitly investigated. In this paper, traditional and modern correlation forecasting techniques are compared using standard statistical and risk management loss functions. Three portfolios consisting of stocks, bonds and currencies are considered. We find that GARCH models can better account for the correlation's dynamic structure in the stock and bond portfolios. On the other hand, simpler specifications such as the historical mean model or simple moving average models are better suited for the currency portfolio. Copyright © 2007 John Wiley & Sons, Ltd.
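For intuition, the sketch below contrasts two of the simpler correlation forecasts mentioned (a historical estimate and a RiskMetrics-style EWMA) and feeds each into a one-day parametric value-at-risk figure for a two-asset portfolio; returns are synthetic and the GARCH specifications studied in the paper are omitted.

```python
# Historical vs. EWMA covariance forecasts and the resulting parametric VaR (synthetic returns).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
T = 500
cov_true = np.array([[1.0, 0.3], [0.3, 1.0]]) * 0.0001
returns = rng.multivariate_normal([0.0, 0.0], cov_true, size=T)

def ewma_cov(r, lam=0.94):
    """RiskMetrics-style exponentially weighted covariance matrix."""
    S = np.cov(r[:50].T)                     # initialize on the first 50 observations
    for x in r[50:]:
        S = lam * S + (1 - lam) * np.outer(x, x)
    return S

S_hist = np.cov(returns.T)                   # historical estimate
S_ewma = ewma_cov(returns)
w = np.array([0.5, 0.5])                     # portfolio weights (assumed)
for name, S in [("historical", S_hist), ("EWMA", S_ewma)]:
    sigma_p = np.sqrt(w @ S @ w)
    var_99 = -norm.ppf(0.01) * sigma_p       # 99% one-day parametric VaR
    print(f"{name:10s} corr = {S[0, 1] / np.sqrt(S[0, 0] * S[1, 1]):.2f}   VaR(99%) = {var_99:.4%}")
```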

15.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We approach the usual caveats of probabilistic forecast studies—small samples, limited models, and nonholistic validations—by performing a comprehensive comparison of 15 predictive schemes during a time period of over 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk‐neutral densities outperform historical‐based predictions in terms of information content. We find that the variance gamma model generates the highest out‐of‐sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH‐based GJR‐FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, or the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.

16.
This paper presents an autoregressive fractionally integrated moving‐average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long‐memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long‐memory model is more efficient than the random walk model in steps‐ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi‐step‐ahead forecasts. This new finding strongly suggests that the long‐memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd.
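The long-memory ingredient can be shown in a few lines: the fractional-differencing filter (1-L)^d expanded into its moving-average weights and applied to a synthetic series. This is only the core transform, not Sowell's exact maximum likelihood estimator used in the paper, and the value of d is assumed.

```python
# Fractional differencing (1-L)^d via the binomial recursion (synthetic series, assumed d).
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients of (1-L)^d via the recursion pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

rng = np.random.default_rng(9)
x = rng.standard_normal(300).cumsum()        # synthetic exchange-rate level
d = 0.4                                      # long-memory parameter (assumed)
w = frac_diff_weights(d, len(x))
# Fractionally difference: y_t = sum_k w_k * x_{t-k}
y = np.array([w[:t + 1] @ x[t::-1] for t in range(len(x))])
print("first few fractional-difference weights:", np.round(w[:5], 3))
print("std of fractionally differenced series:", round(y.std(), 2))
```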

17.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term time horizons for series with no moving average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
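A stylized Engle-Granger two-step sketch on synthetic data gives the flavor of the ECM side of the comparison: estimate the long-run relation by OLS, then use the lagged equilibrium error in a small error-correction regression to produce a one-step forecast. The paper's rolling-window, multi-horizon comparison is much broader.

```python
# Two-step cointegration sketch: long-run OLS relation, then an error-correction forecast (synthetic).
import numpy as np

rng = np.random.default_rng(10)
T = 200
common = rng.standard_normal(T).cumsum()                  # shared stochastic trend
x = common + 0.3 * rng.standard_normal(T)
y = 0.8 * common + 0.3 * rng.standard_normal(T)

# Step 1: long-run (cointegrating) regression y_t = a + b * x_t + u_t
A = np.column_stack([np.ones(T), x])
a, b = np.linalg.lstsq(A, y, rcond=None)[0]
ecm_term = y - (a + b * x)                                # equilibrium error

# Step 2: ECM for dy_t on the lagged error and lagged differences
dy, dx = np.diff(y), np.diff(x)
Z = np.column_stack([np.ones(T - 2), ecm_term[1:-1], dy[:-1], dx[:-1]])
gamma = np.linalg.lstsq(Z, dy[1:], rcond=None)[0]
forecast_dy = np.array([1.0, ecm_term[-1], dy[-1], dx[-1]]) @ gamma
print(f"error-correction coefficient: {gamma[1]:.2f}   one-step forecast of dy: {forecast_dy:.3f}")
```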

18.
This paper develops a dynamic factor model that uses euro area country-specific information on output and inflation to estimate an area-wide measure of the output gap. Our model assumes that output and inflation can be decomposed into country-specific stochastic trends and a common cyclical component. Comovement in the trends is introduced by imposing a factor structure on the shocks to the latent states. We moreover introduce flexible stochastic volatility specifications to control for heteroscedasticity in the measurement errors and innovations to the latent states. Carefully specified shrinkage priors allow for pushing the model towards a homoscedastic specification, if supported by the data. Our measure of the output gap closely tracks other commonly adopted measures, with small differences in magnitudes and timing. To assess whether the model-based output gap helps in forecasting inflation, we perform an out-of-sample forecasting exercise. The findings indicate that our approach yields superior inflation forecasts, both in terms of point and density predictions.

19.
This paper describes procedures for forecasting countries' output growth rates and medians of a set of output growth rates using Hierarchical Bayesian (HB) models. The purpose of this paper is to show how the γ‐shrinkage forecast of Zellner and Hong (1989) emerges from a hierarchical Bayesian model and to describe how the Gibbs sampler can be used to fit this model to yield possibly improved output growth rate and median output growth rate forecasts. The procedures described in this paper offer two primary methodological contributions to previous work on this topic: (1) the weights associated with widely‐used shrinkage forecasts are determined endogenously, and (2) the posterior predictive density of the future median output growth rate is obtained numerically, from which optimal point and interval forecasts are calculated. Using IMF data, we find that the HB median output growth rate forecasts outperform forecasts obtained from a variety of benchmark models. Copyright © 2001 John Wiley & Sons, Ltd.
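The gamma-shrinkage idea referenced here (Zellner and Hong, 1989) can be illustrated very simply: each country's own forecast is pulled toward the cross-country average forecast with weight gamma. In the paper gamma is determined endogenously within a hierarchical Bayesian model fit by the Gibbs sampler; in the sketch below it is simply fixed, and the data are synthetic.

```python
# Shrinking country-specific AR(1) forecasts toward the group mean forecast (synthetic data).
import numpy as np

rng = np.random.default_rng(11)
n_countries, T = 12, 30
growth = 2.0 + rng.standard_normal((n_countries, T))   # synthetic output growth rates

def ar1_forecast(y):
    """One-step forecast from an AR(1) with intercept, estimated by OLS."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    a, b = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return a + b * y[-1]

own = np.array([ar1_forecast(growth[i]) for i in range(n_countries)])
gamma = 0.5                                        # shrinkage weight (fixed here, not estimated)
shrunk = (1 - gamma) * own + gamma * own.mean()    # shrink toward the group mean forecast
print("median of shrunken country forecasts:", round(float(np.median(shrunk)), 2))
```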

20.
The leverage effect—the correlation between an asset's return and its volatility—has played a key role in forecasting and understanding volatility and risk. While it is a long-standing consensus that leverage effects exist and improve forecasts, empirical evidence puzzlingly does not show that this effect exists for many individual stocks, mischaracterizing risk and therefore leading to poor predictive performance. We examine this puzzle, with the goal of improving density forecasts, by relaxing the assumption of linearity of the leverage effect. Nonlinear generalizations of the leverage effect are proposed within the Bayesian stochastic volatility framework in order to capture flexible leverage structures. Efficient Bayesian sequential computation is developed and implemented to estimate this effect in a practical, on-line manner. Examining 615 stocks that comprise the S&P500 and Nikkei 225, we find that our proposed nonlinear leverage effect model improves predictive performance for 89% of all stocks compared to the conventional stochastic volatility model.
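For reference, the sketch below simulates the baseline the abstract generalizes: a stochastic volatility model with a linear leverage effect, i.e. correlation rho between the return shock and the next-period log-volatility innovation. Parameter values are assumed, and the paper's nonlinear specifications and sequential Bayesian estimation are not reproduced.

```python
# Simulating stochastic volatility with a (linear) leverage effect (assumed parameters).
import numpy as np

rng = np.random.default_rng(12)
T, mu, phi, sigma_eta, rho = 1000, -1.0, 0.97, 0.15, -0.6
h = np.empty(T)                                          # log-volatility
r = np.empty(T)                                          # returns
h[0] = mu
for t in range(T - 1):
    eps, z = rng.standard_normal(2)
    eta = rho * eps + np.sqrt(1 - rho ** 2) * z          # volatility shock correlated with eps
    r[t] = np.exp(h[t] / 2) * eps                        # return at time t
    h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta  # log-volatility transition
r[-1] = np.exp(h[-1] / 2) * rng.standard_normal()
print("sample corr(return_t, h_{t+1} - h_t):", round(np.corrcoef(r[:-1], np.diff(h))[0, 1], 2))
```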
