Similar Literature
1.
This paper uses a meta‐analysis to survey existing factor forecast applications for output and inflation and assesses what causes large factor models to perform better or worse at forecasting than other models. Our results suggest that factor models tend to outperform small models, whereas factor forecasts are slightly worse than pooled forecasts. Factor models deliver better predictions for US variables than for UK variables, for US output than for euro‐area output and for euro‐area inflation than for US inflation. The size of the dataset from which factors are extracted positively affects the relative factor forecast performance, whereas pre‐selecting the variables included in the dataset did not improve factor forecasts in the past. Finally, the factor estimation technique may matter as well. Copyright © 2008 John Wiley & Sons, Ltd.
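For readers unfamiliar with the diffusion-index approach these applications build on, the sketch below shows one common way such factor forecasts are constructed: extract principal-component factors from a large standardized panel and regress the future value of the target on them. It is a minimal illustration on made-up data, not the specification of any paper in the survey.

```python
# Minimal sketch of a diffusion-index (factor) forecast; the panel, target and
# two-factor choice are illustrative assumptions, not a surveyed model.
import numpy as np

def factor_forecast(panel, target, n_factors=2, horizon=1):
    """Forecast target `horizon` steps ahead using principal-component factors.

    panel  : (T, N) array of predictors
    target : (T,) series to forecast
    """
    X = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    # Principal components via SVD: factors are the leading left singular vectors.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    factors = U[:, :n_factors] * S[:n_factors]                 # (T, n_factors)
    # Regress y_{t+h} on current factors (plus an intercept).
    y_lead = target[horizon:]
    F_lag = np.column_stack([np.ones(len(y_lead)), factors[:-horizon]])
    beta, *_ = np.linalg.lstsq(F_lag, y_lead, rcond=None)
    # Forecast from the most recent factor observation.
    return np.r_[1.0, factors[-1]] @ beta

rng = np.random.default_rng(0)
panel = rng.standard_normal((200, 80))      # placeholder macro panel
target = rng.standard_normal(200)           # placeholder output/inflation series
print(factor_forecast(panel, target))
```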

2.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

3.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning‐curve context. Copyright © 1999 John Wiley & Sons, Ltd.
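As a rough illustration of the comparison described, the sketch below fits a log-linear learning curve to the early observations, projects it forward, and scores the curve against a stand-in set of "judgemental" forecasts by mean absolute percentage error (MAPE). The data and the judgemental forecasts are fabricated placeholders, not the experimental data.

```python
# Illustrative learning-curve vs judgement comparison; all numbers are made up.
import numpy as np

def fit_learning_curve(units, cost):
    # log(cost) = log(a) + b*log(units)  ->  ordinary least squares in logs
    b, log_a = np.polyfit(np.log(units), np.log(cost), 1)
    return lambda u: np.exp(log_a) * u ** b

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual)) * 100

units = np.arange(1, 21)
cost = 100 * units ** -0.3 * np.exp(np.random.default_rng(1).normal(0, 0.05, 20))
curve = fit_learning_curve(units[:10], cost[:10])   # fit on the first 10 lots only
curve_fc = curve(units[10:])                        # project the remaining lots
judgement_fc = cost[10:] * 1.05                     # stand-in for human forecasts
print(mape(cost[10:], curve_fc), mape(cost[10:], judgement_fc))
```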

4.
In order to provide short‐run forecasts of headline and core HICP inflation for France, we assess the forecasting performance of a large set of economic indicators, individually and jointly, as well as using dynamic factor models. We run out‐of‐sample forecasts implementing the Stock and Watson (1999) methodology. We find that, according to usual statistical criteria, the combination of several indicators—in particular those derived from surveys—provides better results than factor models, even after pre‐selection of the variables included in the panel. However, factors included in VAR models exhibit more stable forecasting performance over time. Results for the HICP excluding unprocessed food and energy are very encouraging. Moreover, we show that the aggregation of forecasts on subcomponents exhibits the best performance for projecting total inflation and that it is robust to data snooping. Copyright © 2007 John Wiley & Sons, Ltd.

5.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of an FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
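The Pitman-type test mentioned here can be summarized in a few lines: for paired one-step-ahead errors, equal error variances imply zero correlation between the sum and the difference of the errors. The sketch below is an illustrative implementation under that reading, with placeholder error series; it is not the authors' code.

```python
# Illustrative Pitman-type test of equal forecast accuracy for paired errors.
import numpy as np
from scipy import stats

def pitman_test(e1, e2):
    e1, e2 = np.asarray(e1), np.asarray(e2)
    d, s = e1 - e2, e1 + e2
    r = np.corrcoef(d, s)[0, 1]          # zero under equal error variances
    n = len(e1)
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return r, t, p

rng = np.random.default_rng(2)
errors_ffnn = rng.normal(0, 1.0, 60)     # stand-in one-step-ahead errors, model A
errors_arma = rng.normal(0, 1.3, 60)     # stand-in one-step-ahead errors, model B
print(pitman_test(errors_ffnn, errors_arma))
```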

6.
Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting we propose a dynamic factor model of mortality that fits and forecasts age‐specific mortality rates parsimoniously. We compare the forecasting quality of this model against the Lee–Carter model and its variants. Our results show the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit but their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be further improved upon by applying an appropriate stopping rule for the number of static and dynamic factors. Copyright © 2013 John Wiley & Sons, Ltd.
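For context, the Lee–Carter benchmark referred to here decomposes log central mortality rates as log m_{x,t} ≈ a_x + b_x k_t, typically estimated by an SVD of the centred log-rates, with the period index k_t projected as a random walk with drift. The sketch below illustrates that benchmark on a synthetic mortality surface; it is not the proposed dynamic factor model.

```python
# Lee-Carter benchmark sketch on a synthetic (ages x years) mortality surface.
import numpy as np

def lee_carter(log_m):
    """log_m: (ages, years) matrix of log central mortality rates."""
    a = log_m.mean(axis=1)                       # age pattern a_x
    centred = log_m - a[:, None]
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # normalise so sum(b_x) = 1
    k = S[0] * Vt[0] * U[:, 0].sum()             # period index k_t
    return a, b, k

def forecast_k(k, horizon):
    drift = (k[-1] - k[0]) / (len(k) - 1)        # random walk with drift
    return k[-1] + drift * np.arange(1, horizon + 1)

ages, years = 10, 40
rng = np.random.default_rng(3)
log_m = (-4 + 0.05 * np.arange(ages)[:, None]
         - 0.02 * np.arange(years)
         + rng.normal(0, 0.02, (ages, years)))
a, b, k = lee_carter(log_m)
print(forecast_k(k, 5))
```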

7.
This paper examines the relative importance of allowing for time‐varying volatility and country interactions in a forecast model of economic activity. Allowing for these issues is done by augmenting autoregressive models of growth with cross‐country weighted averages of growth and the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one‐period‐ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.

8.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combinations draw on three commonly used time‐varying coefficient autoregressive models: one with Gaussian errors, one with stochastic volatility, and one with moving‐average stochastic volatility. Both point forecasts and density forecasts suggest that models combined with equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, allowing the lag length to vary over time, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
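The two combination schemes being contrasted can be illustrated compactly: an equal-weight average versus weights proportional to the inverse of each model's recent mean squared error. The sketch below uses fabricated forecasts, and the rolling-window length is an arbitrary choice made purely for illustration.

```python
# Equal-weight vs inverse-MSE (time-varying) forecast combination sketch.
import numpy as np

def combine(forecasts, actual, window=8):
    """forecasts: (T, M) matrix of M individual forecasts; actual: (T,) outcomes."""
    T, M = forecasts.shape
    equal_w = forecasts.mean(axis=1)
    tv = np.full(T, np.nan)
    for t in range(window, T):
        err = forecasts[t - window:t] - actual[t - window:t, None]
        inv_mse = 1.0 / np.mean(err ** 2, axis=0)
        w = inv_mse / inv_mse.sum()              # weights sum to one
        tv[t] = forecasts[t] @ w
    return equal_w, tv

rng = np.random.default_rng(4)
actual = rng.normal(2, 1, 100)                                   # placeholder inflation
forecasts = actual[:, None] + rng.normal(0, [0.5, 1.0, 1.5], (100, 3))
eq, tv = combine(forecasts, actual)
print(np.nanmean((eq - actual) ** 2), np.nanmean((tv - actual) ** 2))
```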

9.
This paper examines the predictive relationship of consumption‐related and news‐related Google Trends data to changes in private consumption in the USA. The results suggest that (1) Google Trends‐augmented models provide additional information about consumption over and above survey‐based consumer sentiment indicators, (2) consumption‐related Google Trends data provide information about pre‐consumption research trends, (3) news‐related Google Trends data provide information about changes in durable goods consumption, and (4) the combination of news and consumption‐related data significantly improves forecasting models. We demonstrate that applying these insights improves forecasts of private consumption growth over forecasts that do not utilize Google Trends data and over forecasts that use Google Trends data, but do not take into account the specific ways in which it informs forecasts.

10.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time‐series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements upon standard time‐series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
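The "six factors explain 50% of the variability" statement corresponds to a standard principal-components calculation: the eigenvalues of the panel's correlation matrix give the share of total variance captured by the first k factors. The sketch below shows that calculation on a random placeholder panel of 80 series, not the UK data set used in the paper.

```python
# How many principal components are needed to reach a given variance share.
import numpy as np

def factors_for_share(panel, share=0.5):
    X = (panel - panel.mean(axis=0)) / panel.std(axis=0)
    eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]   # descending
    cum_share = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(cum_share, share) + 1)
    return k, cum_share[:k]

rng = np.random.default_rng(5)
common = rng.standard_normal((300, 6))                         # six latent factors
panel = common @ rng.standard_normal((6, 80)) + 2 * rng.standard_normal((300, 80))
print(factors_for_share(panel, 0.5))
```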

11.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term time horizons for series with no moving‐average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
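A rolling-window out-of-sample exercise of the kind described can be set up in a few lines; the sketch below re-estimates a simple ARIMA on each window and stores its 1- to 5-step-ahead forecasts. The series, window length and ARIMA order are illustrative assumptions, and the cointegrated ECM side of the comparison is omitted.

```python
# Rolling-window ARIMA forecasting sketch on a synthetic (log) exchange rate.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def rolling_arima_forecasts(y, window=120, order=(0, 1, 1), steps=5):
    forecasts = []
    for start in range(0, len(y) - window - steps + 1):
        train = y[start:start + window]
        fit = ARIMA(train, order=order).fit()
        forecasts.append(fit.forecast(steps=steps))
    return np.array(forecasts)        # one row of 1..steps-ahead forecasts per window

rng = np.random.default_rng(6)
log_fx = np.cumsum(rng.normal(0, 0.01, 200))    # stand-in I(1) exchange rate series
print(rolling_arima_forecasts(log_fx, window=120, steps=5).shape)
```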

12.
Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths as a result of extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual data and for specified (warm and cold) periods during the year in the UK. A detailed statistical analysis is carried out and a new stochastic, central mortality rate model is proposed. The new model encompasses the good features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and, for the first time, includes an exogenous temperature‐related factor. The new model is shown to provide a significantly better fit and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.

13.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rates in the Japanese government bond market. The Nelson–Siegel type models in a state‐space framework considerably outperform the benchmark simple time series forecast models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to that of the latter for all maturities examined in this study, and for longer horizons the former is still comparable with the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common problem in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
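For reference, the static Nelson–Siegel cross-section underlying the dynamic model fits yields with three factors (level, slope, curvature) loaded on known functions of maturity; for a fixed decay parameter the factors can be obtained by ordinary least squares. The sketch below uses the conventional Diebold–Li decay value and made-up yields, and does not reproduce the paper's state-space estimation.

```python
# Static Nelson-Siegel cross-sectional fit with a fixed decay parameter.
import numpy as np

def ns_loadings(maturities, lam=0.0609):
    tau = np.asarray(maturities, dtype=float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curvature = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curvature])

def fit_nelson_siegel(maturities, yields, lam=0.0609):
    X = ns_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta                                   # [level, slope, curvature]

maturities = np.array([3, 6, 12, 24, 36, 60, 84, 120])              # months (illustrative)
yields = np.array([0.10, 0.12, 0.15, 0.22, 0.30, 0.45, 0.55, 0.70]) # placeholder yields
beta = fit_nelson_siegel(maturities, yields)
print(beta, ns_loadings(maturities) @ beta)       # factors and fitted curve
```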

14.
Reliable correlation forecasts are of paramount importance in modern risk management systems. A plethora of correlation forecasting models have been proposed in the open literature, yet their impact on the accuracy of value‐at‐risk calculations has not been explicitly investigated. In this paper, traditional and modern correlation forecasting techniques are compared using standard statistical and risk management loss functions. Three portfolios consisting of stocks, bonds and currencies are considered. We find that GARCH models can better account for the correlation's dynamic structure in the stock and bond portfolios. On the other hand, simpler specifications such as the historical mean model or simple moving average models are better suited for the currency portfolio. Copyright © 2007 John Wiley & Sons, Ltd.
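The two simpler specifications favoured for the currency portfolio are easy to state: the full-sample ("historical mean") correlation and a short moving-average correlation. The sketch below computes both on synthetic return series; the GARCH-type alternatives are omitted.

```python
# Historical-mean vs simple moving-average correlation forecasts on fake returns.
import numpy as np

def historical_corr(x, y):
    return np.corrcoef(x, y)[0, 1]

def moving_average_corr(x, y, window=60):
    return np.corrcoef(x[-window:], y[-window:])[0, 1]

rng = np.random.default_rng(7)
z = rng.standard_normal(500)
eur = 0.6 * z + 0.8 * rng.standard_normal(500)    # two correlated currency returns
jpy = 0.6 * z + 0.8 * rng.standard_normal(500)
print(historical_corr(eur, jpy), moving_average_corr(eur, jpy, 60))
```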

15.
Recently, analysts' cash flow forecasts have become widely available through financial information services. Cash flow information enables practitioners to better understand the real operating performance and financial stability of a company, particularly when earnings information is noisy and of low quality. However, research suggests that analysts' cash flow forecasts are less accurate and more dispersed than earnings forecasts. We thus investigate factors influencing cash flow forecast accuracy and build a practical model to distinguish more accurate from less accurate cash flow forecasters, using past cash flow forecast accuracy and analyst characteristics. We find significant power in our cash flow forecast accuracy prediction models. We also find that analysts develop cash flow‐specific forecasting expertise and know‐how, which are distinct from those that analysts acquire from forecasting earnings. In particular, cash flow‐specific information is more useful in identifying accurate cash flow forecasters than earnings‐specific information. Copyright © 2011 John Wiley & Sons, Ltd.

16.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub‐optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out‐of‐sample forecasting performance of various linear and GARCH‐type models of volatility is compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.
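The dual evaluation idea can be illustrated with two small functions: a statistical loss (mean squared error of the variance forecast against squared returns as a volatility proxy) and a simple risk-management check (the empirical exceedance rate of a 99% value-at-risk implied by the forecast under normality). The returns, forecasts and the VaR construction below are illustrative assumptions, not the paper's setup.

```python
# Statistical vs risk-management evaluation of a volatility forecast (sketch).
import numpy as np
from scipy import stats

def mse_loss(sigma_fc, returns):
    return np.mean((returns ** 2 - sigma_fc ** 2) ** 2)

def var_exceedance_rate(sigma_fc, returns, coverage=0.99):
    var = stats.norm.ppf(1 - coverage) * sigma_fc    # one-day 99% VaR (negative number)
    return np.mean(returns < var)                    # should be close to 1 - coverage

rng = np.random.default_rng(8)
true_sigma = 0.01 * np.exp(0.3 * np.sin(np.linspace(0, 12, 1000)))
returns = rng.normal(0, true_sigma)
naive_fc = np.full(1000, true_sigma.mean())          # e.g. a constant-volatility forecast
print(mse_loss(naive_fc, returns), var_exceedance_rate(naive_fc, returns))
```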

17.
This paper examines whether the disaggregation of consumer sentiment data into its sub‐components improves the real‐time capacity to forecast GDP and consumption. A Bayesian error correction approach augmented with the consumer sentiment index and permutations of the consumer sentiment sub‐indices is used to evaluate forecasting power. The forecasts are benchmarked against both composite forecasts and forecasts from standard error correction models. Using Australian data, we find that consumer sentiment data increase the accuracy of GDP and consumption forecasts, with certain components of consumer sentiment consistently providing better forecasts than aggregate consumer sentiment data. Copyright © 2009 John Wiley & Sons, Ltd.

18.
The most up‐to‐date annual average daily traffic (AADT) is always required for transport model development and calibration. However, the current‐year AADT data are not always available. Short‐term traffic flow forecasting models can be used to predict the traffic flows for the current year. In this paper, two non‐parametric models, non‐parametric regression (NPR) and Gaussian maximum likelihood (GML), are chosen for short‐term traffic forecasting based on historical data collected for the annual traffic census (ATC) in Hong Kong. These models are adopted because they are more flexible and efficient in forecasting the daily vehicular flows at the Hong Kong ATC core stations (87 stations in total). The daily vehicular flows predicted by these models are then used to calculate the AADT of the current year, 1999. The overall prediction and comparison results show that the NPR model produces better forecasts than the GML model using the ATC data in Hong Kong. Copyright © 2006 John Wiley & Sons, Ltd.
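A non-parametric regression forecast of the kind used here can be sketched as a k-nearest-neighbour average: find the past days whose recent flow pattern most resembles today's, average what followed them, and compute the AADT as the mean of the year's daily flows. The state definition, neighbour count and synthetic flows below are illustrative choices, not the paper's calibration.

```python
# k-nearest-neighbour non-parametric regression forecast of daily traffic flow.
import numpy as np

def npr_forecast(history, k=5):
    """One-day-ahead flow forecast: average the flows that followed the k past
    two-day patterns most similar to the latest two days."""
    states = np.column_stack([history[:-2], history[1:-1]])   # (y_{t-1}, y_t) pairs
    successors = history[2:]
    current = history[-2:]
    dist = np.linalg.norm(states - current, axis=1)
    nearest = np.argsort(dist)[:k]
    return successors[nearest].mean()

rng = np.random.default_rng(9)
season = 1000 + 150 * np.sin(2 * np.pi * np.arange(365) / 7)   # weekly pattern
daily_flow = season + rng.normal(0, 50, 365)                   # synthetic daily flows
next_day = npr_forecast(daily_flow, k=5)
aadt = daily_flow.mean()            # AADT = average of the year's daily flows
print(next_day, aadt)
```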

19.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them focus on forecasting returns and evaluate performance solely on statistical criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.

20.
In this paper we assess opinion polls, prediction markets, expert opinion and statistical modelling over a large number of US elections in order to determine which perform better in terms of forecasting outcomes. In line with existing literature, we bias‐correct opinion polls. We consider accuracy, bias and precision over different time horizons before an election, and we conclude that prediction markets appear to provide the most precise forecasts and are similar in terms of bias to opinion polls. We find that our statistical model struggles to provide competitive forecasts, while expert opinion appears to be of value. Finally we note that the forecast horizon matters; whereas prediction market forecasts tend to improve the nearer an election is, opinion polls appear to perform worse, while expert opinion performs consistently throughout. We thus contribute to the growing literature comparing election forecasts of polls and prediction markets. Copyright © 2015 John Wiley & Sons, Ltd.
