Similar Literature
20 similar documents found.
1.
Forecasts from quarterly econometric models are typically revised on a monthly basis to reflect the information in current economic data. The revision process usually involves setting targets for the quarterly values of endogenous variables for which monthly observations are available and then altering the intercept terms in the quarterly forecasting model to achieve the target values. A formal statistical approach to the use of monthly data to update quarterly forecasts is described and the procedure is applied to the Michigan Quarterly Econometric Model of the US Economy. The procedure is evaluated in terms of both ex post and ex ante forecasting performance. The ex ante results for 1986 and 1987 indicate that the method is quite promising. With a few notable exceptions, the formal procedure produces forecasts of GNP growth that are very close to the published ex ante forecasts.
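As a rough illustration of the target-and-add-factor mechanism described above (not the Michigan model's actual procedure), here is a minimal Python sketch in which the intercept adjustment of a hypothetical AR(1) equation is chosen so that the quarterly forecast hits a target implied by monthly observations; all numbers are invented.

```python
import numpy as np

# Hypothetical quarterly forecasting equation: y_t = c + phi * y_{t-1} + add_t,
# where add_t is the intercept adjustment ("add factor"). Illustrative values only.
c, phi = 0.5, 0.8
y_prev = 2.0
model_forecast = c + phi * y_prev                 # pure model forecast for the quarter

# Quarterly target implied by monthly data: two months observed, the missing
# third month naively filled with the latest observation.
monthly_obs = np.array([2.1, 2.3])
target = np.append(monthly_obs, monthly_obs[-1]).mean()

# Choose the add factor so the adjusted equation reproduces the target exactly.
add_factor = target - model_forecast
adjusted_forecast = c + phi * y_prev + add_factor
print(np.isclose(adjusted_forecast, target))      # True by construction
```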

2.
The contribution of product and industry knowledge to the accuracy of sales forecasting was investigated by examining the company forecasts of a leading manufacturer and marketer of consumable products. The company forecasts of 18 products produced by a meeting of marketing, sales, and production personnel were compared with those generated by the same company personnel when denied specific product knowledge and with the forecasts of selected judgemental and statistical time series methods. Results indicated that product knowledge contributed significantly to forecast accuracy and that the forecast accuracy of company personnel who possessed industry forecasting knowledge (but not product knowledge) was not significantly different from that of the time-series-based methods. Furthermore, the company forecasts were more accurate than averages of the judgemental and statistical time series forecasts. These results point to the importance of specific product information for forecast accuracy and accordingly call into question the continuing strong emphasis on improving extrapolation techniques without considering the incorporation of non-time-series knowledge.

3.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time-series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements over standard time-series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
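A minimal sketch of the factor approach, using principal components as the factor estimator and random data standing in for the UK panel (both are assumptions for illustration, not details taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, k = 200, 80, 6                      # periods, variables, factors (80 and 6 as in the paper)

# Simulated stand-in for the large macro panel; replace with real standardized data.
X = rng.standard_normal((T, N))
X = (X - X.mean(axis=0)) / X.std(axis=0)

# Principal-components estimate of the factors: top eigenvectors of X'X.
eigval, eigvec = np.linalg.eigh(X.T @ X)  # eigenvalues in ascending order
F = X @ eigvec[:, -k:]                    # T x k estimated factor series
share = eigval[-k:].sum() / eigval.sum()  # fraction of total variance explained
print(f"{k} factors explain {share:.0%} of the variance of this (random) panel")

# Factor-augmented forecast: regress y_{t+1} on the factors at t, then predict.
y = X[:, 0]                               # illustrative target series
beta, *_ = np.linalg.lstsq(F[:-1], y[1:], rcond=None)
y_next = F[-1] @ beta                     # one-step-ahead forecast
```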

4.
The majority of model-based forecasting efforts today rely on relatively simple techniques of estimation and the subjective adjustment of the model's results to produce forecasts. Published forecasts reflect to a great extent the judgment of the forecaster rather than what the model by itself has to say about the future. This paper examines the role judgment plays in the process of producing a macroeconometric forecast. The debate over the use of adjustment constants to alter the statistical results of a model is outlined and an empirical analysis of forecasts generated by the Michigan Quarterly Econometric Model of the US economy is presented using a unique data set which isolates the role of judgment in the forecasting process.

5.
This paper finds that the yield curve forecasts real gross domestic product growth in the USA well relative to professional forecasters and time series models. Past studies offer differing conclusions concerning growth lags, structural breaks, and ultimately the ability of the yield curve to forecast economic growth; this paper finds such results to depend on the estimation and forecasting techniques employed. By allowing various interest rates to act as explanatory variables and by varying the window sizes for the out-of-sample forecasts, significant forecasts can be obtained from many window sizes. These seemingly good forecasts may suffer from problems such as persistent forecasting errors, but statistical learning algorithms can cure such issues to some extent. The overall result suggests that, by carefully choosing the window sizes, interest rate data, and learning algorithms, many outperforming forecasts can be produced for all lags from one quarter to three years, although some may be worse than others owing to the irreducible noise in the data.
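A minimal sketch of the rolling-window exercise the abstract describes, with simulated series standing in for the term spread and GDP growth; the lag, window size and data-generating process are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 160                                            # quarters
spread = rng.standard_normal(T)                    # stand-in term spread
growth = 0.5 * np.roll(spread, 4) + rng.standard_normal(T)
growth[:4] = 0.0                                   # discard the wrapped-around values

h, window = 4, 40                                  # forecast lag and rolling window size
errors = []
for t in range(window + h, T):
    # Fit growth_s = b0 + b1 * spread_{s-h} on the most recent `window` quarters.
    X = np.column_stack([np.ones(window), spread[t - window - h:t - h]])
    b, *_ = np.linalg.lstsq(X, growth[t - window:t], rcond=None)
    errors.append(growth[t] - (b[0] + b[1] * spread[t - h]))

print("out-of-sample RMSE:", np.sqrt(np.mean(np.square(errors))))
```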

6.
The forecasting capabilities of feed-forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non-Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with that of an alternative model, Pitman's test is employed to ascertain whether one model forecasts significantly better than another when generating one-step-ahead forecasts. Moreover, the residual-fit spread plot is utilized in a novel fashion in this paper to visually compare the out-of-sample forecasts of two alternative forecasting models. Finally, the forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
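A sketch of a one-step-ahead FFNN forecasting setup in the spirit of the paper, using scikit-learn's MLPRegressor and a synthetic cyclical series standing in for the lynx data; the lag order and network size are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Stand-in for the log-transformed Canadian lynx series (replace with the real data).
t = np.arange(200)
series = np.sin(2 * np.pi * t / 10) + 0.2 * rng.standard_normal(200)

p = 2                                       # lag order: AR(2)-type inputs, common for lynx
X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
y = series[p:]

split = 150
net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
net.fit(X[:split], y[:split])

preds = net.predict(X[split:])              # one-step-ahead out-of-sample forecasts
rmse = np.sqrt(np.mean((y[split:] - preds) ** 2))
print("one-step-ahead RMSE:", rmse)
```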

7.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality-forecasting models be associated with real-world trends in health-related variables? Does inclusion of health-related factors in models improve forecasts? Do the resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle-related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable to, or better than, those of benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

8.
The use of expert judgement is an important part of demographic forecasting. However, because judgement enters into the forecasting process in an informal way, it has been very difficult to assess its role relative to the analysis of past data. The use of targets in demographic forecasts permits us to embed the subjective forecasting process into a simple time-series regression model, in which expert judgement is incorporated via mixed estimation. The strength of expert judgement is defined, and estimated using the official forecasts of cause-specific mortality in the United States. We show that the weight given to judgement varies in an improbable manner by age. Overall, the weight given to judgement appears too high. An alternative approach to combining expert judgement and past data is suggested.
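The mixed-estimation device the abstract refers to can be sketched in the classic Theil–Goldberger form: the expert target enters the regression as a pseudo-observation, and its assumed standard error controls the weight given to judgement. All numbers are illustrative, not the official mortality forecasts.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 30
X = np.column_stack([np.ones(T), np.arange(T)])   # time trend in a mortality-decline regression
beta_true = np.array([5.0, -0.05])
y = X @ beta_true + 0.1 * rng.standard_normal(T)

# Expert judgement expressed as a prior on the trend coefficient: slope = -0.08,
# with standard error tau. A smaller tau puts heavier weight on judgement.
R = np.array([[0.0, 1.0]])
r = np.array([-0.08])
sigma, tau = 0.1, 0.02

# Theil-Goldberger mixed estimation: append the prior as pseudo-observations,
# each row weighted by the inverse of its standard error, then run least squares.
X_star = np.vstack([X / sigma, R / tau])
y_star = np.concatenate([y / sigma, r / tau])
beta_mixed, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print("OLS slope:", beta_ols[1], "mixed slope:", beta_mixed[1])
```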

9.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent the two opposing forecasting philosophies: the DSGE model, on the one hand, has a strong theoretical economic background; the factor model, on the other, is mainly data-driven. We show that incorporating a large information set using factor analysis can indeed improve the short-horizon predictive ability, as claimed by many researchers. The micro-founded DSGE model can provide reasonable forecasts for US inflation, especially as the forecast horizon grows. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short-horizon forecasting and structural models should be used in long-horizon forecasting. Our paper compares both state-of-the-art data-driven and theory-based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

10.
The purpose of this study is, first, to demonstrate how multivariate forecasting models can be effectively used to generate high-performance forecasts for typical business applications. Second, this study compares the forecasts generated by a simultaneous transfer function (STF) model and a white noise regression model with those of a univariate ARIMA model. The accuracy of these forecasting models is judged using their residual variances and forecasting errors in a post-sample period. It is found that ignoring residual serial correlation can greatly degrade the forecasting performance of a multi-variable model and, in some situations, cause a multi-variable model to perform worse than a univariate ARIMA model. This paper also demonstrates how a forecaster can use an STF model to compute both multi-step-ahead forecasts and their variances easily.
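The central point, that ignoring residual serial correlation degrades a multi-variable model's forecasts, can be reproduced in a small simulation comparing plain OLS with the same regression under an AR(1) noise model (a simplified stand-in for a full STF specification), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(4)
T = 300
x = rng.standard_normal(T)
u = np.zeros(T)
for t in range(1, T):                      # AR(1) residual: the serial correlation at issue
    u[t] = 0.8 * u[t - 1] + rng.standard_normal()
y = 1.0 + 2.0 * x + u

split = 250
# (i) Regression that ignores the residual serial correlation.
ols = sm.OLS(y[:split], sm.add_constant(x[:split])).fit()
ols_fc = ols.predict(sm.add_constant(x[split:]))

# (ii) Same regression with AR(1) errors, in the spirit of a transfer function model.
stf = SARIMAX(y[:split], exog=x[:split], order=(1, 0, 0), trend="c").fit(disp=False)
stf_fc = stf.forecast(steps=T - split, exog=x[split:])

for name, fc in [("OLS", ols_fc), ("AR(1)-errors", stf_fc)]:
    print(name, "post-sample RMSE:", np.sqrt(np.mean((y[split:] - fc) ** 2)))
```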

11.
This paper focuses on the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models. We compare the forecasts obtained directly from the aggregated series based on its univariate model with the aggregation of the forecasts obtained for each component of the aggregate. Within this framework (first obtain the forecasts for the component series and then aggregate the forecasts), we try two different approaches: (i) generate forecasts from the multivariate dynamic factor model and (ii) generate the forecasts from univariate models for each component of the aggregate. In this regard, we provide analytical conditions for the equality of forecasts. The results are applied to quarterly gross domestic product (GDP) data of several European countries of the euro area and to their aggregated GDP. This will be compared to the prediction obtained directly from modeling and forecasting the aggregate GDP of these European countries. In particular, we would like to check whether long-run relationships between the levels of the components are useful for improving the forecasting accuracy of the aggregate growth rate. We will make forecasts at the country level and then pool them to obtain the forecast of the aggregate. The empirical analysis suggests that forecasts built by aggregating the country-specific models are more accurate than forecasts constructed using the aggregated data. Copyright © 2014 John Wiley & Sons, Ltd.
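A minimal sketch of the direct-versus-indirect comparison, with univariate ARIMA models applied to simulated I(1) components; the three components and the (1,1,0) order are assumptions rather than the paper's specification:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
T, n = 120, 3                               # quarters, number of "countries"
components = np.cumsum(rng.standard_normal((T, n)), axis=0) + 100  # I(1) GDP levels
aggregate = components.sum(axis=1)

split, h = 100, 8
# (i) Direct approach: model the aggregate series itself.
direct = ARIMA(aggregate[:split], order=(1, 1, 0)).fit().forecast(steps=h)

# (ii) Indirect approach: model each component, then aggregate the forecasts.
indirect = sum(
    ARIMA(components[:split, j], order=(1, 1, 0)).fit().forecast(steps=h)
    for j in range(n)
)

actual = aggregate[split:split + h]
print("direct RMSE:  ", np.sqrt(np.mean((actual - direct) ** 2)))
print("indirect RMSE:", np.sqrt(np.mean((actual - indirect) ** 2)))
```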

12.
Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting we propose a dynamic factor model of mortality that fits and forecasts age-specific mortality rates parsimoniously. We compare the forecasting quality of this model against the Lee–Carter model and its variants. Our results show the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit but their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be further improved upon by applying an appropriate stopping rule for the number of static and dynamic factors. Copyright © 2013 John Wiley & Sons, Ltd.
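For context, the Lee–Carter benchmark mentioned above comes down to an SVD of the centred log-mortality surface plus a random walk with drift for the time index; a minimal sketch on simulated data standing in for a real mortality table:

```python
import numpy as np

rng = np.random.default_rng(6)
ages, years = 20, 50
# Stand-in log-mortality surface with a downward time trend (replace with real data).
kt_true = -0.3 * np.arange(years) + rng.standard_normal(years)
log_m = -4 + 0.05 * np.arange(ages)[:, None] + 0.02 * kt_true[None, :]
log_m += 0.01 * rng.standard_normal((ages, years))

# Lee-Carter: log m(x,t) = a_x + b_x * k_t, estimated by SVD of the centred surface.
a_x = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] * s[0]
k_t = Vt[0]

# Forecast k_t with a random walk with drift, the standard Lee-Carter time model.
drift = np.diff(k_t).mean()
h = 10
k_fc = k_t[-1] + drift * np.arange(1, h + 1)
log_m_fc = a_x[:, None] + b_x[:, None] * k_fc[None, :]   # ages x h forecast surface
```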

13.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility are compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.
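The paper's two evaluation perspectives can be sketched side by side: a GARCH(1,1) one-step variance forecast judged once by MSE against squared returns and once by the coverage of the 1% value-at-risk it implies. The parameters, simulated returns and normal quantile below are assumptions, not the paper's models:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1000
omega, alpha, beta = 0.05, 0.10, 0.85       # assumed GARCH(1,1) parameters
sig2 = np.empty(T)
r = np.empty(T)
sig2[0] = omega / (1 - alpha - beta)        # unconditional variance as the start value
r[0] = np.sqrt(sig2[0]) * rng.standard_normal()
for t in range(1, T):                       # simulate returns with volatility clustering
    sig2[t] = omega + alpha * r[t - 1] ** 2 + beta * sig2[t - 1]
    r[t] = np.sqrt(sig2[t]) * rng.standard_normal()

# One-step-ahead variance forecasts from the same recursion.
sig2_fc = omega + alpha * r[:-1] ** 2 + beta * sig2[:-1]

# (i) Statistical loss: MSE against squared returns as the volatility proxy.
mse = np.mean((r[1:] ** 2 - sig2_fc) ** 2)

# (ii) Risk-management loss: coverage of the 1% one-day VaR implied by the forecast.
var_1pct = -2.326 * np.sqrt(sig2_fc)        # normal 1% quantile; an assumption
violations = np.mean(r[1:] < var_1pct)
print(f"MSE: {mse:.4f}   VaR violation rate: {violations:.3f} (nominal 0.010)")
```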

14.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We approach the usual caveats of probabilistic forecast studies, namely small samples, limited models, and nonholistic validations, by performing a comprehensive comparison of 15 predictive schemes over a period of more than 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk-neutral densities outperform historical-based predictions in terms of information content. We find that the variance gamma model generates the highest out-of-sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH-based GJR-FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, and the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.
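The sort of statistical-consistency checks applied to predictive densities can be sketched with the probability integral transform (PIT) and the average log score, here for two toy Gaussian density forecasts rather than the paper's option-implied densities:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n = 500
outcomes = rng.standard_normal(n)            # realized values

# Two candidate predictive densities (illustrative, not the paper's models):
# a well-calibrated N(0,1) and a too-narrow N(0, 0.8^2).
for name, scale in [("calibrated", 1.0), ("too narrow", 0.8)]:
    log_score = stats.norm.logpdf(outcomes, scale=scale).mean()  # average log likelihood
    pit = stats.norm.cdf(outcomes, scale=scale)                  # PIT values
    ks = stats.kstest(pit, "uniform")        # consistency: PITs should be U(0,1)
    print(f"{name}: log score {log_score:.3f}, KS p-value {ks.pvalue:.3f}")
```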

15.
This paper estimates the ARIMA processes for the observed and expected price level corresponding to the three-level adaptive expectations model proposed by Jacobs and Jones (1980). These univariate processes are then compared with the best-fit ARIMA model. The results indicate that the best-fit model for the observed price level is a restricted version of the two-level adaptive learning process specified in terms of prices, suggesting a simple adaptive rule in the inflation rate. A comparison of the time-series forecasts from the best-fit model with the mean responses to the ASA-NBER survey shows no significant difference in their accuracy. The time-series forecasts are, however, conditionally efficient. The best-fit ARIMA model for expected prices measured by the ASA-NBER consensus forecasts does not correspond to any version of the Jacobs and Jones model.
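Conditional efficiency of one forecast with respect to another is commonly checked with a forecast-encompassing regression; a minimal sketch with simulated stand-ins for the ARIMA and ASA-NBER survey forecasts:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 80
actual = rng.standard_normal(n)
f_arima = actual + 0.5 * rng.standard_normal(n)     # stand-in time-series forecasts
f_survey = actual + 0.5 * rng.standard_normal(n)    # stand-in survey forecasts

# Encompassing regression: regress the outcome on both forecasts. If the survey
# forecast receives no significant weight, the time-series forecast is
# conditionally efficient with respect to it.
X = sm.add_constant(np.column_stack([f_arima, f_survey]))
res = sm.OLS(actual, X).fit()
print(res.params)          # weights on [const, ARIMA forecast, survey forecast]
print(res.pvalues)
```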

16.
In this paper we assess opinion polls, prediction markets, expert opinion and statistical modelling over a large number of US elections in order to determine which perform better in terms of forecasting outcomes. In line with the existing literature, we bias-correct opinion polls. We consider accuracy, bias and precision over different time horizons before an election, and we conclude that prediction markets appear to provide the most precise forecasts and are similar in terms of bias to opinion polls. We find that our statistical model struggles to provide competitive forecasts, while expert opinion appears to be of value. Finally, we note that the forecast horizon matters: whereas prediction market forecasts tend to improve the nearer an election is, opinion polls appear to perform worse, while expert opinion performs consistently throughout. We thus contribute to the growing literature comparing election forecasts from polls and prediction markets. Copyright © 2015 John Wiley & Sons, Ltd.
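Poll bias correction of the kind mentioned above is often implemented as a regression of past outcomes on past polls; a toy sketch in which the two-point poll bias and all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(10)
n = 60
outcome = 50 + 5 * rng.standard_normal(n)            # vote shares in past elections
poll = outcome + 2.0 + 1.5 * rng.standard_normal(n)  # polls overstate by ~2 points

# Bias correction: regress past outcomes on past polls, apply to a new poll.
b, a = np.polyfit(poll, outcome, 1)                  # slope, intercept
new_poll = 48.0
corrected = a + b * new_poll
print("raw poll:", new_poll, "bias-corrected forecast:", round(corrected, 2))
```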

17.
A new method is proposed for forecasting electricity load-duration curves. The approach first forecasts the load curve and then uses the resulting predictive densities to forecast the load-duration curve. A virtue of this procedure is that both load curves and load-duration curves can be predicted using the same model, and confidence intervals can be generated for both predictions. The procedure is applied to the problem of predicting New Zealand electricity consumption. A structural time-series model is used to forecast the load curve based on half-hourly data. The model is tailored to handle effects such as daylight savings, holidays and weekends, as well as trend, annual, weekly and daily cycles. Time-series methods, including Kalman filtering, smoothing and prediction, are used to fit the model and to achieve the desired forecasts of the load-duration curve.
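The key device, deriving the load-duration curve from the predictive density of the load curve, amounts to sorting each simulated load path in descending order; a minimal sketch with a synthetic half-hourly predictive density standing in for the fitted structural time-series model:

```python
import numpy as np

rng = np.random.default_rng(11)
periods = 48 * 7                             # one week of half-hourly loads
n_sims = 1000

# Stand-in predictive density of the load curve: a daily cycle plus noise.
t = np.arange(periods)
base = 1000 + 300 * np.sin(2 * np.pi * t / 48)
sims = base + 50 * rng.standard_normal((n_sims, periods))   # simulated load paths

# Load-duration curve: each simulated load curve sorted in descending order.
ldc_sims = -np.sort(-sims, axis=1)

# Point forecast and a 95% interval for the load-duration curve, per duration rank.
ldc_mean = ldc_sims.mean(axis=0)
ldc_lo, ldc_hi = np.percentile(ldc_sims, [2.5, 97.5], axis=0)
print("peak-load forecast:", round(ldc_mean[0], 1),
      "interval:", (round(ldc_lo[0], 1), round(ldc_hi[0], 1)))
```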

18.
We evaluate residual projection strategies in the context of a large-scale macro model of the euro area and smaller benchmark time-series models. The exercises attempt to measure the accuracy of model-based forecasts simulated both out-of-sample and in-sample. Both exercises incorporate alternative residual-projection methods, to assess the importance for forecast accuracy of unaccounted-for breaks and of off-model judgement. The conclusions reached are that simple mechanical residual adjustments have a significant impact on forecasting accuracy irrespective of the model in use, likely due to the presence of breaks in trends in the data. The testing procedure and conclusions are applicable to a wide class of models and are of general interest. Copyright © 2010 John Wiley & Sons, Ltd.
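A minimal sketch of one mechanical residual-projection scheme: a trend model fitted before an unmodelled level break, with the average of the most recent residuals carried into the forecast as a constant adjustment (the break and all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(12)
T = 100
y = 0.5 * np.arange(T) + rng.standard_normal(T)
y[60:] += 3.0                                 # an unmodelled break in the level

# Linear trend fitted on the pre-break sample only, so post-break residuals drift.
coeffs = np.polyfit(np.arange(60), y[:60], 1)
residuals = y - np.polyval(coeffs, np.arange(T))

h = np.arange(T, T + 8)
raw_fc = np.polyval(coeffs, h)                # model forecast, ignores the break
adjusted_fc = raw_fc + residuals[-4:].mean()  # project the average recent residual
print("raw:", raw_fc[:3].round(2), "adjusted:", adjusted_fc[:3].round(2))
```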

19.
Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series, vector autoregressions and random walks are used as the comparison criteria against structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than the univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive: in some cases, VAR benchmarks prove more reliable than RSEM income forecasts; in others, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.

20.
We consider a forecasting problem that arises when an intervention is expected to occur in an economic system during the forecast horizon. The time series model employed is seen as a statistical device that serves to capture the empirical regularities of the observed data on the variables of the system without relying on a particular theoretical structure. Either the deterministic or the stochastic structure of a vector autoregressive error correction model of the system is assumed to be affected by the intervention. The information about the intervention effect is provided solely by linear restrictions imposed on the future values of the variables involved. Formulas for restricted forecasts with intervention effects, and for their mean squared errors, are derived as a particular case of Catlin's static updating theorem. An empirical illustration uses Mexican macroeconomic data on five variables, and the restricted forecasts consider targets for the years 2011–2014. Copyright © 2013 John Wiley & Sons, Ltd.
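The restricted-forecast update has a closed form: given unrestricted forecasts y_hat with MSE matrix Sigma and restrictions R y = r on the future values, the updated forecast is y* = y_hat + Sigma R' (R Sigma R')^{-1} (r - R y_hat), with MSE matrix Sigma - Sigma R' (R Sigma R')^{-1} R Sigma. A sketch with invented numbers; the covariance and the average-growth target are illustrative, not the Mexican application:

```python
import numpy as np

y_hat = np.array([2.0, 2.2, 2.4, 2.6])      # unrestricted 4-step forecasts (illustrative)
# AR(1)-style forecast-error covariance matrix, an assumption for the example.
Sigma = 0.25 * (0.8 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4))))

R = np.array([[0.25, 0.25, 0.25, 0.25]])    # restriction: average growth over the horizon
r = np.array([3.0])                         # target, e.g. 3% average growth

# Conditional-expectation update (a case of static updating with linear restrictions).
G = Sigma @ R.T @ np.linalg.inv(R @ Sigma @ R.T)
y_star = y_hat + G @ (r - R @ y_hat)
Sigma_star = Sigma - G @ R @ Sigma          # MSE matrix of the restricted forecasts

print("restricted forecasts:", y_star.round(3))
print("restriction satisfied:", np.isclose(R @ y_star, r).all())
```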
