Similar Literature
20 similar documents found.
1.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using (possibly misspecified) models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE-dominate SIC combination forecasts less than 25% of the time in most cases, while other 'standard' combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real-time forecasts of the variables, and it is shown via a series of experiments that SIC, t-statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE-dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
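A minimal sketch of the model-selection idea described above (not the authors' exact procedure): fit a few candidate AR models to the same series, forecast with the SIC-selected model, and compare with a simple average of the candidate forecasts. The AR orders, simulated series, and helper functions are illustrative assumptions.

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns coefficients, residual variance, sample size."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - k:len(y) - k] for k in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return beta, resid @ resid / len(Y), len(Y)

def sic(sigma2, n, k):
    """Schwarz Information Criterion for a Gaussian model with k parameters."""
    return n * np.log(sigma2) + k * np.log(n)

def one_step(y, beta, p):
    """One-step-ahead forecast from the fitted AR(p)."""
    x = np.concatenate(([1.0], y[-1:-p - 1:-1]))
    return float(x @ beta)

rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(1, 200):                       # simulate an AR(1) data-generating process
    y[t] = 0.6 * y[t - 1] + rng.normal()

forecasts, sics = {}, {}
for p in (1, 2, 4):                           # candidate (possibly misspecified) AR orders
    beta, sigma2, n = fit_ar(y, p)
    forecasts[p] = one_step(y, beta, p)
    sics[p] = sic(sigma2, n, p + 1)

best_p = min(sics, key=sics.get)              # SIC-selected model
print(f"SIC-selected AR({best_p}) forecast: {forecasts[best_p]:.3f}")
print(f"Simple average of candidate forecasts: {np.mean(list(forecasts.values())):.3f}")
```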

2.
3.
The effect of an additive outlier upon the accuracy of forecasts derived from extrapolative methods is investigated. It is demonstrated that an outlier affects not only the accuracy of the forecasts at the time of occurrence but also subsequent forecasts. Methods to adjust for additive outliers are discussed. The results of the paper are illustrated with two examples.
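A minimal sketch of the phenomenon described above, under an assumed AR(1) data-generating process (the paper's methods and examples differ): an additive outlier placed near the end of the sample distorts both the estimated coefficient and the forecast origin, so forecasts made after the outlier date are also degraded.

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 120, 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

y_out = y.copy()
y_out[-3] += 8.0                      # additive outlier three periods before the forecast origin

def ar1_fit_forecast(series, horizon=5):
    """Estimate an AR(1) slope by OLS through the origin and iterate forecasts."""
    x, ynext = series[:-1], series[1:]
    phi_hat = (x @ ynext) / (x @ x)
    f, last = [], series[-1]
    for _ in range(horizon):
        last = phi_hat * last
        f.append(last)
    return phi_hat, np.array(f)

for label, s in [("clean", y), ("with outlier", y_out)]:
    phi_hat, f = ar1_fit_forecast(s)
    print(f"{label:13s} phi_hat={phi_hat:.3f} forecasts={np.round(f, 2)}")
```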

4.
Recent studies have shown that composite forecasting produces superior forecasts when compared to individual forecasts. This paper extends the existing literature by employing linear constraints and robust regression techniques in composite model building. Security analysts' forecasts may be improved when combined with time series forecasts for a diversified sample of 261 firms with a 1980-1982 post-sample estimation period. The mean square error of analyst forecasts may be reduced by combining analyst and univariate time series model forecasts in constrained and unconstrained ordinary least squares regression models. These reductions are very interesting given that the univariate time series model forecasts do not substantially deviate from those produced by ARIMA (0,1,1) processes. Moreover, security analysts' forecast errors may be significantly reduced when constrained and unconstrained robust regression analyses are employed.
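A minimal sketch of the combination regressions described above, on simulated data rather than the paper's 261-firm sample: an unconstrained OLS combination of analyst and time-series forecasts, and a constrained version whose weights are forced to sum to one with no intercept.

```python
import numpy as np

rng = np.random.default_rng(2)
actual = rng.normal(10, 2, 60)
analyst = actual + rng.normal(0.5, 1.0, 60)     # biased but informative analyst forecasts
arima = actual + rng.normal(0.0, 1.5, 60)       # noisier mechanical time-series forecasts

# Unconstrained OLS: actual = a + b1*analyst + b2*arima
X = np.column_stack([np.ones(60), analyst, arima])
beta, *_ = np.linalg.lstsq(X, actual, rcond=None)

# Constrained combination (weights sum to one, no intercept):
# actual - arima = w * (analyst - arima), so w solves a single-regressor OLS
d = analyst - arima
w = (d @ (actual - arima)) / (d @ d)
combo_c = w * analyst + (1 - w) * arima

mse = lambda f: np.mean((actual - f) ** 2)
print("MSE analyst           :", round(mse(analyst), 3))
print("MSE OLS combination   :", round(mse(X @ beta), 3))
print(f"MSE constrained (w={w:.2f}):", round(mse(combo_c), 3))
```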

5.
In this paper we compare the out-of-sample forecasts from four alternative interest rate models based on expanding information sets. The random walk model is the most restrictive. The univariate time series model allows for a richer dynamic pattern and more conditioning information on own rates. The multivariate time series model permits a flexible dynamic pattern with own- and cross-series information. Finally, the forecasts from the MPS econometric model depend on the full model structure and information set. In theory, more information is preferred to less. In practice, complicated misspecified models can perform much worse than simple (also probably misspecified) models. For forecasts evaluated over the volatile 1970s, the multivariate time series model forecasts are considerably better than those from simpler models which use less conditioning information, as well as forecasts from the MPS model, which uses substantially more conditioning information but also imposes 'structural' economic restrictions.

6.
We compare models for forecasting growth and inflation in the enlarged euro area. Forecasts are built from univariate autoregressive and single-equation models. The analysis is undertaken for both individual countries and EU aggregate variables. Aggregate forecasts are constructed both by employing aggregate variables and by aggregating country-specific forecasts. Using financial variables for country-specific forecasts tends to add little to the predictive ability of a simple AR model. However, they do help to predict EU aggregates. Furthermore, forecasts from pooling individual country models usually outperform those of the aggregate itself, particularly for the EU25 grouping. This is particularly interesting from the perspective of the European Central Bank, which requires forecasts of economic activity and inflation to formulate appropriate economic policy across the enlarged group. Copyright © 2008 John Wiley & Sons, Ltd.
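A minimal sketch of the aggregation comparison described above, on synthetic data: forecast an "EU" aggregate directly with one AR(1) model, and alternatively aggregate country-level AR(1) forecasts using hypothetical GDP weights; the paper evaluates which route forecasts better.

```python
import numpy as np

rng = np.random.default_rng(8)
n_countries, T = 5, 80
gdp_w = rng.dirichlet(np.ones(n_countries))          # hypothetical GDP weights
x = np.zeros((n_countries, T))
for i in range(n_countries):
    phi = rng.uniform(0.3, 0.8)                      # country-specific persistence
    for t in range(1, T):
        x[i, t] = phi * x[i, t - 1] + rng.normal()
agg = gdp_w @ x                                      # aggregate series

def ar1_forecast(series):
    """One-step AR(1) forecast with the slope estimated by OLS through the origin."""
    a, b = series[:-1], series[1:]
    return (a @ b) / (a @ a) * series[-1]

direct = ar1_forecast(agg)                                                   # model the aggregate itself
pooled = gdp_w @ np.array([ar1_forecast(x[i]) for i in range(n_countries)])  # aggregate country forecasts
print(f"direct aggregate forecast {direct:.3f} vs pooled country forecasts {pooled:.3f}")
```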

7.
This paper examines the relative importance of allowing for time-varying volatility and country interactions in a forecast model of economic activity. Allowing for these issues is done by augmenting autoregressive models of growth with cross-country weighted averages of growth and the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one-period-ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.
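A minimal sketch of the two ingredients named above, on simulated data with assumed parameter values: a growth equation augmented with a trade-weighted average of foreign growth, and a GARCH(1,1) recursion for the error variance that would feed a density forecast. Parameters are taken as known here for illustration; in practice they would be estimated.

```python
import numpy as np

rng = np.random.default_rng(7)
T, n_foreign = 160, 3
weights = np.array([0.5, 0.3, 0.2])                  # hypothetical trade weights
foreign = rng.normal(2.0, 1.0, size=(T, n_foreign))  # foreign growth rates
gstar = foreign @ weights                            # cross-country weighted average

# Simulate domestic growth with GARCH(1,1) errors
omega, a, b = 0.1, 0.1, 0.8
g = np.zeros(T)
h = np.full(T, omega / (1 - a - b))                  # start at the unconditional variance
eps = np.zeros(T)
for t in range(1, T):
    h[t] = omega + a * eps[t - 1] ** 2 + b * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.normal()
    g[t] = 0.5 + 0.4 * g[t - 1] + 0.3 * gstar[t - 1] + eps[t]

# One-step-ahead point forecast and conditional variance (using the known simulation parameters)
g_fc = 0.5 + 0.4 * g[-1] + 0.3 * gstar[-1]
h_fc = omega + a * eps[-1] ** 2 + b * h[-1]
print(f"point forecast {g_fc:.2f}, forecast standard deviation {np.sqrt(h_fc):.2f}")
```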

8.
Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series, vector autoregressions and random walks are used as the comparison criteria against structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than those simulations associated with univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive. In some cases, VAR benchmarks prove more reliable than RSEM income forecasts. In other cases, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.

9.
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
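A minimal sketch of one alternative mentioned above, Bayesian model averaging via an approximate posterior weight proportional to exp(-BIC/2) for each candidate model; the BIC values and forecasts below are hypothetical numbers, not taken from the paper.

```python
import numpy as np

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values."""
    b = np.asarray(bics, dtype=float)
    w = np.exp(-0.5 * (b - b.min()))   # subtract the minimum for numerical stability
    return w / w.sum()

# Hypothetical BICs and one-step forecasts from three candidate models
bics = [310.2, 312.8, 318.5]
forecasts = np.array([1.9, 2.4, 2.1])

w = bic_weights(bics)
print("model weights:", np.round(w, 3))
print("model-averaged forecast:", round(float(w @ forecasts), 3))
```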

10.
This paper is the first to utilize age-structured human capital data for economic growth forecasting. We concentrate on pooled cross-country data of 65 countries over six 5-year periods (1970–2000) and consider specifications chosen by model selection criteria, Bayesian model averaging methodologies based on in-sample and out-of-sample goodness of fit, and adaptive regression by mixing. The results indicate that forecast averaging and exploiting the demographic dimension of education data improve economic growth forecasts systematically. In particular, the results are very promising for improving economic growth predictions in developing countries. Copyright © 2009 John Wiley & Sons, Ltd.

11.
This article applies the Bayesian Vector Auto-Regressive (BVAR) model to key economic aggregates of the EU-7, consisting of the former narrow-band ERM members plus Austria, and the EU-14. This model appears to be useful as an additional forecasting tool besides structural macroeconomic models, as is shown both by absolute forecasting performance and by a comparison of ex-post BVAR forecasts with forecasts by the OECD. A comparison of the aggregate models to single-country models reveals that pooling has a strong impact on forecast errors. If forecast errors are interpreted as shocks, shocks appear to be, at least in part, asymmetric, or countries react differently to shocks. © 1998 John Wiley & Sons, Ltd.

12.
This study uses Bayesian vector autoregressive models to examine the usefulness of survey data on households' buying attitudes for homes in predicting sales of homes. We find a negligible deterioration in the accuracy of forecasts of home sales when buying attitudes are dropped from a model that includes the price of homes, the mortgage rate, real personal disposable income, and the unemployment rate. This suggests that buying attitudes do not add much to the information contained in these variables. We also find that forecasts from the model that includes both buying attitudes and the aforementioned variables are similar to those generated from a model that excludes the survey data but contains the other variables. Additionally, the variance decompositions suggest that the gain from including the survey data in the model that already contains other economic variables is small.

13.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality-forecasting models be associated with real-world trends in health-related variables? Does inclusion of health-related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle-related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable to or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

14.
Methods of time series forecasting are proposed which can be applied automatically. However, they are not rote formulae, since they are based on a flexible philosophy which can provide several models for consideration. In addition, it provides diverse diagnostics for qualitatively and quantitatively estimating how well one can forecast a series. The models considered are called ARARMA models (or ARAR models) because the model fitted to a long memory time series (t) is based on sophisticated time series analysis of AR (or ARMA) schemes (short memory models) fitted to residuals Y(t) obtained by a parsimonious 'best lag' non-stationary autoregression. Both long-range and short-range forecasts are provided by an ARARMA model. Section 1 explains the philosophy of our approach to time series model identification. Sections 2 and 3 attempt to relate our approach to some standard approaches to forecasting; exponential smoothing methods are developed from the point of view of prediction theory (section 2) and extended (section 3). ARARMA models are introduced (section 4). Methods of ARARMA model fitting are outlined (sections 5 and 6). Since 'the proof of the pudding is in the eating', the methods proposed are illustrated (section 7) using the classic example of international airline passengers.
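A minimal sketch of the two-stage idea described above (a simplification, not the full ARARMA methodology): a parsimonious one-parameter "best lag" autoregression removes the long-memory component, a short-memory AR(1) is fitted to the residuals, and the one-step forecast reverses the transformation. The simulated series and the lag search range are illustrative assumptions.

```python
import numpy as np

def best_lag_filter(z, max_lag=12):
    """Choose the lag tau and coefficient phi minimising the residual variance of
    z(t) - phi * z(t - tau); phi may be near or above one for non-stationary series."""
    best = None
    for tau in range(1, max_lag + 1):
        x, y = z[:-tau], z[tau:]
        phi = (x @ y) / (x @ x)
        rss = np.sum((y - phi * x) ** 2)
        if best is None or rss < best[0]:
            best = (rss, tau, phi)
    _, tau, phi = best
    return tau, phi, z[tau:] - phi * z[:-tau]

def fit_ar1(resid):
    """AR(1) slope for the short-memory residual series (OLS through the origin)."""
    x, y = resid[:-1], resid[1:]
    return (x @ y) / (x @ x)

rng = np.random.default_rng(3)
z = np.cumsum(rng.normal(size=300)) + 5.0           # a long-memory-like (random walk) series

tau, phi, resid = best_lag_filter(z)
rho = fit_ar1(resid)

# One-step forecast: residual forecast plus the reversed "best lag" filter
resid_fc = rho * resid[-1]
z_fc = resid_fc + phi * z[-tau]
print(f"best lag={tau}, phi={phi:.3f}, residual AR(1) rho={rho:.3f}, forecast={z_fc:.2f}")
```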

15.
This paper examines a strategy for structuring one type of domain knowledge for use in extrapolation. It does so by representing information about causality and using this domain knowledge to select and combine forecasts. We use five categories to express causal impacts upon trends: growth, decay, supporting, opposing, and regressing. An identification of causal forces aided in the determination of weights for combining extrapolation forecasts. These weights improved average ex ante forecast accuracy when tested on 104 annual economic and demographic time series. Gains in accuracy were greatest when (1) the causal forces were clearly specified and (2) stronger causal effects were expected, as in longer-range forecasts. One rule suggested by this analysis was: 'Do not extrapolate trends if they are contrary to causal forces.' We tested this rule by comparing forecasts from a method that implicitly assumes supporting trends (Holt's exponential smoothing) with forecasts from the random walk. Use of the rule improved accuracy for 20 series where the trends were contrary; the MdAPE (Median Absolute Percentage Error) was 18% less for the random walk on 20 one-year-ahead forecasts and 40% less for 20 six-year-ahead forecasts. We then applied the rule to four other data sets. Here, the MdAPE for the random walk forecasts was 17% less than Holt's error for 943 short-range forecasts and 43% less for 723 long-range forecasts. Our study suggests that the causal assumptions implicit in traditional extrapolation methods are inappropriate for many applications.
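A minimal sketch of the accuracy comparison described above, on a synthetic series rather than the study's data: one-step forecasts from the random walk and from Holt's linear exponential smoothing, scored by the Median Absolute Percentage Error (MdAPE). The smoothing constants and the simulated series are illustrative assumptions.

```python
import numpy as np

def holt_forecast(y, alpha=0.3, beta=0.1):
    """One-step-ahead Holt (trend-extrapolating) forecasts for y[1:]."""
    level, trend = y[0], y[1] - y[0]
    f = []
    for obs in y[1:]:
        f.append(level + trend)                          # forecast made before seeing obs
        new_level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return np.array(f)

def mdape(actual, forecast):
    """Median Absolute Percentage Error, in percent."""
    return 100 * np.median(np.abs((actual - forecast) / actual))

rng = np.random.default_rng(4)
# A series whose recent trend runs contrary to its long-run decaying causal force
y = 100 * np.exp(-0.02 * np.arange(60)) + rng.normal(0, 1, 60) + np.linspace(0, 3, 60)

actual = y[1:]
rw = y[:-1]                                              # random walk: forecast = last observed value
holt = holt_forecast(y)
print("MdAPE random walk:", round(mdape(actual, rw), 2))
print("MdAPE Holt       :", round(mdape(actual, holt), 2))
```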

16.
A short-term mixed-frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one-factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out-of-sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high-frequency indicators significantly improve the real-time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

17.
Forecasts from quarterly econometric models are typically revised on a monthly basis to reflect the information in current economic data. The revision process usually involves setting targets for the quarterly values of endogenous variables for which monthly observations are available and then altering the intercept terms in the quarterly forecasting model to achieve the target values. A formal statistical approach to the use of monthly data to update quarterly forecasts is described and the procedure is applied to the Michigan Quarterly Econometric Model of the US Economy. The procedure is evaluated in terms of both ex post and ex ante forecasting performance. The ex ante results for 1986 and 1987 indicate that the method is quite promising. With a few notable exceptions, the formal procedure produces forecasts of GNP growth that are very close to the published ex ante forecasts.

18.
We introduce a long-memory autoregressive conditional Poisson (LMACP) model to model highly persistent time series of counts. The model is applied to forecast quoted bid-ask spreads, a key parameter in stock trading operations. It is shown that the LMACP nicely captures salient features of bid-ask spreads like the strong autocorrelation and discreteness of observations. We discuss theoretical properties of LMACP models and evaluate rolling-window forecasts of quoted bid-ask spreads for stocks traded at NYSE and NASDAQ. We show that Poisson time series models significantly outperform forecasts from AR, ARMA, ARFIMA, ACD and FIACD models. The economic significance of our results is supported by the evaluation of a trade schedule. Scheduling trades according to spread forecasts, we realize cost savings of up to 14% of spread transaction costs. Copyright © 2013 John Wiley & Sons, Ltd.
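A minimal sketch of the short-memory autoregressive conditional Poisson recursion that the LMACP generalizes (the long-memory version replaces the autoregressive part with fractionally integrated dynamics); parameter values and the simulated counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
# Conditional mean recursion: lambda_t = omega + alpha * y_{t-1} + beta * lambda_{t-1},
# with y_t | past ~ Poisson(lambda_t)
omega, alpha, beta = 0.5, 0.3, 0.6
T = 200
lam = np.empty(T)
y = np.empty(T, dtype=int)
lam[0] = omega / (1 - alpha - beta)        # start at the unconditional mean
y[0] = rng.poisson(lam[0])
for t in range(1, T):
    lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1]
    y[t] = rng.poisson(lam[t])

# One-step-ahead forecast of the count is its conditional mean
lam_next = omega + alpha * y[-1] + beta * lam[-1]
print("last 10 simulated counts:", y[-10:], " forecast mean:", round(lam_next, 2))
```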

19.
We present a mixed-frequency model for daily forecasts of euro area inflation. The model combines a monthly index of core inflation with daily data from financial markets; estimates are carried out with the MIDAS regression approach. The forecasting ability of the model in real time is compared with that of standard VARs and of daily quotes of economic derivatives on euro area inflation. We find that the inclusion of daily variables helps to reduce forecast errors with respect to models that consider only monthly variables. The mixed-frequency model also displays superior predictive performance with respect to forecasts solely based on economic derivatives. Copyright © 2012 John Wiley & Sons, Ltd.
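A minimal sketch of a MIDAS regression with exponential Almon lag weights, on simulated data rather than euro-area inflation and daily financial indicators; the lag length, weight parameters, and the use of scipy's Nelder-Mead optimizer are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def exp_almon(theta1, theta2, n_lags):
    """Exponential Almon weights over n_lags daily lags, normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

rng = np.random.default_rng(6)
n_months, n_days = 120, 22
daily = rng.normal(size=(n_months, n_days))                     # daily regressor, most recent lag first
true_w = exp_almon(0.1, -0.05, n_days)
y = 1.0 + 0.8 * daily @ true_w + rng.normal(0, 0.2, n_months)   # monthly target built from daily data

def ssr(params):
    """Sum of squared residuals of the MIDAS regression for a given parameter vector."""
    b0, b1, t1, t2 = params
    fitted = b0 + b1 * daily @ exp_almon(t1, t2, n_days)
    return np.sum((y - fitted) ** 2)

res = minimize(ssr, x0=[0.0, 1.0, 0.0, 0.0], method="Nelder-Mead")
b0, b1, t1, t2 = res.x
print(f"estimated slope {b1:.2f}; first five lag weights {np.round(exp_almon(t1, t2, n_days)[:5], 3)}")
```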

20.
On 26 November 2001, the National Bureau of Economic Research announced that the US economy had officially entered into a recession in March 2001. This decision was a surprise and did not end all the conflicting opinions expressed by economists. The matter was finally settled in July 2002, after a revision to the 2001 real gross domestic product showed negative growth rates for its first three quarters. A series of political and economic events in the years 2000–01 increased the amount of uncertainty in the state of the economy, which in turn resulted in the production of less reliable economic indicators and forecasts. This paper evaluates the performance of two very reliable methodologies for predicting a downturn in the US economy using composite leading economic indicators (CLI) for the years 2000–01. It explores the impact of monetary policy on the CLI and on the overall economy, and shows how the gradualness and uncertainty of this impact on the overall economy have affected the forecasts of these methodologies. It suggests that the overexposure of the CLI to monetary policy tools and a strong, but less effective, expansionary monetary policy have been the major factors in deteriorating the predictions of these methodologies. To improve these forecasts, the paper explores the inclusion of the CLI diffusion index as a prior in the Bayesian methodology. Copyright © 2004 John Wiley & Sons, Ltd.
