Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper shows that out‐of‐sample forecast comparisons can help prevent data mining‐induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data‐based design similar to those used in some previous studies. In each simulation, a general‐to‐specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate most of the post‐sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application, modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
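As a stylized illustration of an out‐of‐sample MSE comparison, the sketch below computes a Diebold–Mariano‐style t‐statistic on the squared‐error differential between a benchmark and a larger model. It is only a sketch: the equal‐MSE and encompassing tests used for nested models have non‐standard limiting distributions, which this plain t‐statistic ignores, and the error series here are invented for illustration.

```python
import math

def mse_t_stat(errors_benchmark, errors_model):
    """t-statistic on the mean of the squared-error differential
    d_t = e_bench_t^2 - e_model_t^2 (Diebold-Mariano style;
    autocorrelation in d_t is ignored in this one-step sketch)."""
    d = [a * a - b * b for a, b in zip(errors_benchmark, errors_model)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)
    return mean / math.sqrt(var / n)

# Toy post-sample forecast errors (benchmark vs. selected model).
bench = [1.2, -0.8, 0.5, 1.1, -0.9, 0.7, -1.3, 0.6]
model = [0.4, -0.3, 0.2, 0.5, -0.2, 0.3, -0.6, 0.1]
t = mse_t_stat(bench, model)  # positive: benchmark MSE is larger
```

A positive, large t suggests the selected model forecasts better than the nested benchmark; under nesting, critical values from Clark–McCracken-type results would replace the normal ones.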

2.
This paper proposes a new evaluation framework for interval forecasts. Our model‐free test can be used to evaluate interval forecasts and high‐density regions, potentially discontinuous and/or asymmetric. Using a simple J‐statistic, based on the moments defined by the orthonormal polynomials associated with the binomial distribution, this new approach presents many advantages. First, its implementation is extremely easy. Second, it allows for separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes our GMM test has good small‐sample properties. These results are corroborated by an empirical application on the S&P 500 and Nikkei stock market indexes. It confirms that using this GMM test has major consequences for the ex post evaluation of interval forecasts produced by linear versus nonlinear models. Copyright © 2011 John Wiley & Sons, Ltd.
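The paper's J‐statistic is built from orthonormal polynomials of the binomial distribution; as a much simpler stand‐in in the same spirit, the sketch below evaluates the hit sequence of an interval forecast with a standard likelihood‐ratio test of unconditional coverage. The function name and the toy hit sequence are illustrative assumptions, not the paper's test.

```python
import math

def unconditional_coverage_lr(hits, p):
    """Likelihood-ratio statistic for H0: P(hit) = p, where a 'hit'
    means the realization fell outside the forecast interval.
    Asymptotically chi-square(1) under H0."""
    n, x = len(hits), sum(hits)
    pi = x / n
    if pi in (0.0, 1.0):
        return float("nan")
    ll0 = x * math.log(p) + (n - x) * math.log(1 - p)
    ll1 = x * math.log(pi) + (n - x) * math.log(1 - pi)
    return 2.0 * (ll1 - ll0)

# A 95% interval should be violated about 5% of the time.
hits = [1] * 10 + [0] * 190          # observed hit rate exactly 5%
lr = unconditional_coverage_lr(hits, 0.05)
```

With the observed hit rate equal to the nominal rate the statistic is zero; a hit rate of 15% on the same sample produces a statistic far beyond the 5% chi-square(1) critical value of 3.84.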

3.
We compare linear autoregressive (AR) models and self‐exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two‐regime SETAR process is used as the data‐generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non‐linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
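A two‐regime SETAR data‐generating process of the kind described above can be simulated in a few lines. The parameter values and the OLS AR(1) benchmark fit below are illustrative assumptions, not the paper's Monte Carlo design.

```python
import random

def simulate_setar(n, phi_low=0.9, phi_high=0.3, threshold=0.0,
                   sigma=1.0, seed=42):
    """Simulate a two-regime SETAR(1) process:
    y_t = phi_low  * y_{t-1} + e_t  if y_{t-1} <= threshold,
          phi_high * y_{t-1} + e_t  otherwise,  e_t ~ N(0, sigma^2)."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        phi = phi_low if y[-1] <= threshold else phi_high
        y.append(phi * y[-1] + rng.gauss(0.0, sigma))
    return y

y = simulate_setar(500)

# Linear benchmark: fit AR(1) by OLS, ignoring the threshold,
# and form a one-step point forecast.
x, z = y[:-1], y[1:]
phi_hat = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
forecast = phi_hat * y[-1]
```

Repeating this over many replications, and comparing interval or density forecasts from the fitted AR and SETAR models, reproduces the kind of experiment the abstract describes.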

4.
Given the evidence that an infinite‐order vector autoregression setting is more realistic for time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h‐step‐ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT under the assumption that an infinite‐order vector autoregression can be approximated, under suitable conditions, with a sequence of truncated models, where nT is increasing with sample size. Using finite‐order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small‐sample distribution of the lag‐order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, yielding the lowest h‐step‐ahead forecast mean squared errors for the individual components of the held‐out pseudo‐data to be forecast. Thus estimating the actual autoregressive order as well as the best forecasting model can be achieved with the same selection procedure. These results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy of strongly consistent criteria such as the Schwarz information criterion, as claimed in the literature, is overstated. Our criteria extend those previously available in the literature and can suitably be used in various practical situations. Copyright © 2015 John Wiley & Sons, Ltd.

5.
This study examines the problem of forecasting an aggregate of cointegrated disaggregates. It first establishes conditions under which forecasts of an aggregate variable obtained from a disaggregate VECM will be equal to those from an aggregate, univariate time series model, and develops a simple procedure for testing those conditions. The paper then uses Monte Carlo simulations to show, for a finite sample, that the proposed test has good size and power properties and that whether a model satisfies the aggregation conditions is closely related to out‐of‐sample forecast performance. The paper then shows that ignoring cointegration and specifying the disaggregate model as a VAR in differences can significantly affect analyses of aggregation, with the VAR‐based test for aggregation possibly leading to faulty inference and the differenced VAR forecasts potentially understating the benefits of disaggregate information. Finally, analysis of an empirical problem confirms the basic results. Copyright © 2000 John Wiley & Sons, Ltd.

6.
This paper investigates whether and to what extent multiple encompassing tests may help determine weights for forecast averaging in a standard vector autoregressive setting. To this end we consider a new test‐based procedure, which assigns non‐zero weights to candidate models that add information not covered by other models. The potential benefits of this procedure are explored in extensive Monte Carlo simulations using realistic designs that are adapted to UK and to French macroeconomic data, to which trivariate vector autoregressions (VAR) are fitted. Thus simulations rely on potential data‐generating mechanisms for macroeconomic data rather than on simple but artificial designs. We run two types of forecast ‘competitions’. In the first one, one of the model classes is the trivariate VAR, such that it contains the generating mechanism. In the second specification, none of the competing models contains the true structure. The simulation results show that the performance of test‐based averaging is comparable to uniform weighting of individual models. In one of our two model economies, test‐based averaging achieves advantages in small samples. In larger samples, pure prediction models outperform forecast averages. Copyright © 2010 John Wiley & Sons, Ltd.

7.
The theory of quasi-rational expectations was tested under the controlled conditions of the economics laboratory. Five experiments were conducted with a variety of stochastic processes. In each experiment, subjects produced one-step-ahead forecasts of the variable generated by a Monte Carlo process. Comparisons of the performance of an aggregate of subjects' forecasts versus an ARIMA model showed that for relatively simple series (such as those generated by autoregressive processes of first or second order) the aggregate forecast was indistinguishable from that of the model. These results lend support to the theory that forecasts from an ARIMA model can serve as substitutes for aggregate expectations in macroeconomic policy models under some conditions.

8.
Volatility forecasting remains an active area of research with no current consensus as to the model that provides the most accurate forecasts, though Hansen and Lunde (2005) have argued that in the context of daily exchange rate returns nothing can beat a GARCH(1,1) model. This paper extends that line of research by utilizing intra‐day data and obtaining daily volatility forecasts from a range of models based upon the higher‐frequency data. The volatility forecasts are appraised using four different measures of ‘true’ volatility and further evaluated using regression tests of predictive power, forecast encompassing and forecast combination. Our results show that the daily GARCH(1,1) model is largely inferior to all other models, whereas the intra‐day unadjusted‐data GARCH(1,1) model generally provides superior forecasts compared to all other models. Hence, while it appears that a daily GARCH(1,1) model can be beaten in obtaining accurate daily volatility forecasts, an intra‐day GARCH(1,1) model cannot be. Copyright © 2011 John Wiley & Sons, Ltd.
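The GARCH(1,1) recursion underlying these comparisons is simple to write down. The sketch below simulates returns from the recursion and produces the one‐step‐ahead conditional variance forecast; the parameter values are illustrative assumptions, and a real application would estimate omega, alpha and beta, e.g. by maximum likelihood.

```python
import random

def garch11_filter(returns, omega, alpha, beta):
    """Run the GARCH(1,1) variance recursion
       sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    returning the fitted conditional variances and the
    one-step-ahead variance forecast."""
    sigma2 = [omega / (1.0 - alpha - beta)]  # start at unconditional variance
    for r in returns:
        sigma2.append(omega + alpha * r * r + beta * sigma2[-1])
    return sigma2[:-1], sigma2[-1]

# Simulate daily returns from the same recursion, then forecast.
rng = random.Random(0)
omega, alpha, beta = 0.05, 0.05, 0.90
s2, rets = omega / (1 - alpha - beta), []
for _ in range(1000):
    r = rng.gauss(0.0, s2 ** 0.5)
    rets.append(r)
    s2 = omega + alpha * r * r + beta * s2

fitted, next_var = garch11_filter(rets, omega, alpha, beta)
```

An intra-day variant would replace `rets` with higher-frequency returns (or a realized-volatility series) before filtering, which is the comparison the abstract describes.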

9.
Most economic forecast evaluations dating back 20 years show that professional forecasters add little to the forecasts generated by the simplest of models. Using various types of forecast error criteria, these evaluations usually conclude that the professional forecasts are little better than the no-change or ARIMA-type forecast. It is our contention that this conclusion is mistaken because the conventional error criteria may not capture why forecasts are made or how they are used. Using forecast directional accuracy, the criterion which has been found to be highly correlated with profits in an interest rate setting, we find that professional GNP forecasts dominate the cheaper alternatives. Moreover, there appears to be no systematic relationship between this preferred criterion and the error measures used in previous studies.
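Directional accuracy, the criterion favoured here, is just the share of periods in which the forecast calls the sign of the change correctly. The toy series below are invented for illustration; note that a pure no-change forecast always "predicts" a non-negative change under this convention.

```python
def directional_accuracy(actual, forecast):
    """Share of periods in which the forecasted change from t-1 to t
    has the same sign as the realized change."""
    hits = 0
    for t in range(1, len(actual)):
        actual_up = actual[t] >= actual[t - 1]
        forecast_up = forecast[t] >= actual[t - 1]
        if actual_up == forecast_up:
            hits += 1
    return hits / (len(actual) - 1)

actual = [100, 102, 101, 103, 104]
naive_no_change = [100, 100, 102, 101, 103]   # forecast[t] = actual[t-1]
da = directional_accuracy(actual, naive_no_change)  # 3 of 4 signs correct
```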

10.
We test the extent to which political manoeuvrings can be the sources of measurement errors in forecasts. Our objective is to examine the forecast error based on a simple model in which we attempt to explain deviations between the March budget forecast and the November forecast, and deviations between the outcome and the March budget forecast in the UK. The analysis is based on forecasts made by the general government. We use the forecasts of the variables as alternatives to the outcomes. We also test for political spins in the GDP forecast updates and the GDP forecast errors. We find evidence of partisan and electoral effects in forecast updates and forecast errors. Copyright © 2005 John Wiley & Sons, Ltd.

11.
Standard statistical loss functions, such as mean‐squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user's decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results. Copyright © 2001 John Wiley & Sons, Ltd.

12.
We contribute to recent research on the joint evaluation of the properties of macroeconomic forecasts in a multivariate setting. The specific property of forecasts that we are interested in is their joint efficiency. We study the joint efficiency of forecasts by means of multivariate random forests, which we use to model the links between forecast errors and predictor variables in a forecaster's information set. We then use permutation tests to study whether the Mahalanobis distance between the predicted forecast errors for the growth and inflation forecasts of four leading German economic research institutes and actual forecast errors is significantly smaller than under the null hypothesis of forecast efficiency. We reject joint efficiency in several cases, but also document heterogeneity across research institutes with regard to the joint efficiency of their forecasts.

13.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.

14.
A Bayesian vector autoregressive (BVAR) model is developed for the Connecticut economy to forecast the unemployment rate, nonagricultural employment, real personal income, and housing permits authorized. The model includes both national and state variables. The Bayesian prior is selected on the basis of the accuracy of the out-of-sample forecasts. We find that a loose prior generally produces more accurate forecasts. The out-of-sample accuracy of the BVAR forecasts is also compared with that of forecasts from an unrestricted VAR model and of benchmark forecasts generated from univariate ARIMA models. The BVAR model generally produces the most accurate short- and long-term out-of-sample forecasts for 1988 through 1992. It also correctly predicts the direction of change.

15.
Earnings forecasts have received a great deal of attention, much of which has centered on the comparative accuracy of judgmental and objective forecasting methods. Recently, studies have focused on the use of combinations of subjective and objective forecasts to improve forecast accuracy. This research offers an extension of this theme by subjectively modifying an objective forecast. Specifically, ARIMA forecasts are judgmentally adjusted by analysts using a structured approach based on Saaty's (1980) analytic hierarchy process. The results show that the accuracy of the unadjusted objective forecasts improves when they are judgmentally adjusted.

16.
This paper shows how to extract the density of information shocks from revisions of the Bank of England's inflation density forecasts. An information shock is defined in this paper as a random variable that contains the set of information made available between two consecutive forecasting exercises and that has been incorporated into a revised forecast for a fixed point event. Studying the moments of these information shocks can be useful in understanding how the Bank has changed its assessment of risks surrounding inflation in the light of new information, and how it has modified its forecasts accordingly. The variance of the information shock is interpreted in this paper as a new measure of ex ante inflation uncertainty that measures the uncertainty that the Bank anticipates information perceived in a particular quarter will pose on inflation. A measure of information absorption that indicates the approximate proportion of the information content in a revised forecast that is attributable to information made available since the last forecast release is also proposed.

17.
This paper investigates Bayesian forecasts for some cointegrated time series data. Suppose data are derived from some cointegrated model, but an unrestricted vector autoregressive model, without the cointegrating conditions, is fitted; the implication of using an incorrect model is investigated from the Bayesian forecasting viewpoint. For some special cointegrated data and under the diffuse prior assumption, it can be proven analytically that the posterior predictive distributions for the true model and the fitted model are asymptotically the same for any future step. For a more general cointegrated model, examinations are performed via simulations. The simulation results reveal that a reasonably unrestricted model will still provide a rather accurate forecast as long as the sample size is large enough or the forecasting period is not too far in the future. For a small sample size or for long‐term forecasting, more accurate forecasts are expected if the correct cointegrated model is actually applied. Copyright © 2002 John Wiley & Sons, Ltd.

18.
This paper combines forecasts of air travel demand generated from the same model but over different estimation windows. The combination approach follows Pesaran and Pick (Journal of Business Economics and Statistics 2011; 29: 307–318), but the empirical application is extended in several ways. The forecasts are based on a seasonal Box–Jenkins model (SARIMA), which is adequate for forecasting monthly air travel demand with distinct seasonal patterns at the largest German airport, Frankfurt am Main. Furthermore, forecasts with horizons from 1 to 12 months ahead, based on different average estimation windows, expanding windows and single rolling windows, are compared with baseline forecasts based on an expanding window of the observations after a structural break. The forecast exercise shows that the average window forecasts mostly outperform the alternative single window forecasts. Copyright © 2016 John Wiley & Sons, Ltd.

19.
Recently, analysts' cash flow forecasts have become widely available through financial information services. Cash flow information enables practitioners to better understand the real operating performance and financial stability of a company, particularly when earnings information is noisy and of low quality. However, research suggests that analysts' cash flow forecasts are less accurate and more dispersed than earnings forecasts. We thus investigate factors influencing cash flow forecast accuracy and build a practical model to distinguish more accurate from less accurate cash flow forecasters, using past cash flow forecast accuracy and analyst characteristics. We find significant power in our cash flow forecast accuracy prediction models. We also find that analysts develop cash flow‐specific forecasting expertise and know-how, which are distinct from those that analysts acquire from forecasting earnings. In particular, cash flow‐specific information is more useful in identifying accurate cash flow forecasters than earnings‐specific information. Copyright © 2011 John Wiley & Sons, Ltd.

20.
This paper is concerned with time-series forecasting based on the linear regression model in the presence of AR(1) disturbances. The standard approach is to estimate the AR(1) parameter, ρ, and then construct forecasts assuming the estimated value is the true value. We introduce a new approach which can be viewed as a weighted average of predictions assuming different values of ρ. The weights are proportional to the marginal likelihood of ρ. A Monte Carlo experiment was conducted to compare the new method with five more conventional predictors. Its results suggest that the new approach has a distinct edge over existing procedures.
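The weighting scheme described above can be sketched as follows: predictions are formed on a grid of ρ values and averaged with weights proportional to a likelihood of ρ. This sketch substitutes a concentrated conditional Gaussian likelihood of the AR(1) residuals for the paper's marginal likelihood, and treats the regression part as known, so it illustrates the idea only.

```python
import math
import random

# Toy data: residuals u_t = rho * u_{t-1} + e_t from a regression
# whose coefficients are taken as known for this sketch.
rng = random.Random(1)
rho_true, u = 0.6, [0.0]
for _ in range(200):
    u.append(rho_true * u[-1] + rng.gauss(0.0, 1.0))

def gauss_loglik(u, rho):
    """Concentrated conditional Gaussian log-likelihood of the
    AR(1) residual series at a candidate rho."""
    e = [u[t] - rho * u[t - 1] for t in range(1, len(u))]
    s2 = sum(v * v for v in e) / len(e)
    return -0.5 * len(e) * math.log(2 * math.pi * s2) - 0.5 * len(e)

grid = [i / 50 for i in range(-49, 50)]          # rho in (-0.98, 0.98)
logw = [gauss_loglik(u, r) for r in grid]
m = max(logw)                                     # stabilize the exponentials
w = [math.exp(lw - m) for lw in logw]
total = sum(w)
w = [v / total for v in w]                        # normalized weights

# One-step forecast of u: likelihood-weighted average over the grid.
forecast = sum(wi * r * u[-1] for wi, r in zip(w, grid))
rho_bar = sum(wi * r for wi, r in zip(w, grid))   # weighted mean of rho
```

Because the one-step prediction is linear in ρ here, the averaged forecast equals `rho_bar * u[-1]`; for multi-step forecasts the averaging over ρ and plugging in a point estimate would genuinely differ.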
