Similar Articles
20 similar articles found (search time: 15 ms)
1.
We evaluate residual projection strategies in the context of a large-scale macro model of the euro area and smaller benchmark time-series models. The exercises measure the accuracy of model-based forecasts simulated both out-of-sample and in-sample. Both exercises incorporate alternative residual-projection methods to assess the importance of unaccounted-for breaks for forecast accuracy and off-model judgement. We conclude that simple mechanical residual adjustments have a significant impact on forecasting accuracy irrespective of the model in use, likely due to the presence of breaks in the trends of the data. The testing procedure and conclusions are applicable to a wide class of models and are of general interest. Copyright © 2010 John Wiley & Sons, Ltd.

2.
Empirical studies in the area of sovereign debt have used statistical models singularly to predict the probability of debt rescheduling. Unfortunately, researchers have made few efforts to test the reliability of these model predictions or to identify a superior prediction model among competing models. This paper tested neural network, OLS, and logit models' predictive abilities regarding debt rescheduling of less developed countries (LDC). All models predicted well out-of-sample. The results demonstrated a consistent performance of all models, indicating that researchers and practitioners can rely on neural networks or on the traditional statistical models to give useful predictions. Copyright © 2001 John Wiley & Sons, Ltd.

3.
Bankruptcy prediction methods based on a semiparametric logit model are proposed for simple random (prospective) and case–control (choice-based; retrospective) data. The unknown parameters and prediction probabilities in the model are estimated by the local likelihood approach, and the resulting estimators are analyzed through their asymptotic biases and variances. The semiparametric bankruptcy prediction methods using these two types of data are shown to be essentially equivalent. Thus our proposed prediction model can be directly applied to data sampled from the two important designs. One real data example and simulations confirm that our prediction method is more powerful than alternatives, in the sense of yielding smaller out-of-sample error rates. Copyright © 2007 John Wiley & Sons, Ltd.

4.
In this paper we compare the in-sample fit and out-of-sample forecasting performance of no-arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re-estimation and out-of-sample 1-, 6- and 12-month-ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in-sample fit, while the best out-of-sample performance is generated by three-factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.
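The Nelson–Siegel structure underlying several of these models fits each yield curve with three factors (level, slope, curvature) whose loadings depend only on maturity and a single decay parameter. A minimal numpy sketch, assuming the common Diebold–Li monthly decay value λ = 0.0609 (an illustrative choice, not these papers' estimates):

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Loadings of the level, slope and curvature factors at each maturity."""
    m = np.asarray(maturities, float)
    x = lam * m
    slope = (1.0 - np.exp(-x)) / x
    curvature = slope - np.exp(-x)
    return np.column_stack([np.ones_like(m), slope, curvature])

def fit_yield_curve(maturities, yields, lam=0.0609):
    """Cross-sectional OLS fit of observed yields on the three loadings."""
    X = nelson_siegel_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(X, yields, rcond=None)
    return beta, X @ beta
```

In the dynamic version, the three betas estimated period by period are themselves forecast with a time-series model and mapped back into yields.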

5.
6.
Micro-founded dynamic stochastic general equilibrium (DSGE) models appear to be particularly suited to evaluating the consequences of alternative macroeconomic policies. Recently, increasing efforts have been undertaken by policymakers to use these models for forecasting, although this proved to be problematic due to estimation and identification issues. Hybrid DSGE models have become popular for dealing with some of the model misspecifications and the trade-off between theoretical coherence and empirical fit, thus allowing them to compete in terms of predictability with VAR models. However, DSGE and VAR models are still linear and they do not consider time variation in parameters that could account for inherent nonlinearities and capture the adaptive underlying structure of the economy in a robust manner. This study conducts a comparative evaluation of the out-of-sample predictive performance of many different specifications of DSGE models and various classes of VAR models, using datasets for the real GDP, the harmonized CPI and the nominal short-term interest rate series in the euro area. Simple and hybrid DSGE models were implemented, including DSGE-VAR and factor-augmented DSGE, and tested against standard, Bayesian and factor-augmented VARs. Moreover, a new state-space time-varying VAR model is presented. The total period spanned from 1970:Q1 to 2010:Q4 with an out-of-sample testing period of 2006:Q1–2010:Q4, which covers the global financial crisis and the EU debt crisis. The results of this study can be useful in conducting monetary policy analysis and macro-forecasting in the euro area. Copyright © 2016 John Wiley & Sons, Ltd.

7.
This paper shows that out-of-sample forecast comparisons can help prevent data mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application, modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
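The equal-MSE comparison of nested and benchmark forecasts can be sketched with a Diebold–Mariano-type statistic on the squared-error loss differential. A simplified illustration with an iid variance estimate (no HAC correction) and synthetic errors rather than the paper's simulation designs:

```python
import numpy as np

def equal_mse_stat(e1, e2):
    """t-type statistic for H0: E[e1^2] = E[e2^2].

    Built on the loss differential d_t = e1_t^2 - e2_t^2 with a simple
    iid variance estimate (an assumption here; serial correlation in the
    differential would call for a HAC estimator).
    """
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

# benchmark errors vs. a competing model whose errors carry extra noise
rng = np.random.default_rng(1)
e_bench = rng.normal(size=500)
e_model = e_bench + rng.normal(scale=1.0, size=500)
stat = equal_mse_stat(e_bench, e_model)  # negative: the benchmark's MSE is smaller
```

A large negative statistic favors the first (benchmark) forecast; values near zero are consistent with equal accuracy.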

8.
The forecasting capabilities of feed-forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non-Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with that of an alternative model, Pitman's test is employed to ascertain whether one model forecasts significantly better than another when generating one-step-ahead forecasts. Moreover, the residual-fit spread plot is utilized in a novel fashion in this paper to visually compare out-of-sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
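Pitman's test, in its classic Pitman–Morgan form, compares the error variances of two correlated forecasts by exploiting the fact that the variances are equal if and only if the sum and difference of the errors are uncorrelated. A sketch with synthetic errors (not the lynx results):

```python
import numpy as np

def pitman_test(e1, e2):
    """Pitman's test for equal error variances of two correlated forecasts.

    Equal variances hold iff corr(e1 - e2, e1 + e2) = 0; the sample
    correlation is converted to a t statistic with n - 2 degrees of freedom.
    """
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    d, s = e1 - e2, e1 + e2
    r = np.corrcoef(d, s)[0, 1]
    n = len(d)
    t = r * np.sqrt((n - 2) / (1.0 - r ** 2))
    return r, t

rng = np.random.default_rng(2)
e_a = rng.normal(size=300)               # more accurate forecast's errors
e_b = 2.0 * e_a + rng.normal(size=300)   # correlated but larger-variance errors
r, t = pitman_test(e_a, e_b)
```

A significantly negative t indicates the first forecast has the smaller error variance; positive, the second.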

9.
This study extends the affine dynamic Nelson–Siegel model to include macroeconomic variables. Five macroeconomic variables are included in the affine term structure model, derived under the arbitrage-free restriction, to evaluate their role in the in-sample fitting and out-of-sample forecasting of the term structure. We show that the relationship between the macroeconomic factors and yield data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Moreover, the macroeconomic factors significantly improve the forecast performance of the model. The affine Nelson–Siegel type models outperform the benchmark simple time series forecast models. The out-of-sample predictability of the affine Nelson–Siegel model with macroeconomic factors at short horizons is superior to that of the simple affine yield model for all maturities, and at longer horizons the former remains comparable to the latter, particularly for medium and long maturities. Copyright © 2015 John Wiley & Sons, Ltd.

10.
Transfer function or distributed lag models are commonly used in forecasting. The stability of a constant-coefficient transfer function model, however, may become an issue for many economic variables due in part to the recent advance in technology and improvement in efficiency in data collection and processing. In this paper, we propose a simple functional-coefficient transfer function model that can accommodate the changing environment. A likelihood ratio statistic is used to test the stability of a traditional transfer function model. We investigate the performance of the test statistic in the finite sample case via simulation. Using some well-known examples, we demonstrate clearly that the proposed functional-coefficient model can substantially improve the accuracy of out-of-sample forecasts. In particular, our simple modification results in a 25% reduction in the mean squared errors of out-of-sample one-step-ahead forecasts for the gas-furnace data of Box and Jenkins. Copyright © 2003 John Wiley & Sons, Ltd.

11.
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility and, undoubtedly, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for doing so. It appears, however, from some empirical studies that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this forecasting limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures: as the magnitude of the outliers increases, the one-step-ahead forecasting performance of the RGARCH model improves more markedly, on two forecast evaluation criteria, relative to both the standard GARCH and random walk models. An empirical application provides strong evidence in favour of the RGARCH model over competing models: using a sample of two daily exchange rate series, we find that the out-of-sample volatility forecasts of the RGARCH model are clearly superior to those of the competing models. Copyright © 2002 John Wiley & Sons, Ltd.
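To see why additive outliers distort GARCH forecasts, consider the standard GARCH(1,1) one-step-ahead variance recursion h_{t+1} = ω + α·r_t² + β·h_t: a single large r_t feeds directly into the next forecast through the α·r_t² term. A sketch with illustrative parameter values (ω, α, β are assumptions, not estimates from the paper's exchange-rate data):

```python
import numpy as np

def garch11_forecast(returns, omega=0.05, alpha=0.10, beta=0.85):
    """One-step-ahead conditional variance from the GARCH(1,1) recursion."""
    r = np.asarray(returns, float)
    h = r.var()                          # initialize at the sample variance
    for rt in r:
        h = omega + alpha * rt ** 2 + beta * h
    return h

rng = np.random.default_rng(0)
clean = rng.normal(scale=1.0, size=500)
dirty = clean.copy()
dirty[-1] += 10.0                        # single additive outlier at the end
# the outlier mechanically inflates the next-period variance forecast
```

The robust LAD-based estimation the paper proposes aims to keep such isolated spikes from dominating the fitted parameters and the resulting forecasts.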

12.
This paper develops a New-Keynesian Dynamic Stochastic General Equilibrium (NKDSGE) model for forecasting the growth rate of output, inflation, and the nominal short-term interest rate (the 91-day Treasury Bill rate) for the South African economy. The model is estimated by maximum likelihood on quarterly data over the period 1970:1–2000:4. Based on a recursive estimation using the Kalman filter algorithm, out-of-sample forecasts from the NKDSGE model are compared with forecasts generated from the classical and Bayesian variants of vector autoregression (VAR) models for the period 2001:1–2006:4. The results indicate that in terms of out-of-sample forecasting, the NKDSGE model outperforms both the classical and Bayesian VARs for inflation, but not for output growth and the nominal short-term interest rate. However, differences in RMSEs are not significant across the models. Copyright © 2008 John Wiley & Sons, Ltd.

13.
This paper employs a non-parametric method to forecast the high-frequency Canadian/US dollar exchange rate. The introduction of a microstructure variable, order flow, substantially improves the predictive power of both linear and non-linear models. The non-linear models outperform random walk and linear models based on a number of recursive out-of-sample forecasts. The two main criteria applied to evaluate model performance are root mean squared error (RMSE) and the ability to predict the direction of exchange rate moves. The artificial neural network (ANN) model consistently achieves lower RMSE than the random walk and linear models across the various out-of-sample set sizes. Moreover, the ANN performs better than the other models in terms of the percentage of correctly predicted exchange rate changes. The empirical results suggest that an optimal ANN architecture is superior to the random walk and any linear competing model for high-frequency exchange rate forecasting. Copyright © 2006 John Wiley & Sons, Ltd.
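The two evaluation criteria named here are straightforward to compute; a small sketch with illustrative arrays rather than the paper's exchange-rate data:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared forecast error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((a - f) ** 2))

def directional_accuracy(actual_changes, forecast_changes):
    """Share of periods in which the forecast gets the sign of the change right."""
    return np.mean(np.sign(actual_changes) == np.sign(forecast_changes))
```

Directional accuracy matters in this setting because a model can have a low RMSE yet still miss turning points that drive trading decisions.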

14.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in modeling market structure from sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it aggregates them to higher levels using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To address over-dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and the predictive density. In an empirical application using weekly product sales time series from a store, we compare the proposed models accommodating over-dispersion with alternative models that ignore it, using several model selection criteria, including in-sample fit, out-of-sample forecasting errors and an information criterion. The empirical results show that the proposed compound-Poisson-based models handle over-dispersion well and improve on models that take no account of it. Copyright © 2014 John Wiley & Sons, Ltd.
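The gamma compound Poisson construction is the classic route to over-dispersion: mixing the Poisson rate over a gamma distribution yields negative-binomial counts with variance μ + μ²/r rather than the Poisson's μ. A sketch with arbitrary illustrative parameters (not the paper's sales data):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, r = 5.0, 2.0                                       # mean and gamma shape (illustrative)
lam = rng.gamma(shape=r, scale=mu / r, size=200_000)   # heterogeneous Poisson rates
y = rng.poisson(lam)                                   # gamma-mixed Poisson counts
# theory: E[y] = mu = 5, Var[y] = mu + mu**2 / r = 17.5 (vs. 5 for a pure Poisson)
```

Sales counts routinely show this variance-above-mean pattern, which is why the pure Poisson likelihood understates uncertainty in such models.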

15.
We evaluate forecasting models of US business fixed investment spending growth over the recent 1995:1–2004:2 out-of-sample period. The forecasting models are based on the conventional Accelerator, Neoclassical, Average Q, and Cash-Flow models of investment spending, as well as real stock prices and excess stock return predictors. The real stock price model typically generates the most accurate forecasts, and forecast-encompassing tests indicate that this model contains most of the information useful for forecasting investment spending growth relative to the other models at longer horizons. In a robustness check, we also evaluate the forecasting performance of the models over two alternative out-of-sample periods: 1975:1–1984:4 and 1985:1–1994:4. A number of different models produce the most accurate forecasts over these alternative out-of-sample periods, indicating that while the real stock price model appears particularly useful for forecasting the recent behavior of investment spending growth, it may not continue to perform well in future periods. Copyright © 2007 John Wiley & Sons, Ltd.

16.
In this article, we propose a regression model for sparse, high-dimensional aggregated store-level sales data. The modeling procedure comprises two sub-models, a topic model and hierarchical factor regressions, applied in sequence to accommodate high dimensionality and sparseness and to facilitate managerial interpretation. First, the topic model is applied to the aggregated data to decompose the daily aggregated sales volume of a product into sub-sales for several topics, by allocating each unit sale ("word" in text analysis) in a day ("document") to a topic based on joint-purchase information. This stage reduces the dimensionality of the data inside topics because the topic distribution is nonuniform and product sales are mostly allocated to a small number of topics. Next, the market response regression model for each topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis of the original high-dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible intervals of the parameters' posterior density. Empirical results show that (i) our model yields managerial implications from topic-wise market responses according to the particular context, and (ii) it outperforms conventional category regressions in both in-sample and out-of-sample forecasts.

17.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility are compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.

18.
The issues of non-stationarity and long memory of real interest rates are examined here. Autoregressive models allowing short-term mean reversion are compared with fractional integration models in terms of their ability to explain the behaviour of the data and to forecast out-of-sample. The data are weekly observations of 3-month Eurodeposit rates for 10 countries, adjusted for inflation, over 14 years. Following Brenner, Harjes and Kroner, the volatility of these rates is shown both to exhibit GARCH effects and to depend on the level of interest rates. Although relatively little support is found for the hypothesis of mean reversion, evidence of long memory in interest rate changes is found for seven countries. The year-ahead out-of-sample forecasting performance of the fractionally integrated models was significantly better than a no-change forecast. Copyright © 2003 John Wiley & Sons, Ltd.
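Long memory of the kind reported here is typically captured with the fractional difference operator (1 − L)^d, whose binomial-expansion weights follow a simple recursion and, for fractional d, decay hyperbolically instead of cutting off. A minimal sketch of the weight recursion:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of the (1 - L)**d expansion.

    Recursion: w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    """
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w
```

At d = 1 the weights reduce to ordinary first differencing (1, −1, 0, …); at d = 0.4 they shrink slowly toward zero, which is the long-memory signature these models exploit.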

19.
Motivated by the importance of coffee to Americans and the significance of the coffee subsector to the US economy, we pursue three notable innovations. First, we augment the traditional Phillips curve model with the coffee price as a predictor, and show that the resulting model outperforms the traditional variant in both in-sample and out-of-sample predictability of US inflation. Second, we demonstrate the need to account for the inherent statistical features of predictors such as persistence, endogeneity, and conditional heteroskedasticity effects when dealing with US inflation. Consequently, we offer robust illustrations to show that the choice of estimator matters for improved US inflation forecasts. Third, the proposed augmented Phillips curve also outperforms time series models such as autoregressive integrated moving average and the fractionally integrated version for both in-sample and out-of-sample forecasts. Our results show that augmenting the traditional Phillips curve with the urban coffee price will produce better forecast results for US inflation only when the statistical effects are captured in the estimation process. Our results are robust to alternative measures of inflation, different data frequencies, higher order moments, multiple data samples and multiple forecast horizons.
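The kind of augmentation described can be sketched as an OLS Phillips-curve regression with lagged inflation, a real-activity term, and an extra commodity-price regressor. Everything below is illustrative: the variable names and synthetic data are assumptions, and the paper's estimators additionally correct for persistence, endogeneity and conditional heteroskedasticity, which plain OLS does not:

```python
import numpy as np

def fit_augmented_pc(inflation, activity, coffee):
    """OLS of pi_t on a constant, pi_{t-1}, activity_t and coffee_t."""
    y = inflation[1:]
    X = np.column_stack([
        np.ones(len(y)),
        inflation[:-1],   # lagged inflation (persistence term)
        activity[1:],     # real-activity term of the Phillips curve
        coffee[1:],       # augmenting commodity-price predictor
    ])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b

# noise-free synthetic data: the fit should recover the known coefficients
rng = np.random.default_rng(3)
n = 200
act = rng.normal(size=n)
cof = rng.normal(size=n)
pi = np.zeros(n)
for t in range(1, n):
    pi[t] = 1.0 + 0.5 * pi[t - 1] + 0.3 * act[t] + 0.2 * cof[t]
b = fit_augmented_pc(pi, act, cof)
```

The point of the augmentation test is whether the coefficient on the commodity price is significant and whether dropping it worsens out-of-sample forecasts.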

20.
This paper considers the problem of forecasting high-dimensional time series. It employs a robust clustering approach to perform classification of the component series. Each series within a cluster is assumed to follow the same model and the data are then pooled for estimation. The classification is model-based and robust to outlier contamination. The robustness is achieved by using the intrinsic mode functions of the Hilbert–Huang transform at lower frequencies. These functions are found to be robust to outlier contamination. The paper also compares the out-of-sample forecast performance of the proposed method with several methods available in the literature. The other forecasting methods considered include vector autoregressive models with and without LASSO, group LASSO, principal component regression, and partial least squares. The proposed method is found to perform well in out-of-sample forecasting of the monthly unemployment rates of 50 US states. Copyright © 2013 John Wiley & Sons, Ltd.
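Principal component regression, one of the benchmarks listed, compresses the high-dimensional predictor set into a few factors before regressing on them. A numpy-only sketch (the paper's implementation details may differ):

```python
import numpy as np

def pcr_fit(X, y, k):
    """Regress y on the first k principal components of X.

    Returns the regression coefficients (intercept first) and fitted values.
    """
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are PC directions
    F = Xc @ Vt[:k].T                                  # factor scores
    Z = np.column_stack([np.ones(len(F)), F])
    b, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return b, Z @ b

# synthetic check: a target that depends only on the first principal component
rng = np.random.default_rng(7)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
y = 2.0 + 3.0 * (Xc @ Vt[0])
b, fitted = pcr_fit(X, y, k=1)
```

Because the factor scores are mean-zero, the intercept equals the sample mean of y, and with k = 1 the sketch recovers the target exactly in this constructed example.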


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号