Similar Literature
20 similar documents retrieved.
1.
The difficulty of modelling inflation and the importance of uncovering the underlying data‐generating process of inflation are reflected in an extensive literature on inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies in forecasting US inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric least absolute shrinkage and selection operator (LASSO) and the machine‐learning support vector regression (SVR) method. The SVR has never been used before in inflation forecasting with the term spread as a regressor. In doing so, we use a long monthly dataset spanning the period 1871:1–2015:3 that covers the entire history of inflation in the US economy. For comparison purposes we also use ordinary least squares regression models as a benchmark. In order to evaluate the contribution of the term spread to inflation forecasting in different time periods, we measure the out‐of‐sample forecasting performance of all models using rolling window regressions. Across various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the estimation method. Thus we conclude that the term spread models are not more accurate than autoregressive models in inflation forecasting. Copyright © 2016 John Wiley & Sons, Ltd.
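A minimal sketch of the kind of rolling-window, out-of-sample comparison described above, using scikit-learn's OLS, LASSO and SVR estimators. The inflation and term-spread series are simulated stand-ins, and the window length, horizon and tuning parameters are illustrative assumptions rather than the paper's settings.

```python
# Illustrative rolling-window comparison of OLS, LASSO and SVR forecasts.
# `infl` and `spread` are simulated stand-ins for inflation and the term spread.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(0)
T = 400
spread = rng.normal(size=T)
infl = 0.5 * np.roll(spread, 1) + rng.normal(scale=0.5, size=T)

window, h = 120, 1                      # estimation window and forecast horizon (assumed)
models = {"ols": LinearRegression(), "lasso": Lasso(alpha=0.1), "svr": SVR(C=1.0)}
errors = {name: [] for name in models}

for t in range(window, T - h):
    X_tr = spread[t - window:t].reshape(-1, 1)   # structural regressor: term spread
    y_tr = infl[t - window + h:t + h]            # inflation led by h periods
    X_te = spread[t:t + 1].reshape(1, -1)
    y_te = infl[t + h]
    for name, m in models.items():
        m.fit(X_tr, y_tr)
        errors[name].append((m.predict(X_te)[0] - y_te) ** 2)

for name, e in errors.items():
    print(name, "RMSFE:", np.sqrt(np.mean(e)))
```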

2.
This paper considers the problem of forecasting high‐dimensional time series. It employs a robust clustering approach to perform classification of the component series. Each series within a cluster is assumed to follow the same model and the data are then pooled for estimation. The classification is model‐based and robust to outlier contamination. The robustness is achieved by using the intrinsic mode functions of the Hilbert–Huang transform at lower frequencies. These functions are found to be robust to outlier contamination. The paper also compares out‐of‐sample forecast performance of the proposed method with several methods available in the literature. The other forecasting methods considered include vector autoregressive models with/without LASSO, group LASSO, principal component regression, and partial least squares. The proposed method is found to perform well in out‐of‐sample forecasting of the monthly unemployment rates of 50 US states. Copyright © 2013 John Wiley & Sons, Ltd.

3.
Successful market timing strategies depend on superior forecasting ability. We use a sentiment index model, a kitchen sink logistic regression model, and a machine learning model (least absolute shrinkage and selection operator, LASSO) to forecast 1‐month‐ahead S&P 500 Index returns. In order to determine how successful each strategy is at forecasting the market direction, a “beta optimization” strategy is implemented. We find that the LASSO model outperforms the other models with consistently higher annual returns and lower monthly drawdowns.
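As a hedged illustration of the LASSO-style direction forecast mentioned above, the sketch below fits an L1-penalised logistic regression to simulated predictors and scores its out-of-sample hit rate; the data, penalty strength and train/test split are all hypothetical, not the paper's.

```python
# L1-penalised ("LASSO") logistic regression for next-month direction forecasting.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 300, 25                      # months, candidate predictors (assumed)
X = rng.normal(size=(n, p))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)  # up/down indicator

train, test = slice(0, 240), slice(240, n)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(X[train], y[train])

print("selected predictors:", np.flatnonzero(clf.coef_[0]))
print("out-of-sample hit rate:", (clf.predict(X[test]) == y[test]).mean())
```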

4.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models, in terms of forecasting US gross national product (GNP) growth at different forecasting horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period 1919:Q2–2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures such as the volatility index and geopolitical risk in predicting US recessions. While the Markov switching time‐varying parameter vector autoregressive model yields the lowest values for the root mean squared error in most cases, we observe relatively low values for the log predictive density score when using the Bayesian vector autoregressive model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.

5.
Dynamic model averaging (DMA) is used extensively for the purpose of economic forecasting. This study extends the framework of DMA by introducing adaptive learning from model space. In the conventional DMA framework all models are estimated independently and hence the information in the other models is left unexploited. In order to exploit this information in the estimation of the individual time‐varying parameter models, this paper proposes not only to average over the forecasts but also to dynamically average over the time‐varying parameters. This is done by approximating the mixture of individual posteriors with a single posterior, which is then used in the upcoming period as the prior for each of the individual models. The relevance of this extension is illustrated in three empirical examples: forecasting US inflation, US consumption expenditures, and the returns of five major US exchange rates. In all applications adaptive learning from model space delivers improvements in out‐of‐sample forecasting performance.
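For context, the sketch below implements the standard DMA model-probability recursion with a forgetting factor; the paper's adaptive-learning extension, which additionally pools the individual posteriors into a common prior, is not reproduced here, and the predictive densities used are stylised numbers.

```python
# Standard DMA weight recursion with forgetting factor alpha (toy illustration).
import numpy as np

def dma_weights(pred_dens, alpha=0.99):
    """pred_dens: (T, K) predictive densities p_k(y_t | information up to t-1)."""
    T, K = pred_dens.shape
    w = np.full(K, 1.0 / K)                 # initial model probabilities
    path = np.zeros((T, K))
    for t in range(T):
        w_pred = w ** alpha
        w_pred /= w_pred.sum()              # forgetting-factor prediction step
        w = w_pred * pred_dens[t]
        w /= w.sum()                        # Bayesian update with the new observation
        path[t] = w
    return path

# toy example: model 2 gradually becomes the better forecaster
dens = np.column_stack([np.linspace(0.4, 0.1, 200), np.linspace(0.1, 0.4, 200)])
print(dma_weights(dens)[-1])
```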

6.
In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey, and we carry out a forecasting horse race in which predictions from various different models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine in our experiments includes both simple benchmark linear econometric models as well as dynamic factor models that are estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage and selection operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises when predicting GDP growth in emerging market economies.

7.
This paper proposes a mixed‐frequency error correction model for possibly cointegrated non‐stationary time series sampled at different frequencies. We highlight the impact, in terms of model specification, of the choice of the particular high‐frequency explanatory variable to be included in the cointegrating relationship, which we call a dynamic mixed‐frequency cointegrating relationship. The forecasting performance of aggregated models and several mixed‐frequency regressions is compared in a set of Monte Carlo experiments. In particular, we look at both the unrestricted mixed‐frequency model and at a more parsimonious MIDAS regression. Whereas the existing literature has only investigated the potential improvements of the MIDAS framework for stationary time series, our study emphasizes the need to include the relevant cointegrating vectors in the non‐stationary case. Furthermore, it is illustrated that the choice of dynamic mixed‐frequency cointegrating relationship does not matter as long as the short‐run dynamics are adapted accordingly. Finally, the unrestricted model is shown to suffer from parameter proliferation for samples of relatively small size, whereas MIDAS forecasts are robust to over‐parameterization. We illustrate our results for the US inflation rate. Copyright © 2014 John Wiley & Sons, Ltd.
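To make the MIDAS side of the comparison concrete, here is a minimal sketch of a MIDAS regression with exponential Almon lag weights estimated by nonlinear least squares; the data, lag length and starting values are assumptions for illustration, and the error-correction terms discussed in the paper are omitted.

```python
# MIDAS sketch: low-frequency target on exponential-Almon-weighted high-frequency lags.
import numpy as np
from scipy.optimize import minimize

def exp_almon(theta1, theta2, n_lags):
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()                            # normalised lag weights

def midas_sse(params, y, X_hf):
    beta0, beta1, th1, th2 = params
    fitted = beta0 + beta1 * X_hf @ exp_almon(th1, th2, X_hf.shape[1])
    return np.sum((y - fitted) ** 2)

rng = np.random.default_rng(2)
T, n_lags = 120, 12                               # e.g. quarterly y on monthly lags (assumed)
X_hf = rng.normal(size=(T, n_lags))
y = 1.0 + X_hf @ exp_almon(0.1, -0.05, n_lags) + 0.1 * rng.normal(size=T)

res = minimize(midas_sse, x0=[0, 1, 0.0, -0.01], args=(y, X_hf), method="Nelder-Mead")
print("estimated (beta0, beta1, theta1, theta2):", res.x.round(3))
```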

8.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This property allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
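The sketch below illustrates the core partial-least-squares idea, extracting a small number of components from a collinear predictor block, using ordinary PLS from scikit-learn as a stand-in for the paper's functional (FPLS) variant; the data are simulated and the number of components is an arbitrary choice.

```python
# PLS regression on a deliberately collinear predictor block (stand-in for FPLS).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
n, p = 250, 40                               # observations, correlated predictors (assumed)
common = rng.normal(size=(n, 1))
X = common + 0.3 * rng.normal(size=(n, p))   # highly collinear design
y = common[:, 0] + 0.2 * rng.normal(size=n)  # target driven by the common factor

pls = PLSRegression(n_components=2)
pls.fit(X[:200], y[:200])
pred = np.asarray(pls.predict(X[200:])).ravel()
print("out-of-sample correlation:", np.corrcoef(pred, y[200:])[0, 1].round(3))
```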

9.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially at growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

10.
Do long‐run equilibrium relations suggested by economic theory help to improve the forecasting performance of a cointegrated vector error correction model (VECM)? In this paper we try to answer this question in the context of a two‐country model developed for the Canadian and US economies. We compare the forecasting performance of the exactly identified cointegrated VECMs to the performance of the over‐identified VECMs with the long‐run theory restrictions imposed. We allow for model uncertainty and conduct this comparison for every possible combination of the cointegration ranks of the Canadian and US models. We show that the over‐identified structural cointegrated models generally outperform the exactly identified models in forecasting Canadian macroeconomic variables. We also show that the pooled forecasts generated from the over‐identified models beat most of the individual exactly identified and over‐identified models as well as the VARs in levels and in differences. Copyright © 2011 John Wiley & Sons, Ltd.
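As a rough illustration of forecasting from a cointegrated VECM (without the paper's over-identifying long-run restrictions or its Canada–US dataset), the following sketch fits a reduced-rank VECM to two simulated cointegrated series with statsmodels and produces multi-step forecasts; the lag order, rank and deterministic-term choice are assumptions.

```python
# Fit a cointegrated VECM to simulated data and forecast a few steps ahead.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(4)
T = 300
common_trend = np.cumsum(rng.normal(size=T))             # shared stochastic trend
data = np.column_stack([common_trend + rng.normal(size=T),
                        0.8 * common_trend + rng.normal(size=T)])

model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()
print(res.predict(steps=4))                               # 4-step-ahead forecasts
```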

11.
Several studies have tested for long‐range dependence in macroeconomic and financial time series but very few have assessed the usefulness of long‐memory models as forecast‐generating mechanisms. This study tests for fractional differencing in the US monetary indices (simple sum and divisia) and compares the out‐of‐sample fractional forecasts to benchmark forecasts. The long‐memory parameter is estimated using Robinson's Gaussian semi‐parametric and multivariate log‐periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out‐of‐sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the divisia series but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd.
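For reference, here is a minimal log-periodogram (GPH-type) estimator of the fractional differencing parameter d, closely related to the log-periodogram method mentioned above; the bandwidth rule and the simulated input series are illustrative assumptions, not the paper's choices.

```python
# Log-periodogram estimator of the fractional differencing parameter d.
import numpy as np

def gph_estimate(x, power=0.5):
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** power)                            # number of low frequencies used (assumed rule)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope                                  # d is minus the regression slope

# white noise has d = 0, so the estimate should be near zero
print(gph_estimate(np.random.default_rng(5).normal(size=2048)))
```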

12.
Value‐at‐risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram‐Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms other estimates that are generated by both constant and time‐varying higher‐moments models. Copyright © 2009 John Wiley & Sons, Ltd.
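The sketch below shows how a VaR quantile can be obtained from a Gram-Charlier expanded density with given skewness and excess kurtosis; the time-varying moment dynamics of the paper are not modelled, and the moment values, confidence level and scale used here are assumptions.

```python
# VaR from a Gram-Charlier expansion of the standard normal density.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def gc_cdf(z, skew, exkurt):
    # CDF implied by f(z) = phi(z) * [1 + skew/6 * He3(z) + exkurt/24 * He4(z)]
    he2 = z ** 2 - 1
    he3 = z ** 3 - 3 * z
    return norm.cdf(z) - norm.pdf(z) * (skew / 6 * he2 + exkurt / 24 * he3)

def gc_var(alpha, mu, sigma, skew, exkurt):
    z_alpha = brentq(lambda z: gc_cdf(z, skew, exkurt) - alpha, -10, 10)
    return -(mu + sigma * z_alpha)              # VaR reported as a positive loss

# 1% VaR with negative skew and fat tails vs the Gaussian benchmark
print(gc_var(0.01, mu=0.0, sigma=0.02, skew=-0.5, exkurt=1.5))
print(gc_var(0.01, mu=0.0, sigma=0.02, skew=0.0, exkurt=0.0))
```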

13.
Previous research found that the US business cycle leads the European one by a few quarters, and can therefore be useful in predicting euro area gross domestic product (GDP). In this paper we investigate whether additional predictive power can be gained by adding selected financial variables belonging to either the USA or the euro area. We use vector autoregressions (VARs) that include the US and euro area GDPs as well as growth in the Rest of the World and selected combinations of financial variables. Out‐of‐sample root mean square forecast errors (RMSEs) show that adding financial variables produces a slightly smaller error in forecasting US economic activity. This weak macro‐financial linkage is even weaker in the euro area, where financial indicators do not improve short‐ and medium‐term GDP forecasts even when their timely availability relative to GDP is exploited. It can be conjectured that neither US nor European financial variables help predict euro area GDP because US GDP has already embodied this information. However, we show that the finding that financial variables have no predictive power for future activity in the euro area relates to the unconditional nature of the RMSE metric. When forecasting ability is assessed as if in real time (i.e. conditionally on the information available at the time when forecasts are made), we find that models using financial variables would have been preferred in several episodes and in particular between 1999 and 2002. Copyright © 2011 John Wiley & Sons, Ltd.

14.
In this paper we suggest a framework to assess the degree of reliability of provisional estimates as forecasts of final data, and we re‐examine the question of the most appropriate way in which available data should be used for ex ante forecasting in the presence of a data‐revision process. Various desirable properties for provisional data are suggested, as well as procedures for testing them, taking into account the possible non‐stationarity of economic variables. For illustration, the methodology is applied to assess the quality of the US M1 data production process and to derive a conditional model whose performance in forecasting is then tested against other alternatives based on simple transformations of provisional data or of past final data. Copyright © 1999 John Wiley & Sons, Ltd.

15.
In this paper we consider a novel procedure for forecasting US zero‐coupon bond yields for a continuum of maturities by using the methodology of nonparametric functional data analysis (NP‐FDA). We interpret the US yields as curves since the term structure of interest rates defines a relation between the yield of a bond and its maturity. Within the NP‐FDA approach, each curve is viewed as a functional random variable and the dynamics present in the sample are modeled without imposing any parametric structure. In order to evaluate the forecast performance of the proposed estimator, we consider forecast horizons h = 1, 3, 6, 12… months, and the results are compared with widely known benchmark models. Our NP‐FDA estimates deliver predictive performance superior to that of the competitors in many of the situations considered, especially for short‐term maturities. Copyright © 2016 John Wiley & Sons, Ltd.

16.
We evaluate forecasting models of US business fixed investment spending growth over the recent 1995:1–2004:2 out‐of‐sample period. The forecasting models are based on the conventional Accelerator, Neoclassical, Average Q, and Cash‐Flow models of investment spending, as well as real stock prices and excess stock return predictors. The real stock price model typically generates the most accurate forecasts, and forecast‐encompassing tests indicate that this model contains most of the information useful for forecasting investment spending growth relative to the other models at longer horizons. In a robustness check, we also evaluate the forecasting performance of the models over two alternative out‐of‐sample periods: 1975:1–1984:4 and 1985:1–1994:4. A number of different models produce the most accurate forecasts over these alternative out‐of‐sample periods, indicating that while the real stock price model appears particularly useful for forecasting the recent behavior of investment spending growth, it may not continue to perform well in future periods. Copyright © 2007 John Wiley & Sons, Ltd.

17.
In this paper, we introduce functional coefficients into heterogeneous autoregressive realized volatility (HAR‐RV) models to allow the parameters to change over time. A nonparametric statistic is developed to perform a specification test. The simulation results show that our test displays reliable size and good power. Using the proposed test, we find significant time variation in the coefficients of the HAR‐RV models. Time‐varying parameter (TVP) models can significantly outperform their constant‐coefficient counterparts for longer forecasting horizons. The predictive ability of TVP models can be improved by accounting for VIX information. Copyright © 2016 John Wiley & Sons, Ltd.
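For orientation, the constant-coefficient HAR-RV baseline that the paper extends regresses RV_t on the previous day's RV and its weekly and monthly averages; the sketch below estimates it by OLS on a simulated RV series (the functional-coefficient and TVP extensions are not reproduced).

```python
# Constant-coefficient HAR-RV estimated by OLS on a simulated volatility series.
import numpy as np

rng = np.random.default_rng(6)
rv = np.abs(rng.normal(size=1500)) + 0.1          # stand-in for realized volatility

def har_design(rv):
    d = rv[21:-1]                                                    # RV_{t-1}
    w = np.array([rv[t - 5:t].mean() for t in range(22, len(rv))])   # weekly average
    m = np.array([rv[t - 22:t].mean() for t in range(22, len(rv))])  # monthly average
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]
    return X, y

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR-RV coefficients (const, daily, weekly, monthly):", beta.round(3))
```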

18.
This paper addresses the issue of forecasting the term structure. We provide a unified state‐space modeling framework that encompasses different existing discrete‐time yield curve models. Within such a framework we analyze the impact of two modeling choices, namely the imposition of no‐arbitrage restrictions and the size of the information set used to extract factors, on forecasting performance. Using US yield curve data, we find that both no‐arbitrage and large information sets help in forecasting but no model uniformly dominates the other. No‐arbitrage models are more useful at shorter horizons for shorter maturities. Large information sets are more useful at longer horizons and longer maturities. We also find evidence for a significant feedback from yield curve models to macroeconomic variables that could be exploited for macroeconomic forecasting. Copyright © 2010 John Wiley & Sons, Ltd.

19.
In multivariate volatility prediction, identifying the optimal forecasting model is not always a feasible task. This is mainly due to the curse of dimensionality typically affecting multivariate volatility models. In practice only a subset of the potentially available models can be effectively estimated, after imposing severe constraints on the dynamic structure of the volatility process. It follows that in most applications the working forecasting model can be severely misspecified. This situation leaves scope for the application of forecast combination strategies as a tool for improving the predictive accuracy. The aim of the paper is to propose some alternative combination strategies and compare their performances in forecasting high‐dimensional multivariate conditional covariance matrices for a portfolio of US stock returns. In particular, we will consider the combination of volatility predictions generated by multivariate GARCH models, based on daily returns, and dynamic models for realized covariance matrices, built from intra‐daily returns. Copyright © 2015 John Wiley & Sons, Ltd.

20.
We examine different approaches to forecasting monthly US employment growth in the presence of many potentially relevant predictors. We first generate simulated out‐of‐sample forecasts of US employment growth at multiple horizons using individual autoregressive distributed lag (ARDL) models based on 30 potential predictors. We then consider different methods from the extant literature for combining the forecasts generated by the individual ARDL models. Using the mean square forecast error (MSFE) metric, we investigate the performance of the forecast combining methods over the last decade, as well as five periods centered on the last five US recessions. Overall, our results show that a number of combining methods outperform a benchmark autoregressive model. Combining methods based on principal components exhibit the best overall performance, while methods based on simple averaging, clusters, and discount MSFE also perform well. On a cautionary note, some combining methods, such as those based on ordinary least squares, often perform quite poorly. Copyright © 2008 John Wiley & Sons, Ltd.
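One of the simpler combining schemes mentioned above, discount-MSFE weighting, can be sketched in a few lines: each model receives a weight inversely proportional to its discounted sum of squared past forecast errors. The error matrix and discount factor below are illustrative assumptions, not the paper's ARDL forecast errors.

```python
# Discount-MSFE forecast combination weights on a simulated error history.
import numpy as np

def discount_msfe_weights(errors, delta=0.9):
    """errors: (T, K) past forecast errors of K models; returns K combining weights."""
    T, K = errors.shape
    discounts = delta ** np.arange(T - 1, -1, -1)       # heavier weight on recent errors
    dmsfe = discounts @ (errors ** 2)                   # discounted MSFE per model
    w = 1.0 / dmsfe
    return w / w.sum()

rng = np.random.default_rng(7)
errs = rng.normal(scale=[0.5, 1.0, 2.0], size=(60, 3))  # model 1 is the most accurate
print(discount_msfe_weights(errs).round(3))
```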
