Similar Documents
 20 similar documents found.
1.
This study extends the affine dynamic Nelson–Siegel model to include macroeconomic variables. Five macroeconomic variables are included in the affine term structure model, derived under the arbitrage‐free restriction, to evaluate their role in the in‐sample fitting and out‐of‐sample forecasting of the term structure. We show that the relationship between the macroeconomic factors and yield data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Moreover, the macroeconomic factors significantly improve the forecast performance of the model. The affine Nelson–Siegel type models outperform the benchmark simple time series forecast models. The out‐of‐sample predictability of the affine Nelson–Siegel model with macroeconomic factors for the short horizon is superior to the simple affine yield model for all maturities, and for longer horizons the former remains comparable to the latter, particularly for medium and long maturities. Copyright © 2015 John Wiley & Sons, Ltd.
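For orientation, the yield equation of the dynamic Nelson–Siegel model that this and several of the following abstracts build on can be written in the usual Diebold–Li notation (the symbols below are standard and not taken from the abstract itself):

    y_t(\tau) = L_t + S_t \, \frac{1 - e^{-\lambda\tau}}{\lambda\tau}
              + C_t \left( \frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau} \right) + \varepsilon_t(\tau)

Here L_t, S_t and C_t are the level, slope and curvature factors, \tau is maturity and \lambda is the decay parameter; the macro-augmented and arbitrage-free variants extend the factor dynamics and impose cross-equation restrictions on this specification.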

2.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rates on the Japanese government bond market. The Nelson–Siegel type models in a state‐space framework considerably outperform the benchmark simple time series forecast models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former remains comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common phenomenon of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.

3.
This paper compares the in‐sample fitting and the out‐of‐sample forecasting performances of four distinct Nelson–Siegel class models: Nelson–Siegel, Bliss, Svensson, and a five‐factor model we propose in order to enhance the fitting flexibility. The introduction of the fifth factor resulted in a superior fit to the data. For the forecasting exercise the paper contrasts the performances of the term structure models in association with the following econometric methods: quantile autoregression evaluated at the median, VAR, AR, and a random walk. As a general pattern, the quantile procedure delivered the best results for longer forecasting horizons. Copyright © 2011 John Wiley & Sons, Ltd.
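As a point of reference for the model hierarchy compared here, the Svensson extension adds a second curvature term with its own decay parameter to the Nelson–Siegel curve; in common notation (not taken from the paper):

    y(\tau) = \beta_0
            + \beta_1 \frac{1 - e^{-\lambda_1 \tau}}{\lambda_1 \tau}
            + \beta_2 \left( \frac{1 - e^{-\lambda_1 \tau}}{\lambda_1 \tau} - e^{-\lambda_1 \tau} \right)
            + \beta_3 \left( \frac{1 - e^{-\lambda_2 \tau}}{\lambda_2 \tau} - e^{-\lambda_2 \tau} \right)

The Bliss variant instead allows the slope and curvature loadings to use separate decay parameters, and the five-factor model proposed in the paper adds further flexibility along these lines.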

4.
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short‐term interest rates from October 2008. Out‐of‐sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson–Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium‐ to longer‐term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near‐zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson–Siegel models. Copyright © 2014 John Wiley & Sons, Ltd.

5.
In this paper we compare the in‐sample fit and out‐of‐sample forecasting performance of no‐arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re‐estimation and out‐of‐sample 1‐, 6‐ and 12‐month‐ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in‐sample fit, while the best out‐of‐sample performance is generated by three‐factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.

6.
This paper investigates the sensitivity of out‐of‐sample forecasting performance across different values of the decay parameter λ in the dynamic Nelson–Siegel three‐factor AR(1) model. First, we find that the ad hoc selection of λ is not optimal. Second, we find a substantial difference in factor dynamics between investment‐grade and speculative‐grade corporate bonds from 1994:12 to 2006:4. Third, we suggest that the three‐factor model is sufficient to explain the main variations of corporate yield changes. Finally, the parsimonious Nelson–Siegel three‐factor AR(1) model remains competitive in the out‐of‐sample forecasting of corporate yields. Copyright © 2008 John Wiley & Sons, Ltd.
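To illustrate what a sensitivity check over the decay parameter involves, the sketch below fits the three Nelson–Siegel factors by OLS for each λ on a grid and records the in-sample residual sum of squares. The data, grid and function names are hypothetical and not taken from the paper.

    import numpy as np

    def ns_loadings(tau, lam):
        """Nelson-Siegel loadings for level, slope and curvature at maturities tau."""
        x = lam * tau
        slope = (1 - np.exp(-x)) / x
        curve = slope - np.exp(-x)
        return np.column_stack([np.ones_like(tau), slope, curve])

    def rss_for_lambda(yields, tau, lam):
        """Sum of squared OLS residuals across all dates for a given decay parameter."""
        X = ns_loadings(tau, lam)                            # maturities x 3
        beta, *_ = np.linalg.lstsq(X, yields.T, rcond=None)  # 3 x dates factor estimates
        resid = yields.T - X @ beta
        return np.sum(resid ** 2)

    # Hypothetical example: 120 months of yields at 6 maturities (in years).
    rng = np.random.default_rng(0)
    tau = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 20.0])
    yields = 3 + rng.normal(scale=0.5, size=(120, tau.size))

    grid = np.linspace(0.1, 2.0, 20)
    best_lam = min(grid, key=lambda lam: rss_for_lambda(yields, tau, lam))
    print("lambda minimising in-sample RSS:", best_lam)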

7.
We explore the benefits of forecast combinations based on forecast‐encompassing tests compared to simple averages and to Bates–Granger combinations. We also consider a new combination algorithm that fuses test‐based and Bates–Granger weighting. For a realistic simulation design, we generate multivariate time series samples from a macroeconomic DSGE‐VAR (dynamic stochastic general equilibrium–vector autoregressive) model. Results generally support Bates–Granger over uniform weighting, whereas benefits of test‐based weights depend on the sample size and on the prediction horizon. In a corresponding application to real‐world data, simple averaging performs best. Uniform averages may be the weighting scheme that is most robust to empirically observed irregularities. Copyright © 2016 John Wiley & Sons, Ltd.
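For reference, a minimal sketch of the (diagonal) Bates–Granger weighting scheme mentioned above, which sets each model's weight proportional to the inverse of its historical mean squared forecast error; variable names and numbers are illustrative only.

    import numpy as np

    def bates_granger_weights(errors):
        """errors: T x K matrix of past forecast errors for K models.
        Returns inverse-MSE weights that sum to one (error covariances ignored)."""
        mse = np.mean(errors ** 2, axis=0)
        inv = 1.0 / mse
        return inv / inv.sum()

    def combine(forecasts, weights):
        """Weighted combination of K individual forecasts."""
        return forecasts @ weights

    # Illustrative example with three competing models.
    rng = np.random.default_rng(1)
    past_errors = rng.normal(scale=[0.5, 1.0, 2.0], size=(100, 3))
    w = bates_granger_weights(past_errors)
    print("weights:", w)                    # most weight on the lowest-MSE model
    print("combined forecast:", combine(np.array([2.1, 1.9, 2.4]), w))

Uniform weighting simply replaces w with np.full(3, 1/3), which is the "simple average" benchmark the abstract refers to.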

8.
This article discusses the use of Bayesian methods for inference and forecasting in dynamic term structure models through integrated nested Laplace approximations (INLA). This method of analytical approximation allows accurate inferences for latent factors, parameters and forecasts in dynamic models at reduced computational cost. In the estimation of dynamic term structure models it also avoids some simplifications in the inference procedures, such as the inefficient two‐step ordinary least squares (OLS) estimation. The results obtained in the estimation of the dynamic Nelson–Siegel model indicate that this method produces more accurate out‐of‐sample forecasts than two‐stage estimation by OLS and than Bayesian estimation using Markov chain Monte Carlo (MCMC). These analytical approximations also allow efficient calculation of model selection measures such as generalized cross‐validation and marginal likelihood, which may be computationally prohibitive in MCMC estimations. Copyright © 2014 John Wiley & Sons, Ltd.

9.
In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (Journal of Econometrics 2008; 146: 304–317) find an improvement in selecting indicators according to the forecast variable prior to factor estimation (targeted predictors). In particular, they propose using the LARS‐EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the pre‐selection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. We estimate on the reduced dataset the dynamic factor model of Giannone et al. (Journal of Monetary Economics 2008; 55: 665–676) and Doz et al. (Journal of Econometrics 2011; 164: 188–205), which is particularly suitable for the very short‐term forecast of GDP. A pseudo real‐time evaluation on French data shows the potential of our approach. Copyright © 2013 John Wiley & Sons, Ltd.
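The targeted-predictor idea can be sketched as: keep only the indicators with a nonzero elastic-net coefficient for the target variable, then extract factors from the reduced panel. The snippet below is a stylized sketch using scikit-learn's elastic net and PCA, not the LARS-EN implementation, real-time dataset or dynamic factor model used in the paper; all data and names are hypothetical.

    import numpy as np
    from sklearn.linear_model import ElasticNetCV
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 50))                             # hypothetical panel of monthly indicators
    y = X[:, :5] @ rng.normal(size=5) + rng.normal(size=200)   # target (e.g. GDP growth)

    # Step 1: keep only indicators with a nonzero elastic-net coefficient ("targeted predictors").
    enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
    selected = np.flatnonzero(enet.coef_)

    # Step 2: extract common factors from the reduced panel.
    factors = PCA(n_components=2).fit_transform(X[:, selected])

    # Step 3: the factors would then feed a (dynamic) factor forecasting model.
    print("indicators kept:", selected.size, "out of", X.shape[1])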

10.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting and nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.
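For readers unfamiliar with MIDAS regressions, the core mechanism is a parsimonious weighting of high-frequency lags when they enter a low-frequency equation. A minimal sketch of the exponential Almon weighting commonly used for this purpose is shown below; it illustrates the general mechanism, not the exact factor-MIDAS specification of the paper, and the parameter values are made up.

    import numpy as np

    def exp_almon_weights(n_lags, theta1, theta2):
        """Exponential Almon lag weights used in MIDAS regressions (normalised to sum to one)."""
        k = np.arange(1, n_lags + 1)
        w = np.exp(theta1 * k + theta2 * k ** 2)
        return w / w.sum()

    # Aggregate, say, 12 monthly observations of a factor into one quarterly regressor.
    w = exp_almon_weights(12, 0.1, -0.05)
    monthly_factor = np.random.default_rng(3).normal(size=12)
    quarterly_regressor = w @ monthly_factor
    print(quarterly_regressor)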

11.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.

12.
We compare linear autoregressive (AR) models and self‐exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two‐regime SETAR process is used as the data‐generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non‐linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
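A two-regime SETAR(1) data-generating process of the kind used in such Monte Carlo designs can be simulated in a few lines; the sketch below uses illustrative parameter values, not those of the paper's simulation design.

    import numpy as np

    def simulate_setar(T, phi_low, phi_high, threshold, sigma=1.0, seed=4):
        """Simulate a two-regime SETAR(1) process: the AR coefficient depends on
        whether the previous observation is below or above the threshold."""
        rng = np.random.default_rng(seed)
        y = np.zeros(T)
        for t in range(1, T):
            phi = phi_low if y[t - 1] <= threshold else phi_high
            y[t] = phi * y[t - 1] + rng.normal(scale=sigma)
        return y

    # Illustrative parameter values.
    y = simulate_setar(500, phi_low=0.9, phi_high=0.3, threshold=0.0)
    print(y[:5])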

13.
Four methods of model selection—equally weighted forecasts, Bayesian model‐averaged forecasts, and two models produced by the machine‐learning algorithm boosting—are applied to the problem of predicting business cycle turning points with a set of common macroeconomic variables. The methods address a fundamental problem faced by forecasters: the most useful model is simple but makes use of all relevant indicators. The results indicate that successful models of recession condition on different economic indicators at different forecast horizons. Predictors that describe real economic activity provide the clearest signal of recession at very short horizons. In contrast, signals from housing and financial markets produce the best forecasts at longer forecast horizons. A real‐time forecast experiment explores the predictability of the 2001 and 2007 recessions. Copyright © 2015 John Wiley & Sons, Ltd.

14.
We investigate whether euro area variables can be forecast better based on synthetic time series for the pre‐euro period or by using just German data for the pre‐euro period. Our forecast comparison is based on quarterly data for the period 1970Q1–2003Q4 for 10 macroeconomic variables. The years 2000–2003 are used as the forecasting period. A range of different univariate forecasting methods is applied. Some of them are based on linear autoregressive models, and we also use some nonlinear or time‐varying coefficient models. It turns out that most variables which have a similar level for Germany and the euro area, such as prices, can be better predicted based on German data, while aggregated European data are preferable for forecasting variables which need considerable adjustments in their levels when joining German and European Monetary Union (EMU) data. These results suggest that for variables which have a similar level for Germany and the euro area it may be reasonable to consider the German pre‐EMU data for studying economic problems in the euro area. Copyright © 2008 John Wiley & Sons, Ltd.

15.
Past literature casts doubt on the ability of long‐term macroeconomic forecasts to predict the direction of change. We re‐examine this issue using the Japanese GDP forecast data of 37 institutions, and find that their 16‐month‐ahead forecasts contain valuable information on whether the growth rate accelerates or not. Copyright © 2006 John Wiley & Sons, Ltd.

16.
This paper addresses the issue of forecasting the term structure. We provide a unified state‐space modeling framework that encompasses different existing discrete‐time yield curve models. Within such a framework we analyze the impact of two modeling choices, namely the imposition of no‐arbitrage restrictions and the size of the information set used to extract factors, on forecasting performance. Using US yield curve data, we find that both no‐arbitrage and large information sets help in forecasting, but no model uniformly dominates the others. No‐arbitrage models are more useful at shorter horizons for shorter maturities. Large information sets are more useful at longer horizons and longer maturities. We also find evidence for a significant feedback from yield curve models to macroeconomic variables that could be exploited for macroeconomic forecasting. Copyright © 2010 John Wiley & Sons, Ltd.

17.
Several studies have tested for long‐range dependence in macroeconomic and financial time series, but very few have assessed the usefulness of long‐memory models as forecast‐generating mechanisms. This study tests for fractional differencing in the US monetary indices (simple sum and Divisia) and compares the out‐of‐sample fractional forecasts to benchmark forecasts. The long‐memory parameter is estimated using Robinson's Gaussian semi‐parametric and multivariate log‐periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out‐of‐sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts, but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the Divisia series, but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd.
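Producing a fractional forecast requires filtering the series by (1 - L)^d for a non-integer d. A minimal sketch of the truncated binomial-expansion filter is shown below; the series, the value of d and the truncation are illustrative only and do not reproduce the paper's estimation or forecasting procedure.

    import numpy as np

    def frac_diff_weights(d, n):
        """Binomial expansion weights of (1 - L)**d, truncated after n lags."""
        w = [1.0]
        for k in range(1, n):
            w.append(-w[-1] * (d - k + 1) / k)
        return np.array(w)

    def frac_diff(x, d):
        """Apply the (truncated) fractional difference filter to a series x."""
        w = frac_diff_weights(d, len(x))
        return np.array([w[: t + 1][::-1] @ x[: t + 1] for t in range(len(x))])

    # Example: a series with fractional order between one and two is differenced once,
    # then fractionally differenced by the remaining non-integer amount.
    rng = np.random.default_rng(5)
    x = np.cumsum(np.cumsum(rng.normal(size=300)))   # roughly I(2)-like, for illustration
    dx = np.diff(x)
    z = frac_diff(dx, 0.4)                           # total differencing order of about 1.4
    print(z[:5])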

18.
In this paper, we first extract factors from a monthly dataset of 130 macroeconomic and financial variables. These extracted factors are then used to construct a factor‐augmented qualitative vector autoregressive (FA‐Qual VAR) model to forecast industrial production growth, inflation, the Federal funds rate, and the term spread based on a pseudo out‐of‐sample recursive forecasting exercise over an out‐of‐sample period of 1980:1 to 2014:12, using an in‐sample period of 1960:1 to 1979:12. Short‐, medium‐, and long‐run horizons of 1, 6, 12, and 24 months ahead are considered. The forecast from the FA‐Qual VAR is compared with that of a standard VAR model, a Qual VAR model, and a factor‐augmented VAR (FAVAR). In general, we observe that the FA‐Qual VAR tends to perform significantly better than the VAR, Qual VAR and FAVAR (barring some exceptions relative to the latter). In addition, we find that the Qual VARs are also well equipped to forecast the probability of recessions when compared to probit models.

19.
The specification choices of vector autoregressions (VARs) in forecasting are often not straightforward, as they are complicated by various factors. To deal with model uncertainty and better utilize multiple VARs, this paper adopts the dynamic model averaging/selection (DMA/DMS) algorithm, in which forecasting models are updated and switch over time in a Bayesian manner. In an empirical application to a pool of Bayesian VAR (BVAR) models whose specifications include levels and differences, along with differing lag lengths, we demonstrate that specification‐switching VARs are flexible and powerful forecast tools that yield good performance. In particular, they beat the overall best BVAR in most cases and are comparable to or better than the individual best models (for each combination of variable, forecast horizon, and evaluation metric) for medium‐ and long‐horizon forecasts. We also examine several extensions in which forecast model pools consist of additional individual models in partial differences as well as all level/difference models, and/or time variation in VAR innovations is allowed, and discuss the potential advantages and disadvantages of such specification choices. Copyright © 2016 John Wiley & Sons, Ltd.

20.
We measure the performance of multi‐model inference (MMI) forecasts compared to predictions made from a single model for crude oil prices. We forecast the West Texas Intermediate (WTI) crude oil spot prices using total OECD petroleum inventory levels, surplus production capacity, the Chicago Board Options Exchange Volatility Index and an implementation of a subset autoregression with exogenous variables (SARX). Coefficient and standard error estimates obtained from SARX determined by conditioning on a single ‘best model’ ignore model uncertainty and result in underestimated standard errors and overestimated coefficients. We find that the MMI forecast outperforms a single‐model forecast for both in‐ and out‐of‐sample datasets over a variety of statistical performance measures, and further find that weighting models according to the Bayesian information criterion generally yields superior results both in and out of sample when compared to the Akaike information criterion. Copyright © 2016 John Wiley & Sons, Ltd.
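Multi-model inference of this kind typically weights each candidate model by its information criterion before averaging the forecasts. A minimal sketch of such information-criterion weights is given below; the BIC values and forecasts are hypothetical and not taken from the paper.

    import numpy as np

    def info_criterion_weights(ic_values):
        """Information-criterion weights: exp(-0.5 * delta_IC), normalised to sum to one.
        Works for either AIC or BIC values."""
        ic = np.asarray(ic_values, dtype=float)
        delta = ic - ic.min()
        w = np.exp(-0.5 * delta)
        return w / w.sum()

    # Hypothetical BIC values for four candidate model specifications.
    bic = [152.3, 150.1, 155.8, 151.0]
    weights = info_criterion_weights(bic)
    forecasts = np.array([68.2, 70.5, 66.9, 69.4])   # each model's price forecast
    print("MMI forecast:", weights @ forecasts)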
