Similar documents
20 similar documents found (search time: 55 ms)
1.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rates on the Japanese government bond market. The Nelson–Siegel-type models in a state-space framework considerably outperform benchmark simple time series forecast models such as an AR(1) and a random walk. The yields-macro model incorporating macroeconomic factors leads to a better in-sample fit of the term structure than the yields-only model. The out-of-sample predictability of the former for short-horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former remains comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, a common finding in statistical analyses of previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
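The dynamic Nelson–Siegel framework used here and in several abstracts below represents the yield curve with three factors (level, slope, curvature) whose maturity loadings have a closed form. A minimal sketch in Python, fitting the factors to one cross-section of yields by least squares; the maturities, yields and the decay parameter λ = 0.0609 (the conventional Diebold–Li value for maturities in months) are illustrative, not taken from the paper:

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings (level, slope, curvature) at maturities tau (months)."""
    x = lam * np.asarray(tau, dtype=float)
    slope = (1.0 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(x), slope, slope - np.exp(-x)])

# Fit the three factors to one cross-section of yields by least squares.
maturities = np.array([3.0, 6.0, 12.0, 24.0, 60.0, 120.0])  # illustrative maturities
yields = np.array([0.5, 0.6, 0.8, 1.2, 2.0, 2.6])           # illustrative yields (%)
B = ns_loadings(maturities)                                  # (6, 3) loading matrix
factors, *_ = np.linalg.lstsq(B, yields, rcond=None)         # level, slope, curvature
fitted = B @ factors                                         # fitted yield curve
```

The dynamic versions compared in these papers additionally give the fitted factors time series dynamics (e.g. a VAR), estimated in state-space form; that estimation step is not reproduced here.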

2.
In this paper we compare the in-sample fit and out-of-sample forecasting performance of no-arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re-estimation and out-of-sample 1-, 6- and 12-month-ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in-sample fit, while the best out-of-sample performance is generated by three-factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.

3.
This paper addresses the issue of forecasting the term structure. We provide a unified state-space modeling framework that encompasses different existing discrete-time yield curve models. Within this framework we analyze the impact of two modeling choices, namely the imposition of no-arbitrage restrictions and the size of the information set used to extract factors, on forecasting performance. Using US yield curve data, we find that both no-arbitrage restrictions and large information sets help in forecasting, but no model uniformly dominates the others. No-arbitrage models are more useful at shorter horizons for shorter maturities. Large information sets are more useful at longer horizons and longer maturities. We also find evidence of significant feedback from yield curve models to macroeconomic variables that could be exploited for macroeconomic forecasting. Copyright © 2010 John Wiley & Sons, Ltd.

4.
This study extends the affine dynamic Nelson–Siegel model to include macroeconomic variables. Five macroeconomic variables are included in the affine term structure model, derived under the arbitrage-free restriction, to evaluate their role in the in-sample fitting and out-of-sample forecasting of the term structure. We show that the relationship between the macroeconomic factors and yield data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Moreover, the macroeconomic factors significantly improve the forecast performance of the model. The affine Nelson–Siegel-type models outperform the benchmark simple time series forecast models. The out-of-sample predictability of the affine Nelson–Siegel model with macroeconomic factors for short horizons is superior to the simple affine yield model for all maturities, and for longer horizons the former remains comparable to the latter, particularly for medium and long maturities. Copyright © 2015 John Wiley & Sons, Ltd.

5.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high-dimensional market data. This property allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology by predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, as well as to principal component regression (PCR) and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

6.
We consider the linear time-series model y_t = d_t + u_t (t = 1, ..., n), where d_t is the deterministic trend and u_t the stochastic term, which follows an AR(1) process: u_t = θu_{t-1} + ε_t, with normal innovations ε_t. Various assumptions about the start-up will be made. Our main interest lies in the behaviour of the l-period-ahead forecast y_{n+l} near θ = 1. Unlike other studies of the AR(1) unit root process, we do not ask whether θ = 1 but are concerned with the behaviour of the forecast estimate near and at θ = 1. For this purpose we define the sth-order (s = 1, 2) sensitivity measure λ_l^(s) of the forecast y_{n+l} near θ = 1, which measures the sensitivity of the forecast at the unit root. In this study we consider two deterministic trends: a constant, d_t = μ, and a linear trend, d_t = μ + βt. The forecast will be the best linear unbiased forecast. We show that, when d_t is constant, the number of observations has no effect on forecast sensitivity, and when the deterministic trend is linear, the sensitivity is zero. We also develop a large-sample procedure to measure the forecast sensitivity when we are uncertain whether to include the linear trend. Our analysis suggests that, depending on the initial conditions, it is better to include a linear trend for reduced sensitivity of the medium-term forecast. Copyright © 2001 John Wiley & Sons, Ltd.
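For an AR(1) disturbance around a deterministic trend, the l-period-ahead point forecast is the trend value plus θ^l times the last observed deviation from trend, which is what makes the forecast sensitive near θ = 1. A minimal sketch assuming a known linear trend d_t = mu + beta*t and known θ (the paper's parameter estimation and best-linear-unbiased construction are not reproduced):

```python
def ar1_trend_forecast(y, mu, beta, theta, steps):
    """steps-ahead point forecast for y_t = mu + beta*t + u_t with AR(1) errors
    u_t = theta*u_{t-1} + eps_t: the deviation from trend decays as theta**steps."""
    n = len(y)
    u_n = y[-1] - (mu + beta * n)        # last deviation from the trend (t = 1..n)
    d_future = mu + beta * (n + steps)   # deterministic trend at the forecast date
    return d_future + theta ** steps * u_n
```

With theta = 0 the forecast collapses to the trend; at theta = 1 the last deviation persists in full, so small changes in theta near the unit root move medium-term forecasts substantially.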

7.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models in terms of forecasting US gross national product (GNP) growth at different forecasting horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period 1919:Q2 to 2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures, such as the volatility index and geopolitical risk, in predicting US recessions. While the Markov-switching time-varying parameter vector autoregressive model yields the lowest values of the root mean squared error in most cases, we observe relatively low values of the log predictive density score when using the Bayesian vector autoregression model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.

8.
We examine different approaches to forecasting monthly US employment growth in the presence of many potentially relevant predictors. We first generate simulated out-of-sample forecasts of US employment growth at multiple horizons using individual autoregressive distributed lag (ARDL) models based on 30 potential predictors. We then consider different methods from the extant literature for combining the forecasts generated by the individual ARDL models. Using the mean square forecast error (MSFE) metric, we investigate the performance of the forecast combining methods over the last decade, as well as five periods centered on the last five US recessions. Overall, our results show that a number of combining methods outperform a benchmark autoregressive model. Combining methods based on principal components exhibit the best overall performance, while methods based on simple averaging, clusters, and discount MSFE also perform well. On a cautionary note, some combining methods, such as those based on ordinary least squares, often perform quite poorly. Copyright © 2008 John Wiley & Sons, Ltd.
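One of the combining schemes named above, discount MSFE weighting, sets each model's weight inversely proportional to its discounted mean squared forecast error, so recent accuracy counts for more. A minimal sketch (the discount factor and error matrix are illustrative, not the paper's):

```python
import numpy as np

def discount_msfe_weights(errors, delta=0.9):
    """Combining weights inversely proportional to discounted MSFE.
    errors: (T, K) past forecast errors of K models; delta < 1 favours recent errors."""
    T, _ = errors.shape
    discounts = delta ** np.arange(T - 1, -1, -1)  # most recent period gets weight 1
    dmsfe = discounts @ errors ** 2                # discounted MSFE per model
    w = 1.0 / dmsfe
    return w / w.sum()

# Illustrative errors: model 1 has been more accurate, so it gets the larger weight.
past_errors = np.array([[0.5, 1.0], [0.4, 1.2], [0.6, 0.9]])
w = discount_msfe_weights(past_errors)
combined = np.array([2.1, 2.4]) @ w  # combine two current forecasts
```

Setting delta = 1 recovers plain inverse-MSFE weighting; simple averaging corresponds to ignoring the error history altogether.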

9.
In this paper, we consider a combined forecast using an optimal combination weight in a generalized autoregression framework. The generalized autoregression provides not only a combined forecast but also an optimal combination weight for combining forecasts. By simulation, we find that short- and medium-horizon (as well as partly long-horizon) forecasts from the generalized autoregression using the optimal combination weight are more efficient than those from the usual autoregression in terms of the mean squared forecast error. An empirical application with US gross domestic product confirms the simulation result. Copyright © 2008 John Wiley & Sons, Ltd.
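The paper estimates its optimal combination weight inside a generalized autoregression; the textbook two-forecast special case (the Bates–Granger weight, shown here as a sketch rather than the paper's method) minimizes the combined mean squared error given the two error variances and their covariance:

```python
import numpy as np

def optimal_weight(e1, e2):
    """MSE-minimizing weight w on forecast 1 (1 - w on forecast 2),
    estimated from past forecast errors e1, e2."""
    v1, v2 = np.mean(e1 ** 2), np.mean(e2 ** 2)
    c12 = np.mean(e1 * e2)
    return (v2 - c12) / (v1 + v2 - 2.0 * c12)

# Illustrative uncorrelated errors; the less accurate forecast gets the smaller weight.
e1 = np.array([1.0, -1.0, 1.0, -1.0])
e2 = np.array([2.0, 2.0, -2.0, -2.0])
w = optimal_weight(e1, e2)  # 0.8
```

With uncorrelated errors the weight reduces to v2 / (v1 + v2), i.e. inverse-variance weighting.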

10.
In this paper we adopt principal components analysis (PCA) to reduce the dimensionality of the term structure and employ autoregressive (AR) models to forecast the principal components which, in turn, are used to forecast swap rates. Arguing in favour of structural variation, we propose data-driven, adaptive model selection strategies based on the PCA/AR model. To evaluate ex ante forecasting performance for particular rates, distinct forecast features, such as mean squared errors, directional accuracy and directional forecast value, are considered. It turns out that, relative to benchmark models, the adaptive approach offers additional forecast accuracy in terms of directional accuracy and directional forecast value. Copyright © 2009 John Wiley & Sons, Ltd.
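The PCA/AR pipeline described above (reduce the rate panel to a few principal components, forecast each component with an AR model, then map back to rates) can be sketched as follows; the data are synthetic and the paper's adaptive model selection is not reproduced:

```python
import numpy as np

def pca_ar_forecast(Y, n_pc=3):
    """One-step forecast of a panel Y (T x rates): PCA to n_pc components,
    AR(1) on each component score, then map the forecasts back to rates."""
    mean = Y.mean(axis=0)
    X = Y - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)   # rows of Vt = components
    pcs = X @ Vt[:n_pc].T                              # (T, n_pc) component scores
    next_pcs = np.empty(n_pc)
    for j in range(n_pc):
        z = pcs[:, j]
        phi = (z[:-1] @ z[1:]) / (z[:-1] @ z[:-1])     # AR(1) slope, no intercept
        next_pcs[j] = phi * z[-1]                      # one-step component forecast
    return mean + next_pcs @ Vt[:n_pc]                 # back to the rate space

# Synthetic panel of 6 rates over 60 periods (random walks, for illustration only).
rng = np.random.default_rng(0)
Y = rng.standard_normal((60, 6)).cumsum(axis=0)
f = pca_ar_forecast(Y)  # one-step-ahead forecast for each of the 6 rates
```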

11.
In this paper, we detect and correct abnormal returns in 17 French stock returns and the French index CAC40 using the additive-outlier detection method in GARCH models developed by Franses and Ghijsels (1999), extended to innovative outliers by Charles and Darné (2005). We study the effects of outlying observations on several popular econometric tests. Moreover, we show that the parameters of the equation governing the volatility dynamics are biased when additive and innovative outliers are not taken into account. Finally, we show that volatility forecasts are better when the data are cleaned of outliers for several step-ahead horizons (short, medium and long term), even if we consider a GARCH-t process. Copyright © 2008 John Wiley & Sons, Ltd.

12.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question of whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short-term forecasting models. Our analysis shows that there is considerable variation in the relative performance of the different models over time. To take this into account we suggest employing performance-based forecast combination methods, in particular one that puts more weight on recent forecast performance. We compare such an approach with equal-weight forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time-varying weights also quantify the importance of the economic interpretations of the forecasts stemming from the different models. We also include a number of benchmark models in our analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre-crisis times, the period after the global financial crisis and the full evaluation period, including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that forecast combination helps hedge against bad forecast performance and that performance-based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.

13.
Previous research found that the US business cycle leads the European one by a few quarters and can therefore be useful in predicting euro area gross domestic product (GDP). In this paper we investigate whether additional predictive power can be gained by adding selected financial variables belonging to either the USA or the euro area. We use vector autoregressions (VARs) that include the US and euro area GDPs as well as growth in the rest of the world and selected combinations of financial variables. Out-of-sample root mean square forecast errors (RMSEs) show that adding financial variables produces a slightly smaller error in forecasting US economic activity. This weak macro-financial linkage is even weaker in the euro area, where financial indicators do not improve short- and medium-term GDP forecasts even when their timely availability relative to GDP is exploited. It can be conjectured that neither US nor European financial variables help predict euro area GDP because the US GDP has already embodied this information. However, we show that the finding that financial variables have no predictive power for future activity in the euro area relates to the unconditional nature of the RMSE metric. When forecasting ability is assessed as if in real time (i.e. conditionally on the information available at the time when forecasts are made), we find that models using financial variables would have been preferred in several episodes, in particular between 1999 and 2002. Copyright © 2011 John Wiley & Sons, Ltd.

14.
In this paper, I use a large set of macroeconomic and financial predictors to forecast US recession periods. I adopt a Bayesian methodology with shrinkage in the parameters of the probit model for the binary time series tracking the state of the economy. The in-sample and out-of-sample results show that utilizing a large cross-section of indicators yields superior US recession forecasts in comparison to a number of parsimonious benchmark models. Moreover, the data-rich probit model gives similar accuracy to the factor-based model for 1-month-ahead forecasts, while it provides superior performance for 1-year-ahead predictions. Finally, in a pseudo-real-time application for the Great Recession, I find that the large probit model with shrinkage is able to pick up the recession signals in a timely fashion and does well in comparison to the more parsimonious specification and to nonparametric alternatives. Copyright © 2016 John Wiley & Sons, Ltd.

15.
We develop a semi-structural model for forecasting inflation in the UK in which the New Keynesian Phillips curve (NKPC) is augmented with a time series model for marginal cost. By combining structural and time series elements we hope to reap the benefits of both approaches, namely the relatively better forecasting performance of time series models in the short run and a theory-consistent economic interpretation of the forecast coming from the structural model. In our model we consider the hybrid version of the NKPC and use an open-economy measure of marginal cost. The results suggest that our semi-structural model performs better than a random-walk forecast and most of the competing models (conventional time series models and strictly structural models) only in the short run (one quarter ahead), but it is outperformed by some of the competing models at medium and long forecast horizons (four and eight quarters ahead). In addition, the open-economy specification of our semi-structural model delivers more accurate forecasts than its closed-economy alternative at all horizons. Copyright © 2014 John Wiley & Sons, Ltd.

16.
This paper analyses the size and nature of the errors in GDP forecasts in the G7 countries from 1971 to 1995. These GDP short-term forecasts are produced by the Organization for Economic Cooperation and Development and by the International Monetary Fund, and published twice a year in the Economic Outlook and in the World Economic Outlook, respectively. The evaluation of the accuracy of the forecasts is based on the properties of the difference between the realization and the forecast. A forecast is considered to be accurate if it is unbiased and efficient. A forecast is unbiased if its average deviation from the outcome is zero, and it is efficient if it reflects all the information that is available at the time the forecast is made. Finally, we also examine tests of directional accuracy and offer a non-parametric method of assessment. Copyright © 2000 John Wiley & Sons, Ltd.
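The unbiasedness property defined above (average deviation of the realization from the forecast equal to zero) is commonly checked with a simple t-test on the mean forecast error; a minimal sketch, not the paper's exact test:

```python
import numpy as np

def unbiasedness_tstat(actual, forecast):
    """t-statistic for H0: the mean forecast error is zero (forecast unbiased)."""
    e = np.asarray(actual) - np.asarray(forecast)
    return e.mean() / (e.std(ddof=1) / np.sqrt(e.size))
```

In large samples the statistic is compared against standard normal critical values; efficiency tests would additionally regress the error on the information available at forecast time.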

17.
Following recent nonlinear extensions of the present-value model, this paper examines the out-of-sample forecast performance of two parametric and two nonparametric nonlinear models of stock returns. The parametric models include the standard regime-switching and the Markov regime-switching models, whereas the nonparametric models are the nearest-neighbour and artificial neural network models. We focus on the US stock market using annual observations spanning the period 1872–1999. Evaluation of forecasts is based on two criteria, namely forecast accuracy and forecast encompassing. In terms of accuracy, the Markov and artificial neural network models produce at least as accurate forecasts as the other models. In terms of encompassing, the Markov model outperforms all the others. Overall, both criteria suggest that the Markov regime-switching model is the most preferable nonlinear empirical extension of the present-value model for out-of-sample stock return forecasting. Copyright © 2003 John Wiley & Sons, Ltd.

18.
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short-term interest rates from October 2008. Out-of-sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson–Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium- to longer-term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near-zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson–Siegel models. Copyright © 2014 John Wiley & Sons, Ltd.

19.
Using quantile regression, this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution, such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out-of-sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecasted mean and variance are compared to forecasts based on quantile estimates and, in general, the latter yield the best performance. Copyright © 2015 John Wiley & Sons, Ltd.
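Quantile regression estimates each quantile level by minimizing the check (pinball) loss rather than squared error. As a minimal illustration of the objective (not the paper's model, which regresses quantiles on state variables), minimizing the loss over a constant recovers the empirical quantile; data and search grid are illustrative:

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Check (pinball) loss of a candidate quantile value q at level tau."""
    e = y - q
    return np.mean(np.where(e >= 0, tau * e, (tau - 1.0) * e))

# Minimizing the pinball loss over a constant recovers the empirical quantile:
y = np.array([1.0, 2.0, 3.0, 4.0, 10.0])
grid = np.linspace(0.0, 10.0, 10001)
losses = [pinball_loss(y, q, 0.5) for q in grid]
q_hat = grid[int(np.argmin(losses))]  # approximately the sample median, 3.0
```

Replacing the constant with a linear function of state variables, and sweeping tau over a fine grid, gives the conditional distribution forecasts described in the abstract.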

20.
The short end of the yield curve incorporates essential information to forecast central banks' decisions, but in a biased manner. This article proposes a new method to forecast the Fed's and the European Central Bank's decision rates by correcting the swap rates for their cyclical economic premium, using an affine term structure model. The corrected yields offer higher out-of-sample forecasting power than the yields themselves. They also deliver forecasts that are comparable to, or better than, those obtained with a factor-augmented vector autoregressive model, underlining the fact that yields are likely to contain at least as much information regarding monetary policy as a dataset composed of economic data series. Copyright © 2015 John Wiley & Sons, Ltd.
