Similar Articles
20 similar articles found.
1.
This paper models bond term premia empirically in terms of the maturity composition of the federal debt and other observable economic variables in a time‐varying framework with potential regime shifts. We present regression and out‐of‐sample forecasting results demonstrating that information on the age composition of the federal debt is useful for forecasting term premia. We show that the multiprocess mixture model, a multi‐state time‐varying parameter model, outperforms the commonly used GARCH model in out‐of‐sample forecasts of term premia. The results underscore the importance of modelling term premia as a function of economic variables rather than just as a function of asset covariances as in the conditional heteroscedasticity models. Copyright © 2001 John Wiley & Sons, Ltd.

2.
We evaluate forecasting models of US business fixed investment spending growth over the recent 1995:1–2004:2 out‐of‐sample period. The forecasting models are based on the conventional Accelerator, Neoclassical, Average Q, and Cash‐Flow models of investment spending, as well as real stock prices and excess stock return predictors. The real stock price model typically generates the most accurate forecasts, and forecast‐encompassing tests indicate that this model contains most of the information useful for forecasting investment spending growth relative to the other models at longer horizons. In a robustness check, we also evaluate the forecasting performance of the models over two alternative out‐of‐sample periods: 1975:1–1984:4 and 1985:1–1994:4. A number of different models produce the most accurate forecasts over these alternative out‐of‐sample periods, indicating that while the real stock price model appears particularly useful for forecasting the recent behavior of investment spending growth, it may not continue to perform well in future periods. Copyright © 2007 John Wiley & Sons, Ltd.

3.
While in speculative markets forward prices could be regarded as natural predictors for future spot rates, empirically, forward prices often fail to indicate ex ante the direction of price movements. In terms of forecasting, the random walk approximation of speculative prices has been established to provide ‘naive’ predictors that are most difficult to outperform by both purely backward‐looking time series models and more structural approaches processing information from forward markets. We empirically assess the implicit predictive content of forward prices by means of wavelet‐based prediction of two foreign exchange (FX) rates and the price of Brent oil quoted either in US dollars or euros. Essentially, wavelet‐based predictors are smoothed auxiliary (padded) time series quotes that are added to the sample information beyond the forecast origin. We compare wavelet predictors obtained from padding with constant prices (i.e. random walk predictors) and forward prices. For the case of FX markets, padding with forward prices is more effective than padding with constant prices, and, moreover, respective wavelet‐based predictors outperform purely backward‐looking time series approaches (ARIMA). For the case of Brent oil quoted in US dollars, wavelet‐based predictors do not signal predictive content of forward prices for future spot prices. Copyright © 2016 John Wiley & Sons, Ltd.

4.
We propose two methods of equity premium prediction with single and multiple predictors respectively and evaluate their out‐of‐sample performance using US stock data with 15 popular predictors for equity premium prediction. The first method defines three scenarios in terms of the expected returns under the scenarios and assumes a Markov chain governing the occurrence of the scenarios over time. It employs predictive quantile regressions of excess return on a predictor for three quantiles to estimate the occurrence of the scenarios over an in‐sample period and the transition probabilities of the Markov chain, predicts the expected returns under the scenarios, and generates an equity premium forecast by combining the predicted expected returns under three scenarios with the estimated transition probabilities. The second method generates an equity premium forecast by combining the individual forecasts from the first method across all predictors. For most predictors, the first method outperforms the benchmark method of historical average and the traditional predictive linear regression with a single predictor both statistically and economically, and the second method consistently performs better than several competing methods used in the literature. The performance of our methods is further examined under different scenarios and economic conditions, and is robust for two different out‐of‐sample periods and specifications of the scenarios.
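The final combination step of the first method can be sketched as follows. This is a minimal illustration, not the authors' estimator: in the paper the scenario labels come from predictive quantile regressions, whereas here both the labels and the scenario expected returns are hypothetical inputs.

```python
import numpy as np

def estimate_transition_matrix(states, n_states=3):
    """Estimate Markov transition probabilities from a label sequence."""
    counts = np.zeros((n_states, n_states))
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def scenario_forecast(states, scenario_means):
    """Combine scenario expected returns with estimated transition probabilities."""
    P = estimate_transition_matrix(states, len(scenario_means))
    current = states[-1]                      # most recent scenario
    return float(P[current] @ np.asarray(scenario_means))

# In-sample scenario labels (0 = bear, 1 = normal, 2 = bull) and the
# expected excess return under each scenario (hypothetical numbers).
labels = [1, 1, 0, 1, 2, 2, 1, 0, 0, 1, 2, 1]
mu = [-0.02, 0.005, 0.03]
print(scenario_forecast(labels, mu))
```

The forecast is the transition-probability-weighted average of the scenario means, conditional on the most recently observed scenario.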

5.
The short end of the yield curve incorporates essential information to forecast central banks' decisions, but in a biased manner. This article proposes a new method to forecast the Fed and the European Central Bank's decision rate by correcting the swap rates for their cyclical economic premium, using an affine term structure model. The corrected yields offer a higher out‐of‐sample forecasting power than the yields themselves. They also deliver forecasts that are either comparable or better than those obtained with a factor‐augmented vector autoregressive model, underlining the fact that yields are likely to contain at least as much information regarding monetary policy as a dataset composed of economic data series. Copyright © 2015 John Wiley & Sons, Ltd.

6.
We investigate the accuracy of capital investment predictors from a national business survey of South African manufacturing. Based on data available to correspondents at the time of survey completion, we propose variables that might inform the confidence that can be attached to their predictions. Having calibrated the survey predictors' directional accuracy, we model the probability of a correct directional prediction using logistic regression with the proposed variables. For point forecasting, we compare the accuracy of rescaled survey forecasts with time series benchmarks and some survey/time series hybrid models. In addition, using the same set of variables, we model the magnitude of survey prediction errors. Directional forecast tests showed that three out of four survey predictors have value but are biased and inefficient. For shorter horizons we found that survey forecasts, enhanced by time series data, significantly improved point forecasting accuracy. For longer horizons the survey predictors were at least as accurate as alternatives. The usefulness of the more accurate of the predictors examined is enhanced by auxiliary information, namely the probability of directional accuracy and the estimated error magnitude.

7.
In this article, we propose a regression model for sparse, high‐dimensional aggregated store‐level sales data. The modeling procedure includes two sub‐models: a topic model and hierarchical factor regressions. These are applied in sequence to accommodate high dimensionality and sparseness and facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub‐sales for several topics by allocating each unit sale (“word” in text analysis) in a day (“document”) into a topic based on joint‐purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into smaller numbers of topics. Next, the market response regression model for the topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for original high‐dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic‐wise market responses according to the particular context, and (ii) it performs better than conventional category regressions in both in‐sample and out‐of‐sample forecasts.

8.
Using the generalized dynamic factor model, this study constructs three predictors of crude oil price volatility: a fundamental (physical) predictor, a financial predictor, and a macroeconomic uncertainty predictor. Moreover, an event‐triggered predictor is constructed using data extracted from Google Trends. We construct GARCH‐MIDAS (generalized autoregressive conditional heteroskedasticity–mixed‐data sampling) models combining realized volatility with the predictors to predict oil price volatility at different forecasting horizons. We then identify the predictive power of the realized volatility and the predictors by the model confidence set (MCS) test. The findings show that, among the four indexes, the financial predictor has the most predictive power for crude oil volatility, which provides strong evidence that financialization has been the key determinant of crude oil price behavior since the 2008 global financial crisis. In addition, the fundamental predictor, followed by the financial predictor, effectively forecasts crude oil price volatility in the long‐run forecasting horizons. Our findings indicate that the different predictors can provide distinct predictive information at the different horizons given the specific market situation. These findings have useful implications for market traders in terms of managing crude oil price risk.

9.
10.
We propose a quantile regression approach to equity premium forecasting. Robust point forecasts are generated from a set of quantile forecasts using both fixed and time‐varying weighting schemes, thereby exploiting the entire distributional information associated with each predictor. Further gains are achieved by incorporating the forecast combination methodology into our quantile regression setting. Our approach using a time‐varying weighting scheme delivers statistically and economically significant out‐of‐sample forecasts relative to both the historical average benchmark and the combined predictive mean regression modeling approach. Copyright © 2014 John Wiley & Sons, Ltd.
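The core aggregation step, turning a set of quantile forecasts into a robust point forecast via a weighting scheme, can be sketched as below. This is a minimal fixed-weight illustration; in the paper the quantile forecasts come from predictive quantile regressions and the weights may vary over time, whereas here both are supplied directly as hypothetical values.

```python
import numpy as np

def quantile_combination(q_forecasts, weights=None):
    """Weighted average of quantile forecasts (fixed-weight scheme)."""
    q = np.asarray(q_forecasts, dtype=float)
    if weights is None:                       # equal weights by default
        weights = np.full(len(q), 1.0 / len(q))
    w = np.asarray(weights, dtype=float)
    return float(w @ q)

# Forecasts of the equity premium at the 0.3, 0.5 and 0.7 quantiles
# from one predictor (hypothetical values).
q_hat = [-0.01, 0.004, 0.02]
print(quantile_combination(q_hat))                     # equal weights
print(quantile_combination(q_hat, [0.25, 0.5, 0.25]))  # center-weighted
```

Averaging across quantiles uses distributional information beyond the conditional mean, which is what distinguishes the approach from standard predictive mean regression.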

11.
We investigate the realized volatility forecast of stock indices under structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high‐frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in‐sample tests and out‐of‐sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX‐FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in‐sample tests show that there are multiple breaks in all realized volatility series. The results of the out‐of‐sample point forecasts indicate that the combination forecasts with time‐varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non‐parametric kernel regression and linear combination forecasts with the weights chosen based on the non‐negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non‐parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.

12.
This study investigates the forecasting performance of the GARCH(1,1) model by adding an effective covariate. Based on the assumption that many volatility predictors are available to help forecast the volatility of a target variable, this study shows how to construct a covariate from these predictors and plug it into the GARCH(1,1) model. This study presents a method of building a covariate such that the covariate contains the maximum possible amount of information from the predictors for forecasting volatility. The loading of the covariate constructed by the proposed method is simply the eigenvector of a matrix. The proposed method enjoys the advantages of easy implementation and interpretation. Simulations and empirical analysis verify that the proposed method performs better than other methods for forecasting volatility, and the results are quite robust to model misspecification. Specifically, the proposed method reduces the mean square error of the GARCH(1,1) model by 30% for forecasting the volatility of the S&P 500 Index. The proposed method is also useful in improving the volatility forecasting of several GARCH‐family models and for forecasting the value‐at‐risk. Copyright © 2013 John Wiley & Sons, Ltd.
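The two steps described in the abstract can be sketched as follows, under simplifying assumptions: the covariate loading is taken here as the leading eigenvector of the predictors' sample covariance matrix (the paper derives the loading as the eigenvector of a particular matrix, which may differ), and the covariate enters the GARCH(1,1) variance recursion additively with a fixed illustrative coefficient rather than one estimated by maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))      # 5 candidate volatility predictors

# Step 1: collapse the predictors into a one-dimensional covariate whose
# loading is the leading eigenvector of their covariance matrix.
cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov) # eigh sorts eigenvalues ascending
loading = eigvecs[:, -1]               # leading eigenvector (unit norm)
z = X @ loading                        # the constructed covariate

# Step 2: GARCH(1,1)-X variance recursion with illustrative parameters.
omega, alpha, beta, gamma = 0.05, 0.08, 0.90, 0.02
eps = rng.standard_normal(200)         # placeholder return shocks
h = np.empty(200)
h[0] = omega / (1 - alpha - beta)      # unconditional variance as start
for t in range(1, 200):
    h[t] = omega + alpha * eps[t - 1]**2 + beta * h[t - 1] + gamma * z[t - 1]**2
```

Because the loading is just an eigenvector, the covariate is cheap to compute and easy to interpret as a weighted index of the predictors.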

13.
Including disaggregate variables, or information extracted from them, in a forecasting model for an economic aggregate may improve forecasting accuracy. In this paper we suggest using the boosting method to select the disaggregate variables which are most helpful in predicting an aggregate of interest. We conduct a simulation study to investigate the variable selection ability of this method. To assess the forecasting performance a recursive pseudo‐out‐of‐sample forecasting experiment for six key euro area macroeconomic variables is conducted. The results suggest that using boosting to select relevant predictors is a feasible and competitive approach in forecasting an aggregate. Copyright © 2016 John Wiley & Sons, Ltd.
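The variable-selection idea can be sketched with componentwise L2 boosting, a common boosting variant in this literature (the paper's exact variant and stopping rule may differ): at each step, regress the current residual on each standardized disaggregate series and update only along the best-fitting one; predictors selected at least once are retained. Data here are simulated.

```python
import numpy as np

def boost_select(X, y, steps=50, nu=0.1):
    """Componentwise L2 boosting; returns indices of selected predictors."""
    X = (X - X.mean(0)) / X.std(0)          # standardize predictors
    resid = y - y.mean()
    coef = np.zeros(X.shape[1])
    for _ in range(steps):
        # univariate least-squares slope of each column on the residual
        b = X.T @ resid / (X**2).sum(0)
        sse = ((resid[:, None] - X * b)**2).sum(0)
        j = int(np.argmin(sse))             # best-fitting predictor
        coef[j] += nu * b[j]                # shrunken update
        resid = resid - nu * b[j] * X[:, j]
    return np.flatnonzero(coef)

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 20))          # 20 disaggregate series
y = 2.0 * X[:, 3] - 1.5 * X[:, 7] + 0.1 * rng.standard_normal(300)
print(boost_select(X, y))   # the informative columns 3 and 7 should appear
```

The shrinkage factor `nu` and the number of steps control how aggressively predictors are added; in practice the stopping point is chosen by an information criterion or cross-validation.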

14.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We approach the usual caveats of probabilistic forecasts studies—small samples, limited models, and nonholistic validations—by performing a comprehensive comparison of 15 predictive schemes during a time period of over 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk‐neutral densities outperform historical‐based predictions in terms of information content. We find that the variance gamma model generates the highest out‐of‐sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH‐based GJR‐FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, or the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.

15.
This article addresses the problem of forecasting time series that are subject to level shifts. Processes with level shifts possess a nonlinear dependence structure. Using the stochastic permanent breaks (STOPBREAK) model, I model this nonlinearity in a direct and flexible way that avoids imposing a discrete regime structure. I apply this model to the rate of price inflation in the United States, which I show is subject to level shifts. These shifts significantly affect the accuracy of out‐of‐sample forecasts, causing models that assume covariance stationarity to be substantially biased. Models that do not assume covariance stationarity, such as the random walk, are unbiased but lack precision in periods without shifts. I show that the STOPBREAK model outperforms several alternative models in an out‐of‐sample inflation forecasting experiment. Copyright © 2005 John Wiley & Sons, Ltd.
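The mechanism can be sketched with a minimal STOPBREAK-style recursion, stated from memory of Engle and Smith's specification and simplified here (parameter values are illustrative, not estimated): each shock feeds into the permanent level in proportion to a smooth weight in (0, 1) that grows with the shock's size, so large shocks act like level shifts while small ones stay transitory, with no discrete regimes imposed.

```python
import numpy as np

def stopbreak_forecast(y, gamma=0.5):
    """One-step-ahead forecast: the updated permanent level after the sample."""
    m = y[0]                             # initial permanent level
    for t in range(1, len(y)):
        eps = y[t] - m                   # transitory shock given current level
        q = eps**2 / (gamma + eps**2)    # break weight in (0, 1)
        m = m + q * eps                  # large shocks shift the level
    return float(m)

# A series with a single level shift: the forecast tracks the new level.
y = np.concatenate([np.zeros(50), np.full(50, 5.0)])
print(stopbreak_forecast(y))
```

With `gamma` near zero the model behaves like a random walk (every shock is permanent); with large `gamma` it behaves like a slowly adapting mean, which is the bias-precision trade-off the abstract describes.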

16.
This paper concerns the exploration of statistical models for the analysis of observational freeway flow data, and the development of empirical models to capture and predict short‐term changes in traffic flow characteristics on sequences of links in a partially detectorized freeway network. A first set of analyses explores regression models for minute‐by‐minute traffic flows, taking into account time of day, day of the week, and recent upstream detector‐based flows. Day‐ and link‐specific random effects are used in a hierarchical statistical modelling framework. A second set of analyses captures day‐specific idiosyncrasies in traffic patterns by including parameters that may vary throughout the day. Model fit and short‐term predictions of flows are thus improved significantly. A third set of analyses includes recent downstream flows as additional predictors. These further improvements, though marginal in most cases, can be quite radically useful in cases of very marked breakdown of freeway flows on some links. These three modelling stages are described and developed in analyses of observational flow data from a set of links on Interstate Highway 5 (I‐5) near Seattle. Copyright © 2002 John Wiley & Sons, Ltd.

17.
A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low‐dimensional information sets. In this study, we evaluate the predictive ability of various recently refined forecasting strategies, which handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful to predict US stock returns, that is, the market excess return and the size, value, and momentum premia. Our results show that methods combining information have remarkable in‐sample predictive ability. However, the out‐of‐sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias–efficiency trade‐off, yielding a consistently superior forecast performance for the market excess return and the size premium even after the 1970s.
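The two strategies can be contrasted in a minimal sketch on simulated data: "combining information" pools all predictors into one multivariate regression (a kitchen-sink model), while "combining forecasts" averages the forecasts of separate univariate models. The data-generating process and values below are illustrative only.

```python
import numpy as np

def kitchen_sink_forecast(X, y, x_new):
    """OLS on all predictors at once (combine information)."""
    Z = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return float(np.r_[1.0, x_new] @ beta)

def combined_forecast(X, y, x_new):
    """Equal-weight average of univariate OLS forecasts (combine forecasts)."""
    preds = []
    for j in range(X.shape[1]):
        Z = np.column_stack([np.ones(len(X)), X[:, j]])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        preds.append(beta[0] + beta[1] * x_new[j])
    return float(np.mean(preds))

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 4))            # 4 return predictors
y = X @ np.array([0.5, 0.0, -0.3, 0.1]) + rng.standard_normal(120)
x_new = np.array([1.0, -1.0, 0.5, 0.0])
print(kitchen_sink_forecast(X, y, x_new), combined_forecast(X, y, x_new))
```

The equal-weight average shrinks individual forecasts toward a common value, trading some in-sample fit for lower forecast-error variance, which is the bias-efficiency trade-off the abstract refers to.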

18.
For predicting forward default probabilities of firms, the discrete‐time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm‐specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete‐time varying‐coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out‐of‐sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.

19.
The paper develops an oil price forecasting technique which is based on the present value model of rational commodity pricing. The approach suggests shifting the forecasting problem to the marginal convenience yield, which can be derived from the cost‐of‐carry relationship. In a recursive out‐of‐sample analysis, forecast accuracy at horizons within one year is checked by the root mean squared error as well as the mean error and the frequency of a correct direction‐of‐change prediction. For all criteria employed, the proposed forecasting tool outperforms the approach of using futures prices as direct predictors of future spot prices. Vis‐à‐vis the random‐walk model, it does not significantly improve forecast accuracy but provides valuable statements on the direction of change. Copyright © 2007 John Wiley & Sons, Ltd.
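The derivation step can be made concrete: under the standard cost-of-carry relationship F = S·exp((r + c − y)·T), the marginal convenience yield y is recovered as y = r + c − ln(F/S)/T. The sketch below uses this textbook form with illustrative inputs; the paper's exact specification may differ.

```python
import math

def convenience_yield(spot, forward, r, storage_cost, T):
    """Implied convenience yield from the cost-of-carry relationship."""
    return r + storage_cost - math.log(forward / spot) / T

# Spot $60, 6-month forward $58, 4% interest rate, 2% storage cost
# (all rates annualized; values are hypothetical).
y = convenience_yield(60.0, 58.0, 0.04, 0.02, 0.5)
print(round(y, 4))   # → 0.1278
```

A forward below spot (backwardation) implies a large positive convenience yield, which is the quantity the forecasting problem is shifted to.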

20.
Forecasting with many predictors provides the opportunity to exploit a much richer base of information. However, macroeconomic time series are typically rather short, raising problems for conventional econometric models. This paper explores the use of Bayesian additive regression trees (BART) from the machine learning literature to forecast macroeconomic time series in a predictor‐rich environment. The interest lies in forecasting nine key macroeconomic variables of interest for government budget planning, central bank policy making and business decisions. It turns out that BART is a valuable addition to existing methods for handling high dimensional data sets in a macroeconomic context.
