Similar Documents
20 similar documents found.
1.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high‐dimensional market data. Exploiting this property, we incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology to predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models. In particular, what we call functional data analysis (FDA) traces are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models applied to the observed crude oil returns, as well as to principal component regression (PCR) and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
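For readers who want to experiment with the general idea, the toy sketch below chains ordinary partial least squares (a simpler stand-in for the authors' functional PLS) with a GARCH(1,1) fit; all data are synthetic, and availability of the `scikit-learn` and `arch` packages is assumed.

```python
# A minimal sketch of the pipeline's flavour (not the authors' FPLS):
# ordinary PLS extracts a low-dimensional "trace" from many predictors,
# and a GARCH(1,1) is then fitted to the resulting return series.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from arch import arch_model

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 30))          # stand-in for FX predictors
y = X[:, :3] @ np.array([0.4, -0.2, 0.1]) + 0.5 * rng.standard_normal(500)

pls = PLSRegression(n_components=2).fit(X, y)
trace = pls.predict(X).ravel()              # PLS "trace" of the return series

garch = arch_model(100 * trace, vol="GARCH", p=1, q=1)  # scale for stability
res = garch.fit(disp="off")
print(res.params)                           # mu, omega, alpha[1], beta[1]
```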

2.
This paper proposes a theory to explain why some forecasting organizations institutionalize forecast accuracy evaluation while others do not. The theory considers internal and external aspects of managerial, political, and procedural factors as they affect forecasting organizations. The theory is then tested using data from a survey of the US Federal Forecasters Group. Though some support for the theory is developed, multiple alternative explanations for results and the ‘public’ nature of the sample organizations prevent wide-scale generalization. The results suggest that larger organizations are more likely to have some form of forecast evaluation than smaller units. The institutionalization of forecast accuracy evaluation is closely linked to internal managerial and procedural factors, while external political pressure tends to reduce the likelihood of institutionalization of evaluation of forecast accuracy. © 1997 John Wiley & Sons, Ltd.

3.
In practical econometric forecasting exercises, incomplete data on current and immediate past values of endogenous variables are available. This paper considers various approaches to this ‘ragged edge’ problem, including the common device of treating as ‘temporarily exogenous’ an endogenous variable whose value is known, by deleting it from the set of endogenous variables for whose forecast values the model is solved and suppressing the corresponding structural equation. It is seen that this forecast can be adjusted to coincide with the optimal forecast. The initial discussion concerns the textbook linear simultaneous equation model; extensions to non-linear dynamic models are described.
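The optimal-adjustment idea can be illustrated in a bivariate Gaussian VAR(1): once the current value of one endogenous variable is released, the other variable's forecast is shifted by the regression of its innovation on the observed one. The coefficient and covariance values below are illustrative assumptions, not taken from the paper.

```python
# A minimal 'ragged edge' sketch: y1 at T+1 is already observed, so the
# forecast of y2 is its conditional expectation given that observation.
import numpy as np

A = np.array([[0.5, 0.2], [0.1, 0.6]])      # assumed VAR(1) coefficients
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])  # assumed innovation covariance

y_T = np.array([1.0, -0.5])
uncond = A @ y_T                             # forecast with both unknown
y1_obs = 0.9                                 # y1 at T+1 already released

# Optimal update: shift the y2 forecast by the regression of its
# innovation on the observed y1 innovation.
adj = Sigma[1, 0] / Sigma[0, 0] * (y1_obs - uncond[0])
print(uncond[1] + adj)                       # conditional forecast of y2
```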

4.
This paper compares the predictive ability of ARIMA models in forecasting sales revenue. Comparisons were made at both industry and firm levels. With respect to the form of the ARIMA model, a parsimonious model of the form (0, 1, 1) (0, 1, 1) was identified most frequently for firms and industries. This model was identified previously by Griffin and Watts for the earnings series, and by Moriarty and Adams for the sales series. As a parsimonious model, its predictive accuracy was quite good. However, predictive accuracy was also found to be a function of the industry. Out of the eleven industry classifications, ‘metals’ had the lowest predictive accuracy using both firm-specific and industry-specific ARIMA models.
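As an illustration, the (0, 1, 1)(0, 1, 1) specification can be fitted with statsmodels' SARIMAX; the quarterly seasonal period (s = 4) and the synthetic sales series below are assumptions for the sketch, not the paper's data.

```python
# A hedged sketch of the (0,1,1)(0,1,1) "airline" model identified most
# often in the paper, here with an assumed quarterly seasonality.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(1)
sales = np.cumsum(rng.normal(1.0, 0.5, 80)) + 5 * np.tile([1, 2, 3, 0], 20)

model = SARIMAX(sales, order=(0, 1, 1), seasonal_order=(0, 1, 1, 4))
fit = model.fit(disp=False)
print(fit.forecast(steps=4))   # one year of quarterly sales forecasts
```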

5.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term time horizons for series with no moving‐average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
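The rolling-window design can be sketched as follows: refit a univariate model on a sliding window and score the forecasts at each horizon. The window length, step size, ARIMA order, and random-walk-style series here are illustrative choices only, not the paper's setup.

```python
# A minimal rolling-window sketch in the spirit of the comparison: refit
# an ARIMA model on a sliding window and collect h-step-ahead forecasts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(2)
fx = np.cumsum(rng.normal(0, 0.01, 300))       # stand-in exchange-rate series
window, h = 200, 40                            # window length, max horizon

errors = []
for start in range(0, len(fx) - window - h, 10):
    train = fx[start:start + window]
    fc = ARIMA(train, order=(0, 1, 1)).fit().forecast(steps=h)
    errors.append(fx[start + window:start + window + h] - fc)

rmse_by_horizon = np.sqrt((np.array(errors) ** 2).mean(axis=0))
print(rmse_by_horizon[[0, 9, 39]])             # 1-, 10- and 40-step RMSE
```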

6.
This paper shows how monthly data and forecasts can be used in a systematic way to improve the predictive accuracy of a quarterly macroeconometric model. The problem is formulated as a model pooling procedure (equivalent to non-recursive Kalman filtering) where a baseline quarterly model forecast is modified through ‘add-factors’ or ‘constant adjustments’. The procedure ‘automatically’ constructs these adjustments in a covariance-minimizing fashion to reflect the revised expectation of the quarterly model's forecast errors, conditional on the monthly information set. Results obtained using Federal Reserve Board models indicate the potential for significant reduction in forecast error variance through application of these procedures.

7.
Compared with traditional recurrent neural networks, the echo state network (ESN) has a simpler structure and faster parameter training. Because the standard ESN typically calibrates its parameters by linear regression, it is prone to overfitting; to address this, a daily runoff forecasting model based on the Bayesian echo state network (BESN) is proposed. The model combines Bayesian theory with the ESN and obtains the optimal output weights by maximizing their posterior probability density, which improves the model's generalization ability. Case studies of daily runoff forecasting at the Ansha and Xinfengjiang reservoirs show that the BESN model is an effective and feasible forecasting method, and comparisons with a traditional BP neural network and the standard ESN further show that the BESN model achieves better predictive accuracy.
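A minimal echo state network sketch is given below. The full Bayesian weight posterior of the BESN is replaced by ridge regression, which corresponds to the posterior mode under a Gaussian prior, so this is a simplification rather than the authors' model; the reservoir size, spectral radius, and input series are all assumed.

```python
# A minimal ESN: random fixed reservoir, trained output weights only.
# Ridge regression stands in for the paper's Bayesian weight estimation.
import numpy as np

rng = np.random.default_rng(3)
n_res, rho = 100, 0.9                      # reservoir size, spectral radius
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.standard_normal((n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))  # rescale to spectral radius rho

u = rng.standard_normal(500)               # stand-in daily-runoff input
y = np.roll(u, -1)                         # toy target: next-step value

x = np.zeros(n_res)
states = []
for t in range(len(u)):
    x = np.tanh(W_in[:, 0] * u[t] + W @ x)  # leaky term omitted for brevity
    states.append(x.copy())
S = np.array(states[100:])                  # drop reservoir warm-up

lam = 1e-2                                  # ridge penalty ~ Gaussian prior
W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ y[100:])
print(np.corrcoef(S @ W_out, y[100:])[0, 1])
```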

8.
A number of researchers have developed models that use test market data to generate forecasts of a new product's performance. However, most of these models have ignored the effects of marketing covariates. In this paper we examine what impact these covariates have on a model's forecasting performance and explore whether their presence enables us to reduce the length of the model calibration period (i.e. shorten the duration of the test market). We develop from first principles a set of models that enable us to systematically explore the impact of various model ‘components’ on forecasting performance. Furthermore, we also explore the impact of the length of the test market on forecasting performance. We find that it is critically important to capture consumer heterogeneity, and that the inclusion of covariate effects can improve forecast accuracy, especially for models calibrated on fewer than 20 weeks of data. Copyright © 2003 John Wiley & Sons, Ltd.
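One classic building block in this literature is a trial model with consumer heterogeneity: exponential individual trial rates mixed over a gamma distribution give P(trial by week t) = 1 − (α/(α+t))^r. The sketch below fits that curve to synthetic test-market counts with a deliberately simplified likelihood (households who never try are ignored), so it illustrates the heterogeneity idea only, not the paper's models.

```python
# Exponential-gamma trial sketch fitted to synthetic weekly trial counts.
import numpy as np
from scipy.optimize import minimize

weeks = np.arange(1, 25)
rng = np.random.default_rng(11)
true_p = 1 - (8.0 / (8.0 + weeks)) ** 0.5          # assumed r=0.5, alpha=8
trials = rng.binomial(1000, np.diff(np.concatenate([[0], true_p])))

def nll(params):
    r, alpha = np.exp(params)                      # keep parameters positive
    p = 1 - (alpha / (alpha + weeks)) ** r
    inc = np.diff(np.concatenate([[0], p]))        # weekly trial probabilities
    return -np.sum(trials * np.log(inc))           # never-triers ignored

res = minimize(nll, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print(np.exp(res.x))                               # estimated (r, alpha)
```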

9.
We present a mixed‐frequency model for daily forecasts of euro area inflation. The model combines a monthly index of core inflation with daily data from financial markets; estimates are carried out with the MIDAS regression approach. The forecasting ability of the model in real time is compared with that of standard VARs and of daily quotes of economic derivatives on euro area inflation. We find that the inclusion of daily variables helps to reduce forecast errors with respect to models that consider only monthly variables. The mixed‐frequency model also displays superior predictive performance with respect to forecasts solely based on economic derivatives. Copyright © 2012 John Wiley & Sons, Ltd.
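A bare-bones MIDAS sketch: daily regressors enter a monthly regression through exponential Almon lag weights, with the slope and weight parameters estimated jointly by nonlinear least squares. The data, lag length, and weight parameterization are assumptions for the illustration, not the paper's specification.

```python
# MIDAS sketch: 22 daily lags aggregated by exponential Almon weights.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(4)
n_months, n_days = 120, 22
X_daily = rng.standard_normal((n_months, n_days))   # daily financial data
true_w = np.exp(0.05 * np.arange(n_days) - 0.01 * np.arange(n_days) ** 2)
true_w /= true_w.sum()
y = 0.8 * X_daily @ true_w + 0.1 * rng.standard_normal(n_months)

def almon(theta):
    j = np.arange(n_days)
    w = np.exp(theta[0] * j + theta[1] * j ** 2)
    return w / w.sum()                              # weights sum to one

def resid(p):                                       # p = (beta, theta1, theta2)
    return y - p[0] * X_daily @ almon(p[1:])

sol = least_squares(resid, x0=[1.0, 0.0, 0.0])
print(sol.x[0])                                     # beta, should be near 0.8
```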

10.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We address the usual caveats of probabilistic forecasting studies (small samples, limited models, and nonholistic validations) by performing a comprehensive comparison of 15 predictive schemes over a period of more than 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy, and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk‐neutral densities outperform historical‐based predictions in terms of information content. We find that the variance gamma model generates the highest out‐of‐sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH‐based GJR‐FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, and the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.

11.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed from possibly misspecified models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other ‘standard’ combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
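One common way to turn SIC values into combination weights is exp(−SIC/2), normalized to sum to one; this standard weighting scheme is not necessarily the paper's exact selection rule, so treat the sketch below as illustrative.

```python
# SIC-weighted combination of three candidate ARIMA forecasts.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
y = np.cumsum(rng.normal(0.2, 1.0, 200))        # stand-in macro series

orders = [(1, 1, 0), (0, 1, 1), (2, 1, 1)]
fits = [ARIMA(y, order=o).fit() for o in orders]

sic = np.array([f.bic for f in fits])            # statsmodels labels SIC "bic"
w = np.exp(-0.5 * (sic - sic.min()))
w /= w.sum()                                     # normalized SIC weights

forecasts = np.array([f.forecast(steps=1)[0] for f in fits])
print(float(w @ forecasts))                      # combined one-step forecast
```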

12.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Specifically of importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on ‘trending’ procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by ‘detrending’ the trended forecasts, and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy in Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.

13.
This study reports the results of an experiment that examines (1) the effects of forecast horizon on the performance of probability forecasters, and (2) the alleged existence of an inverse expertise effect, i.e., an inverse relationship between expertise and probabilistic forecasting performance. Portfolio managers are used as forecasters with substantive expertise. Performance of this ‘expert’ group is compared to the performance of a ‘semi-expert’ group composed of other banking professionals trained in portfolio management. It is found that while both groups attain their best discrimination performances in the four-week forecast horizon, they show their worst calibration and skill performances in the 12-week forecast horizon. Also, while experts perform better in all performance measures for the one-week horizon, semi-experts achieve better calibration for the four-week horizon. It is concluded that these results may signal the existence of an inverse expertise effect that is contingent on the selected forecast horizon.
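Calibration and skill can be made concrete with the Brier score and a reliability tabulation, as in the synthetic sketch below; the study's exact performance measures may differ.

```python
# Brier score, a climatology-referenced skill score, and reliability bins.
import numpy as np

rng = np.random.default_rng(6)
p = rng.uniform(0, 1, 1000)                      # stated probabilities
outcome = (rng.uniform(0, 1, 1000) < p).astype(float)  # calibrated by design

brier = np.mean((p - outcome) ** 2)
base = np.mean((outcome.mean() - outcome) ** 2)  # climatology reference
print("Brier:", brier, "skill:", 1 - brier / base)

bins = np.linspace(0, 1, 11)
for lo, hi in zip(bins[:-1], bins[1:]):          # reliability diagram data
    m = (p >= lo) & (p < hi)
    if m.any():
        print(f"forecast {lo:.1f}-{hi:.1f}: observed {outcome[m].mean():.2f}")
```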

14.
This paper presents the writer's experience, over a period of 25 years, in analysing organizational systems and, in particular, concentrates on the overall forecasting activity. The paper first looks at the relationship between forecasting and decision taking, with emphasis on the fact that forecasting is a means to aid decision taking and not an end in itself. It states that there are many types of forecasting problems, each requiring different methods of treatment. The paper then discusses attitudes which are emerging about the relative advantages of different forecasting techniques. It suggests a model building process which requires ‘experience’ and ‘craftsmanship’, extensive practical application, frequent interaction between theory and practice, and a methodology that eventually leads to models that contain no detectable inadequacies. Furthermore, it argues that although models which forecast a time series from its past history have a very important role to play, for effective policy making it is necessary to augment the model by introducing policy variables, again in a systematic and not an ‘ad hoc’ manner. Finally, the paper discusses how forecasting systems can be introduced into the management process in the first place and how they should be monitored and updated when found wanting.

15.
Political forecasting provides the contextuality needed for decision-making and for forecasting ‘non-political’ trends. To gear political forecasting to these needs, rather than mimicking approaches in other areas, requires recognition of the distinctive nature of political trends, and realism regarding forecast uses, which generally do not benefit from ‘precise’ probabilities, predictions of only major events, or ‘sophisticated’ methodology that sacrifices comprehensiveness for explicitness. Approaches borrowed from other forecasting disciplines have been counterproductive, although contextual approaches, including cross-impact analyses and developmental constructs that integrate political and non-political trends, are promising. Explorations of the consistency of scenario dynamics, taking into account policy responses and non-formalizable complexity, are also useful. Thus the separation of political forecasting from political analysis should be minimized, calling for a redirection of effort away from developing methodology uniquely geared to forecasting, and towards organizing more comprehensive and systematic analytical efforts.

16.
When evaluating the launch of a new product or service, forecasts of the diffusion path and the effects of the marketing mix are critically important. Currently no unified framework exists to provide guidelines on the inclusion and specification of marketing mix variables in models of innovation diffusion. The objective of this research is to examine empirically the role of prices in diffusion models, in order to establish whether price can be incorporated effectively into the simpler time-series models. Unlike existing empirical research, which examines the models' fit to historical data, we examine the predictive validity of alternative models. Only if the incorporation of prices improves the predictive performance of diffusion models can it be argued that these models have validity. A series of diffusion models which include prices are compared against a number of well-accepted diffusion models, including the Bass (1969) model and more recently developed ‘flexible’ diffusion models. For short data series and long lead-time forecasting, the situation typical in practice, price rarely added to the forecasting capability of the simpler time-series models. Copyright © 1998 John Wiley & Sons, Ltd.
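For reference, the baseline Bass (1969) model that anchors such comparisons has cumulative adoption F(t) = m(1 − e^{−(p+q)t}) / (1 + (q/p)e^{−(p+q)t}); the sketch below fits it to synthetic data by nonlinear least squares, with illustrative starting values and bounds.

```python
# Fit the Bass diffusion curve to synthetic cumulative adoption data.
import numpy as np
from scipy.optimize import curve_fit

def bass_cum(t, m, p, q):
    e = np.exp(-(p + q) * t)
    return m * (1 - e) / (1 + (q / p) * e)

t = np.arange(1, 21, dtype=float)
rng = np.random.default_rng(7)
sales_cum = bass_cum(t, 1000, 0.03, 0.4) + rng.normal(0, 10, t.size)

(m, p, q), _ = curve_fit(bass_cum, t, sales_cum, p0=[900, 0.01, 0.3],
                         bounds=([1, 1e-4, 1e-4], [1e6, 1.0, 1.0]))
print(m, p, q)                       # market size, innovation, imitation
print(bass_cum(25.0, m, p, q))       # long-lead cumulative forecast
```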

17.
This paper investigates the effects of imposing invalid cointegration restrictions or ignoring valid ones on the estimation, testing and forecasting properties of the bivariate, first‐order, vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λmax, lies in the neighborhood of unity, while the other root, λmin, is safely smaller than unity. In this context, we define the ‘forecast cost of type I’ to be the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reason, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases can arise when λmax equals unity and λmin is less than but near to unity. The effects of this type of misspecification on forecasting will be referred to as the ‘forecast cost of type II’. By means of Monte Carlo simulations, we measure both types of forecast cost in actual situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples. On the other hand, the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd.
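The experimental design can be reproduced in miniature by building a bivariate VAR(1) coefficient matrix with prescribed roots λmax and λmin, so that 'nearly cointegrated' systems correspond to pushing λmax toward one. The eigenvector matrix below is an arbitrary illustrative choice.

```python
# Simulate a bivariate VAR(1) with prescribed companion roots.
import numpy as np

def make_var1(lmax, lmin):
    P = np.array([[1.0, 1.0], [1.0, -1.0]])       # arbitrary eigenvectors
    return P @ np.diag([lmax, lmin]) @ np.linalg.inv(P)

A = make_var1(0.98, 0.5)            # stable, nearly cointegrated system
print(np.linalg.eigvals(A))         # recovers [0.98, 0.5]

rng = np.random.default_rng(8)
y = np.zeros((200, 2))
for t in range(1, 200):             # i.i.d. Gaussian innovations
    y[t] = A @ y[t - 1] + rng.standard_normal(2)
```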

18.
This study explores the nature of information conveyed by 14 error measures drawn from the literature, using real-life forecasting data from 691 individual product items over six quarterly periods. Principal components analysis is used to derive factor solutions that are subsequently compared for two forecasting methods, a version of Holt's exponential smoothing, and the random walk model (Naive 1). The results reveal four underlying forecast error dimensions that are stable across the two factor solutions. The potentially confounding influence of sales volume on the derived error dimensions is also explored via correlation analysis.
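The study's approach can be mimicked by computing several error measures per item and factoring the standardized results with principal components; the four measures and synthetic data below are illustrative, not the study's full set of 14.

```python
# Compute per-item error measures, then PCA on the standardized matrix.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
actual = rng.gamma(2.0, 50.0, (300, 6))            # 300 items, 6 periods
forecast = actual * rng.normal(1.0, 0.2, (300, 6))
e = actual - forecast

measures = np.column_stack([
    e.mean(axis=1),                                # mean error (bias)
    np.abs(e).mean(axis=1),                        # MAE
    np.sqrt((e ** 2).mean(axis=1)),                # RMSE
    (np.abs(e) / actual).mean(axis=1) * 100,       # MAPE
])

z = (measures - measures.mean(0)) / measures.std(0)
pca = PCA().fit(z)
print(pca.explained_variance_ratio_)               # underlying dimensions
```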

19.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models in forecasting US gross national product (GNP) growth at different forecasting horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period from 1919:Q2 until 2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures, such as the volatility index and geopolitical risk, in predicting US recessions. While the Markov switching time‐varying parameter vector autoregressive model yields the lowest values of the root mean squared error in most cases, we observe relatively low values of the log predictive density score when using the Bayesian vector autoregressive model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.
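As a simplified stand-in for the paper's EPA/SPA machinery, a basic Diebold–Mariano-type statistic on the squared-error loss differential looks like this; serial correlation in the differential is ignored here for brevity, which a full implementation would correct with a HAC variance.

```python
# A bare-bones equal-predictive-ability check on two error series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
e1 = rng.normal(0, 1.0, 120)          # forecast errors, model 1 (synthetic)
e2 = rng.normal(0, 1.2, 120)          # forecast errors, model 2 (synthetic)

d = e1 ** 2 - e2 ** 2                  # squared-error loss differential
dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
print(dm, 2 * (1 - stats.norm.cdf(abs(dm))))   # statistic and p-value
```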

20.
This paper evaluates the impact of new releases of financial, real activity and survey data on nowcasting euro area gross domestic product (GDP). We show that all three data categories improve the accuracy of GDP nowcasts, with the effect being largest for real activity data. When variables are treated as if they were all published at the same time and without any time lag, financial series lose all their significance, while survey data remain an important ingredient of the nowcasting exercise. The subsequent analysis shows that the sectoral coverage of survey data, which is broader than that of timely available real activity data, as well as their information content stemming from questions focusing on agents' expectations, are the main sources of the ‘genuine’ predictive power of survey data. When the forecast period is restricted to the 2008–09 financial crisis, the main change is an enhanced forecasting role for financial data. Copyright © 2015 John Wiley & Sons, Ltd.
