Similar Documents
20 similar documents retrieved.
1.
We observe that daily highs and lows of stock prices do not diverge over time and, hence, adopt the cointegration concept and the related vector error correction model (VECM) to model the daily high, the daily low, and the associated daily range. The in-sample results attest to the importance of incorporating high-low interactions in modeling the range variable. In evaluating out-of-sample forecast performance using both mean squared forecast error and direction-of-change criteria, we find that the VECM-based low and high forecasts offer some advantages over alternative forecasts. The VECM-based range forecasts, on the other hand, do not always dominate: the forecast rankings depend on the choice of evaluation criterion and on the variables being forecast. Copyright © 2008 John Wiley & Sons, Ltd.
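As a rough, self-contained illustration of the high-low cointegration idea, the sketch below fits statsmodels' VECM to simulated series in which the high and the low share a common stochastic trend. The data, lag order, and deterministic terms are placeholders, not the paper's specification.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
n = 500
# A latent log "mid" price follows a random walk; highs and lows stay a
# stationary distance from it, so the two series are cointegrated.
mid = np.cumsum(rng.normal(0, 0.01, n))
high = mid + 0.005 + np.abs(rng.normal(0, 0.002, n))
low = mid - 0.005 - np.abs(rng.normal(0, 0.002, n))
data = np.column_stack([high, low])

# One cointegrating relation between the daily high and low (coint_rank=1).
res = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()

# Forecast the next 5 days of highs and lows; the implied range forecast
# is simply their difference.
fc = res.predict(steps=5)
print(fc[:, 0] - fc[:, 1])
```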

2.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question of whether forecast combination methods can hedge against poor forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short-term forecasting models. Our analysis shows considerable variation in the relative performance of the different models over time. To take this into account, we suggest performance-based forecast combination methods, in particular one that puts more weight on recent forecast performance. We compare this approach with equal-weight combination, which has been found to outperform more sophisticated combination methods in the past, and investigate whether it improves forecast accuracy over the single best model. The time-varying weights also lend an economic interpretation to how much each model's forecast contributes over time. We include a number of benchmark models in the analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy, and we examine how their accuracy differs between pre-crisis times, the period after the global financial crisis, and the full evaluation period, which includes the crisis with its extraordinary inflation volatility. Overall, we find that forecast combination helps hedge against poor forecast performance and that performance-based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.
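One common way to implement "more weight on recent performance" is discounted-MSFE weighting. The sketch below is a generic version operating on an assumed matrix of past forecast errors, not the paper's exact scheme.

```python
import numpy as np

def discounted_msfe_weights(errors, delta=0.9):
    """Combination weights inversely proportional to each model's discounted
    mean squared forecast error; `errors` has shape (T, M) for T past
    periods and M models, with row T-1 the most recent."""
    T, _ = errors.shape
    discounts = delta ** np.arange(T - 1, -1, -1)  # recent errors count most
    dmsfe = discounts @ (errors ** 2)              # one discounted MSFE per model
    inv = 1.0 / dmsfe
    return inv / inv.sum()

# Example: weight three inflation models by their last 24 months of errors.
rng = np.random.default_rng(1)
past_errors = rng.normal(0, [0.5, 1.0, 1.5], size=(24, 3))  # model 1 most accurate
w = discounted_msfe_weights(past_errors)
forecasts = np.array([1.8, 2.1, 2.4])          # the models' current forecasts
print(w, w @ forecasts)                        # combined point forecast
```

Setting delta = 1 recovers plain inverse-MSFE weights over the whole window; smaller delta reacts faster to shifts such as the post-2007 volatility.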

3.
Improving the prediction accuracy of agricultural futures prices is important for investors, agricultural producers, and policymakers: it helps market participants hedge risk and enables government departments to formulate appropriate agricultural regulations and policies. This study employs the ensemble empirical mode decomposition (EEMD) technique to decompose six categories of agricultural futures prices. Three models (support vector machine (SVM), neural network (NN), and autoregressive integrated moving average (ARIMA)) are then used to predict the decomposition components, and the final hybrid model is constructed by comparing prediction performance across components. The forecasting performance of the combined model is then compared with that of the benchmark individual models: SVM, NN, and ARIMA. Our main interest is short-term forecasting, so we consider only 1-day and 3-day forecast horizons. The results indicate that the EEMD combined model outperforms the individual models, especially at the 3-day horizon. The study also concludes that machine learning methods outperform statistical methods in forecasting the high-frequency volatile components, whereas there is no obvious difference between the individual models in predicting the low-frequency components.
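A minimal sketch of the decompose-then-forecast hybrid, assuming the PyEMD package (pip install EMD-signal) for EEMD and scikit-learn's SVR as the component forecaster; the simulated series, lag features, and hyperparameters are illustrative only.

```python
import numpy as np
from PyEMD import EEMD          # from the EMD-signal package (an assumption)
from sklearn.svm import SVR

rng = np.random.default_rng(2)
price = 100 + np.cumsum(rng.normal(0, 1, 400))   # stand-in futures price series

# 1) Decompose into intrinsic mode functions (their sum approximates the series).
imfs = EEMD(trials=50)(price)

# 2) Forecast each component one step ahead from its own lagged values.
def one_step_svr_forecast(x, lags=5):
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    y = x[lags:]
    return SVR(C=10.0).fit(X, y).predict(x[-lags:].reshape(1, -1))[0]

# 3) Recombine: the hybrid forecast is the sum of the component forecasts.
print(sum(one_step_svr_forecast(c) for c in imfs))
```

In the paper's design, each component would be matched with whichever of SVM, NN, or ARIMA predicts it best; the single SVR here just keeps the sketch short.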

4.
This study is devoted to gaining insight into timely, accurate, and relevant combined forecasts drawing on social media (Facebook), opinion polls, and prediction markets. We transformed each type of raw data into a probability of victory to serve as a forecasting model. Besides the four single forecasts, namely Facebook fans, Facebook "people talking about this" (PTAT) statistics, opinion polls, and prediction markets, we generated three combined forecasts from various combinations of the four components. We then examined the predictive performance of each forecast for vote shares and for the elected/non-elected outcome across the election period. Our findings, based on evidence from Taiwan's 2018 county and city elections, show that incorporating the Facebook PTAT statistic with polls and prediction markets generates the most powerful forecast. Moreover, time horizons matter: the best proposed model achieves its accuracy gains in the "late of election" period, but not in the "approaching election" period. The patterns of accuracy over time also differ from one forecasting model to another. We also highlight the complementarity of the various types of data, as each forecast makes an important contribution to forecasting elections.

5.
A variety of recent studies take a skeptical view of the predictability of stock returns. Empirical evidence shows that most prediction models suffer from loss of information, model uncertainty, and structural instability because they rely on low-dimensional information sets. In this study, we evaluate the predictive ability of several recently refined forecasting strategies that handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether strategies that (i) combine information and (ii) combine individual forecasts are useful for predicting US stock returns, that is, the market excess return and the size, value, and momentum premiums. Our results show that methods combining information have remarkable in-sample predictive ability, but their out-of-sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias-efficiency trade-off, yielding consistently superior forecast performance for the market excess return and the size premium even after the 1970s.

6.
The TFT-LCD (thin-film transistor liquid crystal display) industry is a key global industry whose products evolve at high clockspeed, that is, with very short product life cycles. In this research, the LCD monitor market serves as an empirical study of hierarchical forecasting (HF). The proposed HF methodology consists of five steps. First, the three hierarchical levels of the LCD monitor market are identified. Second, exogenous factors that significantly affect demand for LCD monitors are identified at each level of the product hierarchy. Third, three forecasting techniques (regression analysis, transfer function models, and a simultaneous equations model) are combined to forecast future demand at each hierarchical level. Fourth, various forecasting approaches and disaggregation proportion methods are adopted to obtain consistent demand forecasts across the hierarchy. Finally, the forecast errors of the different approaches are assessed to determine the best forecasting level and the best forecasting approach. The findings show that the best forecast results are obtained using the middle-out forecasting approach, whose mechanics are sketched below. These results can guide LCD manufacturers and brand owners in forecasting future market demand. Copyright © 2008 John Wiley & Sons, Ltd.
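The middle-out mechanics are simple enough to show in a few lines. The sketch below uses made-up segment forecasts and historical proportions; in the paper, the forecasts themselves come from regression, transfer function, and simultaneous equations models.

```python
import numpy as np

# Middle-level (segment) forecasts, e.g., from regressions on market drivers.
segment_fc = np.array([120.0, 80.0, 50.0])

# Aggregate up: the top-level (total market) forecast is the segment sum.
total_fc = segment_fc.sum()

# Disaggregate down: split each segment across its items using historical
# proportions, so forecasts are consistent across all three levels.
item_props = [np.array([0.6, 0.4]), np.array([0.7, 0.3]), np.array([1.0])]
item_fc = [p * s for p, s in zip(item_props, segment_fc)]

print(total_fc)   # 250.0
print(item_fc)    # items sum to their segments by construction
```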

7.
We examine different approaches to forecasting monthly US employment growth in the presence of many potentially relevant predictors. We first generate simulated out-of-sample forecasts of US employment growth at multiple horizons using individual autoregressive distributed lag (ARDL) models based on 30 potential predictors. We then consider different methods from the extant literature for combining the forecasts generated by the individual ARDL models. Using the mean squared forecast error (MSFE) metric, we investigate the performance of the combining methods over the last decade, as well as over five periods centered on the last five US recessions. Overall, our results show that a number of combining methods outperform a benchmark autoregressive model. Combining methods based on principal components exhibit the best overall performance, while methods based on simple averaging, clusters, and discount MSFE also perform well. On a cautionary note, some combining methods, such as those based on ordinary least squares, often perform quite poorly. Copyright © 2008 John Wiley & Sons, Ltd.
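One way to read the principal-components combining idea: treat the panel of individual-model forecasts as noisy measurements of a common signal, extract that signal with PCA, and map it to the target with a regression. The sketch below is a generic rendering of that idea under simulated placeholder data, not the paper's implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
T, M = 120, 30                       # periods and individual ARDL models
truth = rng.normal(0.1, 1.0, T)      # realized employment growth (simulated)
panel = truth[:, None] + rng.normal(0, 1.0, (T, M))   # 30 noisy forecasts

factor = PCA(n_components=1).fit_transform(panel)     # common forecast factor
reg = LinearRegression().fit(factor[:-1], truth[:-1]) # calibrate on history
print(reg.predict(factor[-1:]))      # combined forecast for the final period
```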

8.
We investigate the forecast performance of the fractionally integrated error correction model against several competing models for the prediction of the Nikkei stock average index. The competing models include the martingale model, the vector autoregressive model and the conventional error correction model. We consider models with and without conditional heteroscedasticity. For forecast horizons of over twenty days, the best forecasting performance is obtained for the model when fractional cointegration is combined with conditional heteroscedasticity. Our results reinforce the notion that cointegration and fractional cointegration are important for long-horizon prediction. Copyright © 1999 John Wiley & Sons, Ltd.
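Fractional integration, the ingredient that distinguishes this model from the conventional error correction model, can be illustrated by the binomial expansion of (1 - L)^d. The sketch below applies truncated fractional differencing to a simulated series; it is a generic illustration, not the paper's estimator.

```python
import numpy as np

def frac_diff(x, d, threshold=1e-4):
    """Fractionally difference a series via the binomial expansion
    (1 - L)^d = sum_k w_k L^k, with w_0 = 1 and the recursion
    w_k = w_{k-1} * (k - 1 - d) / k, truncated when weights become tiny."""
    w = [1.0]
    while abs(w[-1]) > threshold and len(w) < len(x):
        k = len(w)
        w.append(w[-1] * (k - 1 - d) / k)
    w = np.array(w)
    n = len(w)
    # Apply the weights to each window [x_t, x_{t-1}, ..., x_{t-n+1}].
    return np.array([w @ x[t - n + 1:t + 1][::-1] for t in range(n - 1, len(x))])

rng = np.random.default_rng(4)
x = np.cumsum(rng.normal(size=1000))      # a random walk, i.e., d = 1 series
print(frac_diff(x, d=0.4)[:5])
```

For 0 < d < 0.5 the differenced series is stationary, but the weights decay hyperbolically rather than geometrically, which is the long-memory property exploited in long-horizon prediction.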

9.
In this paper we investigate the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to summarize forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one-step-ahead forecasts are biased, which is likely in practice when frequently employed forecasting methods are used to generate a large number of individual forecasts. Examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy to supplement the information yielded by the aggregate measures, including boxplots of individual-forecast accuracy measures. This tool is simple but helpful, as the graphical display can reveal forecast deterioration that is masked by one or both of the aggregate metrics. The procedures are illustrated with data on sales of food items. Copyright © 2005 John Wiley & Sons, Ltd.
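A two-item toy example shows how aggregation can mask bias; the numbers are invented for illustration.

```python
import numpy as np

# Two items with opposite, persistent biases: their errors cancel in the
# family-level average, so the aggregate bias metric looks fine.
errors_item_a = np.full(12, +5.0)   # consistently over-forecast
errors_item_b = np.full(12, -5.0)   # consistently under-forecast

aggregate_bias = np.mean(np.concatenate([errors_item_a, errors_item_b]))
print(aggregate_bias)               # 0.0 -- masks the item-level biases

# Item-level summaries (the kind of per-item measures the proposed boxplot
# display is built from) reveal the problem immediately.
print(errors_item_a.mean(), errors_item_b.mean())   # +5.0 and -5.0
```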

10.
Most non-linear techniques give good in-sample fits to exchange rate data but are usually outperformed by random walks, or random walks with drift, in out-of-sample forecasting. In the case of regime-switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide analytical results for a simple switching model, the segmented trend model: only a small misclassification rate, when forecasting which regime the world will be in, is needed to lose any advantage from knowing the correct model specification. To illustrate this we discuss results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations on the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that mean squared error is not an appropriate way to evaluate forecast performance for non-linear models. Copyright © 1999 John Wiley & Sons, Ltd.
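The mechanism can be reproduced in a stylized symmetric two-regime setup (invented parameters, not the paper's calibration): with per-period drift of plus or minus d and misclassification rate p, the true model's forecast MSE is sigma^2 + 4 d^2 p, versus d^2 + sigma^2 for a no-change forecast, so in this setup the switching model loses whenever p > 1/4.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, sigma = 100_000, 0.5, 0.2
regime = rng.integers(0, 2, n)                      # true regime each period
change = np.where(regime == 1, d, -d) + rng.normal(0, sigma, n)

p_wrong = 0.30                                      # regime misclassification rate
wrong = rng.random(n) < p_wrong
pred = np.where(wrong, 1 - regime, regime)          # forecaster's regime call
model_fc = np.where(pred == 1, d, -d)               # true-model forecast of the change

mse_model = np.mean((change - model_fc) ** 2)       # ~ sigma^2 + 4*d^2*p = 0.34
mse_rw = np.mean(change ** 2)                       # no-change forecast: ~ d^2 + sigma^2 = 0.29
print(mse_model, mse_rw)                            # the true model loses here
```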

11.
We evaluate forecasting models of US business fixed investment spending growth over the recent 1995:1-2004:2 out-of-sample period. The forecasting models are based on the conventional Accelerator, Neoclassical, Average Q, and Cash-Flow models of investment spending, as well as real stock prices and excess stock return predictors. The real stock price model typically generates the most accurate forecasts, and forecast-encompassing tests indicate that this model contains most of the information useful for forecasting investment spending growth relative to the other models at longer horizons. In a robustness check, we also evaluate the forecasting performance of the models over two alternative out-of-sample periods: 1975:1-1984:4 and 1985:1-1994:4. A number of different models produce the most accurate forecasts over these alternative out-of-sample periods, indicating that while the real stock price model appears particularly useful for forecasting the recent behavior of investment spending growth, it may not continue to perform well in future periods. Copyright © 2007 John Wiley & Sons, Ltd.

12.
This paper uses the dynamic factor model framework, which accommodates a large cross-section of macroeconomic time series, to forecast regional house price inflation. We forecast house price inflation for five metropolitan areas of South Africa using principal components extracted from 282 quarterly macroeconomic time series over the period 1980:1 to 2006:4. The results, based on the root mean squared errors of one- to four-quarter-ahead out-of-sample forecasts over 2001:1 to 2006:4, indicate that in the majority of cases the dynamic factor model statistically outperforms the vector autoregressive models, under both the classical and the Bayesian treatments. We also consider spatial and non-spatial specifications. Our results indicate that macroeconomic fundamentals are important in forecasting house price inflation. Copyright © 2010 John Wiley & Sons, Ltd.
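The core diffusion-index mechanics (compress a large panel into a few principal components, then use them as regressors for the target) can be sketched as follows, with simulated data standing in for the 282 macro series.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
T, N = 108, 282                                        # quarters, macro series
factor = np.cumsum(rng.normal(size=T))                 # one common driver
panel = np.outer(factor, rng.normal(size=N)) + rng.normal(size=(T, N))
house_inflation = 0.5 * factor + rng.normal(size=T)    # simulated target

# Extract a handful of principal components from the standardized panel.
Z = (panel - panel.mean(0)) / panel.std(0)
F = PCA(n_components=3).fit_transform(Z)

# Forecast next-quarter house price inflation from this quarter's factors.
reg = LinearRegression().fit(F[:-1], house_inflation[1:])
print(reg.predict(F[-1:]))
```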

13.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We address the usual caveats of probabilistic forecast studies (small samples, limited models, and non-holistic validations) by performing a comprehensive comparison of 15 predictive schemes over a period of more than 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy, and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk-neutral densities outperform historical-based predictions in terms of information content. We find that the variance gamma model generates the highest out-of-sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH-based GJR-FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, and the nonparametric Breeden-Litzenberger formula yield biased predictions and are rejected in statistical tests.

14.
Model uncertainty and recurrent or cyclical structural changes in macroeconomic time series dynamics are substantial challenges for macroeconomic forecasting. This paper discusses a macro variable forecasting methodology that addresses model uncertainty and regime switching simultaneously. The proposed predictive regression specification permits both regime switching of the regression parameters and uncertainty about the inclusion of forecasting variables, by employing Bayesian model averaging. In an empirical exercise involving quarterly US inflation, we observe that Bayesian model averaging with regime switching leads to substantial improvements in forecast performance, particularly at the medium horizon (two to four quarters). Copyright © 2015 John Wiley & Sons, Ltd.
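As a minimal sketch of the model-averaging half of the method, the snippet below forms posterior model weights from the standard BIC approximation to marginal likelihoods; the paper's full machinery additionally lets the regression parameters switch across regimes. The BIC values and forecasts here are invented.

```python
import numpy as np

def bma_weights(bics):
    """Posterior model probabilities from the BIC approximation
    p(M_i | data) proportional to exp(-BIC_i / 2)."""
    bics = np.asarray(bics, dtype=float)
    logw = -0.5 * (bics - bics.min())     # subtract the min for numerical stability
    w = np.exp(logw)
    return w / w.sum()

# Example: three candidate inflation regressions with different predictors.
bics = [412.3, 409.8, 415.1]
w = bma_weights(bics)
model_forecasts = np.array([2.1, 2.4, 1.9])
print(w, w @ model_forecasts)             # BMA point forecast of inflation
```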

15.
In multivariate volatility prediction, identifying the optimal forecasting model is not always a feasible task. This is mainly due to the curse of dimensionality typically affecting multivariate volatility models. In practice only a subset of the potentially available models can be effectively estimated, after imposing severe constraints on the dynamic structure of the volatility process. It follows that in most applications the working forecasting model can be severely misspecified. This situation leaves scope for the application of forecast combination strategies as a tool for improving the predictive accuracy. The aim of the paper is to propose some alternative combination strategies and compare their performances in forecasting high-dimensional multivariate conditional covariance matrices for a portfolio of US stock returns. In particular, we will consider the combination of volatility predictions generated by multivariate GARCH models, based on daily returns, and dynamic models for realized covariance matrices, built from intra-daily returns. Copyright © 2015 John Wiley & Sons, Ltd.

16.
This paper examines the relative importance of allowing for time-varying volatility and for country interactions in a forecast model of economic activity. We allow for both by augmenting autoregressive models of growth with cross-country weighted averages of growth and with the generalized autoregressive conditional heteroskedasticity (GARCH) framework. The forecasts are evaluated using statistical criteria, through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially for one-period-ahead forecasts, but that this forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.
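A sketch of the augmented specification using the arch package (a software choice assumed here; the paper does not name its tools): an AR(1) mean for growth with a cross-country average as an exogenous regressor and GARCH(1,1) errors. All data are simulated.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(7)
n = 300
growth = rng.normal(0.5, 1.0, n)                    # own-country growth (simulated)
foreign = 0.5 * growth + rng.normal(0, 1.0, n)      # cross-country weighted average

# ARX mean (own lag plus the foreign-growth regressor) with GARCH(1,1) errors.
am = arch_model(growth, x=foreign[:, None], mean="ARX", lags=1,
                vol="GARCH", p=1, q=1)
res = am.fit(disp="off")
print(res.params)
```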

17.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models for forecasting US gross national product (GNP) growth at different horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period 1919:Q2 to 2014:Q4. We find that the economic policy uncertainty (EPU) index can improve the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures, such as the volatility index and geopolitical risk, in predicting US recessions. While the Markov-switching time-varying-parameter vector autoregressive model yields the lowest root mean squared error in most cases, we observe relatively low values for the log predictive density score when using the Bayesian vector autoregression with stochastic volatility. Overall, our results highlight the importance of uncertainty in forecasting US GNP growth rates.

18.
Following recent non-linear extensions of the present-value model, this paper examines the out-of-sample forecast performance of two parametric and two non-parametric non-linear models of stock returns. The parametric models are the standard regime-switching and the Markov regime-switching models, whereas the non-parametric models are the nearest-neighbour and the artificial neural network models. We focus on the US stock market, using annual observations spanning the period 1872-1999. Forecasts are evaluated on two criteria, namely forecast accuracy and forecast encompassing. In terms of accuracy, the Markov and artificial neural network models produce forecasts at least as accurate as those of the other models. In terms of encompassing, the Markov model outperforms all the others. Overall, both criteria suggest that the Markov regime-switching model is the preferred non-linear empirical extension of the present-value model for out-of-sample stock return forecasting. Copyright © 2003 John Wiley & Sons, Ltd.
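Forecast encompassing can be checked with a simple regression of one model's forecast errors on the difference between the competing forecasts: if the slope is insignificant, the first model already embodies the second's information. The sketch below uses simulated forecasts, not the paper's series.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
y = rng.normal(size=100)              # realized returns (simulated)
f1 = y + rng.normal(0, 0.5, 100)      # e.g., Markov regime-switching forecasts
f2 = y + rng.normal(0, 1.0, 100)      # e.g., a competing model's forecasts

# Encompassing regression: e1_t = a + lambda * (f2_t - f1_t) + u_t.
e1 = y - f1
ols = sm.OLS(e1, sm.add_constant(f2 - f1)).fit()
print(ols.params, ols.pvalues)        # insignificant slope: f1 encompasses f2
```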

19.
While much research on forecasting return volatility is conducted in a univariate setting, this paper includes proxies for information flows to forecast intra-day volatility in the IBEX 35 futures market. The premise is that volume, or the number of transactions, conveys important information about the market that may be useful in forecasting. Our results suggest that augmenting a variety of GARCH-type models with these proxies leads to improved forecasts across a range of intra-day frequencies. Furthermore, our results present an interesting picture whereby the PARCH model generally performs well at the highest frequencies and shorter forecasting horizons, whereas the component model performs well at lower frequencies and longer horizons. Both models attempt to capture long memory: the PARCH model allows for exponential decay in the autocorrelation function, while the component model captures trend volatility, which dominates over longer horizons. These characteristics are likely to explain the success of each model. Copyright © 2013 John Wiley & Sons, Ltd.

20.
In this paper, we investigate the time series properties of S&P 100 volatility and the forecasting performance of different volatility models. We consider several nonparametric and parametric volatility measures, such as implied, realized and model-based volatility, and show that these volatility processes exhibit an extremely slow mean-reverting behavior and possible long memory. For this reason, we explicitly model the near-unit root behavior of volatility and construct median unbiased forecasts by approximating the finite-sample forecast distribution using bootstrap methods. Furthermore, we produce prediction intervals for the next-period implied volatility that provide important information about the uncertainty surrounding the point forecasts. Finally, we apply intercept corrections to forecasts from misspecified models which dramatically improve the accuracy of the volatility forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
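The bootstrap ingredient can be sketched for a persistent AR(1) in volatility: fit by OLS, resample residuals, and read the median forecast and prediction interval off the simulated forecast distribution. This toy version resamples only the innovations; the paper's median-unbiased procedure also corrects for the finite-sample bias in the estimated autoregressive coefficient near the unit root.

```python
import numpy as np

rng = np.random.default_rng(9)
n, rho = 300, 0.97                       # near-unit-root persistence (simulated)
v = np.zeros(n)
for t in range(1, n):
    v[t] = rho * v[t - 1] + rng.normal(0, 0.1)

# OLS estimate of the AR(1) coefficient and the implied residuals.
rho_hat = (v[:-1] @ v[1:]) / (v[:-1] @ v[:-1])
resid = v[1:] - rho_hat * v[:-1]

# Bootstrap the one-step-ahead forecast distribution by resampling residuals.
draws = rho_hat * v[-1] + rng.choice(resid, size=5000, replace=True)
print(np.median(draws))                  # median point forecast
print(np.percentile(draws, [5, 95]))     # 90% prediction interval
```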
