Similar Literature (20 results)
1.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability also allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
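A minimal sketch of the extraction step, using scikit-learn's ordinary (non-functional) PLSRegression as a stand-in for the FPLS regression described above; the data, dimensions, and number of components below are simulated placeholders rather than the authors' specification.

```python
# Extract a low-dimensional "trace" from high-dimensional predictors with
# partial least squares, as a stand-in for the functional PLS step above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_obs, n_predictors = 500, 40          # e.g. lagged returns plus exchange rates
X = rng.standard_normal((n_obs, n_predictors))
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + 0.1 * rng.standard_normal(n_obs)

pls = PLSRegression(n_components=2)    # few components avoid multicollinearity
pls.fit(X, y)
trace = pls.transform(X)               # the extracted scores ("trace")
print(trace.shape)                     # (500, 2); feed these into GARCH etc.
```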

2.
This paper evaluates the performance of conditional variance models using high‐frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as a benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH(1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is degraded, however, by the non‐normality of mean‐adjusted returns, whose normality is an assumption of conditional variance models. Nevertheless, the optimum frequency remained the same even for different models (EGARCH and PARCH) and a different error distribution (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations or EGARCH and PARCH models would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.  
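A minimal sketch of the two ingredients compared above, assuming the `arch` package is available for the GARCH(1, 1) fit: a realized-volatility benchmark is summed from simulated 30-minute returns and set against a one-day-ahead GARCH variance forecast. The bar count, scaling, and data are illustrative, not the paper's.

```python
# Realized volatility from 30-minute returns versus a daily GARCH(1,1) forecast.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
n_days, bars_per_day = 250, 13                 # ~13 thirty-minute bars per day
intraday_r = 0.01 * rng.standard_normal((n_days, bars_per_day))

realized_var = (intraday_r ** 2).sum(axis=1)   # daily realized variance
realized_vol = np.sqrt(realized_var)           # the benchmark series

daily_r = intraday_r.sum(axis=1) * 100         # percent returns for scaling
garch_fit = arch_model(daily_r, vol="GARCH", p=1, q=1).fit(disp="off")
forecast = garch_fit.forecast(horizon=1)       # one-day-ahead variance forecast
garch_vol = np.sqrt(forecast.variance.values[-1, 0]) / 100  # back to return units
print(round(realized_vol[-1], 4), round(garch_vol, 4))
```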

3.
This paper empirically examines the relative predictive power of six alternative indicators of global economic activity in forecasting crude oil market volatility. A GARCH-MIDAS approach is constructed to accommodate all the relevant series at their available data frequencies, thereby circumventing information loss and any associated bias. We find evidence in support of global economic activity as a good predictor of energy market volatility. Our forecast evaluation of the various indicators places a higher weight on the newly developed indicator of global economic activity, which is based on a set of 16 variables covering multiple dimensions of the global economy that the other indicators do not seem to capture. Furthermore, we find that accounting for any inherent asymmetry in the global economic activity proxies improves the forecast accuracy of the GARCH-MIDAS-X model for oil volatility. The results leading to these conclusions are robust to multiple forecast horizons and consistent across alternative energy sources.
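A minimal sketch of the MIDAS ingredient of a GARCH-MIDAS model: a beta-lag polynomial aggregates a low-frequency activity indicator into a slowly moving long-run variance component. The one-parameter weighting scheme, lag length, and parameter values below are illustrative assumptions, not the specification used in the paper.

```python
# Beta-lag weights aggregating a monthly indicator into a long-run variance term.
import numpy as np

def beta_weights(n_lags: int, omega: float) -> np.ndarray:
    """One-parameter beta lag polynomial, normalized to sum to one."""
    k = np.arange(1, n_lags + 1) / n_lags
    w = (1.0 - k) ** (omega - 1.0)          # weights decline toward zero at lag K
    return w / w.sum()

rng = np.random.default_rng(2)
activity = rng.standard_normal(36)          # a monthly global-activity index
K, m, theta = 12, 0.1, 0.3                  # lag length and illustrative parameters

w = beta_weights(K, omega=4.0)              # heavier weight on recent months
# long-run component for the latest month, most recent observation first
tau = np.exp(m + theta * np.dot(w, activity[-K:][::-1]))
print(w.round(3), round(tau, 4))
```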

4.
Empirical mode decomposition (EMD)‐based ensemble methods have become increasingly popular in the research field of forecasting, substantially enhancing prediction accuracy. The key factor in this type of method is the multiscale decomposition, which greatly reduces modeling complexity. Accordingly, this study probes this factor and makes further innovations from a new perspective of multiscale complexity. In particular, this study quantitatively investigates the relationship between decomposition performance and prediction accuracy, thereby developing (1) a novel multiscale complexity measurement (for evaluating multiscale decomposition), (2) a novel optimized EMD (OEMD) (considering multiscale complexity), and (3) a novel OEMD‐based forecasting methodology (using the proposed OEMD in multiscale analysis). With crude oil and natural gas prices as samples, the empirical study statistically indicates that the forecasting capability of EMD‐based methods is highly reliant on the decomposition performance; accordingly, the proposed OEMD‐based methods considering multiscale complexity significantly outperform the benchmarks based on typical EMDs in prediction accuracy.
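A minimal sketch of one way to quantify the complexity of decomposed components, using sample entropy as a generic stand-in for the multiscale complexity measurement proposed in the paper; the two "modes" below are simulated rather than actual EMD output, and the embedding dimension and tolerance are conventional defaults.

```python
# Sample entropy as a complexity score for decomposed components.
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r_frac: float = 0.2) -> float:
    """Sample entropy with tolerance r = r_frac * std(x)."""
    r = r_frac * x.std()
    n_templates = len(x) - m                      # use N - m vectors for both lengths
    def count_matches(length: int) -> int:
        t = np.lib.stride_tricks.sliding_window_view(x, length)[:n_templates]
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)  # Chebyshev distances
        return int((d <= r).sum()) - n_templates  # drop self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(10)
smooth_mode = np.sin(np.linspace(0, 8 * np.pi, 300))   # low-complexity component
noisy_mode = rng.standard_normal(300)                  # high-complexity component
print(sample_entropy(smooth_mode), sample_entropy(noisy_mode))
```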

5.
While much research related to forecasting return volatility does so in a univariate setting, this paper includes proxies for information flows to forecast intra‐day volatility for the IBEX 35 futures market. The belief is that volume or the number of transactions conveys important information about the market that may be useful in forecasting. Our results suggest that augmenting a variety of GARCH‐type models with these proxies leads to improved forecasts across a range of intra‐day frequencies. Furthermore, our results present an interesting picture whereby the PARCH model generally performs well at the highest frequencies and shorter forecasting horizons, whereas the component model performs well at lower frequencies and longer forecast horizons. Both models attempt to capture long memory; the PARCH model allows for exponential decay in the autocorrelation function, while the component model captures trend volatility, which dominates over a longer horizon. These characteristics are likely to explain the success of each model. Copyright © 2013 John Wiley & Sons, Ltd.  

6.
Successful market timing strategies depend on superior forecasting ability. We use a sentiment index model, a kitchen sink logistic regression model, and a machine learning model (least absolute shrinkage and selection operator, LASSO) to forecast 1‐month‐ahead S&P 500 Index returns. In order to determine how successful each strategy is at forecasting the market direction, a “beta optimization” strategy is implemented. We find that the LASSO model outperforms the other models with consistently higher annual returns and lower monthly drawdowns.
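A minimal sketch of the LASSO piece, assuming scikit-learn's Lasso with an arbitrary penalty; the predictor set, sample split, and the sign-based timing rule below are illustrative stand-ins for the sentiment, kitchen-sink, and beta-optimization details in the paper.

```python
# LASSO forecast of next-month index returns from a wide predictor set.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
n_months, n_predictors = 360, 25              # sentiment, macro, technical, ...
X = rng.standard_normal((n_months, n_predictors))
next_month_ret = 0.4 * X[:, 0] - 0.2 * X[:, 5] + rng.standard_normal(n_months)

train = slice(0, 300)
model = Lasso(alpha=0.05).fit(X[train], next_month_ret[train])
signal = model.predict(X[300:])               # out-of-sample forecasts
position = np.sign(signal)                    # a simple market-timing rule
print(np.count_nonzero(model.coef_), position[:5])
```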

7.
This paper evaluates the accuracy of 1‐month‐ahead systematic (beta) risk forecasts in three return measurement settings: monthly, daily, and 30‐minute. It was found that the popular Fama–MacBeth beta from 5 years of monthly returns generates the most accurate beta forecast among estimators based on monthly returns. A realized beta estimator from daily returns over the prior year generates the most accurate beta forecast among estimators based on daily returns. A realized beta estimator from 30‐minute returns over the prior 2 months generates the most accurate beta forecast among estimators based on 30‐minute returns. In environments where low‐, medium‐ and high‐frequency returns are accurately available, beta forecasting with low‐frequency returns is the least accurate and beta forecasting with high‐frequency returns is the most accurate. The improvements in precision of the beta forecasts are demonstrated in portfolio optimization for a targeted beta exposure. Copyright © 2016 John Wiley & Sons, Ltd.  
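A minimal sketch of a realized-beta estimator of the kind compared above: beta is the sum of cross-products of asset and market returns divided by the sum of squared market returns over the estimation window. The two months of 30-minute returns are simulated with a known beta purely for illustration.

```python
# Realized beta from two months of simulated 30-minute returns.
import numpy as np

rng = np.random.default_rng(4)
n_obs = 2 * 21 * 13                            # ~2 months of 30-minute bars
r_market = 0.001 * rng.standard_normal(n_obs)
r_asset = 1.2 * r_market + 0.0005 * rng.standard_normal(n_obs)  # true beta 1.2

realized_beta = np.sum(r_asset * r_market) / np.sum(r_market ** 2)
print(round(realized_beta, 3))                 # forecast of next month's beta
```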

8.
Empirical experiments have shown that macroeconomic variables can affect the volatility of the stock market. However, macroeconomic variables are observed at low frequencies that differ from the frequency of stock market volatility, and few studies consider low-frequency macroeconomic variables as input indicators for deep learning models. In this paper, we forecast stock market volatility by incorporating low-frequency macroeconomic variables into a hybrid model that integrates deep learning with the generalized autoregressive conditional heteroskedasticity and mixed data sampling (GARCH-MIDAS) model to process the mixed-frequency data. The paper first takes macroeconomic variables as exogenous variables, then uses the GARCH-MIDAS model to bridge the frequency mismatch between the macroeconomic variables and stock market volatility and to forecast short-term volatility, and finally feeds the predicted short-term volatility as an input indicator into machine learning and deep learning models to forecast the realized volatility of the stock market. Comparing the forecasting performance of the same model before and after adding the macroeconomic variables, we find that including them significantly improves forecasting ability. Additionally, comparing the forecasting performance across models, the deep learning model performs best, the machine learning model worse, and the traditional econometric model worst.

9.
We propose a wavelet neural network (neuro‐wavelet) model for the short‐term forecast of stock returns from high‐frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non‐stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non‐decimated wavelet‐based multi‐resolution decomposition of an exogenous signal showed consistently superior forecasting performance. Reasonable forecasting accuracy for the one‐, three‐, and five‐step‐ahead horizons was achieved by the proposed model. The procedure used to build the neuro‐wavelet model is reusable and can be applied to any high‐frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.  

10.
The versatility of one‐dimensional discrete wavelet analysis combined with wavelet and Burg extensions for forecasting financial time series with distinctive properties is illustrated with market data. Any time series of financial assets may be decomposed into simpler signals, called approximations and details, in the framework of one‐dimensional discrete wavelet analysis. The simplified signals are recomposed after extension. The final output is the forecasted time series, which is compared to observed data. Results show the pertinence of adding spectrum analysis to the battery of tools used by econometricians and quantitative analysts for the forecast of economic or financial time series.
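A minimal sketch of the decompose–extend–recompose idea, assuming the PyWavelets (`pywt`) package: the series is split into approximation and detail signals, each component is extended one step with a simple AR(1) fit (a crude stand-in for the wavelet and Burg extensions mentioned above), and the extensions are summed into a forecast. Wavelet choice, level, and data are illustrative.

```python
# Decompose a series into wavelet components, extend each, and recompose.
import numpy as np
import pywt

rng = np.random.default_rng(5)
x = np.cumsum(0.01 * rng.standard_normal(256))   # a toy price-like series

coeffs = pywt.wavedec(x, "db4", level=3)         # [cA3, cD3, cD2, cD1]
components = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    components.append(pywt.waverec(kept, "db4")[: len(x)])  # one component signal

def ar1_step(series: np.ndarray) -> float:
    """One-step extension from an AR(1) fit by least squares."""
    y, z = series[1:], series[:-1]
    phi = np.dot(z, y) / np.dot(z, z)
    return phi * series[-1]

forecast = sum(ar1_step(c) for c in components)  # recomposed one-step forecast
print(round(forecast, 4), round(x[-1], 4))
```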

11.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are applied to returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
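A minimal sketch of two schemes from the simple end of the combination spectrum discussed above: an equal-weight average and weights proportional to the inverse of each model's past mean squared error. The three "single-model" volatility forecasts are simulated placeholders.

```python
# Equal-weight versus inverse-MSE combination of volatility forecasts.
import numpy as np

rng = np.random.default_rng(6)
true_vol = np.abs(rng.standard_normal(100)) + 1.0
# three hypothetical single-model forecasts with different error levels
forecasts = true_vol[None, :] + np.array([[0.1], [0.3], [0.6]]) * rng.standard_normal((3, 100))

past, future = slice(0, 80), slice(80, 100)
mse = ((forecasts[:, past] - true_vol[past]) ** 2).mean(axis=1)

w_equal = np.full(3, 1 / 3)
w_inv_mse = (1 / mse) / (1 / mse).sum()          # reward models with low past MSE

combo_equal = w_equal @ forecasts[:, future]
combo_inv = w_inv_mse @ forecasts[:, future]
print(((combo_equal - true_vol[future]) ** 2).mean(),
      ((combo_inv - true_vol[future]) ** 2).mean())
```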

12.
Inspired by the commonly held view that international stock market volatility is equivalent to cross-market information flow, we propose various ways of constructing two types of information flow, based on realized volatility (RV) and implied volatility (IV), in multiple international markets. We focus on the RVs derived from the intraday prices of eight international stock markets and use a heterogeneous autoregressive framework to forecast the future volatility of each market for 1 day to 22 days ahead. Our Diebold-Mariano tests provide strong evidence that information flow with IV enhances the accuracy of forecasting international RVs over all of the prediction horizons. The results of a model confidence set test show that a market's own IV and the first principal component of the international IVs exhibit the strongest predictive ability. In addition, the use of information flows with IV can further increase economic returns. Our results are supported by the findings of a wide range of robustness checks.
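A minimal sketch of a heterogeneous autoregressive (HAR) regression for realized volatility augmented with an implied-volatility regressor as a stand-in for the IV information flow described above, estimated by ordinary least squares on simulated data; the lag structure follows the usual daily/weekly/monthly convention.

```python
# HAR-RV regression with an implied-volatility regressor, fitted by OLS.
import numpy as np

rng = np.random.default_rng(7)
T = 500
rv = np.abs(rng.standard_normal(T)) + 1.0       # daily realized volatility
iv = rv + 0.2 * rng.standard_normal(T)          # a correlated implied-vol proxy

rows, target = [], []
for t in range(22, T - 1):
    rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean(), iv[t]])
    target.append(rv[t + 1])
X, y = np.array(rows), np.array(target)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)    # [const, daily, weekly, monthly, IV]
print(beta.round(3))
one_day_ahead = X[-1] @ beta                    # forecast for the next day
```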

13.
This paper uses Markov switching models to capture volatility dynamics in exchange rates and to evaluate their forecasting ability. We identify that increased volatilities in four euro‐based exchange rates are due to underlying structural changes. Also, we find that currencies are closely related to each other, especially in high‐volatility periods, where cross‐correlations increase significantly. Using a Markov switching Monte Carlo approach, we provide evidence in favour of Markov switching models, rejecting the random walk hypothesis. Testing in‐sample and out‐of‐sample Markov trading rules based on Dueker and Neely (Journal of Banking and Finance, 2007), we find that the econometric methodology is able to forecast exchange rate movements accurately. When applied to euro/US dollar and euro/British pound daily returns data, the model provides exceptional out‐of‐sample returns. However, when applied to the euro/Brazilian real and the euro/Mexican peso, the model loses power. The higher volatility exhibited by the Latin American currencies seems to be a critical factor in this failure. Copyright © 2009 John Wiley & Sons, Ltd.  
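A minimal sketch of a two-regime Markov switching model with regime-dependent variance, assuming statsmodels' MarkovRegression as a generic implementation rather than the authors' exact specification; the return series is simulated so that a high-volatility spell is embedded in the middle of the sample, and the regime labels produced by the fit are arbitrary.

```python
# Two-regime Markov switching model with switching variance on simulated returns.
import numpy as np
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(8)
calm = 0.3 * rng.standard_normal(400)
turbulent = 1.5 * rng.standard_normal(100)
returns = np.concatenate([calm, turbulent, 0.3 * rng.standard_normal(300)])

model = MarkovRegression(returns, k_regimes=2, trend="c", switching_variance=True)
result = model.fit()

# smoothed regime probabilities; reshape defensively to (nobs, k_regimes)
probs = np.asarray(result.smoothed_marginal_probabilities)
probs = probs if probs.shape[0] == len(returns) else probs.T
print(probs[:5].round(3))      # probabilities near the calm start
print(probs[450:455].round(3)) # probabilities inside the turbulent spell
```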

14.
The TFT‐LCD (thin‐film transistor–liquid crystal display) industry is one of the key global industries with products that have high clock speed. In this research, the LCD monitor market is considered for an empirical study on hierarchical forecasting (HF). The proposed HF methodology consists of five steps. First, the three hierarchical levels of the LCD monitor market are identified. Second, several exogenously driven factors that significantly affect the demand for LCD monitors are identified at each level of product hierarchy. Third, the three forecasting techniques—regression analysis, transfer function, and simultaneous equations model—are combined to forecast future demand at each hierarchical level. Fourth, various forecasting approaches and disaggregating proportion methods are adopted to obtain consistent demand forecasts at each hierarchical level. Finally, the forecast errors with different forecasting approaches are assessed in order to determine the best forecasting level and the best forecasting approach. The findings show that the best forecast results can be obtained by using the middle‐out forecasting approach. These results could guide LCD manufacturers and brand owners on ways to forecast future market demands. Copyright 2008 John Wiley & Sons, Ltd.  
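A minimal sketch of the middle-out step: forecasts made at the middle level of the product hierarchy are summed upward to the total and split downward with historical proportions. The hierarchy, forecasts, and shares below are invented for illustration.

```python
# Middle-out reconciliation: aggregate family forecasts up, disaggregate down.
import numpy as np

# middle-level (e.g. product-family) forecasts for the next period
family_forecast = np.array([120.0, 80.0, 50.0])

# upward: the total market forecast is the sum of the family forecasts
total_forecast = family_forecast.sum()

# downward: split each family across items using historical sales shares
hist_item_share = {
    0: np.array([0.6, 0.4]),          # family 0 has two items
    1: np.array([0.5, 0.3, 0.2]),     # family 1 has three items
    2: np.array([1.0]),               # family 2 has one item
}
item_forecasts = {f: family_forecast[f] * s for f, s in hist_item_share.items()}
print(total_forecast, item_forecasts)
```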

15.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.

16.
In this paper, we undertake a detailed empirical verification of wavelet scaling as a forecasting method through its application to a large set of noisy data. The method consists of two steps. In the first, the data are smoothed with the help of wavelet estimators of stochastic signals based on the idea of scaling, and, in the second, an AR(I)MA model is built on the estimated signal. This procedure is compared with some alternative approaches encompassing exponential smoothing, moving average, AR(I)MA and regularized AR models. Special attention is given to the ways of treating boundary regions in the wavelet signal estimation and to the use of biased, weakly biased and unbiased estimators of the wavelet variance. According to a collection of popular forecast accuracy measures, when applied to time series with a high level of noise, wavelet scaling is able to outperform the other forecasting procedures, although this conclusion applies mainly to longer time series and not uniformly across all the examined accuracy measures.

17.
Using the generalized dynamic factor model, this study constructs three predictors of crude oil price volatility: a fundamental (physical) predictor, a financial predictor, and a macroeconomic uncertainty predictor. Moreover, an event‐triggered predictor is constructed using data extracted from Google Trends. We construct GARCH‐MIDAS (generalized autoregressive conditional heteroskedasticity–mixed‐data sampling) models combining realized volatility with the predictors to predict oil price volatility at different forecasting horizons. We then identify the predictive power of the realized volatility and the predictors by the model confidence set (MCS) test. The findings show that, among the four indexes, the financial predictor has the most predictive power for crude oil volatility, which provides strong evidence that financialization has been the key determinant of crude oil price behavior since the 2008 global financial crisis. In addition, the fundamental predictor, followed by the financial predictor, effectively forecasts crude oil price volatility in the long‐run forecasting horizons. Our findings indicate that the different predictors can provide distinct predictive information at the different horizons given the specific market situation. These findings have useful implications for market traders in terms of managing crude oil price risk.

18.
The aim of this study was to forecast the Singapore gross domestic product (GDP) growth rate by employing the mixed‐data sampling (MIDAS) approach using mixed and high‐frequency financial market data from Singapore, and to examine whether the high‐frequency financial variables could better predict the macroeconomic variables. We adopt different time‐aggregating methods to handle the high‐frequency data in order to match the sampling rate of lower‐frequency data in our regression models. Our results showed that MIDAS regression using high‐frequency stock return data produced a better forecast of GDP growth rate than the other models, and the best forecasting performance was achieved by using weekly stock returns. The forecasting result was further improved by performing intra‐period forecasting.
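A minimal sketch of the time-aggregation step that makes mixed frequencies comparable: weekly stock returns are collapsed to one regressor per quarter (a plain average here, only one of several possible aggregation choices) and related to GDP growth by OLS. All series are simulated and the aggregation rule is an assumption, not the paper's MIDAS weighting.

```python
# Aggregate weekly returns to quarterly and regress GDP growth on them by OLS.
import numpy as np

rng = np.random.default_rng(9)
n_quarters, weeks_per_quarter = 60, 13
weekly_ret = rng.standard_normal((n_quarters, weeks_per_quarter))

agg_ret = weekly_ret.mean(axis=1)                    # quarterly aggregate regressor
gdp_growth = 0.5 * agg_ret + 0.2 * rng.standard_normal(n_quarters)

X = np.column_stack([np.ones(n_quarters), agg_ret])
beta, *_ = np.linalg.lstsq(X, gdp_growth, rcond=None)
print(beta.round(3))                                 # [intercept, slope]
```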

19.
Using option market data we derive naturally forward‐looking, nonparametric and model‐free risk estimates, three desired characteristics hardly obtainable using historical returns. The option‐implied measures are only based on the first derivative of the option price with respect to the strike price, bypassing the difficult task of estimating the tail of the return distribution. We estimate and backtest the 1%, 2.5%, and 5% WTI crude oil futures option‐implied value at risk and conditional value at risk for the turbulent years 2011–2016 and for both tails of the distribution. Compared with risk estimations based on the filtered historical simulation methodology, our results show that the option‐implied risk metrics are valid alternatives to the statistically based historical models.
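A minimal sketch of the quantile-recovery idea: under the risk-neutral measure, and ignoring discounting, the first derivative of the call price with respect to the strike gives the implied distribution function F(K) ≈ 1 + ∂C/∂K, so a value-at-risk level can be read off as a quantile of that implied distribution. The call curve below is generated from a toy Black–Scholes model only to provide smooth prices; the actual exercise would use observed option quotes.

```python
# Option-implied quantile (VaR level) from the strike-derivative of call prices.
import numpy as np
from scipy.stats import norm

S0, sigma, T = 100.0, 0.35, 1.0 / 12           # spot, vol, one-month horizon
K = np.linspace(60.0, 140.0, 161)

d1 = (np.log(S0 / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call = S0 * norm.cdf(d1) - K * norm.cdf(d2)    # zero-rate Black-Scholes prices

dC_dK = np.gradient(call, K)                   # finite-difference derivative
implied_cdf = 1.0 + dC_dK                      # risk-neutral CDF of the price

alpha = 0.05
var_level = np.interp(alpha, implied_cdf, K)   # 5% quantile of the distribution
print(round(var_level, 2))                     # price threshold for 5% VaR
```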

20.
A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low‐dimensional information sets. In this study, we evaluate the predictive ability of various recently refined forecasting strategies, which handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful for predicting US stock returns, that is, the market excess return and the size, value, and momentum premiums. Our results show that methods combining information have remarkable in‐sample predictive ability. However, their out‐of‐sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias–efficiency trade‐off, yielding consistently superior forecast performance for the market excess return and the size premium even after the 1970s.
