Similar Literature
20 similar articles found.
1.
We propose a new methodology for filtering and forecasting the latent variance in a two-factor diffusion process with jumps from a continuous-time perspective. For this purpose we use a continuous-time Markov chain approximation with a finite state space. Essentially, we extend Markov chain filters to processes of higher dimensions. We assess the forecastability of the models under consideration by measuring the forecast error of the model-expected realized variance, by trading variance swap contracts, by producing value-at-risk estimates, and by examining sign forecastability. We provide empirical evidence using two sources: S&P 500 index values and the corresponding cumulative risk-neutral expected variance (namely, the VIX index). Joint estimation reveals the market prices of equity and variance risk implied by the two probability measures. A further simulation study shows that the proposed methodology can filter the variance of virtually any type of diffusion process (coupled with a jump process) with a non-analytical density function. Copyright © 2015 John Wiley & Sons, Ltd.
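
A minimal sketch of the filtering idea, assuming a simple one-factor variance grid, an arbitrary one-step transition matrix P, and a Gaussian return likelihood; the paper's two-factor jump-diffusion specification and risk-neutral estimation are not reproduced here.

```python
import numpy as np

# Illustrative finite-state (Markov chain) filter for a latent variance.
# var_grid, P and the Gaussian return likelihood are simplifying assumptions.
def markov_chain_filter(returns, var_grid, P, dt=1.0 / 252):
    n_states = len(var_grid)
    pi = np.full(n_states, 1.0 / n_states)           # uniform prior over variance states
    filtered = np.empty(len(returns))
    for t, r in enumerate(returns):
        pi = pi @ P                                   # prediction step
        lik = np.exp(-0.5 * r**2 / (var_grid * dt)) / np.sqrt(var_grid * dt)
        pi *= lik                                     # Bayes update with the observed return
        pi /= pi.sum()
        filtered[t] = pi @ var_grid                   # filtered expected variance
    return filtered
```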

2.
Three general classes of state space models are presented, using the single source of error formulation. The first class is the standard linear model with homoscedastic errors, the second retains the linear structure but incorporates a dynamic form of heteroscedasticity, and the third allows for non-linear structure in the observation equation as well as heteroscedasticity. These three classes provide stochastic models for a wide variety of exponential smoothing methods. We use these classes to provide exact analytic (matrix) expressions for forecast error variances that can be used to construct prediction intervals one or multiple steps ahead. These formulas are reduced to non-matrix expressions for 15 state space models that underlie the most common exponential smoothing methods. We discuss relationships between our expressions and previous suggestions for finding forecast error variances and prediction intervals for exponential smoothing methods. Simpler approximations are developed for the more complex schemes and their validity examined. The paper concludes with a numerical example using a non-linear model. Copyright © 2005 John Wiley & Sons, Ltd.
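
As an illustration of the first (linear, homoscedastic) class, the local-level model behind simple exponential smoothing has the well-known h-step forecast variance sigma^2 * (1 + (h - 1) * alpha^2); the sketch below turns this into a prediction interval. All parameter values are illustrative.

```python
import numpy as np

# Class 1 example: the local-level (ANN) model behind simple exponential smoothing
# has Var[y_{T+h} | T] = sigma^2 * (1 + (h - 1) * alpha^2).
def ses_prediction_interval(level, alpha, sigma2, h, z=1.96):
    point = level                                    # flat point forecast at the current level
    var_h = sigma2 * (1.0 + (h - 1) * alpha**2)      # exact h-step forecast error variance
    half_width = z * np.sqrt(var_h)
    return point - half_width, point + half_width

lo, hi = ses_prediction_interval(level=100.0, alpha=0.3, sigma2=4.0, h=6)
```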

3.
In this paper we compare several multi-period volatility forecasting models, specifically from MIDAS and HAR families. We perform our comparisons in terms of out-of-sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra-daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.
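
A minimal sketch of the HAR-type specification used as one of the competing models, with daily, weekly and monthly realized-variance averages as regressors; the data layout and plain OLS fit are illustrative.

```python
import numpy as np

# HAR-RV sketch: RV_{t+1} = b0 + b_d*RV_t + b_w*RV_t^(5) + b_m*RV_t^(22) + e_{t+1},
# where RV^(5) and RV^(22) are 5-day and 22-day averages of daily realized variance.
def har_design(rv):
    rv = np.asarray(rv, dtype=float)
    t_idx = range(21, len(rv) - 1)
    rv_d = rv[21:-1]                                           # daily RV_t
    rv_w = np.array([rv[t - 4:t + 1].mean() for t in t_idx])   # weekly average
    rv_m = np.array([rv[t - 21:t + 1].mean() for t in t_idx])  # monthly average
    X = np.column_stack([np.ones_like(rv_d), rv_d, rv_w, rv_m])
    y = rv[22:]                                                # target RV_{t+1}
    return X, y

# X, y = har_design(daily_rv); beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```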

4.
We measure the performance of multi-model inference (MMI) forecasts compared to predictions made from a single model for crude oil prices. We forecast the West Texas Intermediate (WTI) crude oil spot prices using total OECD petroleum inventory levels, surplus production capacity, the Chicago Board Options Exchange Volatility Index and an implementation of a subset autoregression with exogenous variables (SARX). Coefficient and standard error estimates obtained from SARX determined by conditioning on a single 'best model' ignore model uncertainty and result in underestimated standard errors and overestimated coefficients. We find that the MMI forecast outperforms a single-model forecast for both in- and out-of-sample datasets over a variety of statistical performance measures, and further find that weighting models according to the Bayesian information criterion generally yields superior results both in and out of sample when compared to the Akaike information criterion. Copyright © 2016 John Wiley & Sons, Ltd.
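
A minimal sketch of information-criterion model averaging, assuming the candidate models' BIC (or AIC) values and point forecasts are already in hand; the weight formula exp(-delta/2) / sum exp(-delta/2) is the standard one, not necessarily the paper's exact implementation.

```python
import numpy as np

# Information-criterion model weights and the resulting combined forecast.
def ic_weights(ic_values):
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()                 # differences from the best (lowest-IC) model
    w = np.exp(-0.5 * delta)
    return w / w.sum()

def combined_forecast(forecasts, ic_values):
    return ic_weights(ic_values) @ np.asarray(forecasts, dtype=float)

# Example with hypothetical BICs for three candidate SARX-type models:
# combined_forecast([52.1, 54.3, 51.8], ic_values=[310.2, 312.5, 309.8])
```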

5.
This paper evaluates the performance of conditional variance models using high-frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as a benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH(1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is weakened, however, by the non-normality of mean-adjusted returns, whose normality is an assumption of conditional variance models. Nevertheless, the optimal frequency remains the same for different models (EGARCH and PARCH) and a different error distribution (the generalized error distribution, GED), where the forecast error is reduced to some extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations, or EGARCH and PARCH models, would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.
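
A minimal sketch of the realized-volatility benchmark, assuming a single day of 1-minute prices; sampling every k = 30 minutes corresponds to the optimal frequency reported in the paper.

```python
import numpy as np

# Daily realized variance from intraday prices sampled every k minutes.
# `prices` is assumed to be a 1-minute price series for one trading day.
def realized_variance(prices, k=30):
    p = np.asarray(prices, dtype=float)
    sampled = p[::k]                     # keep one price every k minutes
    r = np.diff(np.log(sampled))         # k-minute log returns
    return np.sum(r**2)                  # realized variance for the day
```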

6.
This paper estimates, using stochastic simulation and a multi-country macroeconometric model, the fraction of the forecast error variance of output changes and the fraction of the forecast error variance of inflation that are due to unpredictable asset price changes. The results suggest that between about 25% and 37% of the forecast error variance of output growth over eight quarters is due to asset price changes and between about 33% and 60% of the forecast error variance of inflation over eight quarters is due to asset price changes. These estimates provide limits to the accuracy that can be expected from macroeconomic forecasting. Copyright © 2011 John Wiley & Sons, Ltd.

7.
This paper proposes an algorithm that uses forecast encompassing tests for combining forecasts when there are a large number of forecasts that might enter the combination. The algorithm excludes a forecast from the combination if it is encompassed by another forecast. To assess the usefulness of this approach, an extensive empirical analysis is undertaken using a US macroeconomic dataset. The results are encouraging; the algorithm's forecasts outperform benchmark model forecasts, in a mean square error (MSE) sense, in a majority of cases. The paper also compares the empirical performance of different approaches to forecast combination, and provides a rule-of-thumb cut-off point for the thick-modeling approach. Copyright © 2009 John Wiley & Sons, Ltd.
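
A minimal sketch of a pairwise encompassing test and an exclude-if-encompassed pruning loop, using the common regression form (y - f_i) = lambda * (f_j - f_i) + u and a simple t-test; the paper's exact algorithm and test details may differ.

```python
import numpy as np
from scipy import stats

# Forecast i encompasses forecast j if lambda is not significantly different from zero.
def encompasses(y, f_i, f_j, alpha=0.05):
    e_i = y - f_i
    d = f_j - f_i
    lam = np.dot(d, e_i) / np.dot(d, d)                     # OLS slope (no intercept)
    resid = e_i - lam * d
    se = np.sqrt(resid @ resid / (len(y) - 1) / (d @ d))
    p_val = 2 * stats.t.sf(abs(lam / se), df=len(y) - 1)
    return p_val > alpha                                     # cannot reject lambda = 0

# Drop any forecast that is encompassed by another surviving forecast.
def prune_encompassed(y, forecasts):
    keep = list(range(len(forecasts)))
    for j in list(keep):
        if any(i != j and encompasses(y, forecasts[i], forecasts[j]) for i in keep):
            keep.remove(j)
    return keep   # indices of forecasts entering the final (e.g. equal-weight) combination
```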

8.
Following recent non-linear extensions of the present-value model, this paper examines the out-of-sample forecast performance of two parametric and two non-parametric non-linear models of stock returns. The parametric models include the standard regime switching and the Markov regime switching models, whereas the non-parametric models are the nearest-neighbour and the artificial neural network models. We focus on the US stock market using annual observations spanning the period 1872-1999. Evaluation of forecasts is based on two criteria, namely forecast accuracy and forecast encompassing. In terms of accuracy, the Markov and the artificial neural network models produce at least as accurate forecasts as the other models. In terms of encompassing, the Markov model outperforms all the others. Overall, both criteria suggest that the Markov regime switching model is the most preferable non-linear empirical extension of the present-value model for out-of-sample stock return forecasting. Copyright © 2003 John Wiley & Sons, Ltd.

9.
In this paper we propose and test a forecasting model on monthly and daily spot prices of five selected exchange rates. In doing so, we combine a novel smoothing technique (initially applied in signal processing) with a variable selection methodology and two regression estimation methodologies from the field of machine learning (ML). After the decomposition of the original exchange rate series using an ensemble empirical mode decomposition (EEMD) method into a smoothed and a fluctuation component, multivariate adaptive regression splines (MARS) are used to select the most appropriate variable set from a large set of explanatory variables that we collected. The selected variables are then fed into two distinct support vector regression (SVR) models that produce one-period-ahead forecasts for the two components. Neural networks (NN) are also considered as an alternative to SVR. The sum of the two forecast components is the final forecast of the proposed scheme. We show that the above implementation exhibits superior in-sample and out-of-sample forecasting ability when compared to alternative forecasting models. The empirical results provide evidence against the efficient market hypothesis for the selected foreign exchange markets. Copyright © 2015 John Wiley & Sons, Ltd.
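
A minimal sketch of the decompose-then-forecast structure only: a simple moving average stands in for the EEMD step and own lags stand in for the MARS-selected variables, so everything except the overall scheme (forecast each component with an SVR, then sum) is an illustrative assumption.

```python
import numpy as np
from sklearn.svm import SVR

# Split the series into a smooth component and a fluctuation component
# (moving average as a crude stand-in for EEMD).
def decompose(series, window=12):
    s = np.asarray(series, dtype=float)
    smooth = np.convolve(s, np.ones(window) / window, mode="valid")
    fluct = s[window - 1:] - smooth
    return smooth, fluct

# One-step-ahead SVR forecast of a component from its own lags.
def svr_one_step(component, n_lags=3):
    X = np.column_stack([component[i:len(component) - n_lags + i] for i in range(n_lags)])
    y = component[n_lags:]
    model = SVR(kernel="rbf", C=1.0, epsilon=0.01).fit(X, y)
    return model.predict(component[-n_lags:].reshape(1, -1))[0]

def forecast_next(series):
    smooth, fluct = decompose(series)
    return svr_one_step(smooth) + svr_one_step(fluct)   # sum of the two component forecasts
```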

10.
Using quantile regression, this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution, such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out-of-sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecast mean and variance are compared to forecasts based on quantile estimates and, in general, the latter yield the better performance. Copyright © 2015 John Wiley & Sons, Ltd.
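
A minimal sketch of a univariate quantile regression of returns on state variables, estimated by minimizing the pinball (check) loss; the optimizer, state variables and quantile grid are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Pinball (check) loss for the tau-quantile of y given predictors X.
def pinball_loss(beta, X, y, tau):
    u = y - X @ beta
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def fit_quantile(X, y, tau):
    beta0 = np.zeros(X.shape[1])
    res = minimize(pinball_loss, beta0, args=(X, y, tau), method="Nelder-Mead")
    return res.x

# Trace out the conditional return distribution over a grid of quantiles:
# betas = {tau: fit_quantile(X, y, tau) for tau in (0.05, 0.25, 0.5, 0.75, 0.95)}
```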

11.
Volatility models such as GARCH, although misspecified with respect to the data-generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non-linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non-linear transformations of these forecasts will be biased estimators of the same non-linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non-linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non-linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in-sample and out-of-sample forecasts of three non-linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
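
A simplified second-order Taylor illustration of this bias (not the paper's exact analytical approximation): for a transformation g and a forecast F that is unconditionally unbiased for the integrated variance IV, E[g(F)] - E[g(IV)] is approximately 0.5 * g''(E[V]) * (Var(F) - Var(IV)), so the bias depends on the two dispersions and on the concavity of g.

```python
# Second-order Taylor illustration for g(v) = sqrt(v); all numbers are hypothetical.
# g''(v) = -0.25 * v**(-1.5), so a concave g makes the bias negative when the
# forecast is more dispersed than the integrated variance.
def sqrt_transform_bias(mean_var, var_forecast, var_integrated):
    g2 = -0.25 * mean_var ** (-1.5)
    return 0.5 * g2 * (var_forecast - var_integrated)

# Example: daily variance level 1e-4, forecast-variance dispersion 3e-9 versus
# integrated-variance dispersion 1e-9 gives a volatility bias of about -2.5e-4,
# i.e. roughly 2.5% of the average volatility of 1e-2.
bias = sqrt_transform_bias(mean_var=1e-4, var_forecast=3e-9, var_integrated=1e-9)
```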

12.
We present a mixed-frequency model for daily forecasts of euro area inflation. The model combines a monthly index of core inflation with daily data from financial markets; estimates are carried out with the MIDAS regression approach. The forecasting ability of the model in real time is compared with that of standard VARs and of daily quotes of economic derivatives on euro area inflation. We find that the inclusion of daily variables helps to reduce forecast errors with respect to models that consider only monthly variables. The mixed-frequency model also displays superior predictive performance with respect to forecasts solely based on economic derivatives. Copyright © 2012 John Wiley & Sons, Ltd.
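
A minimal sketch of the MIDAS aggregation step: daily financial indicators are collapsed into a single regressor with exponential Almon lag weights before entering the monthly inflation equation; the lag length and weight parameters are illustrative.

```python
import numpy as np

# Exponential Almon lag weights, the standard MIDAS weighting scheme.
def exp_almon_weights(n_lags, theta1, theta2):
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k**2)
    return w / w.sum()

# Aggregate the most recent n_lags daily observations into one monthly regressor.
def midas_regressor(daily_x, n_lags=22, theta1=0.01, theta2=-0.005):
    w = exp_almon_weights(n_lags, theta1, theta2)
    return float(np.dot(w, np.asarray(daily_x, dtype=float)[-n_lags:]))
```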

13.
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate, and we introduce a more suitable procedure based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression-based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non-parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.

14.
Value-at-risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram-Charlier expansion (GCE), which augments the standard normal distribution with the first four moments; these moments are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms other estimates that are generated by both constant and time-varying higher-moment models. Copyright © 2009 John Wiley & Sons, Ltd.
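
A minimal sketch of a Gram-Charlier VaR with static skewness and kurtosis (the paper lets the moments vary over time): the standard normal distribution function is augmented with Hermite-polynomial correction terms and the alpha-quantile is found numerically. All parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Gram-Charlier distribution function with skewness s and kurtosis k.
def gce_cdf(z, s, k):
    he2 = z**2 - 1
    he3 = z**3 - 3 * z
    return norm.cdf(z) - norm.pdf(z) * (s / 6 * he2 + (k - 3) / 24 * he3)

# VaR at level alpha: invert the GCE distribution, then scale by mean and volatility.
def gce_var(mu, sigma, s, k, alpha=0.01):
    z_alpha = brentq(lambda z: gce_cdf(z, s, k) - alpha, -10.0, 0.0)
    return -(mu + sigma * z_alpha)      # loss quantile reported as a positive VaR number

# gce_var(mu=0.0003, sigma=0.012, s=-0.4, k=5.0, alpha=0.01)
```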

15.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high-dimensional market data. This ability allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
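
A minimal sketch using standard (non-functional) partial least squares as a stand-in for the FPLS step: low-dimensional traces are extracted from the returns and auxiliary predictors and then passed to the downstream forecasting models; the use of sklearn's PLSRegression and the data shapes are assumptions for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Extract low-dimensional "traces" from a high-dimensional predictor block.
def pls_traces(X, y, n_components=2):
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, y)               # X: (T, p) auxiliary variables; y: (T,) crude oil returns
    return pls.transform(X)     # (T, n_components) trace series

# traces = pls_traces(X, y)
# The traces then replace the raw return series as inputs to the exponential
# smoothing / stochastic volatility / GARCH-type models compared in the paper.
```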

16.
This paper constructs a forecast method that obtains long-horizon forecasts with improved performance through modification of the direct forecast approach. Direct forecasts are more robust to model misspecification compared to iterated forecasts, which makes them preferable in long horizons. However, direct forecast estimates tend to have jagged shapes across horizons. Our forecast method aims to "smooth out" erratic estimates across horizons while maintaining the robust aspect of direct forecasts through ridge regression, which is a restricted regression on the first differences of regression coefficients. The forecasts are compared to the conventional iterated and direct forecasts in two empirical applications: real oil prices and US macroeconomic series. In both applications, our method shows improvement over direct forecasts.
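
A minimal sketch of the smoothing idea, assuming a single predictor block shared across horizons: the horizon-specific direct-forecast coefficients are estimated jointly with a ridge penalty on their first differences across horizons, which pulls the jagged coefficient path toward a smooth one. The penalty value and data layout are illustrative.

```python
import numpy as np

# min over {b_h}:  sum_h ||y_{t+h} - x_t' b_h||^2 + lam * sum_h ||b_{h+1} - b_h||^2
def smoothed_direct_coeffs(X, Y, lam=10.0):
    """X: (T, k) predictors; Y: (T, H) targets y_{t+h} for h = 1..H."""
    T, k = X.shape
    H = Y.shape[1]
    D = np.diff(np.eye(H), axis=0)                       # (H-1, H) first-difference matrix
    A = np.kron(np.eye(H), X.T @ X) + lam * np.kron(D.T @ D, np.eye(k))
    rhs = (X.T @ Y).T.reshape(-1)                        # stacked [X'y_1; ...; X'y_H]
    b = np.linalg.solve(A, rhs)
    return b.reshape(H, k)                               # row h-1 holds the smoothed b_h
```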

17.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non-linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non-linearity in the unemployment series. Only recently have there been some developments in applying non-linear models to estimate and forecast unemployment rates. A major concern of non-linear modelling is the model specification problem; it is very hard to test all possible non-linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non-linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out-of-sample forecast results obtained by the ANN models with those obtained by several linear and non-linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
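
A minimal sketch of an autoregressive neural-network forecast of the unemployment rate from its own lags, using sklearn's MLPRegressor as a generic back-propagation network; the lag order, network size and training settings are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# One-step-ahead ANN forecast of the unemployment rate from its last n_lags values.
def ann_unemployment_forecast(u, n_lags=4):
    u = np.asarray(u, dtype=float)
    X = np.column_stack([u[i:len(u) - n_lags + i] for i in range(n_lags)])  # lag matrix
    y = u[n_lags:]
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)
    x_next = u[-n_lags:].reshape(1, -1)        # most recent lags
    return model.predict(x_next)[0]            # one-step-ahead forecast
```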

18.
We investigate the realized volatility forecast of stock indices under the structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high-frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in-sample tests and out-of-sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX-FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in-sample tests show that there are multiple breaks in all realized volatility series. The results of the out-of-sample point forecasts indicate that the combination forecasts with time-varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non-parametric kernel regression and linear combination forecasts with the weights chosen based on the non-negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non-parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
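
A minimal sketch of one of the linear combination schemes mentioned above: combination weights for the individual realized-volatility forecasts chosen by non-negative restricted least squares over a training window. Normalizing the weights to sum to one and the data layout are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# F_train: (T, M) individual model forecasts over the window; rv_train: (T,) realized volatility.
def nnls_combination_weights(F_train, rv_train):
    w, _ = nnls(F_train, rv_train)                 # non-negative least squares weights
    if w.sum() == 0:
        return np.full(F_train.shape[1], 1.0 / F_train.shape[1])   # fall back to equal weights
    return w / w.sum()

# combined_next = nnls_combination_weights(F_train, rv_train) @ f_next
```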

19.
We propose an economically motivated forecast combination strategy in which model weights are related to portfolio returns obtained by a given forecast model. An empirical application based on an optimal mean-variance bond portfolio problem is used to highlight the advantages of the proposed approach with respect to combination methods based on statistical measures of forecast accuracy. We compute average net excess returns, standard deviation, and the Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return-based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return-based approaches that dynamically select only the model with highest weight each period and discard all other models delivered even better results, evidencing not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect best-performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.
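
A minimal sketch of the return-based weighting idea: each model's weight is tied to the portfolio return it produced over a past evaluation window, and the trimmed variant selects only the best-performing model each period. The specific weighting function below is an illustrative assumption, not the paper's formula.

```python
import numpy as np

# past_model_returns: portfolio return generated by each candidate model over the window.
def return_based_weights(past_model_returns, trim_to_best=False):
    r = np.asarray(past_model_returns, dtype=float)
    if trim_to_best:
        w = np.zeros_like(r)
        w[np.argmax(r)] = 1.0                    # keep only the best-performing model
        return w
    shifted = r - r.min()                        # shift so weights are non-negative
    if shifted.sum() == 0:
        return np.full(len(r), 1.0 / len(r))     # all models performed identically
    return shifted / shifted.sum()
```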

20.
As a consequence of recent technological advances and the proliferation of algorithmic and high-frequency trading, the cost of trading in financial markets has irrevocably changed. One important change, known as price impact, relates to how trading affects prices. Price impact represents the largest cost associated with trading. Forecasting price impact is very important as it can provide estimates of trading profits after costs and also suggest optimal execution strategies. Although several models have recently been developed which may forecast the immediate price impact of individual trades, limited work has been done to compare their relative performance. We provide a comprehensive performance evaluation of these models and test for statistically significant outperformance amongst candidate models using out-of-sample forecasts. We find that normalizing price impact by its average value significantly enhances the performance of traditional non-normalized models as the normalization factor captures some of the dynamics of price impact. Copyright © 2016 John Wiley & Sons, Ltd.
