Similar literature
1.
Volatility forecasting remains an active area of research with no current consensus as to the model that provides the most accurate forecasts, though Hansen and Lunde (2005) have argued that in the context of daily exchange rate returns nothing can beat a GARCH(1,1) model. This paper extends that line of research by utilizing intra‐day data and obtaining daily volatility forecasts from a range of models based upon the higher‐frequency data. The volatility forecasts are appraised using four different measures of ‘true’ volatility and further evaluated using regression tests of predictive power, forecast encompassing and forecast combination. Our results show that the daily GARCH(1,1) model is largely inferior to all other models, whereas the intra‐day unadjusted‐data GARCH(1,1) model generally provides superior forecasts compared to all other models. Hence, while it appears that a daily GARCH(1,1) model can be beaten in obtaining accurate daily volatility forecasts, an intra‐day GARCH(1,1) model cannot be. Copyright © 2011 John Wiley & Sons, Ltd.
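The two ingredients above, a GARCH(1,1) recursion on returns and intra‐day data used to measure ‘true’ volatility, can be sketched as follows. This is a minimal illustration rather than the paper's estimation procedure: the parameter values omega, alpha and beta are assumed for the example, not estimated by maximum likelihood.

```python
def realized_variance(intraday_returns):
    """Sum of squared intra-day returns: a common proxy for 'true' daily variance."""
    return sum(r * r for r in intraday_returns)

def garch11_forecast(returns, omega=0.00001, alpha=0.05, beta=0.90):
    """One-step-ahead conditional variance from the GARCH(1,1) recursion
    sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t.
    Parameters here are illustrative, not estimated."""
    mean = sum(returns) / len(returns)
    # Initialize the recursion at the sample variance of the returns.
    sigma2 = sum((r - mean) ** 2 for r in returns) / len(returns)
    for r in returns:
        sigma2 = omega + alpha * r * r + beta * sigma2
    return sigma2
```

The forecast from `garch11_forecast` can then be compared against `realized_variance` computed from the higher‐frequency returns of the same day.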

2.
This paper investigates the forecasting ability of four different GARCH models and the Kalman filter method. The four GARCH models applied are the bivariate GARCH, BEKK GARCH, GARCH-GJR and the GARCH-X model. The paper also compares the forecasting ability of the non-GARCH alternative, the Kalman filter method. Out-of-sample forecasting ability is evaluated using forecast errors from daily time-varying beta forecasts for 20 UK company stock returns. Measures of forecast error overwhelmingly support the Kalman filter approach. Among the GARCH models, the GJR model appears to provide somewhat more accurate forecasts than the other bivariate GARCH models. Copyright © 2008 John Wiley & Sons, Ltd.
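As a rough sketch of the non-GARCH competitor, a Kalman filter for a time-varying beta can be written with a random-walk state equation. The noise variances q and r and the initial values are illustrative assumptions; the paper's exact specification may differ.

```python
def kalman_beta(y, x, q=0.01, r=0.1, beta0=1.0, p0=1.0):
    """Kalman filter for a random-walk time-varying beta:
       state: beta_t = beta_{t-1} + w_t,   w_t ~ N(0, q)
       obs:   y_t    = beta_t * x_t + v_t, v_t ~ N(0, r)
    Returns the filtered beta path."""
    beta, p = beta0, p0
    path = []
    for yt, xt in zip(y, x):
        p_pred = p + q                 # predict: state variance grows by q
        s = xt * p_pred * xt + r       # innovation variance
        k = p_pred * xt / s            # Kalman gain
        beta = beta + k * (yt - beta * xt)
        p = (1.0 - k * xt) * p_pred
        path.append(beta)
    return path
```

With a constant regressor and constant observations the filtered beta converges to the implied coefficient, which makes the recursion easy to sanity-check.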

3.
This paper uses forecast combination methods to forecast output growth in a seven‐country quarterly economic data set covering 1959–1999, with up to 73 predictors per country. Although the forecasts based on individual predictors are unstable over time and across countries, and on average perform worse than an autoregressive benchmark, the combination forecasts often improve upon autoregressive forecasts. Despite the unstable performance of the constituent forecasts, the most successful combination forecasts, like the mean, are the least sensitive to the recent performance of the individual forecasts. While consistent with other evidence on the success of simple combination forecasts, this finding is difficult to explain using the theory of combination forecasting in a stationary environment. Copyright © 2004 John Wiley & Sons, Ltd.
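The simple equal-weight (mean) combination that performs well above takes only a few lines; the toy numbers in the test are invented for illustration, showing how offsetting biases in individual forecasts can cancel in the combination.

```python
def mean_combination(forecast_lists):
    """Equal-weight combination: at each date, average the individual forecasts."""
    return [sum(fs) / len(fs) for fs in zip(*forecast_lists)]

def mse(forecasts, actuals):
    """Mean squared error of a forecast sequence."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)
```

Because the mean ignores recent relative performance entirely, it is exactly the kind of combination the abstract describes as least sensitive to instability in the constituent forecasts.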

4.
This paper proposes an algorithm that uses forecast encompassing tests for combining forecasts when there are a large number of forecasts that might enter the combination. The algorithm excludes a forecast from the combination if it is encompassed by another forecast. To assess the usefulness of this approach, an extensive empirical analysis is undertaken using a US macroeconomic dataset. The results are encouraging; the algorithm forecasts outperform benchmark model forecasts, in a mean square error (MSE) sense, in a majority of cases. The paper also compares the empirical performance of different approaches to forecast combination, and provides a rule‐of‐thumb cut‐off point for the thick‐modeling approach. Copyright © 2009 John Wiley & Sons, Ltd.
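A forecast encompassing test of the kind such an algorithm relies on can be illustrated by the OLS slope in a regression of forecast 1's errors on the difference between the two forecasts; this is the textbook version, not necessarily the exact test statistic used in the paper.

```python
def encompassing_lambda(actual, f1, f2):
    """OLS slope lambda in  (y - f1) = lambda * (f2 - f1) + error.
    lambda near 0 suggests f1 encompasses f2 (f2 adds no information);
    lambda near 1 suggests the reverse."""
    e1 = [y - a for y, a in zip(actual, f1)]   # errors of forecast 1
    d = [b - a for a, b in zip(f1, f2)]        # forecast difference
    num = sum(di * ei for di, ei in zip(d, e1))
    den = sum(di * di for di in d)
    return num / den
```

In practice the slope estimate would be paired with a standard error to decide whether it differs significantly from zero before dropping a forecast from the combination.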

5.
In this paper we derive a test of predictability by exploring the possibility that forecasts from a given model, adjusted by a shrinkage factor, will display lower mean squared prediction errors than forecasts from a simple random walk. This generalizes most previous tests which compare forecast errors of a benchmark model with errors of a proposed alternative model, not allowing for shrinkage. We show that our test is a particular extension of a recently developed test of the martingale difference hypothesis. Using simulations we explore the behavior of our test in small and moderate samples. Numerical results indicate that the test has good size and power properties. Finally, we illustrate the use of our test in an empirical application within the exchange rate literature. Copyright © 2012 John Wiley & Sons, Ltd.
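The shrinkage idea, scaling a model's forecasts toward the random-walk (no-change) benchmark before comparing mean squared prediction errors, can be sketched as follows; the data and the shrinkage factor are illustrative, and the formal test statistic from the paper is not reproduced here.

```python
def mspe(forecasts, actuals):
    """Mean squared prediction error."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def shrunk(forecasts, k):
    """Shrink model forecasts toward the zero (random-walk) forecast of returns.
    k = 1 recovers the raw forecast, k = 0 the random walk."""
    return [k * f for f in forecasts]
```

The example in the test shows the motivating case: an overconfident model whose raw forecasts tie with the random walk can still beat it once shrunk.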

6.
We examine different approaches to forecasting monthly US employment growth in the presence of many potentially relevant predictors. We first generate simulated out‐of‐sample forecasts of US employment growth at multiple horizons using individual autoregressive distributed lag (ARDL) models based on 30 potential predictors. We then consider different methods from the extant literature for combining the forecasts generated by the individual ARDL models. Using the mean square forecast error (MSFE) metric, we investigate the performance of the forecast combining methods over the last decade, as well as five periods centered on the last five US recessions. Overall, our results show that a number of combining methods outperform a benchmark autoregressive model. Combining methods based on principal components exhibit the best overall performance, while methods based on simple averaging, clusters, and discount MSFE also perform well. On a cautionary note, some combining methods, such as those based on ordinary least squares, often perform quite poorly. Copyright © 2008 John Wiley & Sons, Ltd.
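One of the well-performing schemes, discount-MSFE weighting, can be sketched as weights inversely proportional to discounted sums of squared past forecast errors; the discount factor delta used here is an assumed value, not one calibrated in the study.

```python
def discount_msfe_weights(errors_by_model, delta=0.9):
    """Combining weights inversely proportional to each model's discounted
    sum of squared forecast errors (more recent errors weigh more)."""
    scores = []
    for errs in errors_by_model:
        T = len(errs)
        # delta**0 on the most recent error, delta**(T-1) on the oldest.
        scores.append(sum((delta ** (T - 1 - t)) * errs[t] ** 2 for t in range(T)))
    inv = [1.0 / s for s in scores]
    total = sum(inv)
    return [w / total for w in inv]
```

Setting delta = 1 recovers plain inverse-MSFE weighting, while smaller delta tilts the combination toward recently accurate models.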

7.
Measurement errors can have a dramatic impact on the outcome of empirical analysis. In this article we quantify the effects that they can have on predictions generated from ARMA processes. Lower and upper bounds are derived for differences in minimum mean squared prediction errors (MMSE) for forecasts generated from data with and without errors. The impact that measurement errors have on MMSE and other relative measures of forecast accuracy is presented for a variety of model structures and parameterizations. Based on these results, the need to set up the models in state space form to extract the signal component appears to depend upon whether the processes are nearly non‐invertible or non‐stationary, or whether the noise‐to‐signal ratio is very high. Copyright © 1999 John Wiley & Sons, Ltd.

8.
This paper discusses the use of preliminary data in econometric forecasting. The standard practice is to ignore the distinction between preliminary and final data; forecasts that do so are here termed naïve forecasts. It is shown that in dynamic models a multistep‐ahead naïve forecast can achieve a lower mean square error than a single‐step‐ahead one, as it is less affected by the measurement noise embedded in the preliminary observations. The minimum mean square error forecasts are obtained by optimally combining the information provided by the model and the new information contained in the preliminary data, which can be done within the state space framework as suggested in numerous papers. Here two simple, in general suboptimal, methods of combining the two sources of information are considered: modifying the forecast initial conditions by means of standard regressions, and using intercept corrections. The issues are explored using Italian national accounts data and the Bank of Italy Quarterly Econometric Model. Copyright © 2006 John Wiley & Sons, Ltd.

9.
This paper evaluates the performance of conditional variance models using high‐frequency data on the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as the benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH(1,1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast of daily volatility. The forecasting ability of these models is, however, degraded by the non‐normality of mean‐adjusted returns, which violates an assumption of the conditional variance models. Nevertheless, the optimum frequency remained the same even for different models (EGARCH and PARCH) and a different error distribution (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations, or EGARCH and PARCH models, would give better estimates of volatility with lower forecast errors. Copyright © 2008 John Wiley & Sons, Ltd.

10.
In this study we evaluate the forecast performance of model‐averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed ‘marginal’ likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized by the median probability model. Third, model averaging excels in predicting the direction of change. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold‐out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction‐of‐change forecasts (model averaging). We finally apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.

11.
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility and, undoubtedly, the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for doing so. It appears, however, from some empirical studies that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this forecasting limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures. As the magnitude of the outliers increases, the one‐step‐ahead forecasting performance of the RGARCH model shows increasingly significant improvement, on two forecast evaluation criteria, over both the standard GARCH and random walk models. An empirical application provides strong evidence in favour of the RGARCH model over competing models: using a sample of two daily exchange rate series, we find that its out‐of‐sample volatility forecasts are clearly superior to those of the alternatives. Copyright © 2002 John Wiley & Sons, Ltd.

12.
This study establishes a benchmark for short‐term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by k‐nearest neighbors method for 1 week ahead; vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy for timing the sales based on price forecasts could increase the net profit of a salmon farmer by around 7%.
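A minimal version of the k-nearest-neighbors forecaster, matching the most recent price pattern against history and averaging what followed the closest matches, might look like this; the window length, k, and squared-distance metric are illustrative choices, not those calibrated in the study.

```python
def knn_forecast(history, k=3, window=2):
    """Forecast the next value: find the k historical length-`window` patterns
    closest to the most recent one and average the values that followed them."""
    latest = history[-window:]
    candidates = []
    # Each candidate pattern ends just before index i; history[i] is its successor.
    for i in range(window, len(history)):
        pattern = history[i - window:i]
        dist = sum((a - b) ** 2 for a, b in zip(pattern, latest))
        candidates.append((dist, history[i]))
    candidates.sort(key=lambda t: t[0])
    nearest = candidates[:k]
    return sum(succ for _, succ in nearest) / len(nearest)
```

On a perfectly periodic series the nearest patterns match exactly, so the forecast reproduces the next point in the cycle, which makes the logic easy to verify.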

13.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.

14.
Using the 'standard' approach to forecasting in the vector autoregressive moving average model, we establish basic general results on exact finite sample forecasts and their mean squared error matrices. Comparison between the exact and conditional methods of initiating the finite sample forecast calculations is presented, and a few illustrative cases are given.

15.
Consider a time series transformed by an instantaneous power function of the Box-Cox type. For a wide range of fractional powers, this paper gives the relative bias in original metric forecasts due to use of the simple inverse retransformation when minimum mean squared error (conditional mean) forecasts are optimal. This bias varies widely according to the characteristics of the data. A fast algorithm is given to find this bias, or to find minimum mean squared error forecasts in the original metric. The results depend on the assumption that the forecast errors in the transformed metric are Gaussian. An example using real data is given.
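For the special case of the log transform (Box-Cox power zero) with Gaussian forecast errors, the retransformation bias has a closed form, which illustrates the general point: the naive inverse exp(z) understates the conditional mean in the original metric by the factor exp(-sigma^2/2). This sketch covers only that case, not the fractional powers treated in the paper.

```python
import math

def naive_retransform(z_hat):
    """Simple inverse of the log transform (biased low for the conditional mean)."""
    return math.exp(z_hat)

def mmse_retransform(z_hat, sigma2):
    """Minimum-MSE (conditional mean) forecast in the original metric,
    assuming Gaussian forecast errors with variance sigma2 in the log metric."""
    return math.exp(z_hat + sigma2 / 2.0)

def relative_bias(sigma2):
    """naive / optimal - 1: the relative shortfall of the naive retransformation."""
    return math.exp(-sigma2 / 2.0) - 1.0
```

The bias grows with the forecast-error variance in the transformed metric, matching the abstract's observation that it varies widely with the characteristics of the data.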

16.
Volatility models such as GARCH, although misspecified with respect to the data‐generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non‐linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non‐linear transformations of these forecasts will be biased estimators of the same non‐linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non‐linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non‐linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in‐sample and out‐of‐sample forecasts of three non‐linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
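The direction of this bias, and a second-order approximation to it, can be illustrated for the square-root transform: since sqrt is concave, Jensen's inequality gives E[sqrt(V)] <= sqrt(E[V]). The delta-method term below is a standard Taylor approximation in the spirit of the paper's analytical result, not its exact expression.

```python
import math

def jensen_gap(variance_forecasts):
    """mean(sqrt(V)) - sqrt(mean(V)): non-positive by Jensen's inequality,
    because the square root is concave."""
    mean_v = sum(variance_forecasts) / len(variance_forecasts)
    mean_sqrt = sum(math.sqrt(v) for v in variance_forecasts) / len(variance_forecasts)
    return mean_sqrt - math.sqrt(mean_v)

def delta_method_bias(variance_forecasts):
    """Second-order (delta-method) approximation to the same gap:
    E[sqrt(V)] - sqrt(E[V]) ~= -Var(V) / (8 * E[V]**1.5)."""
    mean_v = sum(variance_forecasts) / len(variance_forecasts)
    var_v = sum((v - mean_v) ** 2 for v in variance_forecasts) / len(variance_forecasts)
    return -var_v / (8.0 * mean_v ** 1.5)
```

As the abstract notes, the gap depends on the volatility of the variance itself: a constant variance series has zero bias, and the bias grows as the variance forecasts become more dispersed.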

17.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short‐term forecasting models. Our analysis shows that there is considerable variation in the relative performance of the different models over time. To take that into account we suggest employing performance‐based forecast combination methods, in particular one that puts more weight on recent forecast performance. We compare such an approach with equal‐weight forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time‐varying weights also indicate how much each model, and the economic interpretation behind its forecast, contributes to the combination at any point in time. We also include a number of benchmark models in our analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre‐crisis times, the period after the global financial crisis and the full evaluation period, including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that forecast combination helps hedge against bad forecast performance and that performance‐based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.

18.
This paper applies a triple‐choice ordered probit model, corrected for nonstationarity, to forecast monetary policy decisions of the Reserve Bank of Australia. The forecast models incorporate a mix of monthly and quarterly macroeconomic time series. Forecast combination is used as an alternative to a single multivariate model to improve the accuracy of out‐of‐sample forecasts. This accuracy is evaluated with scoring functions, which are also used to construct adaptive weights for combining probability forecasts. The paper finds that the combined forecasts outperform the multivariate models. These results are robust to different sample sizes and estimation windows. Copyright © 2011 John Wiley & Sons, Ltd.

19.
Forecasters commonly predict real gross domestic product growth from monthly indicators such as industrial production, retail sales and surveys, and therefore require an assessment of the reliability of such tools. While forecast errors related to model specification and unavailability of data in real time have been assessed, the impact of data revisions on forecast accuracy has seldom been evaluated, especially for the euro area. This paper proposes to evaluate the contributions of these three sources of forecast error using a set of data vintages for the euro area. The results show that gains in accuracy of forecasts achieved by using monthly data on actual activity rather than surveys or financial indicators are offset by the fact that the former set of monthly data is harder to forecast and less timely than the latter set. These results provide a benchmark which future research may build on as more vintage datasets become available. Copyright © 2008 John Wiley & Sons, Ltd.

20.
Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider 1‐ to 10‐day‐ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate the strong potential for the use of ensemble prediction in temperature density forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
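Extracting quantile forecasts from an ensemble and scoring them can be sketched with the empirical quantile and the pinball (quantile) loss; the ensemble members in the test are invented numbers, and the paper's quantile-regression debiasing step is omitted from this sketch.

```python
def ensemble_quantile(scenarios, tau):
    """Empirical tau-quantile of the ensemble members, with linear interpolation
    between order statistics."""
    s = sorted(scenarios)
    pos = tau * (len(s) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (pos - lo) * (s[hi] - s[lo])

def pinball_loss(q, y, tau):
    """Quantile ('pinball') loss for a tau-quantile forecast q against
    the realized outcome y; lower is better."""
    return tau * (y - q) if y >= q else (1.0 - tau) * (q - y)
```

Averaging the pinball loss over many forecast dates gives the evaluation criterion against which debiased ensemble quantiles and GARCH-based quantiles can be compared.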


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)