Similar documents
Found 20 similar documents (search time: 31 ms)
1.
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate, and introduce a more suitable procedure based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression‐based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non‐parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
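A minimal sketch of the kind of regression-based check described above, under assumed simulated data: quantile-regress realized returns on the quantile forecasts at the same probability level and compare the fitted (intercept, slope) with (0, 1). The variable names and the Gaussian data-generating process are illustrative assumptions, not the authors' exchange rate application.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
theta, n = 0.05, 1000
sigma = np.sqrt(0.5 + 0.9 * rng.uniform(size=n))  # stand-in volatility path
y = sigma * rng.standard_normal(n)                # "realized" returns
q_forecast = -1.645 * sigma                       # 5% quantile forecasts

fit = sm.QuantReg(y, sm.add_constant(q_forecast)).fit(q=theta)
print(fit.params)      # an unbiased estimator gives intercept ~ 0, slope ~ 1
print(fit.conf_int())  # rough check whether (0, 1) lies inside the intervals
```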

2.
The vector multiplicative error model (vector MEM) is capable of analyzing and forecasting multidimensional non‐negative valued processes. Its parameters are usually estimated by generalized method of moments (GMM) or maximum likelihood (ML) methods. However, these estimates can be heavily affected by outliers. To overcome this problem, this paper proposes an alternative approach, the weighted empirical likelihood (WEL) method. This method uses moment conditions as constraints, and outliers are detected automatically by performing k‐means clustering on Oja depth values of the innovations. The performance of WEL is evaluated against those of the GMM and ML methods through extensive simulations, in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10‐minute returns of the S&P 500 index. The results from both the simulations and the S&P 500 volatility forecasts favor the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.
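The outlier-detection step lends itself to a short sketch. Oja depth has no standard library implementation, so the snippet below substitutes a Mahalanobis-based depth as an explicitly labeled stand-in, then runs k-means with two clusters on the depth values and flags the low-depth cluster; all data and names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
innov = rng.standard_normal((500, 2))
innov[::50] += 8.0                       # planted additive outliers

center = innov.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(innov, rowvar=False))
d2 = np.einsum('ij,jk,ik->i', innov - center, cov_inv, innov - center)
depth = 1.0 / (1.0 + d2)                 # Mahalanobis depth proxy, not Oja depth

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    depth.reshape(-1, 1))
# the cluster with the smaller mean depth is declared outlying
out_cluster = int(np.argmin([depth[labels == c].mean() for c in (0, 1)]))
print(np.flatnonzero(labels == out_cluster))
```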

3.
This paper examines small sample properties of alternative bias‐corrected bootstrap prediction regions for the vector autoregressive (VAR) model. Bias‐corrected bootstrap prediction regions are constructed by combining bias‐correction of VAR parameter estimators with the bootstrap procedure. The backward VAR model is used to bootstrap VAR forecasts conditionally on past observations. Bootstrap prediction regions based on asymptotic bias‐correction are compared with those based on bootstrap bias‐correction. Monte Carlo simulation results indicate that bootstrap prediction regions based on asymptotic bias‐correction show better small sample properties than those based on bootstrap bias‐correction for nearly all cases considered. The former provide accurate coverage properties in most cases, while the latter over‐estimate the future uncertainty. Overall, the percentile‐t bootstrap prediction region based on asymptotic bias‐correction is found to provide highly desirable small sample properties, outperforming its alternatives in nearly all cases. Copyright © 2004 John Wiley & Sons, Ltd.
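As a rough illustration of combining analytic bias correction with the bootstrap, the sketch below builds a residual-bootstrap prediction interval for a univariate AR(1) with a first-order bias adjustment; the paper's actual setting is a VAR bootstrapped through its backward representation, so this is only a simplified analogue with assumed parameter values.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi_true = 200, 0.8
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])  # OLS AR(1), no intercept
phi_bc = phi_hat * (1 + 2.0 / n)                # first-order analytic bias
                                                # adjustment (approx. -2*phi/n)
resid = y[1:] - phi_bc * y[:-1]
resid -= resid.mean()                           # centre bootstrap residuals

B, h = 2000, 4
boot = np.empty(B)
for b in range(B):
    f = y[-1]
    for e in rng.choice(resid, size=h, replace=True):
        f = phi_bc * f + e                      # simulate h steps ahead
    boot[b] = f
print(np.percentile(boot, [2.5, 97.5]))         # 95% h-step prediction interval
```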

4.
Testing the validity of value‐at‐risk (VaR) forecasts, or backtesting, is an integral part of modern market risk management and regulation. This is often done by applying the independence and coverage tests developed by Christoffersen (International Economic Review, 1998; 39(4), 841–862) to so‐called hit‐sequences derived from VaR forecasts and realized losses. However, as pointed out in the literature, these tests suffer from low rejection frequencies, or (empirical) power, when applied to hit‐sequences derived from simulations matching empirical stylized characteristics of return data. One key observation of those studies is that higher‐order dependence in the hit‐sequences may explain the observed low power. We propose to generalize the backtest framework for VaR forecasts by extending Christoffersen's original first‐order dependence to allow for higher‐ or kth‐order dependence. We provide closed‐form expressions for the tests as well as asymptotic theory. The generalized tests have power against kth‐order dependence by construction, and our simulations indicate improved power when replicating the aforementioned studies, as well as much improved size properties for one of the suggested tests. Copyright © 2017 John Wiley & Sons, Ltd.
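For reference, the first-order Christoffersen (1998) tests that the paper generalizes can be written compactly. The sketch below implements the unconditional coverage and first-order independence likelihood-ratio tests on a simulated hit sequence; zero transition counts would need special handling in practice, and the data are toy assumptions.

```python
import numpy as np
from scipy.stats import chi2

def lr_coverage(hits, p):
    """Unconditional coverage LR test (Kupiec/Christoffersen form)."""
    n1 = hits.sum(); n0 = hits.size - n1
    pi = n1 / hits.size
    ll0 = n0 * np.log(1 - p) + n1 * np.log(p)
    ll1 = n0 * np.log(1 - pi) + n1 * np.log(pi)
    stat = -2 * (ll0 - ll1)
    return stat, chi2.sf(stat, df=1)

def lr_independence(hits):
    """First-order independence LR test from Markov transition counts."""
    h0, h1 = hits[:-1], hits[1:]
    n00 = np.sum((h0 == 0) & (h1 == 0)); n01 = np.sum((h0 == 0) & (h1 == 1))
    n10 = np.sum((h0 == 1) & (h1 == 0)); n11 = np.sum((h0 == 1) & (h1 == 1))
    p01, p11 = n01 / (n00 + n01), n11 / (n10 + n11)
    p1 = (n01 + n11) / (n00 + n01 + n10 + n11)
    ll0 = (n00 + n10) * np.log(1 - p1) + (n01 + n11) * np.log(p1)
    ll1 = (n00 * np.log(1 - p01) + n01 * np.log(p01)
           + n10 * np.log(1 - p11) + n11 * np.log(p11))
    stat = -2 * (ll0 - ll1)
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(3)
hits = (rng.uniform(size=2000) < 0.05).astype(int)  # toy i.i.d. hit sequence
print(lr_coverage(hits, 0.05))
print(lr_independence(hits))
```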

5.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
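The two 'true volatility' proxies contrasted above amount to a few lines of arithmetic: realized variance cumulates intraday squared returns, whereas the ex post daily squared return rests on a single noisy observation. The simulated five-minute returns below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 288                                 # e.g. 5-minute returns over a 24h FX day
sigma_day = 0.01                        # true daily volatility (assumed)
r_intraday = rng.standard_normal(m) * sigma_day / np.sqrt(m)

rv = np.sum(r_intraday ** 2)            # realized variance: low-noise proxy
sq = r_intraday.sum() ** 2              # daily squared return: noisy proxy
print(rv, sq, sigma_day ** 2)           # both estimate sigma_day**2
```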

6.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

7.
We propose a new approach for estimating value‐at‐risk. We use six international stock price indices and three hypothetical portfolios formed from these indices, observed daily from 1 January 1996 to 31 December 2006. As confirmed by failure rates and the backtests developed by Kupiec (Technique for verifying the accuracy of risk measurement models. Journal of Derivatives 1995; 3: 73–84) and Christoffersen (Evaluating interval forecasts. International Economic Review 1998; 39: 841–862), the empirical results show that our method can considerably improve the estimation accuracy of value‐at‐risk. The study thus establishes an effective alternative model for risk prediction and provides a reliable tool for portfolio management. Copyright © 2011 John Wiley & Sons, Ltd.
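The evaluation step is easy to sketch: convert VaR forecasts and realized returns into a hit sequence and compare the empirical failure rate with the nominal level, the ingredient behind Kupiec's test (a likelihood-ratio version appears in the sketch under item 4). The data below are simulated placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
returns = rng.standard_normal(1250) * 0.012   # ~5 years of toy daily returns
var_5pct = np.full(1250, -1.645 * 0.012)      # a constant 5% VaR forecast

hits = returns < var_5pct                     # True when the loss exceeds VaR
print(hits.mean())                            # failure rate vs nominal 0.05
```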

8.
The track record of a 20‐year history of density forecasts of state tax revenue in Iowa is studied, and potential improvements sought through a search for better‐performing ‘priors’ similar to that conducted three decades ago for point forecasts by Doan, Litterman and Sims (Econometric Reviews, 1984). Comparisons of the point and density forecasts produced under the flat prior are made to those produced by the traditional (mixed estimation) ‘Bayesian VAR’ methods of Doan, Litterman and Sims, as well as to fully Bayesian ‘Minnesota Prior’ forecasts. The actual record and, to a somewhat lesser extent, the record of the alternative procedures studied in pseudo‐real‐time forecasting experiments, share a characteristic: subsequently realized revenues are in the lower tails of the predicted distributions ‘too often’. An alternative empirically based prior is found by working directly on the probability distribution for the vector autoregression parameters—the goal being to discover a better‐performing entropically tilted prior that minimizes out‐of‐sample mean squared error subject to a Kullback–Leibler divergence constraint that the new prior not differ ‘too much’ from the original. We also study the closely related topic of robust prediction appropriate for situations of ambiguity. Robust ‘priors’ are competitive in out‐of‐sample forecasting; despite the freedom afforded the entropically tilted prior, it does not perform better than the simple alternatives. Copyright © 2014 John Wiley & Sons, Ltd.
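Entropic tilting itself can be illustrated independently of the paper's application: reweight equally weighted draws by exp(λ·g(x)) so that a chosen moment hits a target, which is the minimum-Kullback-Leibler modification of the original distribution subject to that moment constraint. The draws and target below are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(6)
draws = rng.standard_normal(5000)       # draws from a baseline predictive
target_mean = 0.3                       # desired tilted mean (assumed)

def tilted_mean(lam):
    w = np.exp(lam * draws)
    w /= w.sum()
    return w @ draws

# solve for the tilt parameter that moves the mean to the target
lam = brentq(lambda l: tilted_mean(l) - target_mean, -2.0, 2.0)
w = np.exp(lam * draws); w /= w.sum()
print(w @ draws)                                  # ~0.3
print(np.sum(w * np.log(w * draws.size)))        # KL divergence from uniform
```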

9.
Given the evidence that an infinite‐order vector autoregression is often the more realistic setting for time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h‐step‐ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT, under the assumption that the infinite‐order vector autoregression can, under suitable conditions, be approximated by a sequence of truncated models with nT increasing with sample size. Using finite‐order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small‐sample distribution of the lag‐order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, yielding the lowest h‐step‐ahead forecast mean squared errors for the individual components of the held‐out pseudo‐data to be forecast. Thus estimating the actual autoregressive order and finding the best forecasting model can be achieved with the same selection procedure. These results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy of strongly consistent criteria such as the Schwarz information criterion, as claimed in the literature, is overstated. Our criteria extend those previously available and can be used in a variety of practical situations. Copyright © 2015 John Wiley & Sons, Ltd.
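The selection idea can be caricatured in a few lines: pick the lag order that minimizes an estimate of the h-step-ahead forecast MSE. The sketch below approximates that criterion with pseudo out-of-sample h-step errors rather than the paper's asymptotic expression, using statsmodels' VAR on simulated data; all settings are illustrative.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(13)
n, h = 300, 4
y = np.zeros((n, 2))
for t in range(1, n):                    # true process: bivariate VAR(1)
    y[t] = 0.5 * y[t - 1] + rng.standard_normal(2)

split, scores = 250, {}
for p in range(1, 6):                    # candidate truncation orders
    errs = []
    for s in range(split, n - h + 1):
        fit = VAR(y[:s]).fit(p)
        fcst = fit.forecast(y[s - p:s], steps=h)
        errs.append(np.sum((y[s + h - 1] - fcst[-1]) ** 2))
    scores[p] = float(np.mean(errs))     # pseudo out-of-sample h-step MSE
print(min(scores, key=scores.get), scores)
```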

10.
P. C. B. Phillips (1998) demonstrated that deterministic trends are a valid representation of an otherwise stochastically trending mechanism; he remained skeptical, however, about the predictive power of such representations. In this paper we prove that forecasts built upon a spurious regression may perform (asymptotically) as well as those issued from a correctly specified regression. We derive the order in probability of several in‐sample and out‐of‐sample predictability criteria ( test, root mean square error, Theil's U‐statistic and R²) using forecasts based upon a least squares‐estimated regression between independent variables generated by a variety of empirically relevant data‐generating processes. It is demonstrated that, when the variables are mean stationary or trend stationary, the order in probability of these criteria is the same whether the regression is spurious or not. Simulation experiments confirm our asymptotic results. Copyright © 2011 John Wiley & Sons, Ltd.
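A toy version of the paper's experiments, under assumed parameter values: regress one independent trend-stationary series on another (a spurious regression), forecast out of sample, and compute the RMSE and Theil's U that the asymptotic results concern.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_oos = 400, 100
t = np.arange(n + n_oos)
y = 0.05 * t + rng.standard_normal(n + n_oos)  # independent trend-stationary
x = 0.04 * t + rng.standard_normal(n + n_oos)  # series: the regression is spurious

X = np.column_stack([np.ones(n), x[:n]])
beta = np.linalg.lstsq(X, y[:n], rcond=None)[0]
fcst = beta[0] + beta[1] * x[n:]

err = y[n:] - fcst
rmse = np.sqrt(np.mean(err ** 2))
theil_u = rmse / (np.sqrt(np.mean(y[n:] ** 2)) + np.sqrt(np.mean(fcst ** 2)))
print(rmse, theil_u)
```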

11.
We investigate realized volatility forecasts for stock indices in the presence of structural breaks. We utilize a pure multiple mean break model to identify possible structural breaks in daily realized volatility series, employing intraday high‐frequency data on the Shanghai Stock Exchange Composite Index and five sectoral stock indices in the Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in‐sample tests and out‐of‐sample forecasts to examine the effect of structural breaks on the performance of ARFIMAX‐FIGARCH models for realized volatility forecasting, utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in‐sample tests show that there are multiple breaks in all realized volatility series. The results of the out‐of‐sample point forecasts indicate that combination forecasts with time‐varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with weights chosen by non‐parametric kernel regression, and linear combination forecasts with weights chosen by non‐negative restricted least squares or the Schwarz information criterion, appear to be the most accurate point forecasting methods for realized volatility under structural breaks. We also conduct interval forecasts of realized volatility for the combination approaches, and find that the interval forecast of the nonlinear combination approach with weights chosen by non‐parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
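One of the linear combination schemes mentioned, weights chosen by non-negative restricted least squares, has a direct one-screen analogue via scipy's NNLS solver; the forecast matrix and realized volatility series below are simulated stand-ins.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
T, k = 250, 3
rv = np.abs(rng.standard_normal(T)) * 0.02             # pseudo realized vol
F = rv[:, None] + rng.standard_normal((T, k)) * 0.005  # k individual forecasts

weights, _ = nnls(F, rv)          # non-negative least squares combination weights
combo = F @ weights
print(weights, np.mean((rv - combo) ** 2))
```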

12.
This paper proposes an adjustment of linear autoregressive conditional mean forecasts that exploits the predictive content of uncorrelated model residuals. The adjustment is motivated by non‐Gaussian characteristics of model residuals, and implemented in a semiparametric fashion by means of conditional moments of simulated bivariate distributions. A pseudo ex ante forecasting comparison is conducted for a set of 494 macroeconomic time series recently collected by Dees et al. (Journal of Applied Econometrics 2007; 22: 1–38). In total, 10,374 time series realizations are contrasted against competing short‐, medium‐ and longer‐term purely autoregressive and adjusted predictors. With regard to all forecast horizons, the adjusted predictions consistently outperform conditionally Gaussian forecasts according to cross‐sectional mean group evaluation of absolute forecast errors and directional accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

13.
This paper analyses the size and nature of the errors in GDP forecasts in the G7 countries from 1971 to 1995. These GDP short‐term forecasts are produced by the Organization for Economic Cooperation and Development and by the International Monetary Fund, and published twice a year in the Economic Outlook and in the World Economic Outlook, respectively. The evaluation of the accuracy of the forecasts is based on the properties of the difference between the realization and the forecast. A forecast is considered to be accurate if it is unbiased and efficient. A forecast is unbiased if its average deviation from the outcome is zero, and it is efficient if it reflects all the information that is available at the time the forecast is made. Finally, we also examine tests of directional accuracy and offer a non‐parametric method of assessment. Copyright © 2000 John Wiley & Sons, Ltd.
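The unbiasedness and efficiency notions defined above translate into a standard Mincer-Zarnowitz-type check: the mean forecast error should be zero, and a regression of realizations on forecasts should not reject the joint hypothesis (intercept, slope) = (0, 1). The GDP numbers below are simulated placeholders, not the OECD/IMF data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 50
forecast = rng.standard_normal(n) + 2.5
actual = forecast + 0.3 * rng.standard_normal(n)  # a nearly rational forecaster

print(np.mean(actual - forecast))                 # ~0 if unbiased
fit = sm.OLS(actual, sm.add_constant(forecast)).fit()
print(fit.f_test('const = 0, x1 = 1'))            # joint efficiency test
```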

14.
We propose a new class of limited information estimators built upon an explicit trade‐off between data fitting and a priori model specification. The estimators offer the researcher a continuum of estimators that range from an extreme emphasis on data fitting and robust reduced‐form estimation to the other extreme of exact model specification and efficient estimation. The approach used to generate the estimators illustrates why ULS often outperforms 2SLS‐PRRF even in the context of a correctly specified model, provides a new interpretation of 2SLS, and integrates Wonnacott and Wonnacott's (1970) least weighted variance estimators with other techniques. We apply the new class of estimators to Klein's Model I and generate forecasts. We find for this example that an emphasis on specification (as opposed to data fitting) produces better out‐of‐sample predictions. Copyright © 1999 John Wiley & Sons, Ltd.

15.
Multifractal models have recently been introduced as a new type of data‐generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out‐of‐sample performance is compared against other popular time series specifications. Using an intra‐day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV‐ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts across all combinations of forecast horizons and evaluation criteria), the new model often performs significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
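The forecasting device named above, best linear forecasts via the Levinson-Durbin recursion, is sketched below from a model-implied autocovariance sequence. For brevity the autocovariances come from a toy AR(1) rather than the multifractal model; the recursion only needs the autocovariances, and their overall scale does not affect the coefficients.

```python
import numpy as np

def levinson_durbin(gamma, p):
    """AR(p) predictor coefficients from autocovariances gamma[0..p]."""
    phi = np.zeros(p + 1)
    v = gamma[0]                                  # innovation variance estimate
    for k in range(1, p + 1):
        kappa = (gamma[k] - phi[1:k] @ gamma[1:k][::-1]) / v
        phi_new = phi.copy()
        phi_new[k] = kappa
        phi_new[1:k] = phi[1:k] - kappa * phi[1:k][::-1]
        phi, v = phi_new, v * (1 - kappa ** 2)
    return phi[1:], v

rho, p = 0.7, 5
gamma = rho ** np.arange(p + 1)                   # toy AR(1) autocovariances
coef, innov_var = levinson_durbin(gamma, p)

x_recent = np.array([0.2, -0.1, 0.4, 0.3, 0.5])   # most recent observation last
forecast = coef @ x_recent[::-1]                  # one-step best linear forecast
print(coef, forecast)
```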

16.
A sample‐based method in Kolsrud (Journal of Forecasting 2007; 26(3): 171–188) for the construction of a time‐simultaneous prediction band for a univariate time series is extended to produce a variable‐ and time‐simultaneous prediction box for a multivariate time series. A measure of distance based on the L∞‐norm is applied to a learning sample of multivariate time trajectories, which can be mean‐ and/or variance‐nonstationary. Based on the ranking of distances to the centre of the sample, a subsample of the most central multivariate trajectories is selected. A prediction box is constructed by circumscribing the subsample with a hyperrectangle. The fraction of central trajectories selected into the subsample can be calibrated by bootstrap such that the expected coverage of the box equals a prescribed nominal level. The method is related to the concept of data depth, and is modified accordingly to increase coverage. Applications to simulated and empirical data illustrate the method, which is also compared to several other methods in the literature adapted to the multivariate setting. Copyright © 2015 John Wiley & Sons, Ltd.
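A compressed sketch of the box construction: rank simulated trajectories by their sup-norm distance from the sample mean path, keep the most central fraction, and circumscribe the kept trajectories with a hyperrectangle. The retained fraction is fixed here for illustration, whereas the paper calibrates it by bootstrap to hit a nominal coverage.

```python
import numpy as np

rng = np.random.default_rng(10)
n_traj, horizon, n_vars = 1000, 8, 2
paths = rng.standard_normal((n_traj, horizon, n_vars)).cumsum(axis=1)

center = paths.mean(axis=0)
dist = np.abs(paths - center).max(axis=(1, 2))   # L-infinity distance per path
keep = dist <= np.quantile(dist, 0.90)           # central 90% of trajectories

lower = paths[keep].min(axis=0)                  # (horizon, n_vars) box bounds
upper = paths[keep].max(axis=0)
print(lower.shape, upper.shape)
```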

17.
In this paper we introduce a new testing procedure for evaluating the rationality of fixed‐event forecasts based on a pseudo‐maximum likelihood estimator. The procedure is designed to be robust to departures in the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss when they make large changes to their forecasts. The test is illustrated using monthly fixed‐event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational while use of the Gaussian‐based test implies that certain forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.

18.
We evaluate forecasting models of US business fixed investment spending growth over the recent 1995:1–2004:2 out‐of‐sample period. The forecasting models are based on the conventional Accelerator, Neoclassical, Average Q, and Cash‐Flow models of investment spending, as well as real stock prices and excess stock return predictors. The real stock price model typically generates the most accurate forecasts, and forecast‐encompassing tests indicate that this model contains most of the information useful for forecasting investment spending growth relative to the other models at longer horizons. In a robustness check, we also evaluate the forecasting performance of the models over two alternative out‐of‐sample periods: 1975:1–1984:4 and 1985:1–1994:4. A number of different models produce the most accurate forecasts over these alternative out‐of‐sample periods, indicating that while the real stock price model appears particularly useful for forecasting the recent behavior of investment spending growth, it may not continue to perform well in future periods. Copyright © 2007 John Wiley & Sons, Ltd.
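A hedged sketch of a forecast-encompassing check of the kind reported: regress realizations on two competing forecasts; if the rival forecast's coefficient is insignificant, the first forecast is said to encompass it. The series below are simulated stand-ins for investment spending growth and the model forecasts.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 120
truth = rng.standard_normal(n)
f1 = truth + 0.3 * rng.standard_normal(n)   # informative forecast
f2 = truth + 1.0 * rng.standard_normal(n)   # noisier rival forecast

X = sm.add_constant(np.column_stack([f1, f2]))
fit = sm.OLS(truth, X).fit()
# a large p-value on the rival's coefficient (x2) suggests f1 encompasses f2
print(fit.params, fit.pvalues)
```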

19.
We compare linear autoregressive (AR) models and self‐exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two‐regime SETAR process is used as the data‐generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non‐linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
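The data-generating process used in the Monte Carlo study is easy to simulate: a two-regime SETAR model whose autoregressive coefficient switches when the lagged level crosses a threshold. The parameter values below are arbitrary illustrations, not the paper's designs.

```python
import numpy as np

rng = np.random.default_rng(11)
n, threshold = 500, 0.0
y = np.zeros(n)
for t in range(1, n):
    phi = 0.9 if y[t - 1] <= threshold else 0.2   # regime-dependent AR(1)
    y[t] = phi * y[t - 1] + rng.standard_normal()
print(y[:5])
```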

20.
This study presents the first systematic evaluation of a widely used class of forecasts: regional economic forecasts. Ex ante regional structural equation model forecasts are analysed for 19 metropolitan areas. One- to ten-quarter-ahead forecasts are considered, and the seven-year sample spans a complete business cycle. Counter to previous speculation in the literature, (1) dependency on macroeconomic forecasting model inputs does not substantially erode accuracy relative to univariate extrapolative methodologies, and (2) stochastic time series models do not, on average, yield more accurate regional economic predictions than structural models. Similar to findings in other studies, clear preferences among extrapolative methodologies do not emerge. Most general conclusions, however, are subject to caveats based on step-length effects and region-specific effects.
