Similar Articles
1.
We investigate the realized volatility forecast of stock indices under structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high-frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in-sample tests and out-of-sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX-FIGARCH models for the realized volatility forecast, utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in-sample tests show that there are multiple breaks in all realized volatility series. The results of the out-of-sample point forecasts indicate that combination forecasts with time-varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non-parametric kernel regression, and linear combination forecasts with the weights chosen based on non-negative restricted least squares and the Schwarz information criterion, appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non-parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.
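The restricted least-squares combination mentioned above can be sketched compactly. The snippet below (illustrative numbers, not the paper's data or code) shows the two-model special case: a single weight w in [0, 1] is chosen by least squares on a training window, and the combined forecast is w*f1 + (1-w)*f2.

```python
# Sketch of restricted least-squares forecast combination (two-model case).
# Not the paper's code; the data below are made up for illustration.

def combine_two(y, f1, f2):
    """Weight w in [0, 1] minimizing sum((y - w*f1 - (1 - w)*f2)**2)."""
    # The residual is (y - f2) - w*(f1 - f2), so w is a no-intercept
    # regression of (y - f2) on (f1 - f2), clipped to the unit interval.
    num = sum((yi - b) * (a - b) for yi, a, b in zip(y, f1, f2))
    den = sum((a - b) ** 2 for a, b in zip(f1, f2))
    w = num / den if den > 0 else 0.5
    return min(1.0, max(0.0, w))

y  = [1.0, 2.0, 3.0, 4.0]   # realized volatility (toy values)
f1 = [1.1, 1.9, 3.2, 3.9]   # a fairly accurate model
f2 = [0.5, 2.5, 2.5, 4.5]   # a noisier model
w = combine_two(y, f1, f2)
combined = [w * a + (1 - w) * b for a, b in zip(f1, f2)]
```

Time-varying weights, as in the paper, would re-estimate w on a rolling window; the kernel-regression variant replaces the single global weight with one that varies smoothly with the forecasts.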

2.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach we examine is based on the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using (possibly misspecified) models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE-dominate SIC combination forecasts less than 25% of the time in most cases, while other 'standard' combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real-time forecasts of the variables, and it is shown via a series of experiments that SIC, t-statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE-dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
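As a concrete illustration of SIC-based selection (a sketch with invented numbers, not the experiments in the paper), the criterion trades off fit against a log(n)-scaled parameter count, and the selection-based combination then uses the forecast of the minimum-SIC model:

```python
import math

# Sketch: Schwarz-criterion-based model selection for forecasting.
# The candidate models and numbers below are illustrative assumptions.

def sic(rss, n, k):
    """Schwarz information criterion for a model with k parameters,
    n observations, and residual sum of squares rss."""
    return n * math.log(rss / n) + k * math.log(n)

n = 100
candidates = {
    "small":  {"rss": 52.0, "k": 2, "forecast": 1.8},
    "medium": {"rss": 50.0, "k": 4, "forecast": 2.1},
    "large":  {"rss": 49.5, "k": 8, "forecast": 2.6},
}
scores = {name: sic(m["rss"], n, m["k"]) for name, m in candidates.items()}
best = min(scores, key=scores.get)          # SIC picks the parsimonious model
sic_forecast = candidates[best]["forecast"]
avg_forecast = sum(m["forecast"] for m in candidates.values()) / len(candidates)
```

The simple-averaging benchmark in the paper corresponds to `avg_forecast`; the SIC approach corresponds to `sic_forecast`.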

3.
Given the evidence that an infinite-order vector autoregression setting is more realistic in time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h-step-ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT under the assumption that an infinite-order vector autoregression can be approximated, under suitable conditions, by a sequence of truncated models, where nT increases with the sample size. Using finite-order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small-sample distribution of the lag-order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, since they yield the lowest h-step-ahead forecast mean squared errors for the individual components of the held-out pseudo-data to be forecast. Thus estimating the actual autoregressive order as well as the best forecasting model can be achieved with the same selection procedure. Such results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy of strongly consistent criteria such as the Schwarz information criterion, as claimed in the literature, is overstated. Our criteria are new tools extending those previously existing in the literature and can be used in a variety of practical situations. Copyright © 2015 John Wiley & Sons, Ltd.

4.
Financial data series are often described as exhibiting two non-standard time series features. First, variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA-ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1-month Euromarket interest rate during the period 1986–1989, the ARFIMA-ARCH (5,d,2/4) model with non-integer d is selected by AIC. Model-based out-of-sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA-ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.

5.
The paper examines combined forecasts based on two components: forecasts produced by Chase Econometrics and those produced using the Box-Jenkins ARIMA technique. Six series of quarterly ex ante and simulated ex ante forecasts are used over 37 time periods and ten horizons. The forecasts are combined using seven different methods. The best combined forecasts, judged by average relative root-mean-square error, are superior to the Chase forecasts for three variables and inferior for two, though averaged over all six variables the Chase forecasts are slightly better. A two-step procedure produces forecasts for the last half of the sample which, on average, are slightly better than the Chase forecasts.

6.
A forecasting model for yt based on its relationship to an exogenous variable xt must use x̂t, the forecast of xt. An example is given where commercially available x̂t's are sufficiently inaccurate that a univariate model for yt appears preferable. For a variety of types of models, inclusion of an exogenous variable xt is shown to worsen the yt forecasts whenever xt must itself be forecast by x̂t and MSE(x̂t) > Var(xt). Tests with forecasts from a variety of sources indicate that, with a few notable exceptions, MSE(x̂t) > Var(xt) is common for macroeconomic forecasts more than a quarter or two ahead. Thus, either:
  • (a) available medium-range forecasts for many macroeconomic variables (e.g. the GNP growth rate) are not an improvement over the sample mean (so that such variables are not useful explanatory variables in forecasting models), and/or
  • (b) the suboptimization involved in directly replacing xt by x̂t is a luxury that we cannot afford.
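The decision rule MSE(x̂t) > Var(xt) is easy to check directly; the sketch below uses invented numbers, not the paper's forecast data.

```python
# Sketch of the decision rule above: if the forecast of an exogenous
# variable x has MSE exceeding Var(x), the sample mean of x is the more
# accurate stand-in, and x is unlikely to help in a model for y.
# Illustrative numbers only.

def mse(actual, forecast):
    return sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

x     = [2.0, 3.0, 2.5, 3.5, 2.2, 3.3]   # realized exogenous series
x_hat = [4.0, 1.0, 4.5, 1.5, 4.2, 1.2]   # a poor commercial forecast
use_forecast = mse(x, x_hat) < variance(x)   # False here: use the mean
```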

7.
Testing the validity of value-at-risk (VaR) forecasts, or backtesting, is an integral part of modern market risk management and regulation. This is often done by applying independence and coverage tests developed by Christoffersen (International Economic Review, 1998; 39(4), 841–862) to so-called hit sequences derived from VaR forecasts and realized losses. However, as pointed out in the literature, these tests suffer from low rejection frequencies, or (empirical) power, when applied to hit sequences derived from simulations matching empirical stylized characteristics of return data. One key observation of these studies is that higher-order dependence in the hit sequences may cause the observed lower power performance. We propose to generalize the backtest framework for VaR forecasts by extending the original first-order dependence of Christoffersen to allow for higher- or kth-order dependence. We provide closed-form expressions for the tests as well as asymptotic theory. Not only do the generalized tests have power against kth-order dependence by definition, but the included simulations also indicate improved power performance when replicating the aforementioned studies. Further, the simulations show much improved size properties for one of the suggested tests. Copyright © 2017 John Wiley & Sons, Ltd.
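For intuition, here is a minimal sketch of the unconditional coverage component of the first-order framework (a standard likelihood-ratio backtest, not the paper's generalized kth-order test), with illustrative numbers:

```python
import math

# Sketch of an unconditional-coverage VaR backtest. The hit sequence is 1
# when the realized loss exceeds the VaR forecast; under a correct 5% VaR,
# hits occur with probability p = 0.05. Data below are illustrative.

def lr_unconditional_coverage(hits, p):
    """Likelihood ratio comparing hit probability p against the MLE."""
    n1 = sum(hits)
    n0 = len(hits) - n1
    pi = n1 / len(hits)
    if pi in (0.0, 1.0):
        pi = min(max(pi, 1e-10), 1 - 1e-10)   # guard the boundary cases
    ll_null = n0 * math.log(1 - p) + n1 * math.log(p)
    ll_alt = n0 * math.log(1 - pi) + n1 * math.log(pi)
    return -2.0 * (ll_null - ll_alt)

# 250 trading days with 30 hits: far more exceedances than the ~12.5 expected
hits = [1] * 30 + [0] * 220
lr = lr_unconditional_coverage(hits, p=0.05)
reject = lr > 3.84   # chi-squared(1) critical value at the 5% level
```

The independence and kth-order tests discussed in the abstract additionally examine the serial dependence pattern of the hit sequence, not just its overall hit rate.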

8.
Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series, vector autoregressions and random walks are used as the comparison criteria against structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than those simulations associated with univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive. In some cases, VAR benchmarks prove more reliable than RSEM income forecasts. In other cases, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.

9.
In this paper, an optimized multivariate singular spectrum analysis (MSSA) approach is proposed to find leading indicators of cross-industry relations between 24 monthly, seasonally unadjusted industrial production (IP) series for the German, French, and UK economies. Both recurrent and vector forecasting algorithms of horizontal MSSA (HMSSA) are considered. The results from the proposed multivariate approach are compared with those obtained via the optimized univariate singular spectrum analysis (SSA) forecasting algorithm to determine the statistical significance of each outcome. The data are rigorously tested for normality, the seasonal unit root hypothesis, and structural breaks. The results are presented such that users can not only identify the most appropriate model based on the aim of the analysis, but also easily identify the leading indicators for each IP variable in each country. Our findings show that, for all three countries, forecasts from the proposed MSSA algorithm outperform the optimized SSA algorithm in over 70% of cases. Accordingly, this new approach succeeds in identifying leading indicators and is a viable option for selecting the SSA choices L and r that minimize a loss function.

10.
This paper examines the effects of combining three econometric and three time-series forecasts of growth and inflation in the U.K. If forecasts are unbiased then a combination exploiting this fact will be more efficient than an unrestricted combination. Ex post econometric forecasts may be biased, but ex ante they are unbiased. The results of the study are that a restricted linear combination of the econometric forecasts is superior to an unrestricted combination and also to the unweighted mean of the forecasts. However, it is not preferred to the best of the individual forecasts.

11.
Most economic forecast evaluations dating back 20 years show that professional forecasters add little to the forecasts generated by the simplest of models. Using various types of forecast error criteria, these evaluations usually conclude that the professional forecasts are little better than no-change or ARIMA-type forecasts. It is our contention that this conclusion is mistaken because the conventional error criteria may not capture why forecasts are made or how they are used. Using forecast directional accuracy, the criterion which has been found to be highly correlated with profits in an interest rate setting, we find that professional GNP forecasts dominate the cheaper alternatives. Moreover, there appears to be no systematic relationship between this preferred criterion and the error measures used in previous studies.
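Directional accuracy is straightforward to compute; the sketch below (toy numbers, not the study's GNP data) scores a forecast by the fraction of periods in which it predicts the correct direction of change.

```python
# Sketch of the directional accuracy criterion: score a forecast by whether
# it predicts the sign of the change, not by squared error. Toy data only.

def directional_accuracy(actual, forecast):
    """Share of periods where the forecast and the actual move in the same
    direction relative to the previous actual value."""
    hits = 0
    for t in range(1, len(actual)):
        actual_up = actual[t] > actual[t - 1]
        forecast_up = forecast[t] > actual[t - 1]
        hits += actual_up == forecast_up
    return hits / (len(actual) - 1)

actual   = [100, 102, 101, 104, 103]
forecast = [100, 103, 100, 105, 101]   # right direction in every period
da = directional_accuracy(actual, forecast)
```

Note that this forecast scores perfectly on direction while being far from perfect on squared error, which is exactly the wedge between the two criteria that the abstract emphasizes.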

12.
System-based combination weights for series r/step-length h incorporate relative accuracy information from other forecast step-lengths for r and from other series for step-length h. Such weights are examined utilizing the West and Fullerton (1996) data set: 4275 ex ante employment forecasts from structural simultaneous equation econometric models for 19 metropolitan areas at 10 quarterly step-lengths, and a parallel set of 4275 ARIMA forecasts. The system-based weights yielded combined forecasts of higher average accuracy and lower risk of large inaccuracy than seven alternative strategies: (1) averaging; (2) relative MSE weights; (3) outperformance (per cent best) weights; (4) Bates and Granger (1969) optimal weights with a convexity constraint imposed; (5) unconstrained optimal weights; (6) selecting a 'best' method (ex ante) by series; and (7) experimenting in the Bischoff (1989) sense and selecting either method (2) or (6) based on the outcome of the experiment. Accuracy gains of the system-based combination were concentrated at step-lengths two to five. Although alternative (5) was generally outperformed, none of the six other alternatives was systematically most accurate when evaluated relative to each other. This contrasts with Bischoff's (1989) results, which held promise for an empirically applicable guideline to determine whether or not to combine.

13.
This paper analyses the size and nature of the errors in GDP forecasts in the G7 countries from 1971 to 1995. These GDP short-term forecasts are produced by the Organization for Economic Cooperation and Development and by the International Monetary Fund, and published twice a year in the Economic Outlook and in the World Economic Outlook, respectively. The evaluation of the accuracy of the forecasts is based on the properties of the difference between the realization and the forecast. A forecast is considered to be accurate if it is unbiased and efficient. A forecast is unbiased if its average deviation from the outcome is zero, and it is efficient if it reflects all the information that is available at the time the forecast is made. Finally, we also examine tests of directional accuracy and offer a non-parametric method of assessment. Copyright © 2000 John Wiley & Sons, Ltd.
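One standard way to implement the unbiasedness check described above is a Mincer–Zarnowitz-style regression of realizations on forecasts, where unbiasedness corresponds to an intercept of 0 and a slope of 1. The abstract does not specify the exact test used, so this is an illustrative sketch with invented data.

```python
# Sketch of a Mincer-Zarnowitz regression: realization = a + b*forecast + e,
# with unbiasedness corresponding to a = 0 and b = 1. Toy data only; a real
# test would also compute standard errors for a and b.

def ols(x, y):
    """Slope and intercept of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

forecast    = [2.0, 3.0, 1.0, 4.0, 2.5]
realization = [2.1, 2.9, 1.2, 3.9, 2.4]   # close to the 45-degree line
a, b = ols(forecast, realization)
looks_unbiased = abs(a) < 0.5 and abs(b - 1) < 0.2
```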

14.
Credibility models in actuarial science deal with multiple short time series where each series represents claim amounts of different insurance groups. Commonly used credibility models imply shrinkage of group-specific estimates towards their average. In this paper we model the claim size yit in group i and at time t as the sum of three independent components: yit = μt + δi + εit. The first component, μt = μt−1 + mt, represents time-varying levels that are common to all groups. The second component, δi, represents random group offsets that are the same in all periods, and the third component, εit, represents independent measurement errors. We show how to obtain forecasts from this model and we discuss the nature of the forecasts, with particular emphasis on shrinkage. We also assess the forecast improvements that can be expected from such a model. Finally, we discuss an extension of the above model which also allows the group offsets to change over time. We assume that the offsets for different groups follow independent random walks.
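The shrinkage behaviour the model implies can be illustrated with a classical credibility weight. This is a Bühlmann-style sketch under assumed variance ratios, not the paper's exact state-space forecast:

```python
# Sketch of credibility shrinkage (illustrative, not the paper's model):
# each group's forecast is pulled from its own mean toward the overall mean
# with weight z = n / (n + k), where n is the observations per group and k
# is an assumed ratio of measurement-error variance to offset variance.

def credibility_forecasts(groups, k):
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    out = []
    for g in groups:
        z = len(g) / (len(g) + k)                 # credibility weight
        out.append(z * (sum(g) / len(g)) + (1 - z) * grand)
    return out

claims = [[10.0, 12.0], [30.0, 28.0], [20.0, 20.0]]   # three groups, T = 2
fc = credibility_forecasts(claims, k=2.0)
```

With k = 2 and two observations per group, each group mean is shrunk halfway toward the grand mean of 20, which is the qualitative behaviour the abstract describes.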

15.
This paper is a counterfactual analysis investigating the consequences of the formation of a currency union for Canada and the USA: whether outputs increase and prices decrease if these countries form a currency union. We use a two-country cointegrated model to conduct the counterfactual analysis, where the conditional forecasts are generated based on the Gaussian assumption. To deal with structural breaks and model uncertainty, conditional forecasts are generated from different models/estimation windows and the model-averaging technique is used to combine the forecasts. We also examine the robustness of our results to parameter uncertainty using the wild bootstrap method. The results show that forming the currency union would probably boost the Canadian economy, whereas it would not have significant effects on US output or Canadian and US price levels. Copyright © 2013 John Wiley & Sons, Ltd.

16.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long-memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L-step forecasts, where the parameters are estimated by minimizing the sum of squares of L-step forecast errors, and forecasts obtained by using long-memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long-memory models for multi-step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

17.
While there is general agreement that a linear combination of forecasts can outperform the individual forecasts, there is controversy about the appropriateness of the combination method to be used in a given situation. Hence, in any given application it may be more beneficial to combine different sets of combined forecasts rather than picking one of them. This paper introduces the concept of N-step combinations of forecasts, which involves combining the combined forecasts obtained from different combination procedures used at the preceding step. Using quarterly GNP data, evidence is provided that the accuracy of the one-period-ahead ex-ante forecasts increases as the combination step increases. The MSE, MAE, MAPE and their corresponding standard deviations are used to evaluate the accuracy of the forecasts obtained.

18.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of changes. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We finally apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.

19.
In this paper we apply cointegration and Granger-causality analyses to construct linear and neural network error-correction models for an Austrian Initial Public Offerings IndeX (IPOXATX). We use the significant relationship between the IPOXATX and the Austrian Stock Market Index ATX to forecast the IPOXATX. For prediction purposes we apply augmented feedforward neural networks whose architecture is determined by Sequential Network Construction, with the Schwarz Information Criterion as an estimator for the prediction risk. Trading based on the forecasts yields results superior to Buy and Hold or Moving Average trading strategies in terms of mean-variance considerations.

20.
This article stresses how little is known about the quality, particularly the relative quality, of macroeconometric models. Most economists make a strict distinction between the quality of a model per se and the accuracy of solutions based on that model. While this distinction is valid, it leaves unanswered how to compare the 'validity' of conditional models. The standard test, the accuracy of ex post simulations, is not definitive when models with differing degrees of exogeneity are compared. In addition, it is extremely difficult to estimate the relative quantitative importance of conceptual problems of models, such as parameter instability across 'policy regimes'. In light of the difficulty in comparing conditional macroeconometric models, many model-builders and users assume that the best models are those that have been used to make the most accurate forecasts, and that the most accurate forecasts are those made with the best models. Forecasting experience indicates that forecasters using macroeconometric models have produced more accurate macroeconomic forecasts than either naive or sophisticated unconditional statistical models. It also suggests that judgementally adjusted forecasts have been more accurate than model-based forecasts generated mechanically. The influence of econometrically based forecasts is now so pervasive that it is difficult to find examples of 'purely judgemental' forecasts.
