Similar Literature (20 results)
1.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub‐optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out‐of‐sample forecasting performance of various linear and GARCH‐type models of volatility is compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.
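As a rough illustration of the kind of exercise described above, the sketch below fits a GARCH(1,1) model and scores one-step-ahead out-of-sample variance forecasts by mean squared error against squared returns (a standard, if noisy, volatility proxy). It assumes the third-party `arch` package, and the simulated series is a placeholder for the UK data used in the paper.

```python
# Minimal sketch: 1-step-ahead GARCH(1,1) variance forecasts scored by MSE
# against squared returns. `r` is a toy stand-in for a daily return series.
import numpy as np
import pandas as pd
from arch import arch_model  # assumes the `arch` package is installed

rng = np.random.default_rng(0)
r = pd.Series(rng.standard_normal(1500))           # toy daily returns (percent scale)

split = 1000
am = arch_model(r, mean="Constant", vol="GARCH", p=1, q=1)
res = am.fit(last_obs=split, disp="off")           # estimate on the first 1000 obs only
fc = res.forecast(horizon=1, start=split, align="target")

h1 = fc.variance["h.1"].dropna()                   # forecast OF each holdout day
mse = np.mean((r.loc[h1.index] ** 2 - h1) ** 2)    # MSE vs squared-return proxy
print(f"out-of-sample MSE: {mse:.3e}")
```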

2.
A short‐term mixed‐frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one‐factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out‐of‐sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high‐frequency indicators significantly improve the real‐time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

3.
This paper uses the dynamic factor model framework, which accommodates a large cross‐section of macroeconomic time series, for forecasting regional house price inflation. In this study, we forecast house price inflation for five metropolitan areas of South Africa using principal components obtained from 282 quarterly macroeconomic time series in the period 1980:1 to 2006:4. The results, based on the root mean square errors of one‐ to four‐quarter‐ahead out‐of‐sample forecasts over the period 2001:1 to 2006:4, indicate that, in the majority of cases, the dynamic factor model statistically outperforms the vector autoregressive models under both the classical and the Bayesian treatments. We also consider spatial and non‐spatial specifications. Our results indicate that macroeconomic fundamentals are important in forecasting house price inflation. Copyright © 2010 John Wiley & Sons, Ltd.
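A minimal sketch of a diffusion-index forecast in the spirit of this approach follows: extract principal components from a large standardized panel and regress the h-step-ahead target on them. The panel, the target series, and the direct-regression forecasting equation are illustrative assumptions, not the authors' exact estimator.

```python
# Minimal sketch of factor-augmented (diffusion-index) forecasting.
# `panel` and `hp_infl` are toy stand-ins for the 282-series macro panel
# and a metropolitan house price inflation series.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
T, N, h, k = 108, 282, 4, 3            # quarters, series, horizon, factors
panel = rng.standard_normal((T, N))    # toy macro panel
hp_infl = rng.standard_normal(T)       # toy house price inflation

X = (panel - panel.mean(0)) / panel.std(0)       # standardize before PCA
factors = PCA(n_components=k).fit_transform(X)

# Direct h-step regression: y_{t+h} on factors_t (plus an intercept)
Z = np.column_stack([np.ones(T - h), factors[:-h]])
beta, *_ = np.linalg.lstsq(Z, hp_infl[h:], rcond=None)

y_hat = np.r_[1.0, factors[-1]] @ beta           # forecast for period T+h
print(f"{h}-quarter-ahead forecast: {y_hat:.3f}")
```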

4.
Many publications on tourism forecasting have appeared during the past twenty years. The purpose of this article is to organize and summarize that scattered literature. General conclusions are also drawn from the studies to help those wishing to develop tourism forecasts of their own. The forecasting techniques discussed include time series models, econometric causal models, the gravity model and expert-opinion techniques. The major conclusions are that time series models are the simplest and least costly (and therefore most appropriate for practitioners); the gravity model is best suited to handle international tourism flows (and will be most useful to governments and tourism agencies); and expert-opinion methods are useful when data are unavailable. Further research is needed on the use of economic indicators in tourism forecasting, on the development of attractivity and emissiveness indexes for use in gravity and econometric models and on empirical comparisons among the different methods.

5.
It is widely recognized that taking cointegration relationships into consideration is useful in forecasting cointegrated processes. However, there are a few practical problems when forecasting large cointegrated processes using the well‐known vector error correction model. First, it is hard to identify the cointegration rank in large models. Second, since the number of parameters to be estimated tends to be large relative to the sample size in large models, estimators will have large standard errors, and so will forecasts. The purpose of the present paper is to propose a new procedure for forecasting large cointegrated processes which is free from the above problems. In our Monte Carlo experiment, we find that our forecast gains accuracy when we work with a larger model as long as the ratio of the cointegration rank to the number of variables in the process is high. Copyright © 2009 John Wiley & Sons, Ltd.
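The baseline the paper improves on can be sketched with statsmodels: select the cointegration rank with a Johansen trace test (the step that becomes unreliable in large systems) and forecast from the fitted VECM. The four-variable toy system is an assumption for illustration.

```python
# Minimal sketch: rank selection and forecasting with a vector error
# correction model in statsmodels. `data` is a toy cointegrated system.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(2)
trend = np.cumsum(rng.standard_normal(200))          # shared stochastic trend
data = np.column_stack([trend + rng.standard_normal(200) for _ in range(4)])

# Johansen trace test to pick the cointegration rank
rank = select_coint_rank(data, det_order=0, k_ar_diff=1,
                         method="trace", signif=0.05).rank

res = VECM(data, k_ar_diff=1, coint_rank=rank, deterministic="co").fit()
print(res.predict(steps=4))                          # 4-step-ahead forecasts
```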

6.
We develop a small model for forecasting inflation for the euro area using quarterly data over the period June 1973 to March 1999. The model is used to provide inflation forecasts from June 1999 to March 2002. We compare the forecasts from our model with those derived from six competing forecasting models, including autoregressions, vector autoregressions and Phillips‐curve based models. A considerable gain in forecasting performance is demonstrated using a relative root mean squared error criterion and the Diebold–Mariano test to make forecast comparisons. Copyright © 2006 John Wiley & Sons, Ltd.
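The Diebold–Mariano test used here is easy to compute by hand: it checks whether the mean loss differential between two sets of forecast errors is zero. The sketch below assumes squared-error loss and the standard normal approximation; the error series are placeholders.

```python
# Minimal sketch of the Diebold-Mariano forecast comparison test.
import numpy as np
from scipy import stats

def diebold_mariano(e1, e2, h=1):
    """DM statistic for squared-error loss; h is the forecast horizon."""
    d = e1**2 - e2**2                      # loss differential
    T = len(d)
    # variance of mean(d) with h-1 autocovariance terms (rectangular kernel)
    gamma = [np.cov(d[k:], d[:T - k])[0, 1] if k else d.var() for k in range(h)]
    var_dbar = (gamma[0] + 2 * sum(gamma[1:])) / T
    dm = d.mean() / np.sqrt(var_dbar)
    return dm, 2 * stats.norm.sf(abs(dm))  # statistic, two-sided p-value

rng = np.random.default_rng(3)
e1, e2 = rng.standard_normal(120), 1.1 * rng.standard_normal(120)
print(diebold_mariano(e1, e2))
```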

7.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state‐space models. Out‐of‐sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean‐squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.

8.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models in terms of forecasting US gross national product (GNP) growth at different forecasting horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period 1919:Q2 to 2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures such as the volatility index and geopolitical risk in predicting US recessions. While the Markov switching time‐varying parameter vector autoregressive model yields the lowest values for the root mean squared error in most cases, we observe relatively low values for the log predictive density score when using the Bayesian vector autoregressive model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.

9.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
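The baseline HAR model the paper extends regresses tomorrow's volatility on daily, weekly (5-day), and monthly (22-day) averages of past volatility. A minimal sketch, with a simulated series standing in for the oil ETF volatility index:

```python
# Minimal sketch of the baseline heterogeneous autoregressive (HAR) model:
# RV_{t+1} = b0 + b_d RV_t + b_w RV_t^{(5)} + b_m RV_t^{(22)} + error.
import numpy as np

rng = np.random.default_rng(4)
rv = np.abs(rng.standard_normal(1000)) + 1.0   # toy volatility series

def trailing_mean(x, w):
    return np.convolve(x, np.ones(w) / w, mode="valid")

d = rv[21:-1]                       # daily component, aligned with targets
w = trailing_mean(rv, 5)[17:-1]     # 5-day average ending on the same day
m = trailing_mean(rv, 22)[:-1]      # 22-day average ending on the same day
y = rv[22:]                         # next-day target

X = np.column_stack([np.ones_like(d), d, w, m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR coefficients (const, daily, weekly, monthly):", beta.round(3))
```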

10.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of an FFNN model with an alternative model, Pitman's test is employed to ascertain whether one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually the out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
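One-step-ahead FFNN forecasting of this kind can be sketched with a small multilayer perceptron fed lagged observations. The toy nonlinear series, the lag order, and the plain MSE score (rather than Pitman's test) are assumptions for illustration.

```python
# Minimal sketch: one-step-ahead forecasting with a feed-forward network
# on lagged values. A toy nonlinear series stands in for the lynx data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
y = np.sin(np.arange(300) / 5) + 0.1 * rng.standard_normal(300)

p = 4                                        # number of lags fed to the net
X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
target = y[p:]

split = 250
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X[:split], target[:split])

pred = net.predict(X[split:])                # one-step-ahead test forecasts
mse = np.mean((target[split:] - pred) ** 2)
print(f"test MSE: {mse:.4f}")
```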

11.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

12.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent two opposing forecasting philosophies: the DSGE model on the one hand has a strong theoretical economic background, while the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially as the forecast horizon grows. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

13.
In this paper we compare the in‐sample fit and out‐of‐sample forecasting performance of no‐arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re‐estimation and out‐of‐sample 1‐, 6‐ and 12‐month‐ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in‐sample fit, while the best out‐of‐sample performance is generated by three‐factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.
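The Nelson–Siegel leg of such a comparison rests on the loading function y(tau) = b0 + b1 (1 - e^(-lam*tau))/(lam*tau) + b2 [(1 - e^(-lam*tau))/(lam*tau) - e^(-lam*tau)]. The sketch below recovers the three factors by cross-sectional OLS; the decay value lam = 0.0609 is the common Diebold–Li choice and the yield curve is a toy example, both assumptions rather than the paper's estimates.

```python
# Minimal sketch of Nelson-Siegel factor extraction for one yield curve.
import numpy as np

taus = np.array([1, 6, 12, 24, 60, 120])     # maturities in months, as in the paper
lam = 0.0609                                 # Diebold-Li decay value (assumption)

x = lam * taus
load = np.column_stack([np.ones_like(x, dtype=float),
                        (1 - np.exp(-x)) / x,
                        (1 - np.exp(-x)) / x - np.exp(-x)])

yields = np.array([4.2, 4.5, 4.8, 5.1, 5.5, 5.7])   # toy yield curve (percent)
betas, *_ = np.linalg.lstsq(load, yields, rcond=None)
print("level, slope, curvature factors:", betas.round(3))
# In the dynamic version, these betas are extracted each month and then
# forecast with simple time series models (e.g., an AR(1)) to predict yields.
```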

14.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning‐curve context. Copyright © 1999 John Wiley & Sons, Ltd.
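The statistical decision aid in a study of this kind is typically a log-linear (Wright) learning curve, cost_x = a * x^b, fitted to the early units and projected forward, with accuracy judged by mean absolute percentage error. A minimal sketch on simulated data:

```python
# Minimal sketch: fit a log-linear learning curve to early units, project
# it over a holdout, and score the projections with MAPE. Toy data only.
import numpy as np

rng = np.random.default_rng(6)
units = np.arange(1, 41)
cost = 100 * units**-0.3 * np.exp(0.05 * rng.standard_normal(40))

n_fit = 20                                        # data available before forecasting
A = np.column_stack([np.ones(n_fit), np.log(units[:n_fit])])
(loga, b), *_ = np.linalg.lstsq(A, np.log(cost[:n_fit]), rcond=None)

proj = np.exp(loga) * units[n_fit:]**b            # learning-curve projections
mape = np.mean(np.abs((cost[n_fit:] - proj) / cost[n_fit:])) * 100
print(f"curve MAPE over the holdout: {mape:.2f}%")
```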

15.
The purpose of this study is, first, to demonstrate how multivariate forecasting models can be effectively used to generate high‐performance forecasts for typical business applications. Second, this study compares the forecasts generated by a simultaneous transfer function (STF) model and a white noise regression model with those of a univariate ARIMA model. The accuracy of these forecasting models is judged using their residual variances and forecasting errors in a post-sample period. It is found that ignoring residual serial correlation can greatly degrade the forecasting performance of a multi-variable model and, in some situations, cause a multi-variable model to perform worse than a univariate ARIMA model. This paper also demonstrates how a forecaster can use an STF model to compute both multi-step-ahead forecasts and their variances easily.
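The computation of multi-step forecasts together with their variances can be sketched with a single-equation transfer function (a regression with ARMA errors) in statsmodels; this is a stand-in for the paper's simultaneous system, and the input/output series are toys.

```python
# Minimal sketch: a regression with ARMA(1,1) errors as a single-equation
# transfer function; statsmodels returns both multi-step-ahead forecasts
# and their standard errors.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(8)
x = np.cumsum(rng.standard_normal(200))              # toy input series
y = 0.5 * x + rng.standard_normal(200)               # toy output series

res = SARIMAX(y[:-4], exog=x[:-4], order=(1, 0, 1)).fit(disp=False)
fc = res.get_forecast(steps=4, exog=x[-4:].reshape(-1, 1))
print(fc.predicted_mean)     # multi-step-ahead forecasts
print(fc.se_mean)            # and their standard errors
```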

16.
A procedure for estimating state space models for multivariate distributed lag processes is described. It involves singular value decomposition techniques and yields an internally balanced state space representation which has attractive properties. Following the specifications of a forecasting competition, the approach is applied to generate ex-post forecasts for US real GNP growth rates. The forecasts of the estimated state space model are compared to those of twelve econometric models and an ARIMA model.

17.
The purpose of this paper is to investigate the applicability of a contemporary time series forecasting technique, transfer function modeling, to the problem of forecasting sectoral employment levels in small regional economies. The specific sectoral employment levels to be forecast are manufacturing, durable manufacturing, non-durable manufacturing and non-manufacturing employment. Due to data constraints at the small region level, construction of traditional causal econometric models is often very difficult; thus time series approaches become particularly attractive. The results suggest that transfer function models using readily available national indicator series as drivers can provide more accurate forecasts of small region sectoral employment levels than univariate time series models.

18.
Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting, we propose a dynamic factor model of mortality that fits and forecasts age‐specific mortality rates parsimoniously. We compare the forecasting quality of this model against the Lee–Carter model and its variants. Our results show that the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit, but their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be improved further by applying an appropriate stopping rule for the number of static and dynamic factors. Copyright © 2013 John Wiley & Sons, Ltd.
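The Lee–Carter benchmark referenced here fits log m(x,t) = a_x + b_x * k_t by singular value decomposition and forecasts the period index k_t as a random walk with drift. A minimal sketch on a simulated mortality surface (sign conventions of the singular vectors may need flipping on real data):

```python
# Minimal sketch of a Lee-Carter fit by SVD, with a random-walk-with-drift
# forecast of k_t. `logm` is a toy log-mortality surface (ages x years).
import numpy as np

rng = np.random.default_rng(7)
ages, years = 20, 50
logm = (-8 + 0.09 * np.arange(ages)[:, None]        # age profile
        - 0.015 * np.arange(years)[None, :]         # secular improvement
        + 0.02 * rng.standard_normal((ages, years)))

a = logm.mean(axis=1)                                # a_x: average log rates
U, s, Vt = np.linalg.svd(logm - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                          # b_x, normalized to sum to 1
k = s[0] * Vt[0] * U[:, 0].sum()                     # k_t period index

drift = np.diff(k).mean()                            # random walk with drift
k_fc = k[-1] + drift * np.arange(1, 11)              # 10-year-ahead k_t
logm_fc = a[:, None] + np.outer(b, k_fc)             # forecast log rates
print(logm_fc.shape)
```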

19.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We address the usual caveats of probabilistic forecast studies (small samples, limited models, and non-holistic validations) by performing a comprehensive comparison of 15 predictive schemes over a period of more than 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk‐neutral densities outperform historical‐based predictions in terms of information content. We find that the variance gamma model generates the highest out‐of‐sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH‐based GJR‐FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, or the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.

20.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. The resulting forecasts are comparable to, or better than, those of benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.
