Similar Literature
20 similar articles retrieved.
1.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the ‘true volatility’ measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of ‘true volatility’ includes a large noisy component. An alternative measure for ‘true volatility’ has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
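The two volatility measures contrasted above, and a one-step GARCH(1,1) variance forecast, can be sketched in a few lines of Python. The intra-day returns are simulated, and the GARCH parameters (omega, alpha, beta) are illustrative assumptions, not estimates from the paper's exchange-rate series:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-minute intra-day returns for one trading day (288 intervals).
intraday = rng.normal(0.0, 0.001, size=288)

# Noisy proxy for 'true volatility': the ex post squared daily return.
daily_return = intraday.sum()
noisy_proxy = daily_return ** 2

# Less noisy proxy: cumulative squared intra-day returns (realized variance).
realized_variance = np.sum(intraday ** 2)

# One-step-ahead GARCH(1,1) variance forecast:
#   sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t]
# Parameter values below are illustrative, not estimated.
omega, alpha, beta = 1e-6, 0.08, 0.90
sigma2_t = 2.5e-4                       # assumed current conditional variance
sigma2_next = omega + alpha * daily_return ** 2 + beta * sigma2_t
```

The realized-variance proxy averages 288 squared returns rather than one, which is why it carries far less measurement noise than the daily squared return.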

2.
From the editors     
Political forecasting provides the contextuality needed for decision-making and for forecasting ‘non-political’ trends. To gear political forecasting to these needs, rather than mimicking approaches in other areas, requires recognition of the distinctive nature of political trends, and realism regarding forecast uses, which generally do not benefit from ‘precise’ probabilities, predictions of only major events, or ‘sophisticated’ methodology that sacrifices comprehensiveness for explicitness. Approaches borrowed from other forecasting disciplines have been counterproductive, although contextual approaches, including cross-impact analyses and developmental constructs that integrate political and non-political trends, are promising. Explorations of the consistency of scenario dynamics, taking into account policy responses and non-formalizable complexity, are also useful. Thus the separation of political forecasting from political analysis should be minimized, calling for a redirection of effort away from developing methodology uniquely geared to forecasting, and towards organizing more comprehensive and systematic analytical efforts.

3.
This article stresses how little is known about the quality, particularly the relative quality, of macroeconometric models. Most economists make a strict distinction between the quality of a model per se and the accuracy of solutions based on that model. While this distinction is valid, it leaves unanswered how to compare the ‘validity’ of conditional models. The standard test, the accuracy of ex post simulations, is not definitive when models with differing degrees of exogeneity are compared. In addition, it is extremely difficult to estimate the relative quantitative importance of conceptual problems of models, such as parameter instability across ‘policy regimes’. In light of the difficulty in comparisons of conditional macroeconometric models, many model-builders and users assume that the best models are those that have been used to make the most accurate forecasts, and that the most accurate forecasts are those made with the best models. Forecasting experience indicates that forecasters using macroeconometric models have produced more accurate macroeconomic forecasts than either naive or sophisticated unconditional statistical models. It also suggests that judgementally adjusted forecasts have been more accurate than model-based forecasts generated mechanically. The influence of econometrically-based forecasts is now so pervasive that it is difficult to find examples of ‘purely judgemental’ forecasts.

4.
This paper presents a comparative analysis of the sources of error in forecasts for the UK economy published over a recent four-year period by four independent groups. This analysis rests on the archiving at the ESRC Macroeconomic Modelling Bureau of the original forecasts together with all their accompanying assumptions and adjustments. A method of decomposing observed forecast errors so as to distinguish the contributions of forecaster and model is set out; the impact of future expectations treated in a ‘model-consistent’ or ‘rational’ manner is specifically considered. The results show that the forecaster's adjustments make a substantial contribution to forecast performance, a good part of which comes from adjustments that bring the model on track at the start of the forecast period. The published ex-ante forecasts are usually superior to pure model-based ex-post forecasts, whose performance indicates some misspecification of the underlying models.

5.
6.
Methods of time series forecasting are proposed which can be applied automatically. However, they are not rote formulae, since they are based on a flexible philosophy which can provide several models for consideration. In addition it provides diverse diagnostics for qualitatively and quantitatively estimating how well one can forecast a series. The models considered are called ARARMA models (or ARAR models) because the model fitted to a long memory time series X(t) is based on sophisticated time series analysis of AR (or ARMA) schemes (short memory models) fitted to residuals Y(t) obtained by parsimonious ‘best lag’ non-stationary autoregression. Both long range and short range forecasts are provided by an ARARMA model. Section 1 explains the philosophy of our approach to time series model identification. Sections 2 and 3 attempt to relate our approach to some standard approaches to forecasting; exponential smoothing methods are developed from the point of view of prediction theory (Section 2) and extended (Section 3). ARARMA models are introduced (Section 4). Methods of ARARMA model fitting are outlined (Sections 5 and 6). Since ‘the proof of the pudding is in the eating’, the methods proposed are illustrated (Section 7) using the classic example of international airline passengers.
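The memory-shortening first stage described above, a parsimonious ‘best lag’ non-stationary autoregression, can be sketched as follows. This is a minimal Python illustration on a simulated long-memory-looking series, not the full ARARMA fitting procedure; the subsequent short-memory AR/ARMA stage fitted to the residuals is omitted:

```python
import numpy as np

rng = np.random.default_rng(1)

# A long-memory-looking series: random walk plus observation noise.
x = np.cumsum(rng.normal(size=400)) + rng.normal(scale=0.5, size=400)

def best_lag_ar(x, max_lag=15):
    """Parsimonious 'best lag' non-stationary autoregression: choose the
    single lag tau (and coefficient phi) that minimizes the residual
    variance of Y(t) = X(t) - phi * X(t - tau)."""
    best = None
    for tau in range(1, max_lag + 1):
        phi = np.dot(x[tau:], x[:-tau]) / np.dot(x[:-tau], x[:-tau])
        resid_var = (x[tau:] - phi * x[:-tau]).var()
        if best is None or resid_var < best[2]:
            best = (tau, phi, resid_var)
    return best

tau, phi, resid_var = best_lag_ar(x)
# Memory-shortened residual series Y(t); a short-memory AR/ARMA scheme
# would then be fitted to it (not shown).
y = x[tau:] - phi * x[:-tau]
```

For a near-random-walk input the selected lag is typically 1 with phi close to 1, so the transformation acts much like differencing.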

7.
Variance intervention is a simple state-space approach to handling sharp discontinuities of level or slope in the states or parameters of models for non-stationary time-series. It derives from earlier procedures used in the 1960s for the design of self-adaptive, state variable feedback control systems. In the alternative state-space forecasting context considered in the present paper, it is particularly useful when applied to structural time series models. The paper compares the variance intervention procedure with the related ‘subjective intervention’ approach proposed by West and Harrison in a recent issue of the Journal of Forecasting, and demonstrates its efficacy by application to various time-series data, including those used by West and Harrison.
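A minimal sketch of variance intervention in a local level (random walk plus noise) model: the filter's state-noise variance is temporarily inflated at a known break point, so the level estimate re-adapts quickly after a sharp discontinuity. All series, variances and the boost factor below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Local level series with a sharp level discontinuity at t = 60.
n = 120
level = np.cumsum(rng.normal(scale=0.1, size=n))
level[60:] += 8.0
y = level + rng.normal(scale=0.5, size=n)

def kalman_local_level(y, q=0.01, r=0.25, intervention=None, boost=100.0):
    """Kalman filter for the local level model
        y[t] = m[t] + e[t],   m[t] = m[t-1] + w[t].
    Variance intervention inflates the state-noise variance q at a known
    break point so the filter trusts the new data and re-adapts."""
    m, p = y[0], 1.0
    out = np.empty(len(y))
    out[0] = m
    for t in range(1, len(y)):
        qt = q * boost if t == intervention else q
        p_pred = p + qt                  # predict state variance
        k = p_pred / (p_pred + r)        # Kalman gain
        m = m + k * (y[t] - m)           # update level estimate
        p = (1 - k) * p_pred
        out[t] = m
    return out

plain = kalman_local_level(y)
interv = kalman_local_level(y, intervention=60)
# Mean absolute tracking error just after the break.
err_plain = np.abs(plain[60:75] - level[60:75]).mean()
err_interv = np.abs(interv[60:75] - level[60:75]).mean()
```

The intervention run locks on to the post-break level within a step or two, while the plain filter closes the gap only geometrically at the rate set by its steady-state gain.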

8.
Forecasts from quarterly econometric models are typically revised on a monthly basis to reflect the information in current economic data. The revision process usually involves setting targets for the quarterly values of endogenous variables for which monthly observations are available and then altering the intercept terms in the quarterly forecasting model to achieve the target values. A formal statistical approach to the use of monthly data to update quarterly forecasts is described and the procedure is applied to the Michigan Quarterly Econometric Model of the US Economy. The procedure is evaluated in terms of both ex post and ex ante forecasting performance. The ex ante results for 1986 and 1987 indicate that the method is quite promising. With a few notable exceptions, the formal procedure produces forecasts of GNP growth that are very close to the published ex ante forecasts.

9.
The standard approach to combining n expert forecasts involves taking a weighted average. Granger and Ramanathan proposed introducing an intercept term and unnormalized weights. This paper deduces their proposal from Bayesian principles. We find that their formula is equivalent to taking a weighted average of the n expert forecasts plus the decision-maker's prior forecast.
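The Granger–Ramanathan proposal amounts to an unconstrained least-squares regression of the outcomes on the expert forecasts plus an intercept. A sketch on simulated data (the two forecast-generating processes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Actual outcomes and two expert forecasts, each biased and noisy.
actual = rng.normal(size=200)
f1 = 0.5 + 0.8 * actual + rng.normal(scale=0.4, size=200)
f2 = -0.3 + 0.6 * actual + rng.normal(scale=0.6, size=200)

# Granger-Ramanathan combination: regress the actuals on the forecasts
# with an intercept and unconstrained (unnormalized) weights.
X = np.column_stack([np.ones(200), f1, f2])
coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
intercept, w1, w2 = coef
combined = X @ coef

def mse(f):
    return np.mean((actual - f) ** 2)
```

Because ordinary least squares minimizes over all intercept/weight choices, including (0, 1, 0) and (0, 0, 1), the in-sample MSE of the combination can never exceed that of either individual forecast.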

10.
Forecasting new-product performance has been called ‘one of the most difficult and critical management tasks’. It has attracted considerable attention because of the magnitude of the resources devoted to product development and because of the sizeable risks involved in making the go–no-go decisions. In comparison with forecasting sales for established products, there is no sales history, or more generally, the company has no product specific experience related to consumer acceptance, trade support and competitive reactions. This article first presents a review of new product forecasting techniques with an emphasis given to the more recent developments in forecasting models. Then, forecasting procedures are assessed by discussing their benefits and their costs. The third part of the article discusses trends in new product forecasting.

11.
In this paper we compare the out-of-sample forecasts from four alternative interest rate models based on expanding information sets. The random walk model is the most restrictive. The univariate time series model allows for a richer dynamic pattern and more conditioning information on own rates. The multivariate time series model permits a flexible dynamic pattern with own- and cross-series information. Finally, the forecasts from the MPS econometric model depend on the full model structure and information set. In theory, more information is preferred to less. In practice, complicated misspecified models can perform much worse than simple (also probably misspecified) models. For forecasts evaluated over the volatile 1970s the multivariate time series model forecasts are considerably better than those from simpler models which use less conditioning information, as well as forecasts from the MPS model which uses substantially more conditioning information but also imposes ‘structural’ economic restrictions.
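The kind of out-of-sample horse race described above can be sketched as a rolling one-step comparison between the restrictive random walk and a univariate model re-fitted on an expanding window. The interest-rate series below is simulated, and the multivariate and MPS-style structural models are not represented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated interest-rate-like series: persistent AR(1) around a mean.
n, phi, mu = 300, 0.7, 5.0
r = np.empty(n)
r[0] = mu
for t in range(1, n):
    r[t] = mu + phi * (r[t - 1] - mu) + rng.normal(scale=0.2)

split = 200
rw_err, ar_err = [], []
for t in range(split, n):
    hist = r[:t]
    # Random walk: tomorrow's rate equals today's.
    rw_fc = hist[-1]
    # Univariate AR(1) fitted by least squares on the expanding history.
    b, a = np.polyfit(hist[:-1], hist[1:], 1)   # hist[1:] ~ a + b * hist[:-1]
    ar_fc = a + b * hist[-1]
    rw_err.append((r[t] - rw_fc) ** 2)
    ar_err.append((r[t] - ar_fc) ** 2)

mse_rw, mse_ar = np.mean(rw_err), np.mean(ar_err)
```

With a mean-reverting generating process the fitted AR(1) typically beats the random walk, but as the abstract notes, richer conditioning information is no guarantee: a misspecified model can lose to the naive benchmark.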

12.
This paper evaluates six optimal and four ad hoc recursive combination methods on five actual data sets. The performance of all methods is compared to the mean and recursive least squares. A modification to one method is proposed and evaluated. The recursive methods were found to be very effective from start-up on two of the data sets. Where the optimal methods worked well so did the ad hoc ones, suggesting that often combination methods allowing ‘local bias’ adjustment may be preferable to the mean forecast and comparable to the optimal methods.
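Recursive combination with a ‘local bias’ adjustment can be sketched with recursive least squares under exponential forgetting: discounting old observations lets the intercept absorb a drifting bias in a component forecast. The data, forgetting factor and initialization below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two forecasts of a series; the first carries a slowly drifting bias.
n = 300
actual = rng.normal(size=n)
bias = np.linspace(0.0, 1.5, n)
f1 = actual + bias + rng.normal(scale=0.3, size=n)
f2 = actual - 0.5 + rng.normal(scale=0.5, size=n)

def rls_combine(actual, forecasts, lam=0.95):
    """Recursive least squares combination with forgetting factor lam.
    lam < 1 discounts old data, so the intercept tracks 'local bias'."""
    k = forecasts.shape[1] + 1           # intercept + one weight per forecast
    theta = np.zeros(k)
    P = np.eye(k) * 1000.0               # diffuse initial uncertainty
    combined = np.empty(len(actual))
    for t in range(len(actual)):
        x = np.concatenate(([1.0], forecasts[t]))
        combined[t] = theta @ x          # combine before seeing actual[t]
        # Standard RLS update with exponential forgetting.
        Px = P @ x
        g = Px / (lam + x @ Px)
        theta = theta + g * (actual[t] - x @ theta)
        P = (P - np.outer(g, Px)) / lam
    return combined, theta

combined, theta = rls_combine(actual, np.column_stack([f1, f2]))
```

After a burn-in period the recursive combination should track the drifting bias and beat the biased component forecast by a wide margin.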

13.
System-based combination weights for series r/step-length h incorporate relative accuracy information from other forecast step-lengths for r and from other series for step-length h. Such weights are examined utilizing the West and Fullerton (1996) data set: 4275 ex ante employment forecasts from structural simultaneous equation econometric models for 19 metropolitan areas at 10 quarterly step-lengths and a parallel set of 4275 ARIMA forecasts. The system-based weights yielded combined forecasts of higher average accuracy and lower risk of large inaccuracy than seven alternative strategies: (1) averaging; (2) relative MSE weights; (3) outperformance (per cent best) weights; (4) Bates and Granger (1969) optimal weights with a convexity constraint imposed; (5) unconstrained optimal weights; (6) selecting a ‘best’ method (ex ante) by series; and (7) experimenting in the Bischoff (1989) sense and selecting either method (2) or (6) based on the outcome of the experiment. Accuracy gains of the system-based combination were concentrated at step-lengths two to five. Although alternative (5) was generally outperformed, none of the six other alternatives was systematically most accurate when evaluated relative to each other. This contrasts with Bischoff's (1989) results, which held promise for an empirically applicable guideline to determine whether or not to combine.
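Alternative (2) above, relative MSE weights, is the simplest of the listed strategies to sketch: each forecast is weighted in inverse proportion to its training-period mean squared error. The series below are simulated stand-ins for the econometric and ARIMA forecasts of one series at one step-length:

```python
import numpy as np

rng = np.random.default_rng(6)

# One series: an econometric forecast and a noisier ARIMA forecast.
actual = rng.normal(size=150)
econ = actual + rng.normal(scale=0.4, size=150)
arima = actual + rng.normal(scale=0.8, size=150)

# Relative MSE weights: estimated on a training window, applied out of
# sample. Weights are inversely proportional to each method's MSE.
train = slice(0, 100)
mse_e = np.mean((actual[train] - econ[train]) ** 2)
mse_a = np.mean((actual[train] - arima[train]) ** 2)
w_e = (1 / mse_e) / (1 / mse_e + 1 / mse_a)
w_a = 1 - w_e

combined = w_e * econ[100:] + w_a * arima[100:]
mse_comb = np.mean((actual[100:] - combined) ** 2)
```

The weights sum to one by construction, so this is a convex combination; the more accurate method receives the larger weight. The system-based weights studied in the paper refine this idea by pooling accuracy information across series and step-lengths.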

14.
In 1918, Henry de Dorlodot—priest, theologian, and professor of geology at the University of Louvain (Belgium)—published Le Darwinisme au point de vue de l'Orthodoxie Catholique (translated as Darwinism and Catholic Thought) in which he defended a reconciliation between evolutionary theory and Catholicism with his own particular kind of theistic evolutionism. He subsequently announced a second volume in which he would extend his conclusions to the origin of Man. Traditionalist circles in Rome reacted vehemently. Operating through the Pontifical Biblical Commission, they tried to force Dorlodot to withdraw his book and to publicly disown his ideas by threatening him with an official condemnation, a strategy that had been used against Catholic evolutionists since the late nineteenth century. The archival material on the ‘Dorlodot affair’ shows how this policy ‘worked’ in the early stages of the twentieth century but also how it would eventually reach the end of its logic. The growing popularity of theistic evolutionism among Catholic intellectuals, combined with Dorlodot's refusal to pull back amidst threats, made certain that the traditionalists did not get their way completely, and the affair ended in an uncomfortable status quo. Dorlodot did not receive the official condemnation that had been threatened, nor did he withdraw his theories, although he stopped short of publishing on the subject. With the decline of the traditionalists’ power and authority, the policy of denunciation towards evolutionists made way for a growing tolerance. The ‘Dorlodot affair’—which occurred in a pivotal era in the history of the Church—can be seen as exemplary with regard to the changing attitude of the Roman authorities towards evolutionism in the first half of the twentieth century.

15.
Category management—a relatively new function in marketing—involves large-scale, real-time forecasting of multiple data series in complex environments. In this paper, we illustrate how Bayesian Vector Autoregression (BVAR) fulfils the category manager's decision-support requirements by providing accurate forecasts of a category's state variables (prices, volumes and advertising levels), incorporating management interventions (merchandising events such as end-aisle displays), and revealing competitive dynamics through impulse response analyses. Using 124 weeks of point-of-sale scanner data comprising 31 variables for four brands, we compare the out-of-sample forecasts from BVAR to forecasts from exponential smoothing, univariate and multivariate Box-Jenkins transfer function analyses, and multivariate ARMA models. Theil U statistics indicate that BVAR forecasts are superior to those from alternate approaches. In large-scale forecasting applications, BVAR's ease of identification and parsimonious use of degrees of freedom are particularly valuable.
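The parsimony of BVAR comes from prior shrinkage: under a Minnesota-style normal prior, the posterior mean is a ridge-type estimator that pulls each equation toward a random walk (own lag one, cross lags zero). The sketch below uses a two-variable VAR(1) on simulated data; the prior tightness lam and all series are assumptions for illustration, not the paper's 31-variable scanner-data system:

```python
import numpy as np

rng = np.random.default_rng(7)

# Two weekly series following a VAR(1); 124 weeks, as in the study.
n = 124
A_true = np.array([[0.6, 0.2],
                   [0.1, 0.7]])
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.3, size=2)

# Minnesota-style shrinkage: normal prior centred on a random walk
# (coefficient matrix = identity). The posterior mean is ridge-like:
#   B = (X'X + lam*I)^{-1} (X'Y + lam*prior_mean)
X, Y = y[:-1], y[1:]
lam = 5.0                               # assumed prior tightness
prior_mean = np.eye(2)                  # random-walk prior on the VAR matrix
A_post = np.linalg.solve(X.T @ X + lam * np.eye(2),
                         X.T @ Y + lam * prior_mean).T

# One-step-ahead forecast for week n+1.
forecast = A_post @ y[-1]
```

As lam grows, A_post is pulled toward the identity (a pure random walk); as lam shrinks toward zero it approaches the unrestricted least-squares VAR, so the prior tightness controls the trade-off between flexibility and degrees of freedom.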

16.
Most economic forecast evaluations dating back 20 years show that professional forecasters add little to the forecasts generated by the simplest of models. Using various types of forecast error criteria, these evaluations usually conclude that the professional forecasts are little better than the no-change or ARIMA-type forecast. It is our contention that this conclusion is mistaken because the conventional error criteria may not capture why forecasts are made or how they are used. Using forecast directional accuracy, the criterion which has been found to be highly correlated with profits in an interest rate setting, we find that professional GNP forecasts dominate the cheaper alternatives. Moreover, there appears to be no systematic relationship between this preferred criterion and the error measures used in previous studies.
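Directional accuracy, the criterion advocated above, simply scores the share of periods in which a forecast gets the sign of the outcome right. A sketch on simulated growth rates; the ‘professional’ and naive forecasts below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulated GNP growth rates, a professional forecast, and a naive
# no-change forecast (repeat last period's growth).
growth = rng.normal(loc=0.5, scale=2.0, size=81)
actual = growth[1:]
naive = growth[:-1]
prof = actual + rng.normal(scale=1.0, size=80)   # informative but noisy

def directional_accuracy(forecast, actual):
    """Share of periods in which the forecast has the same sign
    (direction) as the realized outcome."""
    return np.mean(np.sign(forecast) == np.sign(actual))

da_prof = directional_accuracy(prof, actual)
da_naive = directional_accuracy(naive, actual)
```

A forecast can have a larger mean squared error than a naive alternative yet still dominate it on direction, which is why the two criteria can rank forecasters differently.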

17.
What happens when you take the idea of the biblical Adam—the first human—and apply it to insects? You create an origin story for Nature’s tiniest creatures, one that gives them ‘a Pedigree as ancient as the first creation’. So argued the naturalist Robert Hooke in his treatise, the Micrographia (1665). In what follows, I will retrace how Hooke endeavoured to show that insects—then widely believed to have arisen out of the dirt—were the products of an ancient lineage. These genealogies, while constructed from empirical observation, were conjectures of the imagination. Section 2 shows how Hooke introduced the concept of a ‘prime parent’ (an Adam-insect) to explain the anatomical similarities between ‘mites’. Section 3 demonstrates how Hooke defined the family of ‘gnats’ as tiny machines built from the same components and relates Hookean genealogies to contemporary ideas about Noah’s Ark. Section 4 shows how Hooke outlined the morphology of ‘insects’ (delineating what we now call arthropods). Section 5 explores how Hooke used fossils to study these animals in the distant past. In sum, Hooke was turning natural history—collecting and describing insects—into natural history: reconstructing their origins.

18.
Volatility forecasting remains an active area of research with no current consensus as to the model that provides the most accurate forecasts, though Hansen and Lunde (2005) have argued that in the context of daily exchange rate returns nothing can beat a GARCH(1,1) model. This paper extends that line of research by utilizing intra‐day data and obtaining daily volatility forecasts from a range of models based upon the higher‐frequency data. The volatility forecasts are appraised using four different measures of ‘true’ volatility and further evaluated using regression tests of predictive power, forecast encompassing and forecast combination. Our results show that the daily GARCH(1,1) model is largely inferior to all other models, whereas the intra‐day unadjusted‐data GARCH(1,1) model generally provides superior forecasts compared to all other models. Hence, while it appears that a daily GARCH(1,1) model can be beaten in obtaining accurate daily volatility forecasts, an intra‐day GARCH(1,1) model cannot be. Copyright © 2011 John Wiley & Sons, Ltd.
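The regression tests of predictive power mentioned above can be illustrated with a Mincer–Zarnowitz-style regression of realized volatility on the forecast: an unbiased, informative forecast has intercept near zero, slope near one, and a high R². A sketch on simulated data, assuming two forecasts that differ only in their noise level:

```python
import numpy as np

rng = np.random.default_rng(9)

# 'True' volatility proxy and two competing forecasts of it: one built
# from intra-day data (low noise), one from daily data (high noise).
true_vol = np.abs(rng.normal(scale=1.0, size=250)) + 0.5
fc_intraday = true_vol + rng.normal(scale=0.2, size=250)
fc_daily = true_vol + rng.normal(scale=0.6, size=250)

def mz_regression(true_vol, forecast):
    """Mincer-Zarnowitz test of predictive power: regress realized
    volatility on the forecast and report intercept, slope and R^2."""
    X = np.column_stack([np.ones(len(forecast)), forecast])
    coef, *_ = np.linalg.lstsq(X, true_vol, rcond=None)
    resid = true_vol - X @ coef
    r2 = 1 - resid @ resid / np.sum((true_vol - true_vol.mean()) ** 2)
    return coef[0], coef[1], r2

a_i, b_i, r2_i = mz_regression(true_vol, fc_intraday)
a_d, b_d, r2_d = mz_regression(true_vol, fc_daily)
```

The less noisy forecast yields the higher R² in the evaluation regression, which is the regression-based analogue of the paper's finding that intra-day-based forecasts dominate the daily GARCH(1,1).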

19.
Our paper challenges the conventional wisdom that the flat maximum inflicts the ‘curse of insensitivity’ on the modelling of judgement and decision processes. In particular, we argue that this widely demonstrated failure on the part of conventional statistical methods to differentiate between competing models has a useful role to play in the development of accessible and economical applied systems, since it allows a low cost choice between systems which vary in their cognitive demands on the user and in their ease of development and implementation. To illustrate our thesis, we take two recent applications of linear scoring models used for credit scoring and for the prediction of sudden infant death. The paper discusses the nature and determinants of the flat maximum as well as its role in applied cognition. Other sections mention certain unanswered questions about the development of linear scoring models and briefly describe competing formulations for prediction.

20.
The paper summarizes results of a mail survey of the use of formal forecasting techniques in British manufacturing companies. It appraises the state of awareness of particular techniques and the extent to which they are used in various functional applications. The extent to which the forecasts generated by the techniques influence company action is assessed; and the reasons for the non-use of particular techniques examined. The paper concludes that although an increasing number of companies appreciate the importance of forecasting, the methods used are predominantly naïve and few companies are taking steps to improve the situation through using alternative techniques or through computerizing established techniques.
