Similar literature
20 similar documents retrieved.
1.
Prior studies use a linear adaptive expectations model to describe how analysts revise their forecasts of future earnings in response to current forecast errors. However, research shows that extreme forecast errors are less likely than small forecast errors to persist in future years. If analysts recognize this property, their marginal forecast revisions should decrease with the forecast error's magnitude. Therefore, a linear model is likely to be unsatisfactory at describing analysts' forecast revisions. We find that a non-linear model better describes the relation between analysts' forecast revisions and their forecast errors, and provides a richer theoretical framework for explaining analysts' forecasting behaviour. Our results are consistent with analysts' recognizing the permanent and temporary nature of forecast errors of differing magnitudes. Copyright © 2000 John Wiley & Sons, Ltd.
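A minimal sketch of the comparison the abstract describes, on simulated rather than actual analyst data: a linear revision-on-error regression is fitted alongside a simple non-linear alternative in which the marginal revision flattens for extreme errors. The data-generating process and the cubic specification are illustrative assumptions, not the authors' model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical data: fat-tailed forecast errors and revisions that flatten out for extreme errors
e = rng.standard_t(df=3, size=500)                        # current-period forecast errors
rev = 0.6 * np.tanh(e) + 0.1 * rng.standard_normal(500)   # subsequent forecast revisions

# Linear adaptive-expectations model: revision = a + b * error
linear = sm.OLS(rev, sm.add_constant(e)).fit()

# Non-linear alternative: a cubic term lets the marginal revision decline with |error|
X = sm.add_constant(np.column_stack([e, e ** 3]))
nonlinear = sm.OLS(rev, X).fit()

print(linear.rsquared, nonlinear.rsquared)  # adding curvature improves the fit on these data
```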

2.
This paper shows how to extract the density of information shocks from revisions of the Bank of England's inflation density forecasts. An information shock is defined in this paper as a random variable that contains the set of information made available between two consecutive forecasting exercises and that has been incorporated into a revised forecast for a fixed point event. Studying the moments of these information shocks can be useful in understanding how the Bank has changed its assessment of risks surrounding inflation in the light of new information, and how it has modified its forecasts accordingly. The variance of the information shock is interpreted in this paper as a new measure of ex ante inflation uncertainty: the uncertainty that the Bank anticipates the information arriving in a particular quarter will pose for inflation. The paper also proposes a measure of information absorption, which indicates the approximate proportion of the information content of a revised forecast that is attributable to information made available since the last forecast release.
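A toy illustration of the idea under strong simplifying assumptions that are mine, not the paper's: both density forecasts are treated as Gaussian, and the revised forecast is assumed to be the convolution of the previous forecast with an independent information shock, so the shock's moments follow from differencing the forecasts' moments. The Bank's actual two-piece-normal densities, and the paper's absorption measure, require more than this.

```python
# Moments of two consecutive density forecasts for the SAME fixed-event inflation outcome,
# assumed Gaussian; the revised density is assumed to equal previous density (*) shock density,
# with the shock independent of the earlier forecast (illustrative assumptions only).
mean_prev, var_prev = 2.1, 0.40    # earlier forecasting round
mean_rev, var_rev = 2.4, 0.55      # later forecasting round, after new information arrived

shock_mean = mean_rev - mean_prev  # average news about inflation in the intervening quarter
shock_var = var_rev - var_prev     # ex ante uncertainty attributed to that quarter's information

# Crude stand-in for "information absorption": share of the revised forecast's variance
# accounted for by the latest information shock
absorption = shock_var / var_rev
print(shock_mean, shock_var, absorption)
```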

3.
People are reluctant to admit mistakes. This could also be true of economic forecasters. If revisions of past forecasts are costly, then it will become optimal for forecasters to only partially adjust a past forecast in the light of new information. The unwillingness to admit to the mistake in the old forecast generates a bias of the new forecast in the direction of the old forecast. We test this hypothesis for the joint predictions of the Association of German Economic Research Institutes over the last 35 years. We find some evidence for such a bias and compute the implied unwillingness to revise forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
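A sketch of one way to test for under-revision on simulated data (the simulated forecasts are hypothetical, and this regression is only in the spirit of the paper's test, not its exact specification): if forecasters shade new forecasts toward old ones, the new forecast's error becomes predictable from the revision itself, and the slope maps into an implied unwillingness-to-revise parameter.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 35  # roughly one joint forecast per year, as in the institutes' sample

# Hypothetical data: an efficient forecast, a partially adjusted published forecast, outcomes
efficient = rng.normal(2.0, 1.0, n)              # forecast using all current information
old = efficient + rng.normal(0, 1.0, n)          # earlier forecast based on less information
lam = 0.3                                        # true unwillingness to revise (to be recovered)
published = lam * old + (1 - lam) * efficient    # new forecast shaded toward the old one
outcome = efficient + rng.normal(0, 0.5, n)

# Under rationality the published forecast's error is unpredictable from the revision;
# a positive slope b indicates under-revision, with implied lambda = b / (1 + b)
error = outcome - published
revision = published - old
res = sm.OLS(error, sm.add_constant(revision)).fit()
b = res.params[1]
print(b, b / (1 + b))
```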

4.
A modeling approach to real-time forecasting that allows for data revisions is presented. In this approach, an observed time series is decomposed in real time into a stochastic trend, a data revision component, and observation noise. The stochastic trend is defined such that its first difference follows an AR model, and the data revision, observed only for the latest part of the time series, is also specified as an AR model. The proposed method is applicable to data sets with only one vintage. Empirical applications to real-time forecasting of quarterly time series of US real GDP and its eight components illustrate the usefulness of the proposed approach. Copyright © 2007 John Wiley & Sons, Ltd.

5.
Recently developed structural models of the global crude oil market imply that the surge in the real price of oil between mid-2003 and mid-2008 was driven by repeated positive shocks to the demand for all industrial commodities, reflecting unexpectedly high growth mainly in emerging Asia. We evaluate this proposition using an alternative data source and a different econometric methodology. Rather than inferring demand shocks from an econometric model, we utilize a direct measure of global demand shocks based on revisions of professional real gross domestic product (GDP) growth forecasts. We show that forecast surprises during 2003–2008 were associated primarily with unexpected growth in emerging economies (in conjunction with much smaller positive GDP-weighted forecast surprises in the major industrialized economies), that markets were repeatedly surprised by the strength of this growth, that these surprises were associated with a hump-shaped response of the real price of oil that reaches its peak after 12–16 months, and that news about global growth predicts much of the surge in the real price of oil from mid-2003 until mid-2008 and much of its subsequent decline. Copyright © 2012 John Wiley & Sons, Ltd.
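A sketch of the flavour of the exercise on simulated data, using a simple local-projection regression of cumulative oil-price changes on a growth-forecast-surprise series; the data, the lag structure, and the local-projection choice are illustrative assumptions rather than the authors' methodology.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 240
# Hypothetical monthly data: GDP-weighted growth-forecast surprises and the log real oil price,
# with surprises feeding into oil-price growth gradually over about a year
surprise = rng.standard_normal(T)
drift = 0.2 * pd.Series(surprise).rolling(12, min_periods=1).mean().to_numpy()
oil = np.cumsum(drift + 0.05 * rng.standard_normal(T))

# Local projections: cumulative log oil-price change h months after a one-unit surprise
irf = []
for h in range(1, 25):
    y = oil[h:] - oil[:-h]
    res = sm.OLS(y, sm.add_constant(surprise[:-h])).fit()
    irf.append(res.params[1])
print(np.argmax(irf) + 1)  # horizon of the largest estimated response in this toy data set
```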

6.
The main purpose of this study is to analyse the magnitude and the nature of the revisions that the time-varying seasonal filters of the X-11 and X-11-ARIMA methods introduce in the current seasonally adjusted series. The total revision is measured by the mean absolute difference of the transfer functions corresponding to the forecasting and the concurrent seasonal filters with respect to the central 'final' seasonal filter. To take into account the fact that the spectrum of a typical economic time series peaks at the low and seasonal frequencies, the revision measures are calculated for selected frequency intervals associated with the trend-cycle, seasonal variations and the irregular component.
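A sketch of the revision measure itself, with made-up filter weights standing in for the actual X-11 seasonal filters (monthly frequency assumed): compute each filter's transfer function, then average the absolute difference from the central filter's transfer function over selected frequency bands.

```python
import numpy as np

def transfer(weights, center, freqs):
    # Frequency response of the filter sum_k w_k y_{t-k}, with lags k = index - center
    lags = np.arange(len(weights)) - center
    return np.array([np.sum(weights * np.exp(-2j * np.pi * f * lags)) for f in freqs])

freqs = np.linspace(0.0, 0.5, 241)                  # cycles per month
central = np.ones(13) / 13                          # symmetric stand-in for the 'final' filter
concurrent = np.r_[np.zeros(6), np.ones(7) / 7]     # one-sided stand-in for the concurrent filter

H_final = transfer(central, 6, freqs)
H_concurrent = transfer(concurrent, 6, freqs)

# Mean absolute difference of the transfer functions over selected frequency intervals
bands = {"trend-cycle": (0.0, 0.06),
         "seasonal (fundamental)": (1 / 12 - 0.01, 1 / 12 + 0.01),
         "irregular": (0.35, 0.5)}
for name, (lo, hi) in bands.items():
    sel = (freqs >= lo) & (freqs <= hi)
    print(name, np.mean(np.abs(H_final[sel] - H_concurrent[sel])))
```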

7.
Extrapolative forecasting models have been available for many years and, since most organizations need to develop forecasts regularly, one might anticipate the widespread use of these models. The evidence in Australia indicates that computer-based forecasting systems are not being widely used and in fact a number of established systems have been discarded, with forecast accuracy often being mentioned as a problem area. Two experiments are carried out to examine this issue by comparing judgemental and quantitative forecasts. Other problem areas mentioned as contributing to the abandonment of forecasting systems include the difficulty of manually reviewing the computer forecasts and the effort required to carefully massage the forecast database to remove extraordinary events.

8.
The TFT-LCD (thin-film transistor–liquid crystal display) industry is one of the key global industries with products that have high clock speed. In this research, the LCD monitor market is considered for an empirical study on hierarchical forecasting (HF). The proposed HF methodology consists of five steps. First, the three hierarchical levels of the LCD monitor market are identified. Second, several exogenously driven factors that significantly affect the demand for LCD monitors are identified at each level of product hierarchy. Third, the three forecasting techniques—regression analysis, transfer function, and simultaneous equations model—are combined to forecast future demand at each hierarchical level. Fourth, various forecasting approaches and disaggregating proportion methods are adopted to obtain consistent demand forecasts at each hierarchical level. Finally, the forecast errors with different forecasting approaches are assessed in order to determine the best forecasting level and the best forecasting approach. The findings show that the best forecast results can be obtained by using the middle-out forecasting approach. These results could guide LCD manufacturers and brand owners on ways to forecast future market demands. Copyright 2008 John Wiley & Sons, Ltd.
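A small sketch of the middle-out step with hypothetical LCD-monitor data (the hierarchy, the numbers, and the proportional disaggregation rule are assumptions for illustration; the paper combines regression, transfer-function, and simultaneous-equations forecasts at each level): middle-level forecasts are summed up to the total and split down to the bottom level using historical proportions.

```python
import pandas as pd

# Hypothetical three-level hierarchy: total LCD-monitor demand -> panel size -> model
history = pd.DataFrame({("19in", "A"): [90, 95, 100, 105],
                        ("19in", "B"): [60, 62, 64, 66],
                        ("22in", "C"): [40, 44, 48, 52]}).T  # rows: (size, model); cols: periods
middle_forecast = pd.Series({"19in": 175.0, "22in": 55.0})   # forecasts made at the middle level

# Middle-out: aggregate the middle-level forecasts up to the total ...
total_forecast = middle_forecast.sum()

# ... and disaggregate them down to the bottom level using historical proportions
shares = history.sum(axis=1).groupby(level=0).transform(lambda s: s / s.sum())
bottom_forecast = shares * middle_forecast.reindex(shares.index, level=0)

print(total_forecast)
print(bottom_forecast)
```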

9.
In this study the interaction of forecasting method (econometric versus exponential smoothing) and two situational factors are evaluated for their effects upon accuracy. Data from two independent sets of ex ante quarterly forecasts for 19 classes of mail were used to test hypotheses. Counter to expectations, the findings revealed that forecasting method did not interact with the forecast time horizon (short versus long term). However, as hypothesized, forecasting method interacted significantly with product/market definition (First Class versus other mail), an indicator of buyer sensitivity to marketing/environmental changes. Results are discussed in the context of future research on forecast accuracy.

10.
Two types of forecasting methods have been receiving increasing attention from electric utility forecasters. The first type, called end-use forecasting, is recognized as an approach which is well suited for forecasting during periods characterized by technological change. The method is straightforward. The stock levels of energy-consuming equipment are forecast, as well as the energy consumption characteristics of the equipment. The final forecast is the product of the stock and usage characteristics. This approach is well suited to forecasting over long time periods, when technological change, equipment depletion and replacement, and other structural changes are evident. For time periods of shorter duration, these factors are static and variations are more likely to result from shocks to the environment. The shocks influence the usage of the equipment. A second forecasting approach using time-series analysis has been demonstrated to be superior for these applications. This paper discusses the integration of the two methods into a unified system. The result is a time-series model whose parameter effects become dynamic in character. An example of the models being used at the Georgia Power Company is presented. It is demonstrated that a time-series model which incorporates end-use stock and usage information is superior—even in short-term forecasting situations—to a similar time-series model which excludes the information.
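A compact sketch of the integration idea with invented data (the end uses, numbers, and the SARIMAX specification are illustrative assumptions; the Georgia Power models are not reproduced): an end-use index is built as stock times unit usage and then fed into a time-series model as additional information.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
T = 120
# Hypothetical end-use information: equipment stock forecasts times unit energy consumption
stock = pd.DataFrame({"cooling": np.linspace(1.0, 1.6, T), "heating": np.linspace(1.0, 1.2, T)})
usage = pd.Series({"cooling": 3.5, "heating": 2.0})    # energy per unit of stock
end_use_index = stock.mul(usage, axis=1).sum(axis=1)   # end-use forecast of consumption

# Observed sales fluctuate around the end-use baseline because of short-run shocks
sales = end_use_index * (1 + 0.05 * rng.standard_normal(T))

# Time-series model that incorporates the end-use stock/usage information as a regressor
fit = sm.tsa.statespace.SARIMAX(sales, exog=end_use_index,
                                order=(1, 0, 0), trend="c").fit(disp=False)
print(fit.params)
```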

11.
This article introduces a novel framework for analysing long-horizon forecasting of the near non-stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long-horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity 'drift' to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these 'forecasting thresholds', while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
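A Monte Carlo sketch of the OLS-versus-random-walk comparison in the local-to-unity setting (the sample size, drift c, and horizon are arbitrary choices, and the paper derives its thresholds analytically rather than by simulation).

```python
import numpy as np

rng = np.random.default_rng(5)
T, h, reps = 100, 20, 2000
c = 5.0                      # local-to-unity drift: rho = 1 - c/T
rho = 1 - c / T

mse_ols = mse_rw = 0.0
for _ in range(reps):
    e = rng.standard_normal(T + h)
    y = np.zeros(T + h)
    for t in range(1, T + h):
        y[t] = rho * y[t - 1] + e[t]
    num = y[:T - 1] @ y[1:T]        # OLS slope without intercept, for simplicity
    den = y[:T - 1] @ y[:T - 1]
    rho_hat = num / den
    mse_ols += (y[T - 1 + h] - rho_hat ** h * y[T - 1]) ** 2
    mse_rw += (y[T - 1 + h] - y[T - 1]) ** 2

# Which forecast wins depends on c and on the horizon h (the paper's forecasting thresholds)
print(mse_ols / reps, mse_rw / reps)
```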

12.
In this paper we suggest a framework to assess the degree of reliability of provisional estimates as forecasts of final data, and we re-examine the question of the most appropriate way in which available data should be used for ex ante forecasting in the presence of a data-revision process. Various desirable properties for provisional data are suggested, as well as procedures for testing them, taking into account the possible non-stationarity of economic variables. For illustration, the methodology is applied to assess the quality of the US M1 data production process and to derive a conditional model whose performance in forecasting is then tested against other alternatives based on simple transformations of provisional data or of past final data. Copyright © 1999 John Wiley & Sons, Ltd.
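A toy illustration of one "desirable property" test on simulated growth rates: that provisional estimates are unbiased, efficient forecasts of final data. The data are invented, and the paper's framework also deals with possible non-stationarity (e.g., cointegration between provisional and final levels), which this stationary example sidesteps.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 160
# Hypothetical provisional and final (fully revised) money-stock growth rates
final = rng.normal(0.5, 1.0, T)
provisional = final + rng.normal(0, 0.3, T)   # here the revision is pure "noise"

# One desirable property: provisional data are unbiased, efficient forecasts of final data,
# i.e. a = 0 and b = 1 in  final_t = a + b * provisional_t + u_t
res = sm.OLS(final, sm.add_constant(provisional)).fit()
print(res.params)
print(res.f_test("const = 0, x1 = 1"))
```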

13.
The forecasting of prices for electricity balancing reserve power can substantially improve the trading positions of market participants in competitive auctions. Having identified a lack of literature on forecasting balancing reserve prices, we deploy approaches originating from econometrics and artificial intelligence and set up a forecasting framework based on autoregressive and exogenous factors. We use SARIMAX models as well as neural networks with different structures and produce rolling one-step forecasts with re-estimation of the models. It turns out that the naive forecast performs reasonably well but is outperformed by the more advanced models. In addition, the neural network approaches outperform the econometric approach in terms of forecast quality, whereas for the further use of the generated models the econometric approach has advantages in terms of explaining price drivers. For the present application, more advanced configurations of the neural networks are not able to further improve the forecasting performance.
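A minimal sketch of the rolling one-step scheme with re-estimation, using an invented price series and a single exogenous driver (the data, orders, and the "load" regressor are assumptions; the neural network counterpart is not shown):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
# Hypothetical daily reserve-price series with weekly seasonality and one exogenous driver
load = rng.standard_normal(n)
price = 50 + 10 * np.sin(2 * np.pi * np.arange(n) / 7) + 3 * load + rng.normal(0, 2, n)
y, X = pd.Series(price), pd.DataFrame({"load": load})

# Rolling one-step-ahead forecasts with re-estimation of the model at every origin
start, preds = 250, []
for t in range(start, n):
    fit = sm.tsa.statespace.SARIMAX(y.iloc[:t], exog=X.iloc[:t], order=(1, 0, 0),
                                    seasonal_order=(1, 0, 0, 7)).fit(disp=False)
    preds.append(fit.forecast(steps=1, exog=X.iloc[[t]]).iloc[0])

naive = y.shift(1).iloc[start:]   # naive benchmark: yesterday's price
print(np.mean(np.abs(y.iloc[start:] - np.array(preds))),
      np.mean(np.abs(y.iloc[start:] - naive)))
```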

14.
A family of finite end filters is constructed using a minimum revisions criterion and based on a local dynamic model operating within the span of a given finite central filter. These end filters are equivalent to evaluating the central filter with unavailable future observations replaced by constrained optimal linear predictions. Two prediction methods are considered: best linear unbiased prediction and best linear biased prediction where the bias is time invariant. The properties of these end filters are determined. In particular, they are compared to X-11 end filters and to the case where the central filter is evaluated with unavailable future observations predicted by global ARIMA models as in X-11-ARIMA or X-12-ARIMA. Copyright © 2002 John Wiley & Sons, Ltd.
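A sketch of the forecast-extension view that the abstract compares against (the X-11-ARIMA-style case): the central filter is applied at the end of the series after the unavailable future observations are replaced by model-based forecasts. The filter weights and the AR forecasting model are toy stand-ins, and the paper's own end filters come instead from a minimum-revisions criterion with constrained optimal linear predictions.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(8)
y = np.cumsum(rng.standard_normal(200))   # hypothetical trend-dominated series

w = np.ones(13) / 13                      # a simple symmetric central filter (half-length 6)
half = 6

# Concurrent estimate at the last time point: extend the series with model-based forecasts,
# then apply the central filter (this defines an implicit asymmetric end filter)
fc = AutoReg(y, lags=2, trend="c").fit().forecast(steps=half)
y_ext = np.concatenate([y, fc])
concurrent = w @ y_ext[len(y) - 1 - half: len(y) - 1 + half + 1]

print(concurrent, y[-1])
```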

15.
A forecasting model based on high-frequency market makers' quotes of financial instruments is presented. The statistical behaviour of these time series leads to a discussion of the appropriate time scale for forecasting. We introduce variable time scales in a general way and define the new concept of intrinsic time, which better reflects actual trading activity. Changing the time scale means forecasting in two steps: first an intrinsic time forecast against physical time, then a price forecast against intrinsic time. The forecasting model consists, for both steps, of a linear combination of non-linear price-based indicators. The indicator weights are continuously re-optimized through a modified linear regression on a moving sample of past prices. The out-of-sample performance of this algorithm is reported for a set of important FX rates and interest rates over many years, and it is remarkably consistent. Results for short horizons as well as techniques to measure this performance are discussed.
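A sketch of only the re-optimization step, in physical rather than intrinsic time, on a simulated price path (the indicators, window, and horizon are invented, and the intrinsic-time change of scale is not implemented): indicator weights are refitted by least squares on a moving sample of past prices and then combined into a forecast.

```python
import numpy as np

rng = np.random.default_rng(9)
p = np.cumsum(0.01 * rng.standard_normal(3000))   # hypothetical high-frequency log mid-prices

def indicators(p, t):
    # Simple non-linear, price-based indicators: squashed momentum over two look-back ranges
    return np.array([1.0,
                     np.tanh(50 * (p[t] - p[t - 20])),
                     np.tanh(50 * (p[t] - p[t - 100]))])

window, horizon = 500, 10
preds, actual = [], []
for t in range(700, 2900, horizon):
    # Re-optimize the indicator weights by least squares on a moving sample of past prices
    train = range(t - window, t - horizon)
    X = np.array([indicators(p, s) for s in train])
    yv = np.array([p[s + horizon] - p[s] for s in train])
    w, *_ = np.linalg.lstsq(X, yv, rcond=None)
    preds.append(indicators(p, t) @ w)
    actual.append(p[t + horizon] - p[t])

# Out-of-sample directional accuracy (here prices are a pure random walk, so expect about 0.5)
print(np.mean(np.sign(preds) == np.sign(actual)))
```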

16.
In this paper an investigation is made of the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to calculate aggregate forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one-step-ahead forecasts are biased. This is likely to be the case in practice if frequently employed forecasting methods are used to generate a large number of individual forecasts. In the paper, examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy to supplement the information yielded by the accuracy measures. This supplement includes relevant boxplots of measures of individual forecasting success. The tool is simple but helpful, as the graphical display has the potential to indicate forecast deterioration that can be masked by one or both of the aggregate metrics. The procedures are illustrated with data representing sales of food items. Copyright © 2005 John Wiley & Sons, Ltd.
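A small sketch of the masking problem and the boxplot supplement, on fabricated sales data (the aggregate metric definitions below are generic stand-ins, not necessarily the exact measures studied):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(10)
items, periods = 30, 12
actual = rng.uniform(50, 150, size=(items, periods))
# Hypothetical one-step-ahead forecasts: half the items biased upward, half downward
item_level_bias = np.where(np.arange(items) < 15, 1.10, 0.90)
forecast = actual * item_level_bias[:, None] + rng.normal(0, 5, size=(items, periods))
err = forecast - actual

# Family-level aggregate metrics: offsetting item biases make the family look nearly unbiased
agg_bias = err.sum() / actual.sum()
agg_accuracy = np.abs(err).sum() / actual.sum()
print(agg_bias, agg_accuracy)

# Item-level relative bias, shown as a boxplot to reveal what the aggregate metrics mask
item_bias = err.sum(axis=1) / actual.sum(axis=1)
plt.boxplot(item_bias)
plt.savefig("family_bias_boxplot.png")
```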

17.
It is well understood that the standard formulation for the variance of a regression-model forecast produces interval estimates that are too narrow, principally because it ignores regressor forecast error. While the theoretical problem has been addressed, there has not been an adequate explanation of the effect of regressor forecast error, and the empirical literature has supplied a disparate variety of bits and pieces of evidence. Most business-forecasting software programs continue to supply only the standard formulation. This paper extends existing analysis to derive and evaluate large-sample approximations for the forecast error variance in a single-equation regression model. We show how these approximations substantially clarify the expected effects of regressor forecast error. We then present a case study, which (a) demonstrates how rolling out-of-sample evaluations can be applied to obtain empirical estimates of the forecast error variance, (b) shows that these estimates are consistent with our large-sample approximations and (c) illustrates, for 'typical' data, how seriously the standard formulation can understate the forecast error variance. Copyright © 2000 John Wiley & Sons, Ltd.
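A simulated sketch of the rolling out-of-sample idea and of why the standard formula is too optimistic (the data-generating process and the AR(1) model for the regressor are my assumptions, not the paper's case study): when the regressor must itself be forecast, the realized forecast-error variance exceeds the in-sample residual variance that the standard formulation reports.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.8 * x[t - 1] + rng.standard_normal()   # regressor that must itself be forecast
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, T)

errs = []
for t in range(120, T - 1):
    reg = sm.OLS(y[:t + 1], sm.add_constant(x[:t + 1])).fit()   # y on x, data through t
    ar = sm.OLS(x[1:t + 1], sm.add_constant(x[:t])).fit()       # AR(1) for the regressor
    x_hat = ar.params[0] + ar.params[1] * x[t]                  # forecast x_{t+1}
    y_hat = reg.params[0] + reg.params[1] * x_hat               # plug-in forecast of y_{t+1}
    errs.append(y[t + 1] - y_hat)

# Empirical (rolling out-of-sample) forecast-error variance vs. the standard in-sample
# residual variance, which ignores the error in forecasting the regressor
print(np.var(errs), sm.OLS(y, sm.add_constant(x)).fit().mse_resid)
```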

18.
Time-series data are often contaminated with outliers due to the influence of unusual and non-repetitive events. Forecast accuracy in such situations is reduced due to (1) a carry-over effect of the outlier on the point forecast and (2) a bias in the estimates of model parameters. Hillmer (1984) and Ledolter (1989) studied the effect of additive outliers on forecasts. It was found that forecast intervals are quite sensitive to additive outliers, but that point forecasts are largely unaffected unless the outlier occurs near the forecast origin. In such a situation the carry-over effect of the outlier can be quite substantial. In this study, we investigate the issues of forecasting when outliers occur near or at the forecast origin. We propose a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage. One aspect of this study concerns the carry-over effects of outliers on forecasts. Four types of outliers are considered: innovational outlier, additive outlier, temporary change, and level shift. The effects due to a misidentification of an outlier type are examined. The performance of the outlier detection procedure is studied for cases where outliers are near the end of the series. In such cases, we demonstrate that statistical procedures may not be able to effectively determine the outlier types due to insufficient information. Some strategies are recommended to reduce potential difficulties caused by incorrectly detected outlier types. These findings may serve as a justification for forecasting in conjunction with judgment. Two real examples are employed to illustrate the issues discussed.
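A crude sketch of the lower-critical-value idea near the forecast origin, on a simulated AR(1) series with an additive outlier at the last observation. The Chen and Liu (1993) procedure classifies innovational, additive, temporary-change and level-shift outliers with dedicated test statistics; this sketch only flags large standardized residuals with a relaxed threshold at the end of the series and applies a naive additive-outlier adjustment before forecasting.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(12)
n = 120
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()
y[-1] += 6.0                                   # additive outlier right at the forecast origin

fit = ARIMA(y, order=(1, 0, 0)).fit()
z = fit.resid / np.std(fit.resid)              # crude standardized one-step residuals

# Lower critical value near the forecast origin (e.g., 2.5) than elsewhere (e.g., 3.5)
c_normal, c_end, end_window = 3.5, 2.5, 4
thresholds = np.where(np.arange(n) >= n - end_window, c_end, c_normal)
flags = np.abs(z) > thresholds
print(np.where(flags)[0])

# Naive additive-outlier adjustment of flagged points before forecasting
y_adj = y.copy()
y_adj[flags] = fit.fittedvalues[flags]
print(ARIMA(y_adj, order=(1, 0, 0)).fit().forecast(3))
```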

19.
In this paper we investigate the impact of data revisions on forecasting and model selection procedures. A linear ARMA model and nonlinear SETAR model are considered in this study. Two Canadian macroeconomic time series have been analyzed: the real-time monetary aggregate M3 (1977–2000) and residential mortgage credit (1975–1998). The forecasting method we use is multi-step-ahead non-adaptive forecasting. Copyright © 2008 John Wiley & Sons, Ltd.
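A brief sketch of multi-step-ahead non-adaptive forecasting on a real-time versus a final vintage, using simulated data and a simple ARIMA in place of the paper's ARMA and SETAR specifications (all numbers and the revision pattern are invented): the model is estimated once and all h steps are forecast without updating.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(13)
T, h = 160, 12
final = np.cumsum(rng.normal(0.3, 1.0, T))            # hypothetical fully revised series
vintage = final + rng.normal(0, 0.3, T)               # real-time vintage with revisions
vintage[-36:] = final[-36:] + rng.normal(0, 0.8, 36)  # recent observations revised more heavily

# Multi-step-ahead non-adaptive forecasting: estimate once on data through T-h,
# then forecast h steps without updating the parameters or the forecast origin
cut = T - h
f_real_time = ARIMA(vintage[:cut], order=(1, 1, 1)).fit().forecast(h)
f_final = ARIMA(final[:cut], order=(1, 1, 1)).fit().forecast(h)

# Evaluate both against the final data, as is done for M3 and residential mortgage credit
print(np.mean(np.abs(final[cut:] - f_real_time)), np.mean(np.abs(final[cut:] - f_final)))
```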

20.
Most long memory forecasting studies assume that long memory is generated by the fractional difference operator. We argue that the most cited theoretical arguments for the presence of long memory do not imply the fractional difference operator and assess the performance of the autoregressive fractionally integrated moving average (ARFIMA) model when forecasting series with long memory generated by nonfractional models. We find that ARFIMA models dominate in forecast performance regardless of the long memory generating mechanism and forecast horizon. Nonetheless, forecasting uncertainty at the shortest forecast horizon could make short memory models provide suitable forecast performance, particularly for smaller degrees of memory. Additionally, we analyze the forecasting performance of the heterogeneous autoregressive (HAR) model, which imposes restrictions on high-order AR models. We find that the structure imposed by the HAR model produces better short and medium horizon forecasts than unconstrained AR models of the same order. Our results have implications for, among others, climate econometrics and financial econometrics models dealing with long memory series at different forecast horizons.  相似文献   
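Since the HAR model is central to the comparison, here is a minimal sketch of its structure on simulated data (the persistence mixture and the sample size are arbitrary assumptions): the next value is regressed on the current value and on weekly and monthly averages, which is a heavily restricted high-order AR.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(14)
T = 1500
# Hypothetical persistent series built from components with very different decay rates
x = sum(a * pd.Series(rng.standard_normal(T)).ewm(alpha=al).mean()
        for a, al in [(1.0, 0.5), (0.7, 0.05), (0.5, 0.005)])

# HAR regressors: the current value plus weekly (5-period) and monthly (22-period) averages,
# i.e. a restricted AR(22) with only three free slope parameters
df = pd.DataFrame({"d": x, "w": x.rolling(5).mean(), "m": x.rolling(22).mean(),
                   "target": x.shift(-1)}).dropna()
har = sm.OLS(df["target"], sm.add_constant(df[["d", "w", "m"]])).fit()
print(har.params)

# One-step-ahead HAR forecast from the end of the sample
new = pd.DataFrame({"const": [1.0], "d": [x.iloc[-1]],
                    "w": [x.iloc[-5:].mean()], "m": [x.iloc[-22:].mean()]})
print(har.predict(new))
```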
