Similar Documents
20 similar documents found.
1.
The track record of a 20‐year history of density forecasts of state tax revenue in Iowa is studied, and potential improvements sought through a search for better‐performing ‘priors’ similar to that conducted three decades ago for point forecasts by Doan, Litterman and Sims (Econometric Reviews, 1984). Comparisons of the point and density forecasts produced under the flat prior are made to those produced by the traditional (mixed estimation) ‘Bayesian VAR’ methods of Doan, Litterman and Sims, as well as to fully Bayesian ‘Minnesota Prior’ forecasts. The actual record and, to a somewhat lesser extent, the record of the alternative procedures studied in pseudo‐real‐time forecasting experiments, share a characteristic: subsequently realized revenues are in the lower tails of the predicted distributions ‘too often’. An alternative empirically based prior is found by working directly on the probability distribution for the vector autoregression parameters—the goal being to discover a better‐performing entropically tilted prior that minimizes out‐of‐sample mean squared error subject to a Kullback–Leibler divergence constraint that the new prior not differ ‘too much’ from the original. We also study the closely related topic of robust prediction appropriate for situations of ambiguity. Robust ‘priors’ are competitive in out‐of‐sample forecasting; despite the freedom afforded the entropically tilted prior, it does not perform better than the simple alternatives. Copyright © 2014 John Wiley & Sons, Ltd.
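Entropic tilting of this kind can be illustrated with a minimal sketch: given an equally weighted sample of forecast draws, find the exponentially tilted weights that move the implied mean to a target value. Among all weight vectors satisfying the moment constraint, the exponential-tilt form minimizes Kullback–Leibler divergence from the uniform weights. The draws, target, and bisection bounds below are illustrative assumptions, not the paper's actual prior search.

```python
import numpy as np

def entropic_tilt(draws, target_mean, tol=1e-10):
    """Reweight equally weighted draws so the weighted mean hits target_mean.

    The tilted weights w_i proportional to exp(gamma * x_i) minimize the
    Kullback-Leibler divergence from uniform weights subject to the mean
    constraint; gamma is found by bisection (the tilted mean is increasing
    in gamma).
    """
    x = np.asarray(draws, dtype=float)

    def tilted_weights(gamma):
        w = np.exp(gamma * (x - x.mean()))   # centre for numerical stability
        return w / w.sum()

    lo, hi = -50.0, 50.0                     # assumed bracketing interval
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tilted_weights(mid) @ x < target_mean:
            lo = mid
        else:
            hi = mid
    return tilted_weights(0.5 * (lo + hi))

rng = np.random.default_rng(0)
draws = rng.normal(0.0, 1.0, 5000)
w = entropic_tilt(draws, target_mean=0.3)    # shift the implied mean to 0.3
```

The same machinery extends to vector-valued moment conditions, at the cost of replacing bisection with a multivariate solver.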

2.
In the light of the still topical nature of ‘bananas and petrol’ being blamed for driving much of the inflationary pressures in Australia in recent times, the ‘headline’ and ‘underlying’ rates of inflation are scrutinised in terms of forecasting accuracy. A general structural time‐series modelling strategy is applied to estimate models for alternative types of Consumer Price Index (CPI) measures. From this, out‐of‐sample forecasts are generated from the various models. The underlying forecasts are subsequently adjusted to facilitate comparison. The Ashley, Granger and Schmalensee (1980) test is then performed to determine whether there is a statistically significant difference between the root mean square errors of the models. The results lend weight to the recent findings of Song (2005) that forecasting models using underlying rates are not systematically inferior to those based on the headline rate. In fact, strong evidence is found that underlying measures produce superior forecasts. Copyright © 2009 John Wiley & Sons, Ltd.

3.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information; even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term time horizons for series with no moving average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
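The rolling-window mechanics can be sketched in a few lines. The paper compares ARIMA against error-correction models; the stand-in below uses only a simple AR(1) fitted by OLS in each window and iterated forward, purely to show the rolling scheme — window length, horizon, and the simulated series are illustrative assumptions.

```python
import numpy as np

def rolling_ar1_forecasts(y, window, horizon):
    """Rolling-window AR(1) forecasts, one to `horizon` steps ahead.

    For each window, fit y_t = c + phi * y_{t-1} by OLS, then iterate the
    fitted equation forward.  Returns shape (n_windows, horizon); row i
    holds the 1..h step forecasts made at the end of window i.
    """
    y = np.asarray(y, dtype=float)
    out = []
    for end in range(window, len(y) + 1):
        seg = y[end - window:end]
        X = np.column_stack([np.ones(window - 1), seg[:-1]])
        c, phi = np.linalg.lstsq(X, seg[1:], rcond=None)[0]
        f, path = seg[-1], []
        for _ in range(horizon):
            f = c + phi * f                  # iterate the fitted recursion
            path.append(f)
        out.append(path)
    return np.array(out)

rng = np.random.default_rng(1)
e = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):                      # simulate a persistent AR(1)
    y[t] = 0.9 * y[t - 1] + e[t]
fc = rolling_ar1_forecasts(y, window=100, horizon=40)
```

Comparing forecast errors step-by-step across models and horizons, as above, is what allows the 'sequential' versus 'synchronous' accuracy distinction to be examined at all.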

4.
Initial applications of prediction markets (PMs) indicate that they provide good forecasting instruments in many settings, such as elections, the box office, or product sales. One particular characteristic of these ‘first‐generation’ (G1) PMs is that they link the payoff value of a stock's share to the outcome of an event. Recently, ‘second‐generation’ (G2) PMs have introduced alternative mechanisms to determine payoff values which allow them to be used as preference markets for determining preferences for product concepts or as idea markets for generating and evaluating new product ideas. Three different G2 payoff mechanisms appear in the existing literature, but they have never been compared. This study conceptually and empirically compares the forecasting accuracy of the three G2 payoff mechanisms and investigates their influence on participants' trading behavior. We find that G2 payoff mechanisms perform almost as well as their G1 counterpart, and trading behavior is very similar in both markets (i.e. trading prices and trading volume), except during the very last trading hours of the market. These results indicate that G2 PMs are valid instruments and support their applicability shown in previous studies for developing new product ideas or evaluating new product concepts. Copyright © 2011 John Wiley & Sons, Ltd.

5.
We introduce a new strategy for the prediction of linear temporal aggregates; we call it ‘hybrid’ and study its performance using asymptotic theory. This scheme consists of carrying out model parameter estimation with data sampled at the highest available frequency and the subsequent prediction with data and models aggregated according to the forecasting horizon of interest. We develop explicit expressions that approximately quantify the mean square forecasting errors associated with the different prediction schemes and that take into account the estimation error component. These approximate estimates indicate that the hybrid forecasting scheme tends to outperform the so‐called ‘all‐aggregated’ approach and, in some instances, the ‘all‐disaggregated’ strategy that is known to be optimal when model selection and estimation errors are neglected. Unlike other related approximate formulas existing in the literature, those proposed in this paper are totally explicit and require neither assumptions on the second‐order stationarity of the sample nor Monte Carlo simulations for their evaluation. Copyright © 2014 John Wiley & Sons, Ltd.

6.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Of particular importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on ‘trending’ procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by ‘detrending’ the trended forecasts, and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy in Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.
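The Henderson moving average at the centre of this comparison has a closed-form weight formula, so the filter itself is easy to reproduce. The sketch below generates the symmetric weights for any odd length (the 13-term case is the one most used in official seasonal adjustment) and applies them to the interior of a series; it omits the asymmetric end-point filters that production systems add.

```python
import numpy as np

def henderson_weights(length):
    """Symmetric Henderson moving-average weights for an odd filter length.

    Uses the closed-form expression with n = p + 2, length = 2p + 1, which
    reproduces the standard 9-, 13- and 23-term Henderson filters.  The
    filter passes cubic polynomials without distortion.
    """
    p = (length - 1) // 2
    n = p + 2
    j = np.arange(-p, p + 1)
    num = 315 * ((n - 1) ** 2 - j ** 2) * (n ** 2 - j ** 2) \
          * ((n + 1) ** 2 - j ** 2) * (3 * n ** 2 - 16 - 11 * j ** 2)
    den = 8 * n * (n ** 2 - 1) * (4 * n ** 2 - 1) \
          * (4 * n ** 2 - 9) * (4 * n ** 2 - 25)
    return num / den

def henderson_trend(y, length=13):
    """Apply the symmetric filter to the interior of a series
    (end points would need the asymmetric filters, omitted here)."""
    w = henderson_weights(length)
    return np.convolve(np.asarray(y, float), w[::-1], mode="valid")

w13 = henderson_weights(13)
```

Because the filter reproduces cubics exactly, any forecasting distortion it introduces must come from what it does to the non-polynomial (irregular and cyclical) components — which is precisely what the study measures.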

7.
From the editors     
Political forecasting provides the contextuality needed for decision-making and for forecasting ‘non-political’ trends. To gear political forecasting to these needs, rather than mimicking approaches in other areas, requires recognition of the distinctive nature of political trends, and realism regarding forecast uses, which generally do not benefit from ‘precise’ probabilities, predictions of only major events, or ‘sophisticated’ methodology that sacrifices comprehensiveness for explicitness. Approaches borrowed from other forecasting disciplines have been counterproductive, although contextual approaches, including cross-impact analyses and developmental constructs that integrate political and non-political trends, are promising. Explorations of the consistency of scenario dynamics, taking into account policy responses and non-formalizable complexity, are also useful. Thus the separation of political forecasting from political analysis should be minimized, calling for a redirection of effort away from developing methodology uniquely geared to forecasting, and towards organizing more comprehensive and systematic analytical efforts.

8.
Hierarchical time series arise in various fields such as manufacturing and services when the products or services can be hierarchically structured. "Top-down" and "bottom-up" forecasting approaches are often used for forecasting such hierarchical time series. In this paper, we develop a new hybrid approach (HA) with step-size aggregation for hierarchical time series forecasting. The new approach is a weighted average of the two classical approaches with the weights being optimally chosen for all the series at each level of the hierarchy to minimize the variance of the forecast errors. Because the weights are selected independently for all the series at each level, the HA need not aggregate consistently across the hierarchy. To address this issue, we introduce a step-size aggregate factor that represents the relationship between forecasts of the two consecutive levels of the hierarchy. The key advantage of the proposed HA is that it captures the structure of the hierarchy inherently due to the combination of the hierarchical approaches instead of independent forecasts of all the series at each level of the hierarchy. We demonstrate the performance of the new approach by applying it to the monthly data of the ‘Industrial’ category of the M3-Competition as well as to Pakistan energy consumption data.
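The core variance-minimizing combination can be sketched for a single series: given past top-down and bottom-up forecast errors, the optimal weight on one approach has the classical Bates–Granger form. This is a simplification of the paper's HA — it ignores the step-size aggregate factor and cross-level consistency — and the error samples are synthetic assumptions.

```python
import numpy as np

def optimal_combination_weight(err_a, err_b):
    """Weight w on forecast A minimizing the variance of the combined
    error w*e_a + (1-w)*e_b (classical Bates-Granger combination)."""
    C = np.cov(err_a, err_b)                 # 2x2 sample covariance matrix
    return (C[1, 1] - C[0, 1]) / (C[0, 0] + C[1, 1] - 2 * C[0, 1])

# Hypothetical past forecast errors for one series at one hierarchy level.
rng = np.random.default_rng(2)
err_td = rng.normal(0, 2.0, 200)             # top-down errors: noisier
err_bu = rng.normal(0, 1.0, 200)             # bottom-up errors: tighter
w = optimal_combination_weight(err_td, err_bu)
combined = w * err_td + (1 - w) * err_bu
```

By construction the combined error variance is no larger than that of either input, which is what makes the weighted hybrid attractive before the consistency correction is layered on.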

9.
‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large scale comparison of time series forecasting methods including the Bayesian. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts Bayesian with other time series methods.
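The model-weighting step can be sketched for a single observation: each candidate model supplies a one-step predictive mean and standard deviation, Bayes' theorem converts prior model probabilities into posteriors, and the final forecast is the posterior-weighted sum. This is a one-step illustration of the multi-process idea only, not the full Harrison–Stevens state-space machinery, and all numbers are hypothetical.

```python
import numpy as np

def posterior_model_weights(y_obs, forecasts, sds, priors):
    """Posterior probabilities of candidate models after observing y_obs.

    Likelihood of each model is the normal predictive density of y_obs
    under that model; Bayes' theorem then updates the prior model
    probabilities.
    """
    forecasts, sds, priors = map(np.asarray, (forecasts, sds, priors))
    lik = np.exp(-0.5 * ((y_obs - forecasts) / sds) ** 2) / sds
    post = priors * lik
    return post / post.sum()

# Two hypothetical models: "steady" (tight forecast) vs "outlier-prone".
post = posterior_model_weights(
    y_obs=10.4,
    forecasts=[10.0, 10.0],
    sds=[0.5, 5.0],
    priors=[0.9, 0.1],
)
next_forecast = post @ np.array([10.2, 10.0])  # each model's next-step mean
```

An observation well inside the tight model's predictive spread reinforces that model; a wild observation would instead shift weight to the outlier-prone model, which is how the method accommodates structural change.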

10.
The purpose of this paper is to apply the Box–Jenkins methodology to ARIMA models and determine the reasons why in empirical tests the post-sample forecasting accuracy of such models is generally worse than that of much simpler time series methods. The paper concludes that the major problem is the way of making the series stationary in its mean (i.e. the method of differencing) that has been proposed by Box and Jenkins. If alternative approaches are utilized to remove and extrapolate the trend in the data, ARMA models outperform the models selected through Box–Jenkins methodology. In addition, it is shown that applying ARMA models to seasonally adjusted data slightly improves post-sample accuracies while simplifying the use of ARMA models. It is also confirmed that transformations slightly improve post-sample forecasting accuracy, particularly for long forecasting horizons. Finally, it is demonstrated that AR(1), AR(2) and ARMA(1,1) models can produce more accurate post-sample forecasts than those found through the application of Box–Jenkins methodology. © 1997 John Wiley & Sons, Ltd.
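One such alternative to differencing can be sketched directly: fit a deterministic trend by OLS, model the detrended residual with a simple AR(1), and add the extrapolated trend back at forecast time. This is a generic illustration of the remove-and-extrapolate route, not the paper's exact procedure, and the simulated series is an assumption.

```python
import numpy as np

def trend_plus_ar1_forecast(y, horizon):
    """Forecast by removing a fitted linear trend, modelling the residual
    with an AR(1), and adding the extrapolated trend back."""
    y = np.asarray(y, float)
    t = np.arange(len(y))
    a, b = np.polyfit(t, y, 1)               # linear trend a*t + b
    resid = y - (a * t + b)
    phi = resid[1:] @ resid[:-1] / (resid[:-1] @ resid[:-1])  # AR(1) slope
    r, out = resid[-1], []
    for step in range(len(y), len(y) + horizon):
        r = phi * r                           # residual decays toward zero
        out.append(a * step + b + r)
    return np.array(out)

rng = np.random.default_rng(3)
t = np.arange(200)
y = 0.5 * t + 3.0 + rng.normal(0, 1.0, 200)  # trend + stationary noise
fc = trend_plus_ar1_forecast(y, horizon=12)
```

Under differencing, by contrast, the last observed change is projected forward, so one noisy final observation can tilt the whole forecast path — which is the kind of fragility the paper attributes to the Box–Jenkins route.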

11.
We present a forecasting model based on fuzzy pattern recognition and weighted linear regression. In this model fuzzy pattern recognition is used to find homogeneous fuzzy classes in a heterogeneous data set. It is assumed that the classes represent typical situations. For each class a weighted regression analysis is conducted. The forecasting results obtained by the class regression analysis are aggregated to obtain the ‘overall’ estimation of the regression model. We apply the model to the forecasting of economic data of the USA. Copyright © 2001 John Wiley & Sons, Ltd.

12.
This paper proposes a theory to explain why some forecasting organizations institutionalize forecast accuracy evaluation while others do not. The theory considers internal and external aspects of managerial, political, and procedural factors as they affect forecasting organizations. The theory is then tested using data from a survey of the US Federal Forecasters Group. Though some support for the theory is developed, multiple alternative explanations for results and the ‘public’ nature of the sample organizations prevent wide-scale generalization. The results suggest that larger organizations are more likely to have some form of forecast evaluation than smaller units. The institutionalization of forecast accuracy evaluation is closely linked to internal managerial and procedural factors, while external political pressure tends to reduce the likelihood of institutionalization of evaluation of forecast accuracy. © 1997 John Wiley & Sons, Ltd.

13.
The problem of medium to long‐term sales forecasting raises a number of requirements that must be suitably addressed in the design of the employed forecasting methods. These include long forecasting horizons (up to 52 periods ahead), a high number of quantities to be forecasted, which limits the possibility of human intervention, frequent introduction of new articles (for which no past sales are available for parameter calibration) and withdrawal of running articles. The problem has been tackled by use of a damped‐trend Holt–Winters method as well as feedforward multilayer neural networks (FMNNs) applied to sales data from two German companies. Copyright © 2005 John Wiley & Sons, Ltd.
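The damped-trend component can be sketched without the seasonal part of Holt–Winters: level and trend recursions with a damping parameter phi, and an h-step forecast that adds the geometrically damped trend sum to the last level. Smoothing constants below are illustrative, not calibrated to any sales data, and seasonality is omitted for brevity.

```python
import numpy as np

def damped_holt_forecast(y, alpha=0.3, beta=0.1, phi=0.9, horizon=12):
    """Damped-trend Holt exponential smoothing (no seasonal component).

    level_t = alpha*y_t + (1-alpha)*(level_{t-1} + phi*trend_{t-1})
    trend_t = beta*(level_t - level_{t-1}) + (1-beta)*phi*trend_{t-1}
    h-step forecast: level_T + (phi + phi^2 + ... + phi^h) * trend_T
    """
    y = np.asarray(y, float)
    level, trend = y[0], y[1] - y[0]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    h = np.arange(1, horizon + 1)
    return level + np.cumsum(phi ** h) * trend

y = 100 + 0.5 * np.arange(60)                # deterministic upward trend
fc = damped_holt_forecast(y, horizon=52)
```

Because the trend contribution is damped geometrically, long-horizon forecasts flatten toward an asymptote instead of extrapolating the trend linearly — a conservative behaviour that suits 52-period-ahead sales horizons.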

14.
The S-shaped growth curves such as Gompertz, logistic, normal and Weibull are widely used for forecasting technological substitutions. A family of data-based transformed (DBT) models, which are linear in the regression parameters, including the above-mentioned four models as special cases has been shown to be quite useful for short-term forecasts. This paper explores modeling the technology penetration data directly with assumed S-shaped growth curves. The resulting models, which are nonlinear in the regression parameters, also incorporate proper dependence structure and power transformation. It appears that the nonlinear modeling is a viable alternative to the DBT and other conventional forecasting models in forecasting technological substitutions. Hence, an appropriate strategy is to consider the nonlinear modeling approaches as possible alternatives and use the data at hand to select, via pseudo-cross-validation, the best model for forecasting purposes.
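The DBT idea can be shown with the logistic case: when the saturation ceiling M is known, the transformation log(y / (M − y)) turns the logistic curve into a straight line, so the parameters can be estimated by ordinary linear regression. The synthetic data below lie exactly on a logistic curve with assumed parameters, purely to demonstrate the linearization.

```python
import numpy as np

def fit_logistic_known_ceiling(t, y, ceiling):
    """Fit y = M / (1 + exp(-(a + b*t))) with M known, via the data-based
    transformation log(y / (M - y)) = a + b*t, linear in a and b."""
    z = np.log(y / (ceiling - y))
    b, a = np.polyfit(t, z, 1)               # polyfit returns slope first
    return a, b

def logistic_curve(t, a, b, ceiling):
    return ceiling / (1.0 + np.exp(-(a + b * t)))

# Synthetic penetration data on the true curve (M = 1, a = -4, b = 0.5).
t = np.arange(1, 20, dtype=float)
y = logistic_curve(t, -4.0, 0.5, 1.0)
a_hat, b_hat = fit_logistic_known_ceiling(t, y, ceiling=1.0)
```

The nonlinear alternative the paper explores estimates M jointly with the shape parameters instead of fixing it, which is exactly what the transformation above cannot do.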

15.
While forecasting involves forward/predictive thinking, it depends crucially on prior diagnosis for suggesting a model of the phenomenon, for defining ‘relevant’ variables, and for evaluating forecast accuracy via the model. The nature of diagnostic thinking is examined with respect to these activities. We first consider the difficulties of evaluating forecast accuracy without a causal model of what generates outcomes. We then discuss the development of models by considering how attention is directed to variables via analogy and metaphor as well as by what is unusual or abnormal. The causal relevance of variables is then assessed by reference to probabilistic signs called ‘cues to causality’. These are: temporal order, constant conjunction, contiguity in time and space, number of alternative explanations, similarity, predictive validity, and robustness. The probabilistic nature of the cues is emphasized by discussing the concept of spurious correlation and how causation does not necessarily imply correlation. Implications for improving forecasting are considered with respect to the above issues.

16.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi‐step‐ahead forecasting ignore accuracy checking; alternative Monte Carlo methods are computationally demanding, and their accuracy is difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m‐step‐ahead predictive probability density functions, predictive distribution functions, predictive mean and variance, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well both in terms of the accuracy of the results and in the ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
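The Monte Carlo route the paper improves on is easy to sketch for contrast: simulate many paths of a nonlinear AR(1) forward m steps and use the empirical draws as the m-step predictive distribution. The smooth-threshold map and all parameters below are hypothetical; the point is the brute-force scheme whose accuracy is hard to control.

```python
import numpy as np

def mc_predictive_draws(y_last, step_fn, noise_sd, m, n_draws=20000, seed=0):
    """Simulate m-step-ahead draws from a nonlinear AR(1)
    y_t = step_fn(y_{t-1}) + e_t, with e_t ~ N(0, noise_sd^2).

    The empirical draws approximate the m-step predictive density; accuracy
    improves only at the slow 1/sqrt(n_draws) Monte Carlo rate.
    """
    rng = np.random.default_rng(seed)
    y = np.full(n_draws, float(y_last))
    for _ in range(m):
        y = step_fn(y) + rng.normal(0, noise_sd, n_draws)
    return y

# A hypothetical smooth nonlinear autoregressive map.
draws = mc_predictive_draws(
    y_last=0.5,
    step_fn=lambda y: 0.8 * y / (1 + y ** 2),
    noise_sd=0.3,
    m=5,
)
pred_mean, pred_sd = draws.mean(), draws.std()
```

A purpose-built numerical integration of the Chapman–Kolmogorov recursion, as the paper proposes, replaces this sampling error with a quadrature error that can be checked and controlled.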

17.
When evaluating the launch of a new product or service, forecasts of the diffusion path and the effects of the marketing mix are critically important. Currently no unified framework exists to provide guidelines on the inclusion and specification of marketing mix variables into models of innovation diffusion. The objective of this research is to examine empirically the role of prices in diffusion models, in order to establish whether price can be incorporated effectively into the simpler time-series models. Unlike existing empirical research which examines the models' fit to historical data, we examine the predictive validity of alternative models. Only if the incorporation of prices improves the predictive performance of diffusion models can it be argued that these models have validity. A series of diffusion models which include prices are compared against a number of well-accepted diffusion models, including the Bass (1969) model, and more recently developed ‘flexible’ diffusion models. For short data series and long lead-time forecasting, the situation typical in practice, price rarely added to the forecasting capability of simpler time-series models. Copyright © 1998 John Wiley & Sons, Ltd.
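The Bass (1969) benchmark in this comparison has a compact discrete-time form: adoptions per period are driven by an innovation coefficient p and an imitation coefficient q acting on the remaining market potential. The parameter values below are illustrative only, not estimated from any data set in the paper.

```python
import numpy as np

def bass_adoptions(p, q, m, periods):
    """Discrete-time Bass diffusion: adoptions per period.

    n_t = (p + q * N_{t-1} / m) * (m - N_{t-1}), where N is the cumulative
    adopter count, p the innovation coefficient, q the imitation
    coefficient, and m the market potential.
    """
    N, out = 0.0, []
    for _ in range(periods):
        n = (p + q * N / m) * (m - N)
        N += n
        out.append(n)
    return np.array(out)

sales = bass_adoptions(p=0.03, q=0.38, m=100_000, periods=25)
peak = int(np.argmax(sales))                 # period of peak adoptions
```

Price-augmented variants typically scale p and q (or m) by a function of price; the paper's finding is that, with short series and long lead times, estimating that extra function rarely pays off out of sample.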

18.
Demand for skiing expanded rapidly in the 1980s, fell quite dramatically at the start of the 1990s as the economy declined but has not subsequently recovered. Two possible explanations are explored. The first is based on perceiving skiing as a new product to most consumers, which reached maximum growth in 1989. Current levels now largely represent ‘repeat buyers’. The alternative approach sees the growth as the result of economic factors, particularly credit conditions. The importance of these factors was not, however, constant, and grew with the changes in the financial system. Thus the recovery had a muted effect. These two approaches are modelled, estimated and the results compared by both residual and ex post forecasting analysis. The paper concludes that the varying coefficient econometric model probably produces the most reliable forecasts. Copyright © 1999 John Wiley & Sons, Ltd.

19.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
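The OLS-versus-RW trade-off can be illustrated by Monte Carlo rather than the paper's asymptotics: simulate an AR(1) with root local to unity, rho = 1 + c/T, and compare h-step squared forecast errors of the OLS-estimated model against the random walk. The drift c, sample size, and horizon below are illustrative choices on the mean-reverting side of the threshold.

```python
import numpy as np

def forecast_mse_ols_vs_rw(c, T, horizon, n_sims=2000, seed=4):
    """Monte Carlo h-step forecast MSEs for an AR(1) with rho = 1 + c/T:
    OLS-estimated AR(1) (no intercept) versus the random walk."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    err_ols, err_rw = [], []
    for _ in range(n_sims):
        e = rng.normal(size=T + horizon)
        y = np.zeros(T + horizon)
        for t in range(1, T + horizon):
            y[t] = rho * y[t - 1] + e[t]
        ylag, ycur = y[:T - 1], y[1:T]
        rho_hat = ylag @ ycur / (ylag @ ylag)          # OLS slope
        err_ols.append(y[T + horizon - 1] - rho_hat ** horizon * y[T - 1])
        err_rw.append(y[T + horizon - 1] - y[T - 1])   # RW: no change
    return np.mean(np.square(err_ols)), np.mean(np.square(err_rw))

# Strongly mean-reverting case (rho = 0.9) at a long horizon.
mse_ols, mse_rw = forecast_mse_ols_vs_rw(c=-10.0, T=100, horizon=20)
```

Pushing c toward zero (rho toward one) at a fixed horizon reverses the ranking, which is the threshold behaviour the article characterizes analytically.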

20.
This study reports the results of an experiment that examines (1) the effects of forecast horizon on the performance of probability forecasters, and (2) the alleged existence of an inverse expertise effect, i.e., an inverse relationship between expertise and probabilistic forecasting performance. Portfolio managers are used as forecasters with substantive expertise. Performance of this ‘expert’ group is compared to the performance of a ‘semi-expert’ group composed of other banking professionals trained in portfolio management. It is found that while both groups attain their best discrimination performances in the four-week forecast horizon, they show their worst calibration and skill performances in the 12-week forecast horizon. Also, while experts perform better in all performance measures for the one-week horizon, semi-experts achieve better calibration for the four-week horizon. It is concluded that these results may signal the existence of an inverse expertise effect that is contingent on the selected forecast horizon.
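Calibration-style scores of the kind compared across horizons here can be sketched generically: the Brier score plus the calibration (reliability) component of its Murphy decomposition, computed by binning forecast probabilities. This is a standard textbook construction, not necessarily the study's exact measures, and the synthetic forecaster below is perfectly calibrated by design.

```python
import numpy as np

def brier_and_calibration(probs, outcomes, n_bins=10):
    """Brier score and the binned calibration (reliability) component.

    Calibration sums, over probability bins, the bin weight times the
    squared gap between mean forecast probability and observed frequency.
    Lower is better for both.
    """
    probs = np.asarray(probs, float)
    outcomes = np.asarray(outcomes, float)
    brier = np.mean((probs - outcomes) ** 2)
    edges = np.linspace(0, 1, n_bins + 1)
    idx = np.clip(np.digitize(probs, edges) - 1, 0, n_bins - 1)
    calib = 0.0
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            gap = probs[mask].mean() - outcomes[mask].mean()
            calib += mask.sum() * gap ** 2
    return brier, calib / len(probs)

# Synthetic, perfectly calibrated forecaster: outcome occurs w.p. p.
rng = np.random.default_rng(5)
p = rng.uniform(0, 1, 50000)
y = (rng.uniform(0, 1, 50000) < p).astype(float)
brier, calib = brier_and_calibration(p, y)
```

A forecaster can be well calibrated yet undiscriminating (always forecasting the base rate), which is why the study tracks discrimination, calibration, and overall skill separately across horizons.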


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号