Similar Articles
20 similar articles found (search time: 31 ms)
1.
Croston's method is widely used to predict inventory demand when it is intermittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston's method and three related methods, and we show that any underlying model will be inconsistent with the properties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. Copyright © 2005 John Wiley & Sons, Ltd.
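The decomposition at the heart of Croston's method can be sketched in a few lines: smooth the nonzero demand sizes and the inter-demand intervals separately, and forecast their ratio. This is a minimal illustration of the standard method; the smoothing constant and the initialization on the first nonzero demand are common conventions, not prescriptions from the paper.

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: exponentially smooth the
    nonzero demand sizes (z) and the inter-demand intervals (p) separately;
    the per-period demand-rate forecast is z / p."""
    z = None       # smoothed nonzero demand size
    p = None       # smoothed inter-demand interval
    q = 1          # periods since the last nonzero demand
    forecasts = []
    for y in demand:
        # one-step-ahead forecast made before seeing y
        forecasts.append(z / p if z is not None else None)
        if y > 0:
            if z is None:              # initialize on the first demand
                z, p = y, q
            else:
                z = z + alpha * (y - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return forecasts
```

Note that the estimates only update in periods with nonzero demand, which is exactly the feature that makes a single coherent stochastic model hard to formulate.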

2.
This paper proposes a new approach to forecasting intermittent demand by considering the effects of external factors. We classify intermittent demand data into two parts—zero value and nonzero value—and fit nonzero values into a mixed zero-truncated Poisson model. All the parameters in this model are obtained by an EM algorithm, which regards external factors as independent variables of a logistic regression model and log-linear regression model. We then calculate the probability of occurrence of zero value at each period and predict demand occurrence by comparing it with a critical value. When demand occurs, we use the weighted average of the mixed zero-truncated Poisson model as the predicted nonzero demand, which is combined with predicted demand occurrences to form the final forecast series. Two performance measures are developed to assess the forecasting methods. By presenting a case study of electric power material from the State Grid Shanghai Electric Power Company in China, we show that our approach provides greater accuracy in forecasting than the Poisson model, the hurdle shifted Poisson model, the hurdle Poisson model, and Croston's method.
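The two-part structure described in this abstract can be sketched as follows: a logistic model gives the probability that demand occurs, a log-linear model gives the Poisson rate of the nonzero part, and the zero-truncated Poisson mean lambda / (1 − exp(−lambda)) sizes the demand when it occurs. The coefficient vectors here are illustrative placeholders; in the paper they come from an EM fit of a mixture, which this single-component sketch does not reproduce.

```python
from math import exp

def intermittent_forecast(x, beta_logit, beta_loglin):
    """Two-part intermittent-demand forecast (single-component sketch):
    P(demand occurs) from a logistic model times the mean of a
    zero-truncated Poisson whose rate comes from a log-linear model.
    beta_logit and beta_loglin are hypothetical fitted coefficients."""
    eta = sum(b * xi for b, xi in zip(beta_logit, x))
    p_occur = 1.0 / (1.0 + exp(-eta))        # P(demand > 0 | x)
    lam = exp(sum(b * xi for b, xi in zip(beta_loglin, x)))
    size = lam / (1.0 - exp(-lam))           # zero-truncated Poisson mean
    return p_occur * size
```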

3.
Forecasting for inventory items with lumpy demand is difficult because of infrequent nonzero demands with high variability. This article develops two methods to forecast lumpy demand: an optimally weighted moving average method and an intelligent pattern‐seeking method. We compare them with a number of well‐referenced methods typically applied over the last 30 years in forecasting intermittent or lumpy demand. The comparison is conducted over about 200,000 forecasts (using 1‐day‐ahead and 5‐day‐ahead review periods) for 24 series of actual product demands across four different error measures. One of the most important findings of our study is that the two non‐traditional methods perform better overall than the traditional methods. We summarize results and discuss managerial implications. Copyright © 2011 John Wiley & Sons, Ltd.

4.
This paper is a critical review of exponential smoothing since the original work by Brown and Holt in the 1950s. Exponential smoothing is based on a pragmatic approach to forecasting which is shared in this review. The aim is to develop state-of-the-art guidelines for application of the exponential smoothing methodology. The first part of the paper discusses the class of relatively simple models which rely on the Holt-Winters procedure for seasonal adjustment of the data. Next, we review general exponential smoothing (GES), which uses Fourier functions of time to model seasonality. The research is reviewed according to the following questions. What are the useful properties of these models? What parameters should be used? How should the models be initialized? After the review of model-building, we turn to problems in the maintenance of forecasting systems based on exponential smoothing. Topics in the maintenance area include the use of quality control models to detect bias in the forecast errors, adaptive parameters to improve the response to structural changes in the time series, and two-stage forecasting, whereby we use a model of the errors or some other model of the data to improve our initial forecasts. Some of the major conclusions are as follows: the parameter ranges and starting values typically used in practice are arbitrary and may detract from accuracy. The empirical evidence favours Holt's model for trends over that of Brown. A linear trend should be damped at long horizons. The empirical evidence favours the Holt-Winters approach to seasonal data over GES. It is difficult to justify GES in standard form; the equivalent ARIMA model is simpler and more efficient. The cumulative sum of the errors appears to be the most practical forecast monitoring device. There is no evidence that adaptive parameters improve forecast accuracy. In fact, the reverse may be true.
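The conclusion that a linear trend should be damped at long horizons corresponds to the damped-trend variant of Holt's method, sketched below. The trend's contribution at horizon h is phi + phi² + ... + phi^h, so forecasts flatten instead of growing without bound; the parameter values shown are illustrative defaults, not recommendations from the review.

```python
def damped_holt(y, alpha=0.3, beta=0.1, phi=0.9, h=5):
    """Holt's linear method with a damping parameter phi in (0, 1]:
    level and trend are smoothed recursively, and the h-step forecast
    adds only a damped sum of the trend."""
    level, trend = y[0], y[1] - y[0]      # simple initialization
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    damp = sum(phi ** i for i in range(1, h + 1))   # phi + ... + phi^h
    return level + damp * trend
```

Setting phi = 1 recovers the ordinary (undamped) Holt forecast, whose trend contribution grows linearly in h.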

5.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.

6.
Robust versions of the exponential and Holt–Winters smoothing method for forecasting are presented. They are suitable for forecasting univariate time series in the presence of outliers. The robust exponential and Holt–Winters smoothing methods are presented as recursive updating schemes that apply the standard technique to pre‐cleaned data. Both the update equation and the selection of the smoothing parameters are robustified. A simulation study compares the robust and classical forecasts. The presented method is found to have good forecast performance for time series with and without outliers, as well as for fat‐tailed time series and under model misspecification. The method is illustrated using real data incorporating trend and seasonal effects. Copyright © 2009 John Wiley & Sons, Ltd.
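The pre-cleaning idea in this abstract can be illustrated for simple exponential smoothing: each observation is Huberized toward the current one-step forecast when its error exceeds k robust scale units, so a single outlier cannot drag the level estimate. This is a sketch only; the EWMA scale update below is a simplification, not the paper's exact robust scale estimator.

```python
def robust_ses(y, alpha=0.3, k=2.0, sigma=1.0, lam=0.1):
    """Simple exponential smoothing on pre-cleaned data: errors are
    clipped at +/- k * sigma (a Huber-type rule) before updating the
    level, and sigma is tracked by a crude EWMA of absolute errors."""
    level = y[0]
    for t in range(1, len(y)):
        e = y[t] - level
        e_clipped = max(-k * sigma, min(k * sigma, e))  # clip the error
        y_clean = level + e_clipped                     # cleaned observation
        sigma = (1 - lam) * sigma + lam * abs(e_clipped)
        level = level + alpha * (y_clean - level)
    return level
```

With clean data the clipping is inactive and the recursion reduces to ordinary exponential smoothing, which is the sense in which the robust scheme "applies the standard technique to pre-cleaned data."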

7.
This paper compares the experience of forecasting the UK government bond yield curve before and after the dramatic lowering of short‐term interest rates from October 2008. Out‐of‐sample forecasts for 1, 6 and 12 months are generated from each of a dynamic Nelson–Siegel model, autoregressive models for both yields and the principal components extracted from those yields, a slope regression and a random walk model. At short forecasting horizons, there is little difference in the performance of the models both prior to and after 2008. However, for medium‐ to longer‐term horizons, the slope regression provided the best forecasts prior to 2008, while the recent experience of near‐zero short interest rates coincides with a period of forecasting superiority for the autoregressive and dynamic Nelson–Siegel models. Copyright © 2014 John Wiley & Sons, Ltd.

8.
This paper presents a methodology for modelling and forecasting multivariate time series with linear restrictions using the constrained structural state‐space framework. The model has natural applications to forecasting time series of macroeconomic/financial identities and accounts. The explicit modelling of the constraints ensures that model parameters dynamically satisfy the restrictions among items of the series, leading to more accurate and internally consistent forecasts. It is shown that the constrained model offers superior forecasting efficiency. A testable identification condition for state space models is also obtained and applied to establish the identifiability of the constrained model. The proposed methods are illustrated on Germany's quarterly monetary accounts data. Results show significant improvement in the predictive efficiency of forecast estimators for the monetary account with an overall efficiency gain of 25% over unconstrained modelling. Copyright © 2002 John Wiley & Sons, Ltd.

9.
Simultaneous prediction intervals for forecasts from time series models that contain L (L ≥ 1) unknown future observations with a specified probability are derived. Our simultaneous intervals are based on two types of probability inequalities, i.e. the Bonferroni- and product-types. These differ from the marginal intervals in that they take into account the correlation structure between the forecast errors. For the forecasting methods commonly used with seasonal time series data, we show how to construct forecast error correlations and evaluate, using an example, the simultaneous and marginal prediction intervals. For all the methods, the simultaneous intervals are accurate with the accuracy increasing with the use of higher-order probability inequalities, whereas the marginal intervals are far too short in every case. Also, when L is greater than the seasonal period, the simultaneous intervals based on improved probability inequalities will be most accurate.
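The Bonferroni-type construction can be sketched for Gaussian forecast errors: to cover all L future observations jointly with probability at least 1 − alpha, widen each marginal interval to level 1 − alpha/L. This sketch ignores the error-correlation refinements that the paper uses to sharpen the bounds; it shows only the baseline inequality.

```python
from statistics import NormalDist

def bonferroni_intervals(forecasts, stderrs, alpha=0.05):
    """Simultaneous prediction intervals via the Bonferroni inequality:
    each of the L marginal Gaussian intervals is built at level
    1 - alpha / L, so joint coverage is at least 1 - alpha regardless
    of the correlation between forecast errors."""
    L = len(forecasts)
    z = NormalDist().inv_cdf(1 - alpha / (2 * L))  # two-sided critical value
    return [(f - z * s, f + z * s) for f, s in zip(forecasts, stderrs)]
```

For L = 1 this reduces to the ordinary marginal interval; as L grows, the per-interval critical value grows, which is exactly why unadjusted marginal intervals are "far too short" when used simultaneously.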

10.
In this paper we investigate the applicability of several continuous-time stochastic models to forecasting inflation rates with horizons out to 20 years. While the models are well known, new methods of parameter estimation and forecasts are supplied, leading to rigorous testing of out-of-sample inflation forecasting at short and long time horizons. Using US consumer price index data we find that over longer forecasting horizons—that is, those beyond 5 years—the log-normal index model having Ornstein–Uhlenbeck drift rate provides the best forecasts.

11.
Commonly used forecasting methods often produce meaningless forecasts when time series display abrupt changes in level. Measuring and accounting for the effect of discontinuities can have a significant impact on forecasting accuracy. In addition, if discontinuities are considered non-random and their cause is known, then adjustments can be made to more reliably represent the trend, seasonal and random components. This paper concerns a computational method used in forecasting inherently discontinuous time series. The method provides screening to determine the locations and types of discontinuities. The paper includes analyses of actual time series which are typical of certain types of inherently discontinuous processes.

12.
This paper addresses the issue of forecasting individual items within a product line, where each line includes several independent but closely related products. The purpose of the research was to reduce the overall forecasting burden by developing and assessing schemes of disaggregating forecasts of a total product line to the related individual items. Measures were developed to determine appropriate disaggregation methodologies and to compare the forecast accuracy of individual product forecasts versus disaggregated totals. Several of the procedures used were based upon extensions of the combination-of-forecasts research and applied to disaggregations of total forecasts of product lines. The objective was to identify situations when it was advantageous to produce disaggregated forecasts, and if advantageous, which method of disaggregation to utilize. This involved identification of the general conceptual characteristics within a set of product line data that might cause a disaggregation method to produce relatively accurate forecasts. These conceptual characteristics provide guidelines for forecasters on how to select a disaggregation method and under what conditions a particular method is applicable.

13.
This paper is concerned primarily with the evaluation and comparison of objective and subjective weather forecasts. Operational forecasts of three weather elements are considered: (1) probability forecasts of precipitation occurrence, (2) categorical (i.e. non-probabilistic) forecasts of maximum and minimum temperatures and (3) categorical forecasts of cloud amount. The objective forecasts are prepared by numerical-statistical procedures, whereas the subjective forecasts are based on the judgements of individual forecasters. In formulating the latter, the forecasters consult information from a variety of sources, including the objective forecasts themselves. The precipitation probability forecasts are found to be both reliable and skilful, and evaluation of the temperature/cloud amount forecasts reveals that they are quite accurate/skilful. Comparison of the objective and subjective forecasts of precipitation occurrence indicates that the latter are generally more skilful than the former for shorter lead times (e.g. 12–24 hours), whereas the two types of forecasts are of approximately equal skill for longer lead times (e.g. 36–48 hours). Similar results are obtained for the maximum and minimum temperature forecasts. Objective cloud amount forecasts are more skilful than subjective cloud amount forecasts for all lead times. Examination of trends in performance over the last decade reveals that both types of forecasts for all three elements increased in skill (or accuracy) over the period, with improvements in objective forecasts equalling or exceeding improvements in subjective forecasts. The role and impact of the objective forecasts in the subjective weather forecasting process are discussed in some detail. The need to conduct controlled experiments and other studies of this process, with particular reference to the assimilation of information from different sources, is emphasized. 
Important characteristics of the forecasting system in meteorology are identified, and they are used to describe similarities and differences between weather forecasting and forecasting in other fields. Acquisition of some of these characteristics may be beneficial to other forecasting systems.

14.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Specifically of importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on ‘trending’ procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by ‘detrending’ the trended forecasts, and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy in Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.

15.
In attempting to improve forecasting, many facets of the forecasting process may be addressed including techniques, psychological factors, and organizational factors. This research examines whether a robust psychological bias (anchoring and adjustment) can be observed in a set of organizationally-produced forecasts. Rather than a simple consistent bias, biases were found to vary across organizations and items being forecast. Such bias patterns suggest that organizational factors may be important in determining the biases found in organizationally-produced forecasts.

16.
The vector multiplicative error model (vector MEM) is capable of analyzing and forecasting multidimensional non‐negative valued processes. Usually its parameters are estimated by generalized method of moments (GMM) and maximum likelihood (ML) methods. However, the estimations could be heavily affected by outliers. To overcome this problem, in this paper an alternative approach, the weighted empirical likelihood (WEL) method, is proposed. This method uses moment conditions as constraints and the outliers are detected automatically by performing a k‐means clustering on Oja depth values of innovations. The performance of WEL is evaluated against those of GMM and ML methods through extensive simulations, in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10‐minute returns of the S&P 500 index. The results from both the simulations and the S&P 500 volatility forecasts favour the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.

17.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
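The baseline HAR specification referenced in this abstract regresses tomorrow's realized volatility on today's value and its weekly and monthly trailing averages. The feature construction can be sketched as below; the 5- and 22-day windows are the conventional choices, and the sketch omits the paper's leverage and scaled-price-change extensions.

```python
def har_features(rv):
    """Design rows for the heterogeneous autoregressive (HAR-RV) model:
    the target rv[t+1] is explained by an intercept, the daily value
    rv[t], the 5-day (weekly) trailing mean, and the 22-day (monthly)
    trailing mean."""
    X, y = [], []
    for t in range(21, len(rv) - 1):
        daily = rv[t]
        weekly = sum(rv[t - 4:t + 1]) / 5.0
        monthly = sum(rv[t - 21:t + 1]) / 22.0
        X.append((1.0, daily, weekly, monthly))
        y.append(rv[t + 1])
    return X, y
```

The three horizons are what gives the model its long-memory-like autocorrelation decay despite being a simple linear regression, which is why it is a popular benchmark for volatility indices.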

18.
In this paper an investigation is made of the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to calculate aggregate forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one‐step‐ahead forecasts are biased. This is likely to be the case in practice if frequently employed forecasting methods are used to generate a large number of individual forecasts. In the paper, examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy to supplement the information yielded by the accuracy measures. The display includes boxplots of individual forecast accuracy measures. This tool is simple but helpful, as the graphical display can reveal forecast deterioration that may be masked by one or both of the aggregate metrics. The procedures are illustrated with data representing sales of food items. Copyright © 2005 John Wiley & Sons, Ltd.

19.
Earnings forecasts have received a great deal of attention, much of which has centered on the comparative accuracy of judgmental and objective forecasting methods. Recently, studies have focused on the use of combinations of subjective and objective forecasts to improve forecast accuracy. This research offers an extension on this theme by subjectively modifying an objective forecast. Specifically, ARIMA forecasts are judgmentally adjusted by analysts using a structured approach based on Saaty's (1980) analytic hierarchy process. The results show that the accuracy of the unadjusted objective forecasts can be improved when judgmentally adjusted.

20.
Category management—a relatively new function in marketing—involves large-scale, real-time forecasting of multiple data series in complex environments. In this paper, we illustrate how Bayesian vector autoregression (BVAR) fulfils the category manager's decision-support requirements by providing accurate forecasts of a category's state variables (prices, volumes and advertising levels), incorporating management interventions (merchandising events such as end-aisle displays), and revealing competitive dynamics through impulse response analyses. Using 124 weeks of point-of-sale scanner data comprising 31 variables for four brands, we compare the out-of-sample forecasts from BVAR to forecasts from exponential smoothing, univariate and multivariate Box-Jenkins transfer function analyses, and multivariate ARMA models. Theil U's indicate that BVAR forecasts are superior to those from alternate approaches. In large-scale forecasting applications, BVAR's ease of identification and parsimonious use of degrees of freedom are particularly valuable.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号