Similar articles
Found 20 similar articles (search time: 31 ms)
1.
This paper presents results of a survey designed to discover how sales forecasting management practices have changed over the past 20 years as compared to findings reported by Mentzer and Cox (1984) and Mentzer and Kahn (1995). An up-to-date overview of empirical studies on forecasting practice is also presented. A web-based survey of forecasting executives was employed to explore trends in forecasting management, familiarity, satisfaction, usage, and accuracy among companies in a variety of industries. Results revealed decreased familiarity with forecasting techniques, and decreased levels of forecast accuracy. Implications for managers and suggestions for future research are presented. Copyright © 2006 John Wiley & Sons, Ltd.

2.
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models which explicitly incorporate a component of sales due to replacement assume there is an age distribution for replacements of existing units which remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will result in changes in the mean replacement age of units. This paper develops a model for such time-varying replacement behaviour and empirically tests it in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time-varying model significantly outperformed previous models both in terms of fitting and forecasting the sales data. Copyright © 2001 John Wiley & Sons, Ltd.

3.
This paper develops a new diffusion model that incorporates the indirect network externality. The market with indirect network externalities is characterized by two-way interactive effects between hardware and software products on their demands. Our model incorporates two-way interactions in forecasting the diffusion of hardware products based on a simple but realistic assumption. The new model is parsimonious, easy to estimate, and does not require more data points than the Bass diffusion model. The new diffusion model was applied to forecast sales of DVD players in the United States and in South Korea, and to the sales of Digital TV sets in Australia. When compared to the Bass and NSRL diffusion models, the new model showed better performance in forecasting long-term sales. Copyright © 2008 John Wiley & Sons, Ltd.
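The paper's extended model is not reproduced in the abstract, but the baseline it is benchmarked against, the Bass diffusion model, has a closed form that can be sketched in a few lines. The parameter values below are illustrative only, not the paper's estimates.

```python
import numpy as np

def bass_sales(m, p, q, t):
    """Classic Bass model evaluated at times t.

    m: market potential, p: coefficient of innovation,
    q: coefficient of imitation. Returns (period sales, cumulative sales).
    """
    t = np.asarray(t, dtype=float)
    e = np.exp(-(p + q) * t)
    F = (1 - e) / (1 + (q / p) * e)                       # cumulative adoption fraction
    f = ((p + q) ** 2 / p) * e / (1 + (q / p) * e) ** 2   # adoption density
    return m * f, m * F

# Illustrative parameters: sales path over 15 periods
sales, cum = bass_sales(m=1e6, p=0.03, q=0.38, t=np.arange(0, 15))
```

Fitting m, p and q to observed sales (e.g., by nonlinear least squares) and extrapolating the curve gives the kind of long-term sales forecast against which the paper evaluates its network-externality extension.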

4.
Conventional wisdom holds that restrictions on low-frequency dynamics among cointegrated variables should provide more accurate short- to medium-term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long-term forecasting. The inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short- and medium-term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling-window technique to make multiple out-of-sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving-average terms of order >1. ECMs perform better over medium-term horizons for series with no moving-average terms. The results suggest a need to distinguish between 'sequential' and 'synchronous' forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
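The rolling-window out-of-sample design described above can be illustrated with a minimal one-step-ahead version. This sketch uses an OLS-estimated AR(1) as a stand-in for the paper's ARIMA and ECM specifications; the simulated series and window length are assumptions for demonstration only.

```python
import numpy as np

def rolling_ar1_forecasts(y, window):
    """One-step-ahead AR(1) forecasts from a rolling estimation window.

    At each step the AR(1) intercept and slope are re-estimated by OLS
    on the most recent `window` observations, then used to forecast the
    next value, mimicking a rolling out-of-sample exercise.
    """
    preds = []
    for end in range(window, len(y)):
        seg = y[end - window:end]
        X = np.column_stack([np.ones(window - 1), seg[:-1]])
        beta, *_ = np.linalg.lstsq(X, seg[1:], rcond=None)
        preds.append(beta[0] + beta[1] * y[end - 1])
    return np.array(preds)

# Simulate an AR(1) with phi = 0.8 and unit innovation variance
rng = np.random.default_rng(0)
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + rng.normal()

preds = rolling_ar1_forecasts(y, window=60)
rmse = np.sqrt(np.mean((y[60:] - preds) ** 2))
```

Extending the loop to iterate the fitted recursion h times yields the multi-step (up to forty-step) forecasts compared in the paper.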

5.
It is widely recognized that taking cointegration relationships into consideration is useful in forecasting cointegrated processes. However, there are a few practical problems when forecasting large cointegrated processes using the well-known vector error correction model. First, it is hard to identify the cointegration rank in large models. Second, since the number of parameters to be estimated tends to be large relative to the sample size in large models, estimators will have large standard errors, and so will forecasts. The purpose of the present paper is to propose a new procedure for forecasting large cointegrated processes which is free from the above problems. In our Monte Carlo experiment, we find that our forecasts gain accuracy when we work with a larger model as long as the ratio of the cointegration rank to the number of variables in the process is high. Copyright © 2009 John Wiley & Sons, Ltd.

6.
We extend the analysis of Christoffersen and Diebold (1998) on long-run forecasting in cointegrated systems to multicointegrated systems. For the forecast evaluation we consider several loss functions, each of which has a particular interpretation in the context of stock-flow models where multicointegration typically occurs. A loss function based on a standard mean square forecast error (MSFE) criterion focuses on the forecast errors of the flow variables alone. Likewise, a loss function based on the triangular representation of cointegrated systems (suggested by Christoffersen and Diebold) considers forecast errors associated with changes in both stock (modelled through the cointegrating restrictions) and flow variables. We suggest a new loss function based on the triangular representation of multicointegrated systems which further penalizes deviations from the long-run relationship between the levels of stock and flow variables as well as changes in the flow variables. Among other things, we show that if one is concerned with all possible long-run relations between stock and flow variables, this new loss function entails high and increasing forecasting gains compared to both the standard MSFE criterion and Christoffersen and Diebold's criterion. This paper demonstrates the importance of carefully selecting loss functions in forecast evaluation of models involving stock and flow variables. Copyright © 2004 John Wiley & Sons, Ltd.

7.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent the two opposing forecasting philosophies: the DSGE model has a strong theoretical economic background, while the factor model is mainly data-driven. We show that incorporating a large information set using factor analysis can indeed improve the short-horizon predictive ability, as claimed by many researchers. The micro-founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short-horizon forecasting and structural models should be used in long-horizon forecasting. Our paper compares both state-of-the-art data-driven and theory-based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

8.
Several studies have tested for long-range dependence in macroeconomic and financial time series but very few have assessed the usefulness of long-memory models as forecast-generating mechanisms. This study tests for fractional differencing in the US monetary indices (simple sum and divisia) and compares the out-of-sample fractional forecasts to benchmark forecasts. The long-memory parameter is estimated using Robinson's Gaussian semi-parametric and multivariate log-periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out-of-sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the divisia series but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd.
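A fractional order d between one and two means the growth rates of the monetary series are themselves fractionally integrated. The filter at the heart of such forecasts is the binomial expansion of (1 − L)^d, whose coefficients follow a simple recursion. This is a generic sketch of the filter, not the paper's estimation procedure (which uses Robinson's semi-parametric estimator for d).

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the (1 - L)^d binomial expansion.

    w_0 = 1 and w_k = w_{k-1} * (k - 1 - d) / k.
    """
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(y, d):
    """Apply the fractional-difference filter to a series (expanding window)."""
    w = frac_diff_weights(d, len(y))
    return np.array([w[:t + 1] @ y[t::-1] for t in range(len(y))])
```

Setting d = 1 recovers ordinary first differences, which is a useful sanity check; non-integer d gives the slowly decaying weights that let the model capture long memory.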

9.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long-memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L-step forecasts, where the parameters are estimated by minimizing the sum of squares of L-step forecast errors, and forecasts obtained by using long-memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long-memory models for multi-step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

10.
This paper presents an autoregressive fractionally integrated moving-average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long-memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long-memory model is more efficient than the random walk model in steps-ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi-step-ahead forecasts. This new finding strongly suggests that the long-memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd.
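The Harvey, Leybourne and Newbold (1997) statistic referred to above is a small-sample correction of the Diebold–Mariano test for equal predictive accuracy. A minimal sketch under squared-error loss, using a rectangular truncation of the long-run variance at lag h − 1 and the t distribution with n − 1 degrees of freedom:

```python
import numpy as np
from scipy import stats

def hln_dm_test(e1, e2, h=1):
    """Harvey-Leybourne-Newbold small-sample corrected Diebold-Mariano test.

    e1, e2: forecast errors from two competing models; h: forecast horizon.
    Returns the corrected statistic and a two-sided p-value from t_{n-1}.
    A negative statistic favours model 1 (smaller squared errors).
    """
    d = e1 ** 2 - e2 ** 2                 # squared-error loss differential
    n = len(d)
    dbar = d.mean()
    # autocovariances of d up to lag h-1 (rectangular truncation)
    gamma = [np.sum((d[k:] - dbar) * (d[:n - k] - dbar)) / n for k in range(h)]
    var_dbar = (gamma[0] + 2 * sum(gamma[1:])) / n
    dm = dbar / np.sqrt(var_dbar)
    correction = np.sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n)
    stat = correction * dm
    pval = 2 * stats.t.sf(abs(stat), df=n - 1)
    return stat, pval
```

For h = 1 the correction factor is close to one; its effect grows with the forecast horizon, which matters for the multi-step comparisons in the paper.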

11.
This paper studies some forms of LASSO-type penalties in time series to reduce the dimensionality of the parameter space as well as to improve out-of-sample forecasting performance. In particular, we propose a method that we call WLadaLASSO (weighted lag adaptive LASSO), which assigns not only different weights to each coefficient but also further penalizes coefficients of higher-lagged covariates. In our Monte Carlo implementation, the WLadaLASSO is superior in terms of covariate selection, parameter estimation precision and forecasting, when compared to both LASSO and adaLASSO, especially for a higher number of candidate lags and a stronger linear dependence between predictors. Empirical studies illustrate our approach for US risk premium and US inflation forecasting with good results. Copyright © 2016 John Wiley & Sons, Ltd.
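The paper's exact weighting scheme is not given in the abstract, so the sketch below uses a hypothetical penalty weight w_j = lag_j^tau / |beta_OLS,j| that captures the stated idea: adaptive weights that additionally penalize higher-lagged covariates. It exploits the standard equivalence that a coefficient-weighted LASSO is a plain LASSO on rescaled columns.

```python
import numpy as np
from sklearn.linear_model import Lasso

def wlada_lasso(X, y, lags, alpha=0.1, tau=0.5):
    """Sketch of a lag-weighted adaptive LASSO (hypothetical weighting).

    Each coefficient j gets penalty weight w_j = lags[j]**tau / |beta_ols_j|,
    so imprecisely estimated and higher-lagged covariates are penalized more.
    A LASSO with per-coefficient weights w_j equals a plain LASSO on X_j / w_j,
    with coefficients mapped back by dividing by w_j afterwards.
    """
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = np.asarray(lags, float) ** tau / (np.abs(beta_ols) + 1e-8)
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000)
    model.fit(X / w, y)
    return model.coef_ / w
```

On a toy design where only the first (lag-1) covariate matters, the relevant coefficient survives nearly unshrunk while the higher-lag noise covariates are driven to zero.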

12.
This paper proposes a mixed-frequency error correction model for possibly cointegrated non-stationary time series sampled at different frequencies. We highlight the impact, in terms of model specification, of the choice of the particular high-frequency explanatory variable to be included in the cointegrating relationship, which we call a dynamic mixed-frequency cointegrating relationship. The forecasting performance of aggregated models and several mixed-frequency regressions are compared in a set of Monte Carlo experiments. In particular, we look at both the unrestricted mixed-frequency model and at a more parsimonious MIDAS regression. Whereas the existing literature has only investigated the potential improvements of the MIDAS framework for stationary time series, our study emphasizes the need to include the relevant cointegrating vectors in the non-stationary case. Furthermore, it is illustrated that the choice of dynamic mixed-frequency cointegrating relationship does not matter as long as the short-run dynamics are adapted accordingly. Finally, the unrestricted model is shown to suffer from parameter proliferation for samples of relatively small size, whereas MIDAS forecasts are robust to over-parameterization. We illustrate our results for the US inflation rate. Copyright © 2014 John Wiley & Sons, Ltd.
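A parsimonious MIDAS regression avoids the parameter proliferation mentioned above by collapsing many high-frequency lags into a single regressor through a low-dimensional weight function. The exponential Almon polynomial below is the most common choice in the MIDAS literature; the parameter values in the example are illustrative, not taken from the paper.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, K):
    """Exponential Almon lag polynomial used in MIDAS regressions.

    w_k is proportional to exp(theta1*k + theta2*k**2) for k = 1..K,
    normalized to sum to one. Two parameters govern all K lag weights.
    """
    k = np.arange(1, K + 1, dtype=float)
    raw = np.exp(theta1 * k + theta2 * k ** 2)
    return raw / raw.sum()

def midas_aggregate(x_hf, theta1, theta2, K):
    """Collapse the last K high-frequency observations into one regressor."""
    w = exp_almon_weights(theta1, theta2, K)
    return w @ x_hf[-K:][::-1]       # most recent observation gets weight w_1
```

In estimation, theta1 and theta2 are fitted jointly with the regression slope, so K high-frequency lags cost only two extra parameters regardless of K.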

13.
Initial applications of prediction markets (PMs) indicate that they provide good forecasting instruments in many settings, such as elections, the box office, or product sales. One particular characteristic of these 'first-generation' (G1) PMs is that they link the payoff value of a stock's share to the outcome of an event. Recently, 'second-generation' (G2) PMs have introduced alternative mechanisms to determine payoff values which allow them to be used as preference markets for determining preferences for product concepts or as idea markets for generating and evaluating new product ideas. Three different G2 payoff mechanisms appear in the existing literature, but they have never been compared. This study conceptually and empirically compares the forecasting accuracy of the three G2 payoff mechanisms and investigates their influence on participants' trading behavior. We find that G2 payoff mechanisms perform almost as well as their G1 counterpart, and trading behavior is very similar in both markets (i.e. trading prices and trading volume), except during the very last trading hours of the market. These results indicate that G2 PMs are valid instruments and support their applicability shown in previous studies for developing new product ideas or evaluating new product concepts. Copyright © 2011 John Wiley & Sons, Ltd.

14.
This paper examines the problem of how to validate multiple-period density forecasting models. Such models are more difficult to validate than their single-period equivalents, because consecutive observations are subject to common shocks that undermine the i.i.d. assumption. The paper examines various solutions to this problem, and proposes a new solution based on the application of standard tests to a resample that is constructed to be i.i.d. It suggests that this solution is superior to the alternatives, and presents results indicating that tests based on the i.i.d. resample approach have good power. Copyright © 2007 John Wiley & Sons, Ltd.
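One simple way to obtain an (approximately) i.i.d. subsample from overlapping h-step density forecasts is to thin the probability integral transforms (PITs), keeping every h-th one, and then apply a standard uniformity test. This is a stand-in illustration of the general idea, not the paper's specific resampling scheme.

```python
import numpy as np
from scipy import stats

def thinned_pit_test(pits, h):
    """Validate h-step density forecasts on a subsample free of overlap.

    Overlapping h-step forecasts share shocks, so consecutive PITs are
    serially dependent; keeping every h-th PIT removes the overlap so
    that a standard Kolmogorov-Smirnov uniformity test applies.
    """
    sub = np.asarray(pits)[::h]
    return stats.kstest(sub, 'uniform')

# Well-calibrated forecasts produce uniform PITs; here we simulate that case
rng = np.random.default_rng(3)
pits = rng.uniform(size=500)
res = thinned_pit_test(pits, h=4)
```

A small p-value would indicate that the thinned PITs are not uniform, i.e., the density forecasts are miscalibrated.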

15.
Artificial neural networks (ANNs) combined with signal-decomposition methods are effective for long-term streamflow time series forecasting. An ANN is a machine learning method widely used for streamflow time series; it forecasts nonstationary series well without requiring a physical analysis of complex, dynamic hydrological processes. Most studies take multiple factors that determine streamflow, such as rainfall, as inputs. In this study, a long-term streamflow forecasting model that depends only on historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is analyzed by an ANN. Hybrid models EMD-ANN, EEMD-ANN and DWT-ANN are developed in this study for long-term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash–Sutcliffe efficiency (NSE) indicate that the proposed EEMD-ANN method performs better than the EMD-ANN and DWT-ANN models, especially in high-flow forecasting.
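The decompose-then-learn pattern can be sketched with a deliberately crude two-scale decomposition (a moving-average trend plus residual) standing in for EMD/EEMD/DWT, and scikit-learn's MLPRegressor as the ANN. All sizes, hyperparameters, and the simulated series are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def decompose(y, window=12):
    """Two-scale stand-in for EMD/DWT: moving-average trend plus residual."""
    kernel = np.ones(window) / window
    pad = np.concatenate([np.full(window - 1, y[0]), y])  # pad to keep length
    trend = np.convolve(pad, kernel, mode='valid')
    return np.column_stack([trend, y - trend])

def fit_hybrid(y):
    """Fit an ANN mapping the components at time t to the flow at t+1."""
    comps = decompose(y)
    X, target = comps[:-1], y[1:]
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
    net.fit(X, target)
    return net, comps

# Illustrative "streamflow": a seasonal cycle plus a slow trend
t = np.arange(300)
flow = np.sin(2 * np.pi * t / 25) + 0.01 * t
net, comps = fit_hybrid(flow)
```

In the paper's hybrids, EMD/EEMD/DWT would replace `decompose`, producing several intrinsic mode functions or wavelet components as the ANN's input features.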

16.
We develop a semi-structural model for forecasting inflation in the UK in which the New Keynesian Phillips curve (NKPC) is augmented with a time series model for marginal cost. By combining structural and time series elements we hope to reap the benefits of both approaches, namely the relatively better forecasting performance of time series models in the short run and a theory-consistent economic interpretation of the forecast coming from the structural model. In our model we consider the hybrid version of the NKPC and use an open-economy measure of marginal cost. The results suggest that our semi-structural model performs better than a random-walk forecast and most of the competing models (conventional time series models and strictly structural models) only in the short run (one quarter ahead) but it is outperformed by some of the competing models at medium and long forecast horizons (four and eight quarters ahead). In addition, the open-economy specification of our semi-structural model delivers more accurate forecasts than its closed-economy alternative at all horizons. Copyright © 2014 John Wiley & Sons, Ltd.

17.
It is investigated whether euro area variables can be forecast better based on synthetic time series for the pre-euro period or by using just data from Germany for the pre-euro period. Our forecast comparison is based on quarterly data for the period 1970Q1–2003Q4 for 10 macroeconomic variables. The years 2000–2003 are used as forecasting period. A range of different univariate forecasting methods is applied. Some of them are based on linear autoregressive models and we also use some nonlinear or time-varying coefficient models. It turns out that most variables which have a similar level for Germany and the euro area such as prices can be better predicted based on German data, while aggregated European data are preferable for forecasting variables which need considerable adjustments in their levels when joining German and European Monetary Union (EMU) data. These results suggest that for variables which have a similar level for Germany and the euro area it may be reasonable to consider the German pre-EMU data for studying economic problems in the euro area. Copyright © 2008 John Wiley & Sons, Ltd.

18.
The specification choices of vector autoregressions (VARs) in forecasting are often not straightforward, as they are complicated by various factors. To deal with model uncertainty and better utilize multiple VARs, this paper adopts the dynamic model averaging/selection (DMA/DMS) algorithm, in which forecasting models are updated and switch over time in a Bayesian manner. In an empirical application to a pool of Bayesian VAR (BVAR) models whose specifications include level and difference, along with differing lag lengths, we demonstrate that specification-switching VARs are flexible and powerful forecast tools that yield good performance. In particular, they beat the overall best BVAR in most cases and are comparable to or better than the individual best models (for each combination of variable, forecast horizon, and evaluation metrics) for medium- and long-horizon forecasts. We also examine several extensions in which forecast model pools consist of additional individual models in partial differences as well as all level/difference models, and/or time variations in VAR innovations are allowed, and discuss the potential advantages and disadvantages of such specification choices. Copyright © 2016 John Wiley & Sons, Ltd.
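The DMA weight recursion, in the Raftery-style forgetting-factor form commonly used for this algorithm, is short enough to sketch directly: model weights are flattened toward the prior with a forgetting factor alpha (the "prediction" step), then updated by each model's one-step predictive likelihood (the Bayes step). The model pool and likelihoods here are placeholders, not the paper's BVARs.

```python
import numpy as np

def dma_weights(pred_liks, alpha=0.99):
    """Dynamic model averaging weight recursion with forgetting.

    pred_liks: (T, K) array of one-step predictive likelihoods for K
    candidate models at each of T dates. Returns the (T, K) path of
    posterior model weights; DMS would pick argmax along axis 1.
    """
    T, K = pred_liks.shape
    w = np.full(K, 1.0 / K)                # uniform prior over models
    path = np.empty((T, K))
    for t in range(T):
        w = w ** alpha
        w /= w.sum()                       # prediction step: forget
        w = w * pred_liks[t]
        w /= w.sum()                       # update step: Bayes
        path[t] = w
    return path
```

With alpha < 1, past performance is discounted, which is what lets the weights migrate to a different VAR specification when the best-performing model switches over time.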

19.
A ten-year retrospective study of Mentzer and Cox (1984) was undertaken to answer the question 'Have sales forecasting practices changed over the past ten years?' A mail survey of 207 forecasting executives was employed to investigate this important question. Findings revealed both discrepancies and similarities between today's sales forecasting practices and those of ten years ago. One particular finding indicated greater reliance on and satisfaction with quantitative forecasting techniques today versus ten years ago. Another indicated that forecasting accuracy has not improved over the past ten years, even though the familiarity and usage of various sophisticated sales forecasting techniques have increased. Future research and managerial implications are discussed based on these and other findings.

20.
This paper proposes and implements a new methodology for forecasting time series, based on bicorrelations and cross-bicorrelations. It is shown that the forecasting technique arises as a natural extension of, and as a complement to, existing univariate and multivariate non-linearity tests. The formulations are essentially modified autoregressive or vector autoregressive models respectively, which can be estimated using ordinary least squares. The techniques are applied to a set of high-frequency exchange rate returns, and their out-of-sample forecasting performance is compared to that of other time series models. Copyright © 2001 John Wiley & Sons, Ltd.
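The building block of this approach is the sample bicorrelation of a standardized series, B(r, s) = E[y_t · y_{t−r} · y_{t−s}], the third-order moment underlying Hinich-type non-linearity tests; the modified autoregression then adds significant y_{t−r}·y_{t−s} cross-products as regressors. A sketch of the statistic itself:

```python
import numpy as np

def bicorrelation(y, r, s):
    """Sample bicorrelation B(r, s) of a standardized series.

    Estimates E[y_t * y_{t-r} * y_{t-s}] after standardizing y; values
    far from zero signal third-order (non-linear) serial dependence.
    """
    y = (y - y.mean()) / y.std()
    m = max(r, s)
    return np.mean(y[m:] * y[m - r:len(y) - r] * y[m - s:len(y) - s])
```

For linear Gaussian series all bicorrelations are asymptotically zero, which is why a significantly non-zero B(r, s) both rejects linearity and suggests a forecastable y_{t−r}·y_{t−s} term.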


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) 京ICP备09084417号