Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
Initial applications of prediction markets (PMs) indicate that they provide good forecasting instruments in many settings, such as elections, the box office, or product sales. One particular characteristic of these ‘first‐generation’ (G1) PMs is that they link the payoff value of a stock's share to the outcome of an event. Recently, ‘second‐generation’ (G2) PMs have introduced alternative mechanisms to determine payoff values which allow them to be used as preference markets for determining preferences for product concepts or as idea markets for generating and evaluating new product ideas. Three different G2 payoff mechanisms appear in the existing literature, but they have never been compared. This study conceptually and empirically compares the forecasting accuracy of the three G2 payoff mechanisms and investigates their influence on participants' trading behavior. We find that G2 payoff mechanisms perform almost as well as their G1 counterpart, and trading behavior is very similar in both markets (i.e. trading prices and trading volume), except during the very last trading hours of the market. These results indicate that G2 PMs are valid instruments and support their applicability shown in previous studies for developing new product ideas or evaluating new product concepts. Copyright © 2011 John Wiley & Sons, Ltd.
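The G1 payoff rule described above can be made concrete: in a winner-takes-all contract a share pays a fixed amount if the event occurs, so the price, as a fraction of that payoff, is read as the market's probability forecast. A minimal sketch (function names and figures are illustrative, not from the paper):

```python
def implied_probability(price, event_payoff=1.0):
    """Winner-takes-all (G1-style) contract: a share pays `event_payoff`
    if the event occurs, else nothing. The price/payoff ratio is the
    market's implied probability of the event."""
    return price / event_payoff

def implied_value(price, payoff_per_unit):
    """Index-style contract: a share pays in proportion to the realized
    outcome (e.g. vote share or unit sales); price divided by the
    per-unit payoff recovers the market's point forecast."""
    return price / payoff_per_unit

# A share trading at $0.62 against a $1 event payoff implies a 62% chance;
# a $5.40 price at $0.10 per percentage point implies a 54-point forecast.
```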

2.
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models which explicitly incorporate a component of sales due to replacement assume there is an age distribution for replacements of existing units which remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will result in changes in the mean replacement age of units. This paper develops a model for such time‐varying replacement behaviour and empirically tests it in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time‐varying model significantly outperformed previous models both in terms of fitting and forecasting the sales data. Copyright © 2001 John Wiley & Sons, Ltd.

3.
The contribution of product and industry knowledge to the accuracy of sales forecasting was investigated by examining the company forecasts of a leading manufacturer and marketer of consumable products. The company forecasts of 18 products produced by a meeting of marketing, sales, and production personnel were compared with those generated by the same company personnel when denied specific product knowledge and with the forecasts of selected judgemental and statistical time series methods. Results indicated that product knowledge contributed significantly to forecast accuracy and that the forecast accuracy of company personnel who possessed industry forecasting knowledge (but not product knowledge) was not significantly different from the time series based methods. Furthermore, the company forecasts were more accurate than averages of the judgemental and statistical time series forecasts. These results point to the importance of specific product information to forecast accuracy and accordingly call into question the continuing strong emphasis on improving extrapolation techniques without consideration of the inclusion of non-time series knowledge.

4.
The model presented in this paper integrates two distinct components of the demand for durable goods: adoptions and replacements. The adoption of a new product is modeled as an innovation diffusion process, using price and population as exogenous variables. Adopters are expected to eventually replace their old units of the product, with a probability that depends on the age of the owned unit and on other random factors such as overload, style changes, etc. It is shown that the integration of adoption and replacement demand components in our model yields high-quality sales forecasts, not only under conditions where detailed data on replacement sales is available, but also when the forecaster's access is limited to total sales data and educated guesses on certain elements of the replacement process.
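The decomposition this abstract describes — total sales as new adoptions plus age-dependent replacements — can be sketched directly. This is an illustrative simplification (it replaces original adoptions only, not replacement units themselves, and the replacement-age distribution is a placeholder, not the paper's estimate):

```python
def total_sales(adoptions, replace_prob):
    """adoptions[t]: first-time purchases in period t.
    replace_prob[j]: probability that a unit bought j periods ago is
    replaced in the current period (index 0 is unused).
    Returns per-period total sales = adoptions + replacement demand."""
    sales = []
    for t in range(len(adoptions)):
        # Sum replacement demand from all cohorts old enough to replace.
        repl = sum(adoptions[t - j] * replace_prob[j]
                   for j in range(1, min(t, len(replace_prob) - 1) + 1))
        sales.append(adoptions[t] + repl)
    return sales
```

For example, if 100 units are adopted in period 0 and every unit is replaced exactly two periods after purchase, sales spike again in period 2.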

5.
The problem of medium to long‐term sales forecasting raises a number of requirements that must be suitably addressed in the design of the employed forecasting methods. These include long forecasting horizons (up to 52 periods ahead), a high number of quantities to be forecasted, which limits the possibility of human intervention, frequent introduction of new articles (for which no past sales are available for parameter calibration) and withdrawal of running articles. The problem has been tackled by use of a damped‐trend Holt–Winters method as well as feedforward multilayer neural networks (FMNNs) applied to sales data from two German companies. Copyright © 2005 John Wiley & Sons, Ltd.
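The damped-trend method named above can be sketched in a few lines. This shows the non-seasonal damped Holt variant (the paper's Holt–Winters version adds a seasonal component); the smoothing parameters are illustrative:

```python
def damped_holt_forecast(y, alpha, beta, phi, horizon):
    """Damped-trend Holt exponential smoothing (additive form).
      level_t = alpha*y_t + (1-alpha)*(level_{t-1} + phi*trend_{t-1})
      trend_t = beta*(level_t - level_{t-1}) + (1-beta)*phi*trend_{t-1}
    The h-step forecast is level + (phi + phi^2 + ... + phi^h) * trend,
    so with phi < 1 the trend flattens out at long horizons."""
    level, trend = y[0], y[1] - y[0]
    for t in range(1, len(y)):
        prev = level
        level = alpha * y[t] + (1 - alpha) * (prev + phi * trend)
        trend = beta * (level - prev) + (1 - beta) * phi * trend
    return [level + sum(phi ** i for i in range(1, h + 1)) * trend
            for h in range(1, horizon + 1)]
```

With phi = 1 the method reduces to ordinary Holt linear smoothing; values of phi just below 1 damp the trend, which is what makes the method safer over the long 52-period horizons the paper mentions.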

6.
This paper develops a new diffusion model that incorporates the indirect network externality. The market with indirect network externalities is characterized by two‐way interactive effects between hardware and software products on their demands. Our model incorporates two‐way interactions in forecasting the diffusion of hardware products based on a simple but realistic assumption. The new model is parsimonious, easy to estimate, and does not require more data points than the Bass diffusion model. The new diffusion model was applied to forecast sales of DVD players in the United States and in South Korea, and to the sales of Digital TV sets in Australia. When compared to the Bass and NSRL diffusion models, the new model showed better performance in forecasting long‐term sales. Copyright © 2008 John Wiley & Sons, Ltd.
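The Bass benchmark used in the comparison has a closed form that is easy to sketch. The p and q values in the usage note are typical textbook magnitudes, not estimates from this paper:

```python
import math

def bass_adoptions(p, q, market_size, periods):
    """Per-period adoptions under the standard Bass diffusion model.
    F(t) = (1 - exp(-(p+q)t)) / (1 + (q/p)*exp(-(p+q)t)) is the
    cumulative fraction of the market that has adopted by time t;
    period-t adoptions are market_size * (F(t) - F(t-1))."""
    def F(t):
        e = math.exp(-(p + q) * t)
        return (1.0 - e) / (1.0 + (q / p) * e)
    return [market_size * (F(t) - F(t - 1)) for t in range(1, periods + 1)]
```

With, say, p ≈ 0.03 (innovation) and q ≈ 0.38 (imitation), adoptions rise to a peak around period six and then decay — the familiar diffusion curve that the paper's network-externality extension modifies.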

7.
A number of researchers have developed models that use test market data to generate forecasts of a new product's performance. However, most of these models have ignored the effects of marketing covariates. In this paper we examine what impact these covariates have on a model's forecasting performance and explore whether their presence enables us to reduce the length of the model calibration period (i.e. shorten the duration of the test market). We develop from first principles a set of models that enable us to systematically explore the impact of various model ‘components’ on forecasting performance. Furthermore, we also explore the impact of the length of the test market on forecasting performance. We find that it is critically important to capture consumer heterogeneity, and that the inclusion of covariate effects can improve forecast accuracy, especially for models calibrated on fewer than 20 weeks of data. Copyright © 2003 John Wiley & Sons, Ltd.

8.
A ten-year retrospective study of Mentzer and Cox (1984) was undertaken to answer the question 'Have sales forecasting practices changed over the past ten years?' A mail survey of 207 forecasting executives was employed to investigate this important question. Findings revealed both discrepancies and similarities between today's sales forecasting practices and those of ten years ago. One particular finding indicated greater reliance on and satisfaction with quantitative forecasting techniques today versus ten years ago. Another indicated that forecasting accuracy has not improved over the past ten years, even though the familiarity and usage of various sophisticated sales forecasting techniques have increased. Future research and managerial implications are discussed based on these and other findings.

9.
When quantitative models are used for short-term multi-item sales forecasts it is possible that the managers who use such forecasts may disagree with at least some of the estimates obtained, and wish to change them so that they become more consistent with their own (subjective) evaluation of the marketplace. This study reports on an analysis of the effectiveness of judgemental revision of sales forecasts over six quarterly forecasting periods. The results give general support for the practice of forecast manipulation as a means of improving forecasting accuracy. It is also observed that the effectiveness of revision activity varies across different time periods.

10.
Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting we propose a dynamic factor model of mortality that fits and forecasts age‐specific mortality rates parsimoniously. We compare the forecasting quality of this model against the Lee–Carter model and its variants. Our results show the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit but their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be further improved upon by applying an appropriate stopping rule for the number of static and dynamic factors. Copyright © 2013 John Wiley & Sons, Ltd.
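The Lee–Carter benchmark referenced above fits log m(x,t) ≈ a_x + b_x·k_t and forecasts the period index k_t as a random walk with drift. A pure-Python sketch, using a simple one-step approximation in place of the usual SVD step (valid under the standard constraints Σb_x = 1, Σk_t = 0; the data shapes are illustrative):

```python
def lee_carter_fit(log_m):
    """log_m[x][t]: log mortality rate for age group x in year t.
    Fits log m(x,t) = a_x + b_x * k_t. a_x is the age profile (row
    mean); b_x and k_t come from a one-step approximation to the SVD
    fit, assuming a non-degenerate period effect."""
    n_age, n_t = len(log_m), len(log_m[0])
    a = [sum(row) / n_t for row in log_m]
    z = [[log_m[x][t] - a[x] for t in range(n_t)] for x in range(n_age)]
    k = [sum(z[x][t] for x in range(n_age)) for t in range(n_t)]
    kk = sum(kt * kt for kt in k)
    b = [sum(z[x][t] * k[t] for t in range(n_t)) / kk for x in range(n_age)]
    return a, b, k

def lee_carter_forecast(a, b, k, horizon):
    """Extrapolate k_t as a random walk with drift (the Lee-Carter
    default) and return forecast log-rates for each age group."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    k_future = k[-1] + horizon * drift
    return [a[x] + b[x] * k_future for x in range(len(a))]
```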

11.
This paper investigates the forecasting ability of unobserved component models, when compared with the standard ARIMA univariate approach. A forecasting exercise is carried out with each method, using monthly time series of automobile sales in Spain. The accuracy of the different methods is assessed by comparing several measures of forecasting performance based on the out-of-sample predictions for various horizons, as well as different assumptions on the models' parameters. Overall, there is little to choose between the methods in terms of forecasting performance, but the recursive unobserved component models provide greater flexibility for adaptive applications. © 1997 by John Wiley & Sons, Ltd.
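The out-of-sample comparison described here — refitting at each origin and scoring predictions at several horizons — follows a standard rolling-origin design, sketched below with a generic model wrapper (the function names are illustrative; any ARIMA or unobserved-components fitter can be plugged in as `fit_and_forecast`):

```python
def rolling_origin_errors(series, fit_and_forecast, min_train, horizon):
    """Rolling-origin out-of-sample evaluation. At each origin, refit on
    the data observed so far, forecast `horizon` steps ahead, and record
    absolute errors per horizon. `fit_and_forecast(train, h)` must
    return an h-step-ahead forecast list. Returns MAE by horizon."""
    errors = {h: [] for h in range(1, horizon + 1)}
    for origin in range(min_train, len(series) - horizon + 1):
        fc = fit_and_forecast(series[:origin], horizon)
        for h in range(1, horizon + 1):
            errors[h].append(abs(series[origin + h - 1] - fc[h - 1]))
    return {h: sum(v) / len(v) for h, v in errors.items()}
```

A naive last-value forecaster makes a convenient baseline: any candidate model should beat its horizon-wise errors before being taken seriously.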

12.
Migration is one of the most unpredictable demographic processes. The aim of this article is to provide a blueprint for assessing various possible forecasting approaches in order to help safeguard producers and users of official migration statistics against misguided forecasts. To achieve that, we first evaluate the various existing approaches to modelling and forecasting of international migration flows. Subsequently, we present an empirical comparison of ex post performance of various forecasting methods, applied to international migration to and from the United Kingdom. The overarching goal is to assess the uncertainty of forecasts produced by using different forecasting methods, both in terms of their errors (biases) and calibration of uncertainty. The empirical assessment, comparing the results of various forecasting models against past migration estimates, confirms the intuition about weak predictability of migration, but also highlights varying levels of forecast errors for different migration streams. There is no single forecasting approach that would be well suited for different flows. We therefore recommend adopting a tailored approach to forecasts, and applying a risk management framework to their results, taking into account the levels of uncertainty of the individual flows, as well as the differences in their potential societal impact.

13.
This paper addresses the issue of forecasting term structure. We provide a unified state‐space modeling framework that encompasses different existing discrete‐time yield curve models. Within such a framework we analyze the impact of two modeling choices, namely the imposition of no‐arbitrage restrictions and the size of the information set used to extract factors, on forecasting performance. Using US yield curve data, we find that both no‐arbitrage and large information sets help in forecasting but no model uniformly dominates the other. No‐arbitrage models are more useful at shorter horizons for shorter maturities. Large information sets are more useful at longer horizons and longer maturities. We also find evidence for a significant feedback from yield curve models to macroeconomic variables that could be exploited for macroeconomic forecasting. Copyright © 2010 John Wiley & Sons, Ltd.

14.
The judgemental revision of sales forecasts is an issue which is receiving increasing attention in the forecasting literature. This paper compares the performance of forecasts after revision by managers with that of the forecasts which were accepted by them without revision. The data set consists of sales forecasting data from an industrial company, spanning six quarterly periods and relating to some 900 individual products. The findings show that, in general, the improvements made by managers bring the forecast errors of revised forecasts more into line with non-revised forecasts, but the change is often marginal, and the best result is equivalence between revised and non-revised forecasts.

15.
This paper presents results of a survey designed to discover how sales forecasting management practices have changed over the past 20 years as compared to findings reported by Mentzer and Cox (1984) and Mentzer and Kahn (1995). An up‐to‐date overview of empirical studies on forecasting practice is also presented. A web‐based survey of forecasting executives was employed to explore trends in forecasting management, familiarity, satisfaction, usage, and accuracy among companies in a variety of industries. Results revealed decreased familiarity with forecasting techniques, and decreased levels of forecast accuracy. Implications for managers and suggestions for future research are presented. Copyright © 2006 John Wiley & Sons, Ltd.

16.
Nowcasting has been a challenge in the recent economic crisis. We introduce the Toll Index, a new monthly indicator for business cycle forecasting, and demonstrate its relevance using German data. The index measures the monthly transportation activity performed by heavy transport vehicles across the country and has highly desirable availability properties (insignificant revisions, short publication lags) as a result of the innovative technology underlying its data collection. It is coincident with production activity due to the prevalence of just‐in‐time delivery. The Toll Index is a good early indicator of production as measured, for instance, by the German Production Index, provided by the German Statistical Office, which is a well‐known leading indicator of the gross national product. The proposed new index is an excellent example of technological, innovation‐driven economic telemetry, which we suggest should be established more around the world. Copyright © 2011 John Wiley & Sons, Ltd.

17.
Standard measures of prices are often contaminated by transitory shocks. This has prompted economists to suggest the use of measures of underlying inflation to formulate monetary policy and assist in forecasting observed inflation. Recent work has concentrated on modelling large data sets using factor models. In this paper we estimate factors from data sets of disaggregated price indices for European countries. We then assess the forecasting ability of these factor estimates against other measures of underlying inflation built from more traditional methods. The power to forecast headline inflation over horizons of 12 to 18 months is adopted as a valid criterion to assess forecasting. Empirical results for the five largest euro area countries, as well as for the euro area itself, are presented. Copyright © 2005 John Wiley & Sons, Ltd.

18.
It is widely recognized that taking cointegration relationships into consideration is useful in forecasting cointegrated processes. However, there are a few practical problems when forecasting large cointegrated processes using the well‐known vector error correction model. First, it is hard to identify the cointegration rank in large models. Second, since the number of parameters to be estimated tends to be large relative to the sample size in large models, estimators will have large standard errors, and so will forecasts. The purpose of the present paper is to propose a new procedure for forecasting large cointegrated processes which is free from the above problems. In our Monte Carlo experiment, we find that our forecast gains accuracy when we work with a larger model as long as the ratio of the cointegration rank to the number of variables in the process is high. Copyright © 2009 John Wiley & Sons, Ltd.

19.
A short‐term mixed‐frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one‐factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out‐of‐sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high‐frequency indicators significantly improve the real‐time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

20.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in modeling market structure from sales count data. The model is built on the likelihood obtained by decomposing sales count responses according to products' competitiveness and conditioning on their sum, and it extends this to higher levels via the Poisson–multinomial relationship in a hierarchical fashion, represented as a tree structure defining the market. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over‐dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected hierarchically. Rather than working directly with the density of the compound distributions, we propose a data augmentation approach that makes posterior computation more efficient in terms of the generated augmented variables, particularly for producing forecasts and predictive densities. We present an empirical application using weekly product sales time series in a store, comparing the proposed models accommodating over‐dispersion with alternative models that ignore it, using several model selection criteria including in‐sample fit, out‐of‐sample forecasting errors, and an information criterion. The empirical results show that the proposed compound-Poisson-based models handle over‐dispersion well and improve on models that do not account for it. Copyright © 2014 John Wiley & Sons, Ltd.
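The Poisson–multinomial relationship the hierarchy relies on can be checked numerically: if the total count is Poisson(λ) and, given the total, the split across products is multinomial with probabilities p_i, then the product counts are jointly distributed as independent Poisson(λ·p_i) variables. A small sketch (the counts, rate, and share probabilities are illustrative):

```python
import math

def joint_via_hierarchy(counts, lam, probs):
    """P(counts) computed the hierarchical way: Poisson(lam) total
    times a multinomial(probs) split of that total."""
    n = sum(counts)
    pois = math.exp(-lam) * lam ** n / math.factorial(n)
    multinom = math.factorial(n)
    for c, p in zip(counts, probs):
        multinom *= p ** c / math.factorial(c)
    return pois * multinom

def joint_via_independent_poissons(counts, lam, probs):
    """The same probability as a product of independent Poisson(lam*p_i)
    terms, one per product."""
    out = 1.0
    for c, p in zip(counts, probs):
        m = lam * p
        out *= math.exp(-m) * m ** c / math.factorial(c)
    return out
```

The two functions agree for any counts, which is exactly what lets the model move between the tree-structured multinomial shares and product-level Poisson counts.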


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号