Similar Literature
20 similar documents found.
1.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in modeling market structure from sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it is extended to higher levels of aggregation through the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To handle over-dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density function of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series from a store, comparing the proposed over-dispersed models with alternatives that ignore over-dispersion by several model selection criteria, including in-sample fit, out-of-sample forecasting errors and information criteria. The empirical results show that the proposed compound Poisson-based models work well and provide improved results compared with models that do not account for over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
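A minimal sketch of the two compound distributions the abstract connects, using numpy: a gamma-mixed Poisson draw for an over-dispersed category-level count, and a Dirichlet-compound multinomial split into product-level counts. All hyperparameter values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gamma-compound Poisson: the Poisson rate itself is gamma-distributed,
# which yields over-dispersed (negative binomial) category-level counts.
shape, scale = 2.0, 50.0           # illustrative gamma hyperparameters
lam = rng.gamma(shape, scale)      # latent weekly sales rate
category_sales = rng.poisson(lam)  # total sales count for the category

# Dirichlet-compound multinomial: product shares within the category.
alpha = np.array([4.0, 2.0, 1.0])  # illustrative concentration per product
p = rng.dirichlet(alpha)           # latent share vector
product_sales = rng.multinomial(category_sales, p)

print(category_sales, product_sales)  # product counts sum to the category total
```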

2.
In this article, we propose a regression model for sparse, high-dimensional data from aggregated store-level sales. The modeling procedure combines two sub-models, a topic model and hierarchical factor regressions, applied in sequence to accommodate high dimensionality and sparseness and to facilitate managerial interpretation. First, the topic model is applied to the aggregated data to decompose the daily aggregated sales volume of a product into sub-sales for several topics, allocating each unit sale ("word" in text analysis) in a day ("document") to a topic based on joint-purchase information. This stage reduces dimensionality within topics because the topic distribution is nonuniform and product sales are mostly allocated to a small number of topics. Next, a market response regression model for each topic is estimated from information about the items in that topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis of the original high-dimensional sample spaces, further reduces dimensionality within topics. Feature selection is then performed on the basis of the credible intervals of the parameters' posterior densities. Empirical results show that (i) our model yields managerial implications from topic-wise market responses according to the particular context, and (ii) it outperforms conventional category regressions in both in-sample and out-of-sample forecasts.
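As an illustration of the first stage, the sketch below runs scikit-learn's LatentDirichletAllocation on a synthetic day-by-product count matrix and allocates one day's sales of one product across topics by topic responsibility. This is a generic LDA stand-in for the paper's topic model; the data and the allocation rule are assumptions for illustration.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(1)
# Rows = days ("documents"), columns = products ("words"),
# entries = unit sales counts; synthetic stand-in for store data.
X = rng.poisson(1.5, size=(200, 50))

lda = LatentDirichletAllocation(n_components=5, random_state=1)
theta = lda.fit_transform(X)   # day-level topic proportions
phi = lda.components_          # topic-level product weights (unnormalized)

# Split day d's sales of product j across topics in proportion
# to the topic responsibilities theta[d, k] * phi[k, j].
d, j = 0, 3
w = theta[d] * phi[:, j]
alloc = X[d, j] * w / w.sum()  # topic-wise sub-sales, summing to X[d, j]
print(alloc)
```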

3.
We extract information on relative shopping interest from Google search volume and provide a genuine and economically meaningful approach to directly incorporate these data into a portfolio optimization technique. By generating a firm ranking based on a Google search volume metric, we can predict future sales and thus generate excess returns in a portfolio exercise. The higher the (shopping) search volume for a firm, the higher we rank the company in the optimization process. For a sample of firms in the fashion industry, our results demonstrate that shopping interest exhibits predictive content that can be exploited in a real-time portfolio strategy yielding robust alphas around 5.5%.
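A toy version of the ranking idea, under the assumption (ours, not necessarily the authors') that portfolio weights are made proportional to each firm's within-month search-volume rank; the firm names and volumes are hypothetical.

```python
import numpy as np
import pandas as pd

# Hypothetical monthly Google shopping-search volumes per firm.
svi = pd.DataFrame(
    {"firmA": [80, 95, 120], "firmB": [60, 55, 50], "firmC": [40, 70, 90]},
    index=pd.period_range("2020-01", periods=3, freq="M"),
)

# Rank firms by search interest each month; higher interest -> higher weight.
ranks = svi.rank(axis=1)                        # 1 = least searched
weights = ranks.div(ranks.sum(axis=1), axis=0)  # long-only, rows sum to 1

print(weights.round(2))
```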

4.
In this paper, we use Google Trends data for exchange rate forecasting in the context of a broad literature review that ties exchange rate movements to macroeconomic fundamentals. The sample covers 11 OECD countries' exchange rates for the period from January 2004 to June 2014. In out-of-sample forecasting of monthly returns on exchange rates, our findings indicate that Google Trends search query data do a better job than the structural models in predicting the true direction of changes in nominal exchange rates. We also observe that Google Trends-based forecasts are better at picking up the direction of changes in monthly nominal exchange rates after the Great Recession era (2008–2009). Based on the Clark and West inference procedure for testing equal predictive accuracy, we find that the relative performance of Google Trends-based exchange rate predictions against the null of a random walk model is no worse than that of the purchasing power parity model. Moreover, although the monetary model fundamentals beat the random walk null for only one of the 11 currency pairs, with Google Trends predictors we find evidence of better performance for five currency pairs. We believe these findings call for further research into the extra value one can extract from Google search query data.
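The Clark and West (2007) adjusted-MSPE statistic referred to here can be computed as below; a minimal sketch in which the random walk null forecasts zero returns and the alternative forecast series is synthetic.

```python
import numpy as np
from scipy import stats

def clark_west(actual, f_null, f_alt):
    """Clark-West (2007) adjusted MSPE test for nested models.
    f_null: forecasts from the parsimonious model (e.g., random walk);
    f_alt:  forecasts from the larger model (e.g., with Google Trends)."""
    e_null = actual - f_null
    e_alt = actual - f_alt
    # Adjust the MSPE difference for the noise the larger model adds.
    f_adj = e_null**2 - e_alt**2 + (f_null - f_alt)**2
    tstat = np.sqrt(len(f_adj)) * f_adj.mean() / f_adj.std(ddof=1)
    return tstat, 1 - stats.norm.cdf(tstat)   # one-sided p-value

rng = np.random.default_rng(2)
y = rng.normal(size=120)                      # stand-in for monthly FX returns
cw_t, pval = clark_west(y, np.zeros(120), 0.1 * rng.normal(size=120))
print(cw_t, pval)
```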

5.
This paper examines the predictive relationship of consumption-related and news-related Google Trends data to changes in private consumption in the USA. The results suggest that (1) Google Trends-augmented models provide additional information about consumption over and above survey-based consumer sentiment indicators, (2) consumption-related Google Trends data provide information about pre-consumption research trends, (3) news-related Google Trends data provide information about changes in durable goods consumption, and (4) the combination of news-related and consumption-related data significantly improves forecasting models. We demonstrate that applying these insights improves forecasts of private consumption growth over forecasts that do not use Google Trends data, and over forecasts that use Google Trends data but do not take into account the specific ways in which they inform forecasts.

6.
This paper proposes an approach that models and forecasts sales through a flexible parametric (multifunctional) response function, allowing differentiated behavioural assumptions to be specified for the response determinants, and uses neural network modelling as a re-specification tool for the response model in order to improve forecasting performance. An initial experiment on a sample of sales data demonstrates feasibility and gives comparative insights via alternative model specifications. Copyright © 2005 John Wiley & Sons, Ltd.
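A sketch of the re-specification idea under our own simplifying assumptions: fit a standard multiplicative (log-linear) sales response model, then use a small neural network on the same inputs as a benchmark whose out-of-sample advantage, if any, flags a misspecified functional form. The data-generating process and network size are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(11)
price = rng.uniform(1, 5, 300)
adv = rng.uniform(0, 10, 300)
sales = (200 * price**-1.2 * (1 + 0.3 * np.log1p(adv))
         * np.exp(0.1 * rng.normal(size=300)))   # synthetic response surface

# Parametric response model: multiplicative, log-linearized by OLS.
X = np.column_stack([np.ones(300), np.log(price), np.log1p(adv)])
beta, *_ = np.linalg.lstsq(X[:250], np.log(sales[:250]), rcond=None)
mae_par = np.mean(np.abs(sales[250:] - np.exp(X[250:] @ beta)))

# Neural network as a re-specification check: a clear out-of-sample
# advantage would signal that the assumed functional form needs revision.
Z = np.column_stack([price, adv])
nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=11)
nn.fit(Z[:250], sales[:250])
mae_nn = np.mean(np.abs(sales[250:] - nn.predict(Z[250:])))
print(mae_par, mae_nn)
```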

7.
Internet search data could be a useful source of information for policymakers when formulating decisions based on their understanding of the current economic environment. This paper builds on earlier literature via a structured value assessment of the data provided by Google Trends. This is done through two empirical exercises related to the forecasting of changes in UK unemployment. Firstly, economic intuition provides the basis for search term selection, with a resulting Google indicator tested alongside survey-based variables in a traditional forecasting environment. Secondly, this environment is expanded into a pseudo-time nowcasting framework which provides the backdrop for assessing the timing advantage that Google data have over surveys. The framework is underpinned by a MIDAS regression which allows, for the first time, the easy incorporation of Internet search data at its true sampling rate into a nowcast model for predicting unemployment. Copyright © 2016 John Wiley & Sons, Ltd.
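A MIDAS regression of the kind described, with a weekly indicator entering a monthly equation through exponential Almon lag weights, can be sketched as follows; the weight scheme and synthetic data are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy.optimize import minimize

def almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial, normalized to sum to one."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j**2)
    return w / w.sum()

def midas_sse(params, y, X_hf):
    """y: monthly target; X_hf: (months x weekly lags) search indicator."""
    b0, b1, t1, t2 = params
    yhat = b0 + b1 * X_hf @ almon_weights(t1, t2, X_hf.shape[1])
    return np.sum((y - yhat) ** 2)

rng = np.random.default_rng(3)
X_hf = rng.normal(size=(100, 4))    # 4 weekly observations per month
y = 0.5 + X_hf @ almon_weights(0.2, -0.1, 4) + 0.1 * rng.normal(size=100)

# Estimate intercept, slope and the two Almon shape parameters jointly.
res = minimize(midas_sse, x0=[0, 1, 0, 0], args=(y, X_hf), method="Nelder-Mead")
print(res.x)
```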

8.
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models which explicitly incorporate a component of sales due to replacement assume there is an age distribution for replacements of existing units which remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will result in changes in the mean replacement age of units. This paper develops a model for such time-varying replacement behaviour and empirically tests it in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time-varying model significantly outperformed previous models both in terms of fitting and forecasting the sales data. Copyright © 2001 John Wiley & Sons, Ltd.
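The core mechanism, replacement demand as past sales cohorts weighted by a replacement-age distribution whose mean can drift over time, might be sketched as below; the normal age distribution and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def replacement_sales(past_sales, mean_age, sd_age=2.0):
    """Expected replacements today: past cohorts weighted by a
    discretized replacement-age distribution whose mean is
    allowed to drift over time."""
    ages = np.arange(1, len(past_sales) + 1)
    w = stats.norm.pdf(ages, loc=mean_age, scale=sd_age)
    w /= w.sum()
    # past_sales[0] is sales one period ago, past_sales[-1] the oldest cohort.
    return float(w @ past_sales)

past = np.linspace(130, 70, 30)              # recent cohorts larger than old ones
print(replacement_sales(past, mean_age=10))  # younger replacement age
print(replacement_sales(past, mean_age=14))  # ageing fleet -> fewer replacements now
```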

9.
Because of their natural dependence on the climate and pronounced seasonal cycles, prices of field crops constitute an interesting field for exploring seasonal time series models. We consider quarterly prices of two major cereals, barley and wheat. Using traditional in-sample fit and moving-window techniques, we investigate whether seasonality is deterministic or unit-root stochastic and whether seasonal cycles have converged over time. We find that seasonal cycles in the data are mainly deterministic and that the evidence on common cycles across countries differs between the two commodities. Out-of-sample prediction experiments, however, yield a ranking with respect to accuracy that does not match the statistical in-sample evidence. Parametric bootstrap experiments establish that the observed mismatch is indeed an inherent and systematic feature. Copyright © 2008 John Wiley & Sons, Ltd.
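A minimal check of the deterministic-seasonality hypothesis: regress a synthetic quarterly price series on seasonal dummies and test their joint significance. This is a simple complement to, not a substitute for, formal seasonal unit-root tests.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
T = 120                                   # 30 years of quarterly prices
quarter = np.tile(np.arange(4), T // 4)
price = 100 + np.array([4.0, -2.0, -3.0, 1.0])[quarter] + rng.normal(0, 2, T)

# Deterministic seasonality: quarterly dummies (Q1 as the base quarter)
# should be jointly significant if seasonal cycles are fixed patterns.
D = np.eye(4)[quarter]
ols = sm.OLS(price, sm.add_constant(D[:, 1:])).fit()
print(ols.f_pvalue)   # joint F-test that all dummy coefficients are zero
```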

10.
In this study we introduce a new indicator for private consumption based on search query time series provided by Google Trends. The indicator is based on factors extracted from consumption-related search categories of the Google Trends application Insights for Search. The forecasting performance of the new indicator is assessed relative to the two most common survey-based indicators: the University of Michigan Consumer Sentiment Index and the Conference Board Consumer Confidence Index. The results show that in almost all conducted in-sample and out-of-sample forecasting experiments the Google indicator outperforms the survey-based indicators. This suggests that incorporating information from Google Trends may offer significant benefits to forecasters of private consumption. Copyright © 2011 John Wiley & Sons, Ltd.

11.
Most non-linear techniques give good in-sample fits to exchange rate data but are usually outperformed by random walks or random walks with drift when used for out-of-sample forecasting. In the case of regime-switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide some analytical results for a simple switching model, the segmented trend model. Only a small misclassification rate, when forecasting which regime the world will be in, is needed to lose any advantage from knowing the correct model specification. To illustrate this we discuss some results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations to the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that the mean squared error is not an appropriate way to evaluate forecast performance for non-linear models. Copyright © 1999 John Wiley & Sons, Ltd.
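A small Monte Carlo illustrating the mechanism: even knowing the true two-regime drifts, a modest regime-misclassification probability is enough for the switching forecast to lose to a no-change forecast in mean squared error. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
mu = {0: 0.5, 1: -0.5}          # segmented-trend drifts in the two regimes
sigma, q, T = 1.0, 0.3, 100_000  # q = prob. of misclassifying the regime

s = rng.integers(0, 2, size=T)              # true regime each period
eps = rng.normal(0, sigma, size=T)
dy = np.where(s == 0, mu[0], mu[1]) + eps   # one-step change in the rate

flip = rng.random(T) < q
s_hat = np.where(flip, 1 - s, s)            # forecaster's noisy regime call
f_switch = np.where(s_hat == 0, mu[0], mu[1])

mse_switch = np.mean((dy - f_switch) ** 2)  # true model, noisy classification
mse_rw = np.mean(dy**2)                     # random walk: forecast no change
print(mse_switch, mse_rw)   # with q = 0.3 the switching forecast already loses
```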

12.
Predicting bank failures is important as it enables bank regulators to take timely actions to prevent bank failures or reduce the cost of rescuing banks. This paper compares the logit model and data mining models in the prediction of bank failures in the USA between 2002 and 2010 using levels and rates of change of 16 financial ratios based on a cross-section sample. The models are estimated for the in-sample period 2002–2009, while data for the year 2010 are used for out-of-sample tests. The results suggest that the logit model predicts bank failures in-sample less precisely than data mining models, but produces fewer missed failures and false alarms out-of-sample.
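A sketch of such a comparison on synthetic ratio data, with scikit-learn's logistic regression standing in for the logit model and a random forest standing in for the data mining models; the paper's actual feature set and models may differ.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
# Synthetic stand-ins: 16 financial ratios plus their rates of change.
X = rng.normal(size=(2000, 32))
y = (X[:, 0] - X[:, 1] + 0.5 * rng.normal(size=2000) > 1.5).astype(int)

X_in, X_out, y_in, y_out = X[:1600], X[1600:], y[:1600], y[1600:]

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(random_state=5)):
    model.fit(X_in, y_in)
    tn, fp, fn, tp = confusion_matrix(y_out, model.predict(X_out)).ravel()
    # fn = missed failures, fp = false alarms -- the abstract's criteria.
    print(type(model).__name__, "missed:", fn, "false alarms:", fp)
```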

13.
This paper investigates whether the forecasting performance of Bayesian autoregressive and vector autoregressive models can be improved by incorporating prior beliefs on the steady state of the time series in the system. Traditional methodology is compared to the new framework—in which a mean-adjusted form of the models is employed—by estimating the models on Swedish inflation and interest rate data from 1980 to 2004. Results show that the out-of-sample forecasting ability of the models is practically unchanged for inflation but significantly improved for the interest rate when informative prior distributions on the steady state are provided. The findings in this paper imply that this new methodology could be useful since it allows us to sharpen our forecasts in the presence of potential pitfalls such as near unit root processes and structural breaks, in particular when relying on small samples. Copyright © 2008 John Wiley & Sons, Ltd.

14.
In this paper we compare the in-sample fit and out-of-sample forecasting performance of no-arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re-estimation and out-of-sample 1-, 6- and 12-month-ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in-sample fit, while the best out-of-sample performance is generated by three-factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.
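For reference, the Nelson–Siegel yield loadings at the maturities listed in the abstract can be computed as follows, using the Diebold–Li convention for the decay parameter; the factor values shown are illustrative.

```python
import numpy as np

def nelson_siegel_loadings(tau, lam=0.0609):
    """Loadings of the three Nelson-Siegel factors (level, slope,
    curvature) at maturities tau in months; lam = 0.0609 is the
    Diebold-Li choice for monthly maturities."""
    x = lam * tau
    slope = (1 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

tau = np.array([1, 6, 12, 24, 60, 120], dtype=float)  # maturities in the abstract
L = nelson_siegel_loadings(tau)

# Given factor estimates beta = (level, slope, curvature), fitted yields are:
beta = np.array([5.0, -1.0, 0.5])   # illustrative values, in percent
yields = L @ beta
print(yields)
```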

15.
Google Trends data are increasingly employed in statistical investigations. However, care should be taken in handling this tool, especially when it is applied for quantitative prediction purposes. Being by design dependent on Internet users, estimators based on Google Trends data embody many sources of uncertainty and instability. These relate, for example, to technical factors (e.g., cross-regional disparities in computer literacy, time dependency of Internet users), psychological factors (e.g., emotionally driven spikes and other forms of data perturbation), and linguistic factors (e.g., noise generated by ambiguous, double-meaning words). Despite the stimulating literature available today on how to use Google Trends data as a forecasting tool, to the best of the author's knowledge no articles specifically devoted to the prediction of these data have been published to date. In this paper, a novel forecasting method is presented, based on a wavelet-type denoiser employed in conjunction with a forecasting model of the SARIMA (seasonal autoregressive integrated moving average) class. The wavelet filter is iteratively calibrated according to a bounded search algorithm until the minimum of a suitable loss function is reached. Finally, empirical evidence is presented to support the validity of the proposed method.
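The pipeline might look roughly like the sketch below: wavelet decomposition, soft thresholding of the detail coefficients with a universal threshold, reconstruction, and a SARIMA fit on the denoised series. This uses a fixed threshold rather than the paper's iterative calibration, and all settings are assumptions.

```python
import numpy as np
import pywt
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
t = np.arange(256)
x = 10 + np.sin(2 * np.pi * t / 52) + 0.8 * rng.normal(size=256)  # weekly-style series

# Wavelet denoising: decompose, soft-threshold detail coefficients, reconstruct.
coeffs = pywt.wavedec(x, "db4", level=3)
thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(len(x)))
coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
x_dn = pywt.waverec(coeffs, "db4")[: len(x)]

# Fit a SARIMA model to the denoised series and forecast ahead.
fit = SARIMAX(x_dn, order=(1, 0, 1), seasonal_order=(1, 0, 0, 52)).fit(disp=False)
print(fit.forecast(steps=8))
```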

16.
This is a case study of a closely managed product. Its purpose is to determine whether time-series methods can be appropriate for business planning. By appropriate, we mean two things: whether these methods can model and estimate the special events or features that are often present in sales data; and whether they can forecast accurately enough one, two and four quarters ahead to be useful for business planning. We use two time-series methods, Box-Jenkins modeling and Holt-Winters adaptive forecasting, to obtain forecasts of shipments of a closely managed product. We show how Box-Jenkins transfer-function models can account for the special events in the data. We develop criteria for choosing a final model which differ from the usual methods and are specifically directed towards maximizing the accuracy of next-quarter, next-half-year and next-full-year forecasts. We find that the best Box-Jenkins models give forecasts which are clearly better than those obtained from Holt-Winters forecast functions, and are also better than the judgmental forecasts of IBM's own planners. In conclusion, we judge that Box-Jenkins models can be appropriate for business planning, in particular for determining at the end of the year baseline business-as-usual annual and monthly forecasts for the next year, and in mid-year for resetting the remaining monthly forecasts.
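A modern analogue of the comparison, Holt-Winters versus a seasonal ARIMA fit on synthetic quarterly shipments, is sketched below with statsmodels; it omits the transfer-function intervention terms for special events that the study emphasizes.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(7)
t = np.arange(48)
ship = 100 + 2 * t + 15 * np.sin(2 * np.pi * t / 4) + 5 * rng.normal(size=48)

train, test = ship[:40], ship[40:]

hw = ExponentialSmoothing(train, trend="add", seasonal="add",
                          seasonal_periods=4).fit()
bj = ARIMA(train, order=(1, 1, 1), seasonal_order=(0, 1, 1, 4)).fit()

# Compare next-eight-quarter forecast accuracy out of sample.
for name, f in [("Holt-Winters", hw.forecast(8)), ("seasonal ARIMA", bj.forecast(8))]:
    print(name, "MAE:", np.mean(np.abs(test - f)))
```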

17.
Transfer function or distributed lag models are commonly used in forecasting. The stability of a constant-coefficient transfer function model, however, may become an issue for many economic variables due in part to the recent advance in technology and improvement in efficiency in data collection and processing. In this paper, we propose a simple functional-coefficient transfer function model that can accommodate the changing environment. A likelihood ratio statistic is used to test the stability of a traditional transfer function model. We investigate the performance of the test statistic in the finite sample case via simulation. Using some well-known examples, we demonstrate clearly that the proposed functional-coefficient model can substantially improve the accuracy of out-of-sample forecasts. In particular, our simple modification results in a 25% reduction in the mean squared errors of out-of-sample one-step-ahead forecasts for the gas-furnace data of Box and Jenkins. Copyright © 2003 John Wiley & Sons, Ltd.

18.
Multifractal models have recently been introduced as a new type of data-generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate the new model via the generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out-of-sample performance is compared against other popular time series specifications. Using an intra-day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts from its earlier counterparts based on daily returns and from many other volatility models. While the more traditional RV-ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts across all combinations of forecast horizons and evaluation criteria), the new model often performs significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
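Best linear forecasts via the Levinson-Durbin recursion can be obtained as sketched below; here the autocovariances are estimated from a synthetic long-memory series rather than implied by the multifractal model's GMM estimates, so this illustrates only the forecasting step.

```python
import numpy as np
from statsmodels.tsa.stattools import acovf, levinson_durbin

rng = np.random.default_rng(8)
# Stand-in for a realized-volatility series with slowly decaying memory.
rv = np.exp(np.convolve(rng.normal(size=600), np.ones(20) / 20, mode="same"))

# Best linear one-step forecast: Levinson-Durbin on the autocovariances
# yields the AR coefficients of the optimal linear predictor.
gamma = acovf(rv, nlag=30, fft=True)
_, arcoefs, _, _, _ = levinson_durbin(gamma, nlags=30, isacov=True)

x = rv - rv.mean()
forecast = rv.mean() + arcoefs @ x[-1:-31:-1]   # lags 1..30, most recent first
print(forecast)
```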

19.
We develop Hawkes models in which events are triggered through self-excitation as well as cross-excitation. We examine whether incorporating cross-excitation improves the forecasts of extremes in asset returns compared to only self-excitation. The models are applied to US stocks, bonds and dollar exchange rates. We predict the probability of crashes in the series and the value at risk (VaR) over a period that includes the financial crisis of 2008 using a moving window. A Lagrange multiplier test suggests the presence of cross-excitation for these series. Out-of-sample, we find that the models that include spillover effects forecast crashes and the VaR significantly more accurately than the models without these effects. Copyright © 2016 John Wiley & Sons, Ltd.
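The intensity of a bivariate Hawkes process with exponential kernels, where the off-diagonal entries of the excitation matrix carry the cross-excitation (spillover) effects, can be evaluated as in this sketch; event times and parameters are illustrative.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Intensity of each component i of a bivariate Hawkes model with
    exponential kernels:
      lambda_i(t) = mu_i + sum_j sum_{t_k < t} alpha[i,j] * exp(-beta[i,j] * (t - t_k)).
    events: list of two arrays of event times (e.g., extreme-return days
    in two asset series); off-diagonal alpha captures cross-excitation."""
    lam = np.array(mu, dtype=float)
    for j, times in enumerate(events):
        past = times[times < t]
        for i in range(2):
            lam[i] += np.sum(alpha[i, j] * np.exp(-beta[i, j] * (t - past)))
    return lam

events = [np.array([1.0, 3.5, 7.2]), np.array([2.1, 7.0])]
mu = [0.1, 0.1]
alpha = np.array([[0.4, 0.2], [0.3, 0.5]])   # off-diagonal terms = spillovers
beta = np.ones((2, 2)) * 1.5
print(hawkes_intensity(8.0, events, mu, alpha, beta))
```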

20.
This paper investigates whether and to what extent multiple encompassing tests may help determine weights for forecast averaging in a standard vector autoregressive setting. To this end we consider a new test-based procedure, which assigns non-zero weights to candidate models that add information not covered by other models. The potential benefits of this procedure are explored in extensive Monte Carlo simulations using realistic designs adapted to UK and French macroeconomic data, to which trivariate vector autoregressions (VARs) are fitted. The simulations thus rely on plausible data-generating mechanisms for macroeconomic data rather than on simple but artificial designs. We run two types of forecast 'competition'. In the first, one of the model classes is the trivariate VAR, so that it contains the generating mechanism. In the second, none of the competing models contains the true structure. The simulation results show that the performance of test-based averaging is comparable to uniform weighting of the individual models. In one of the two economies used as design templates, test-based averaging achieves advantages in small samples. In larger samples, pure prediction models outperform forecast averages. Copyright © 2010 John Wiley & Sons, Ltd.
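A single pairwise forecast-encompassing regression, the building block that multiple encompassing tests generalize, might be sketched as follows; the setup and data are illustrative, not the paper's procedure.

```python
import numpy as np
import statsmodels.api as sm

def encompassing_test(actual, f1, f2):
    """Does forecast f1 encompass f2? Regress f1's error on the
    forecast difference; a significant slope means f2 adds
    information, so it should receive non-zero weight."""
    e1 = actual - f1
    ols = sm.OLS(e1, sm.add_constant(f2 - f1)).fit()
    return ols.params[1], ols.pvalues[1]

rng = np.random.default_rng(9)
y = rng.normal(size=200)
f1 = y + rng.normal(scale=1.0, size=200)   # two competing VAR-style forecasts
f2 = y + rng.normal(scale=1.0, size=200)
lam, p = encompassing_test(y, f1, f2)
print(lam, p)   # rejecting lambda = 0 argues for averaging the two forecasts
```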
