Similar Literature
20 similar documents found.
1.
When evaluating the launch of a new product or service, forecasts of the diffusion path and the effects of the marketing mix are critically important. Currently no unified framework exists to provide guidelines on the inclusion and specification of marketing mix variables in models of innovation diffusion. The objective of this research is to examine empirically the role of prices in diffusion models, in order to establish whether price can be incorporated effectively into the simpler time-series models. Unlike existing empirical research, which examines the models' fit to historical data, we examine the predictive validity of alternative models. Only if the incorporation of prices improves the predictive performance of diffusion models can it be argued that these models have validity. A series of diffusion models which include prices are compared against a number of well-accepted diffusion models, including the Bass (1969) model and more recently developed ‘flexible’ diffusion models. For short data series and long lead-time forecasting, the situation typical in practice, price rarely added to the forecasting capability of the simpler time-series models. Copyright © 1998 John Wiley & Sons, Ltd.
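For reference, a minimal sketch of the baseline Bass (1969) model named above, together with one common way of letting price enter the hazard (the exact price-augmented specifications compared in the paper are not given in this abstract, so the price term below is an illustrative assumption of the Generalized-Bass type):

$$\frac{f(t)}{1-F(t)} = p + q\,F(t), \qquad S(t) = m\,f(t),$$

where $F(t)$ is the cumulative fraction of adopters, $p$ and $q$ are the innovation and imitation coefficients, $m$ is the market potential, and $S(t)$ is period sales. A price-augmented variant scales the hazard by a function of the rate of price change,

$$\frac{f(t)}{1-F(t)} = \bigl(p + q\,F(t)\bigr)\Bigl(1 + \beta\,\frac{\dot{P}(t)}{P(t)}\Bigr),$$

with $\beta$ expected to be negative, so that falling prices accelerate adoption.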

2.
The model presented in this paper integrates two distinct components of the demand for durable goods: adoptions and replacements. The adoption of a new product is modeled as an innovation diffusion process, using price and population as exogenous variables. Adopters are expected to eventually replace their old units of the product, with a probability that depends on the age of the owned unit and on other random factors such as overload, style changes, etc. It is shown that the integration of the adoption and replacement demand components in our model yields high-quality sales forecasts, not only under conditions where detailed data on replacement sales are available, but also when the forecaster's access is limited to total sales data and educated guesses about certain elements of the replacement process.
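A minimal sketch of the sales decomposition described above (the paper's exact replacement-probability specification is not reproduced in the abstract, so the age-dependent hazard below is an illustrative assumption):

$$S_t = A_t + R_t, \qquad R_t = \sum_{a \ge 1} n_{t-a}\,\rho(a),$$

where $S_t$ is total sales, $A_t$ is adoptions generated by the price- and population-driven diffusion process, $n_{t-a}$ is the number of units bought in period $t-a$ and still in service, and $\rho(a)$ is the probability that a unit of age $a$ is replaced in the current period.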

3.
Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths as a result of extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual data and for specified (warm and cold) periods during the year in the UK. A careful statistical analysis is implemented and a new stochastic, central mortality rate model is proposed. The new model encompasses the good features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and for the first time includes an exogenous, temperature-related factor. The new model is shown to provide significantly better fitting performance and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.
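For context, the Lee and Carter (1992) model referenced above, and one natural way to append an exogenous temperature-related factor (the paper's exact specification is an assumption here):

$$\ln m_{x,t} = a_x + b_x\,k_t + \varepsilon_{x,t} \quad\Longrightarrow\quad \ln m_{x,t} = a_x + b_x\,k_t + g_x\,T_t + \varepsilon_{x,t},$$

where $m_{x,t}$ is the central mortality rate at age $x$ in year $t$, $k_t$ is the latent period index, $T_t$ is the temperature-related covariate, and $b_x$, $g_x$ are age-specific loadings.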

4.
This paper uses a meta‐analysis to survey existing factor forecast applications for output and inflation and assesses what causes large factor models to perform better or more poorly at forecasting than other models. Our results suggest that factor models tend to outperform small models, whereas factor forecasts are slightly worse than pooled forecasts. Factor models deliver better predictions for US variables than for UK variables, for US output than for euro‐area output and for euro‐area inflation than for US inflation. The size of the dataset from which factors are extracted positively affects the relative factor forecast performance, whereas pre‐selecting the variables included in the dataset did not improve factor forecasts in the past. Finally, the factor estimation technique may matter as well. Copyright © 2008 John Wiley & Sons, Ltd.

5.
We extend the classic Bass diffusion model to address the case in which existing adopters can depress the growth of adoption, i.e., the diffusion process is self-restraining. Two modified Bass models are proposed according to whether the negative, depressive effect is exerted on potential adopters or on existing adopters. We then characterize the diffusion paths for several generalizations. Copyright © 2015 John Wiley & Sons, Ltd.
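In ODE form the classic Bass model reads $dN/dt = (p + qN/m)(m - N)$; one way to sketch a self-restraining variant is to add a depressive word-of-mouth term (the paper's two actual specifications are not detailed in the abstract, so the form below is purely illustrative):

$$\frac{dN(t)}{dt} = \Bigl(p + (q - \delta)\,\frac{N(t)}{m}\Bigr)\bigl(m - N(t)\bigr), \qquad \delta > 0,$$

where, when $\delta > q$, the installed base $N(t)$ depresses rather than accelerates further adoption; applying the depressive effect to potential versus existing adopters yields the two variants mentioned above.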

6.
This paper develops a new diffusion model that incorporates indirect network externalities. A market with indirect network externalities is characterized by two-way interactions between the demands for hardware and software products. Our model incorporates these two-way interactions when forecasting the diffusion of hardware products, based on a simple but realistic assumption. The new model is parsimonious, easy to estimate, and does not require more data points than the Bass diffusion model. The new diffusion model was applied to forecast sales of DVD players in the United States and South Korea, and sales of digital TV sets in Australia. Compared to the Bass and NSRL diffusion models, the new model showed better performance in forecasting long-term sales. Copyright © 2008 John Wiley & Sons, Ltd.
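A stylized way to capture the two-way hardware/software interaction described above is a pair of coupled Bass-type hazards, in which each side's installed base raises the other's adoption intensity (an illustrative sketch; the paper's actual parsimonious specification is not given in the abstract):

$$\frac{dN_H}{dt} = \Bigl(p_H + q_H\frac{N_H}{m_H} + \gamma_H\frac{N_S}{m_S}\Bigr)(m_H - N_H), \qquad \frac{dN_S}{dt} = \Bigl(p_S + q_S\frac{N_S}{m_S} + \gamma_S\frac{N_H}{m_H}\Bigr)(m_S - N_S),$$

where $N_H$ and $N_S$ are the hardware and software installed bases and $\gamma_H$, $\gamma_S$ measure the indirect network effects.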

7.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time-series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements upon standard time-series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
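The standard diffusion-index setup behind such dynamic factor forecasts can be sketched as follows (generic notation rather than the paper's own):

$$X_t = \Lambda F_t + e_t, \qquad \hat y_{t+h} = \hat\alpha + \hat\beta(L)'\hat F_t + \hat\gamma(L)\,y_t,$$

where $X_t$ collects the roughly 80 standardized UK series, $F_t$ is the small vector of latent factors (six here), $\Lambda$ the loadings, and the second equation is the $h$-step forecasting regression using the estimated factors.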

8.
This paper investigates whether and to what extent multiple encompassing tests may help determine weights for forecast averaging in a standard vector autoregressive setting. To this end we consider a new test-based procedure, which assigns non-zero weights to candidate models that add information not covered by other models. The potential benefits of this procedure are explored in extensive Monte Carlo simulations using realistic designs that are adapted to UK and French macroeconomic data, to which trivariate vector autoregressions (VARs) are fitted. The simulations thus rely on potential data-generating mechanisms for macroeconomic data rather than on simple but artificial designs. We run two types of forecast ‘competitions’. In the first, one of the model classes is the trivariate VAR, so that it contains the generating mechanism. In the second, none of the competing models contains the true structure. The simulation results show that the performance of test-based averaging is comparable to uniform weighting of individual models. In one of our role-model economies, test-based averaging achieves advantages in small samples. In larger samples, pure prediction models outperform forecast averages. Copyright © 2010 John Wiley & Sons, Ltd.
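The multiple-encompassing regression underlying such test-based weights can be sketched in a generic form (the paper's exact procedure for mapping test outcomes into weights is not given in the abstract):

$$y_t - f_{1,t} = \sum_{j=2}^{M} \lambda_j\,(f_{j,t} - f_{1,t}) + u_t,$$

where $f_{j,t}$ are the candidate models' forecasts; model 1 encompasses its rivals if all $\lambda_j = 0$, and a rival model receives non-zero weight in the forecast average only when it adds information, i.e. when its $\lambda_j$ differs significantly from zero.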

9.
Accurately forecasting multivariate volatility plays a crucial role in the financial industry. The Cholesky–artificial neural network specification presented here provides a twofold advantage for this task. On the one hand, the use of the Cholesky decomposition ensures positive definite forecasts. On the other hand, artificial neural networks allow us to specify nonlinear relations without any particular distributional assumption. Out-of-sample comparisons reveal that artificial neural networks are not able to strongly outperform the competing models. However, long-memory-detecting networks, such as the nonlinear autoregressive network with exogenous inputs (NARX) and the long short-term memory (LSTM) network, show improved forecast accuracy with respect to existing econometric models.
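The positive-definiteness device described above can be sketched as follows (generic notation; the network architecture details are assumptions, not quoted from the paper):

$$H_t = L_t L_t^{\top}, \qquad \widehat{\operatorname{vech}}(L_{t+1}) = g\bigl(\operatorname{vech}(L_t), \dots, \operatorname{vech}(L_{t-p})\bigr),$$

where $H_t$ is the conditional covariance matrix, $L_t$ its lower-triangular Cholesky factor, and $g(\cdot)$ a neural network (e.g. NARX or LSTM) fitted to the stacked Cholesky elements; the reconstructed forecast $\hat H_{t+1} = \hat L_{t+1}\hat L_{t+1}^{\top}$ is positive semi-definite by construction.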

10.
This paper explores the ability of factor models to predict the dynamics of US and UK interest rate swap spreads within a linear and a non-linear framework. We reject linearity for the US and UK swap spreads in favour of a regime-switching smooth transition vector autoregressive (STVAR) model, where the switching between regimes is controlled by the slope of the US term structure of interest rates. We compare the ability of the STVAR model to predict swap spreads with that of a non-linear nearest-neighbours model as well as that of linear AR and VAR models. We find some evidence that the non-linear models predict better than the linear ones. At short horizons, the nearest-neighbours (NN) model predicts US swap spreads better than the STVAR model in periods of increasing risk, and UK swap spreads better in periods of decreasing risk. At long horizons, the STVAR model increases its forecasting ability over the linear models, whereas the NN model does not outperform the rest of the models. Copyright © 2007 John Wiley & Sons, Ltd.
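In sketch form, the logistic STVAR used here reads (generic notation):

$$y_t = \bigl(1 - G(s_t)\bigr)\,\Phi_1' x_t + G(s_t)\,\Phi_2' x_t + \varepsilon_t, \qquad G(s_t;\gamma,c) = \bigl(1 + e^{-\gamma (s_t - c)}\bigr)^{-1},$$

where $x_t$ stacks lags of the swap-spread system, $s_t$ is the transition variable (the slope of the US term structure), and the logistic function $G$ moves the system smoothly between the two regimes $\Phi_1$ and $\Phi_2$ as $s_t$ passes the threshold $c$.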

11.
In this study we propose several new variables, such as continuous realized semi-variance and signed jump variations incorporating jump tests, and construct a new heterogeneous autoregressive model for realized volatility to investigate the impact these new variables have on forecasting oil price volatility. In-sample results indicate that past negative returns have greater effects on future volatility than positive returns do, and that our new signed jump variations have a significantly negative influence on future volatility. Out-of-sample empirical results, with several robustness checks, demonstrate that our proposed models not only obtain better performance in forecasting volatility but also deliver larger economic value than the existing models discussed in this paper.
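A representative semivariance-HAR specification of the type described (the paper's exact regressors, including its continuous semivariance and jump-test adjustments, extend this basic form):

$$RV_{t+1} = \beta_0 + \beta^{-} RS_t^{-} + \beta^{+} RS_t^{+} + \beta_w \overline{RV}_{t-4,t} + \beta_m \overline{RV}_{t-21,t} + \varepsilon_{t+1},$$

with realized semivariances $RS_t^{-} = \sum_i r_{i,t}^2\,\mathbf{1}\{r_{i,t} < 0\}$ and $RS_t^{+} = \sum_i r_{i,t}^2\,\mathbf{1}\{r_{i,t} > 0\}$ computed from intraday returns $r_{i,t}$, weekly and monthly realized-volatility averages $\overline{RV}$, and the signed jump variation defined as $\Delta J_t = RS_t^{+} - RS_t^{-}$.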

12.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non-linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non-linearity in unemployment series. Only recently have there been developments in applying non-linear models to estimate and forecast unemployment rates. A major concern in non-linear modelling is the model specification problem: it is very hard to test all possible non-linear specifications and to select the most appropriate one for a particular series. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non-linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out-of-sample forecast results obtained by the ANN models with those obtained by several linear and non-linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
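For concreteness, the back-propagation model referred to above is typically a single-hidden-layer feedforward network used as a nonlinear autoregression (an illustrative generic form, not the paper's exact architecture):

$$\hat u_{t+1} = \beta_0 + \sum_{j=1}^{J} \beta_j\,\psi\Bigl(\gamma_{j0} + \sum_{i=1}^{p} \gamma_{ji}\,u_{t+1-i}\Bigr), \qquad \psi(z) = \frac{1}{1 + e^{-z}},$$

where the weights are estimated by back-propagation; the generalized regression neural network instead forms a kernel-weighted average of past observed outcomes, so no iterative weight training is required.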

13.
This paper examines the information available through leading indicators for modelling and forecasting the UK quarterly index of production. Both linear and non‐linear specifications are examined, with the latter being of the Markov‐switching type as used in many recent business cycle applications. The Markov‐switching models perform relatively poorly in forecasting the 1990s production recession, but a three‐indicator linear specification does well. The leading indicator variables in this latter model include a short‐term interest rate, the stock market dividend yield and the optimism balance from the quarterly CBI survey. Copyright © 2001 John Wiley & Sons, Ltd.
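The Markov-switching specifications examined are of the Hamilton type, sketched generically as

$$\Delta y_t = \mu_{S_t} + \sum_{i=1}^{p} \phi_i\bigl(\Delta y_{t-i} - \mu_{S_{t-i}}\bigr) + \varepsilon_t, \qquad \Pr(S_t = j \mid S_{t-1} = i) = p_{ij},$$

where the latent regime $S_t \in \{1,2\}$ separates expansion and recession means; leading indicators can enter either the conditional mean or the transition probabilities (how they enter in this paper's variants is not specified in the abstract).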

14.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality-forecasting models be associated with real-world trends in health-related variables? Does the inclusion of health-related factors in models improve forecasts? Do the resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle-related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable to, or better than, those of benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.
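The first research question can be sketched as asking whether the latent period factors of a stochastic mortality model are spanned by observed covariates, for example via regressions of the form (an illustrative sketch, not the paper's exact Bayesian hierarchical specification):

$$\ln m_{x,t} = a_x + \sum_{r} b_x^{(r)} k_t^{(r)} + \varepsilon_{x,t}, \qquad k_t^{(r)} = \alpha_r + \beta_r' z_t + \eta_t^{(r)},$$

where $k_t^{(r)}$ are the latent mortality factors and $z_t$ collects GDP, health expenditure and lifestyle-related risk factors.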

15.
This paper aims to assess whether Google search data are useful when predicting the US unemployment rate alongside more traditional predictor variables. A weekly Google index is derived from the keyword “unemployment” and is used in diffusion index variants along with the weekly number of initial claims and monthly estimated latent factors. The unemployment rate forecasts are generated using MIDAS regression models that take into account the actual frequencies of the predictor variables. The forecasts are made in real time, and the best forecasting models, for the most part, improve upon the root mean squared forecast error of two benchmarks. However, as the forecasting horizon increases, the forecasting performance of the best diffusion index variants deteriorates, which suggests that the forecasting methods proposed in this paper are most useful in the short term.
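In sketch form, a MIDAS regression of the low-frequency unemployment rate on a higher-frequency predictor such as the weekly Google index reads (generic notation; the exponential Almon weighting below is one common choice, assumed here):

$$u_t = \beta_0 + \beta_1 \sum_{k=0}^{K} w(k;\theta)\, x_{t-k/m}^{(m)} + \varepsilon_t, \qquad w(k;\theta) = \frac{\exp(\theta_1 k + \theta_2 k^2)}{\sum_{l=0}^{K} \exp(\theta_1 l + \theta_2 l^2)},$$

where $x^{(m)}$ is the regressor sampled $m$ times per low-frequency period and the parsimonious weight function $w(k;\theta)$ avoids estimating one coefficient per high-frequency lag.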

16.
We introduce a new methodology for forecasting, which we call signal diffusion mapping. Our approach accommodates features of real-world financial data that existing forecasting methodologies have historically ignored. Our method builds upon well-established and accepted methods from other areas of statistical analysis, which we develop and adapt for use in forecasting. We also present tests of our model on data, demonstrating the efficacy of our approach. Copyright © 2015 John Wiley & Sons, Ltd.

17.
We develop a semi-structural model for forecasting inflation in the UK in which the New Keynesian Phillips curve (NKPC) is augmented with a time series model for marginal cost. By combining structural and time series elements, we hope to reap the benefits of both approaches, namely the relatively better forecasting performance of time series models in the short run and the theory-consistent economic interpretation of the forecast coming from the structural model. In our model we consider the hybrid version of the NKPC and use an open-economy measure of marginal cost. The results suggest that our semi-structural model performs better than a random-walk forecast and most of the competing models (conventional time series models and strictly structural models) only in the short run (one quarter ahead), but is outperformed by some of the competing models at medium and long forecast horizons (four and eight quarters ahead). In addition, the open-economy specification of our semi-structural model delivers more accurate forecasts than its closed-economy alternative at all horizons. Copyright © 2014 John Wiley & Sons, Ltd.
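The hybrid NKPC at the core of the model can be written, in a generic sketch, as

$$\pi_t = \gamma_b\,\pi_{t-1} + \gamma_f\,E_t[\pi_{t+1}] + \lambda\,\widehat{mc}_t + \varepsilon_t,$$

where $\widehat{mc}_t$ is the (open-economy) measure of real marginal cost, which in this semi-structural approach is closed with an auxiliary time-series process, for instance a low-order autoregression (the paper's exact marginal-cost process is not given in the abstract).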

18.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting and nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.

19.
This paper investigates the forecasting ability of four different GARCH models and the Kalman filter method. The four GARCH models applied are the bivariate GARCH, BEKK GARCH, GARCH-GJR and GARCH-X models; these are compared against a non-GARCH alternative, the Kalman filter method. Forecast errors from daily time-varying beta forecasts for 20 UK companies' stock returns are employed to evaluate the out-of-sample forecasting ability of both the GARCH models and the Kalman method. The forecast error measures overwhelmingly support the Kalman filter approach. Among the GARCH models, the GJR model appears to provide somewhat more accurate forecasts than the other bivariate GARCH models. Copyright © 2008 John Wiley & Sons, Ltd.
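For reference, the GJR conditional variance and the time-varying beta being forecast can be sketched as (generic notation):

$$h_t = \omega + \bigl(\alpha + \gamma\,\mathbf{1}\{\varepsilon_{t-1} < 0\}\bigr)\varepsilon_{t-1}^2 + \beta\,h_{t-1}, \qquad \hat\beta_{i,t} = \frac{\widehat{\operatorname{Cov}}_t(r_{i,t}, r_{m,t})}{\widehat{\operatorname{Var}}_t(r_{m,t})},$$

where the asymmetry coefficient $\gamma$ lets negative shocks raise variance more than positive ones, and the conditional moments from each bivariate model (or the Kalman filter's state estimate) deliver the time-varying beta $\hat\beta_{i,t}$ of stock $i$ on the market $m$.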

20.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework, with the purpose of fitting and forecasting the term structure of interest rates in the Japanese government bond market. The Nelson–Siegel-type models in a state-space framework considerably outperform simple time series benchmark forecasts such as an AR(1) and a random walk. The yields-macro model, which incorporates macroeconomic factors, leads to a better in-sample fit of the term structure than the yields-only model. The out-of-sample predictability of the former for short-horizon forecasts is superior to that of the latter for all maturities examined in this study, and at longer horizons the former remains comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common problem of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
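The dynamic Nelson–Siegel yield curve underlying the state-space models reads, in sketch form,

$$y_t(\tau) = L_t + S_t\,\frac{1 - e^{-\lambda\tau}}{\lambda\tau} + C_t\Bigl(\frac{1 - e^{-\lambda\tau}}{\lambda\tau} - e^{-\lambda\tau}\Bigr) + \varepsilon_t(\tau),$$

where $y_t(\tau)$ is the yield at maturity $\tau$ and the level, slope and curvature factors $(L_t, S_t, C_t)$ follow a VAR transition equation; the yields-macro model augments this state vector with macroeconomic and stock market variables (the exact augmentation is not detailed in the abstract).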
