Similar Documents
20 similar documents found.
1.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in modeling market structure from sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson-multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over-dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density function of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series from a store to compare the proposed over-dispersed models with alternative models that ignore over-dispersion, using several model selection criteria: in-sample fit, out-of-sample forecasting errors and an information criterion. The empirical results show that the proposed compound Poisson-based over-dispersed models work well and provide improved results compared with models that do not account for over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
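A minimal sketch (not the authors' code) of the two compounding steps the abstract describes: a gamma compound Poisson draw for a category total and a Dirichlet compound multinomial split of that total across competing products at one node of the market tree. All parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_compound_poisson(mean, overdispersion, size):
    """Negative-binomial-type counts: a Poisson rate mixed over a gamma."""
    rates = rng.gamma(shape=1.0 / overdispersion,
                      scale=mean * overdispersion, size=size)
    return rng.poisson(rates)

def dirichlet_compound_multinomial(total, alpha):
    """Split a total count across products with extra-multinomial noise."""
    shares = rng.dirichlet(alpha)
    return rng.multinomial(total, shares)

# Weekly category totals, then per-product splits at one tree node.
totals = gamma_compound_poisson(mean=200.0, overdispersion=0.3, size=52)
alpha = np.array([4.0, 2.0, 1.0])     # Dirichlet pseudo-counts, 3 products
sales = np.array([dirichlet_compound_multinomial(t, alpha) for t in totals])
print(sales[:5])                      # 5 weeks of over-dispersed product counts
```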

2.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of change. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247-262) with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We finally apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.
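A hedged sketch of the core weighting step: each candidate model is scored by its predictive likelihood over a hold-out sample, and the normalized scores become model-averaging weights. The Gaussian predictive densities and toy numbers are assumptions, not the paper's exact setup.

```python
import numpy as np
from scipy.stats import norm

def predictive_likelihood_weights(y_holdout, model_means, model_sds):
    """Weight each model by its predictive density over the hold-out sample."""
    log_pl = np.array([norm.logpdf(y_holdout, loc=m, scale=s).sum()
                       for m, s in zip(model_means, model_sds)])
    w = np.exp(log_pl - log_pl.max())      # stabilise before normalising
    return w / w.sum()

y_holdout = np.array([0.2, -0.1, 0.4, 0.0])        # hold-out observations
means = [np.full(4, 0.1), np.full(4, 0.3)]         # two candidate models
sds = [np.full(4, 0.5), np.full(4, 0.5)]
w = predictive_likelihood_weights(y_holdout, means, sds)
next_step = np.array([0.15, 0.35])                 # each model's forecast
print(w, w @ next_step)                            # weights and pooled forecast
```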

3.
For predicting forward default probabilities of firms, the discrete‐time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm‐specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete‐time varying‐coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out‐of‐sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
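For concreteness, a minimal sketch of the general form of a discrete-time hazard model fitted by maximum likelihood, assuming (our simplification, not the paper's exact specification) a logit link from firm-period covariates to the default hazard.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(beta, X, d):
    """X: firm-period covariates; d: 1 if the firm defaults in that period."""
    h = 1.0 / (1.0 + np.exp(-(X @ beta)))          # discrete-time hazard
    eps = 1e-12                                    # guard against log(0)
    return -np.sum(d * np.log(h + eps) + (1 - d) * np.log(1 - h + eps))

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
d = rng.binomial(1, 0.05, size=500)                # rare default events
fit = minimize(neg_log_likelihood, x0=np.zeros(3), args=(X, d), method="BFGS")
print(fit.x)                                       # estimated hazard coefficients
```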

4.
We present a methodology for estimation, prediction, and model assessment of vector autoregressive moving-average (VARMA) models in the Bayesian framework using Markov chain Monte Carlo algorithms. The sampling-based Bayesian framework for inference allows for the incorporation of parameter restrictions, such as stationarity restrictions or zero constraints, through appropriate prior specifications. It also facilitates extensive posterior and predictive analyses through the use of numerical summary statistics and graphical displays, such as box plots and density plots for estimated parameters. We present a method for computationally feasible evaluation of the joint posterior density of the model parameters using the exact likelihood function, and discuss the use of backcasting to approximate the exact likelihood function in certain cases. We also show how to incorporate indicator variables as additional parameters for use in coefficient selection. The sampling is facilitated through a Metropolis–Hastings algorithm. Graphical techniques based on predictive distributions are used for informal model assessment. The methods are illustrated using two data sets from business and economics. The first example consists of quarterly fixed investment, disposable income, and consumption rates for West Germany, which are known to have correlation and feedback relationships between series. The second example consists of monthly revenue data from seven different geographic areas of IBM. The revenue data exhibit seasonality, strong inter-regional dependence, and feedback relationships between certain regions. © 1997 John Wiley & Sons, Ltd.
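A toy random-walk Metropolis sketch in the spirit of the sampling framework described above: draws for a VAR(1) coefficient matrix with a stationarity restriction imposed by rejecting explosive proposals. Everything here, including the identity error covariance, is an illustrative assumption rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

def log_likelihood(A, Y):
    resid = Y[1:] - Y[:-1] @ A.T       # VAR(1) with identity error covariance
    return -0.5 * np.sum(resid ** 2)

def metropolis_var1(Y, n_draws=2000, step=0.02):
    k = Y.shape[1]
    A = np.zeros((k, k))
    ll = log_likelihood(A, Y)
    draws = []
    for _ in range(n_draws):
        prop = A + step * rng.normal(size=(k, k))
        if np.max(np.abs(np.linalg.eigvals(prop))) < 1.0:  # stationary only
            ll_prop = log_likelihood(prop, Y)
            if np.log(rng.uniform()) < ll_prop - ll:
                A, ll = prop, ll_prop
        draws.append(A.copy())
    return np.array(draws)

A_true = np.array([[0.5, 0.1], [0.0, 0.3]])
Y = np.zeros((200, 2))
for t in range(1, 200):                # simulate a stationary bivariate VAR(1)
    Y[t] = A_true @ Y[t - 1] + rng.normal(size=2)
print(metropolis_var1(Y).mean(axis=0))             # posterior mean of A
```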

5.
A new forecasting method based on the concept of the profile predictive likelihood function is proposed for discrete‐valued processes. In particular, generalized autoregressive moving average (GARMA) models for Poisson distributed data are explored in detail. Highest density regions are used to construct forecasting regions. The proposed forecast estimates and regions are coherent. Large‐sample results are derived for the forecasting distribution. Numerical studies using simulations and two real data sets are used to establish the performance of the proposed forecasting method. Robustness of the proposed method to possible misspecifications in the model is also studied.
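A small sketch of a highest-density forecast region for a Poisson predictive distribution, the coherent integer-valued region construction the abstract refers to; the forecast mean is a placeholder rather than a fitted GARMA output.

```python
import numpy as np
from scipy.stats import poisson

def poisson_hdr(mu, coverage=0.95):
    """Smallest set of integer support points whose mass reaches `coverage`."""
    support = np.arange(int(mu + 10 * np.sqrt(mu) + 10) + 1)
    pmf = poisson.pmf(support, mu)
    order = np.argsort(pmf)[::-1]                  # most probable values first
    cum = np.cumsum(pmf[order])
    n_pts = np.searchsorted(cum, coverage) + 1
    return np.sort(support[order[:n_pts]])

print(poisson_hdr(mu=4.2))    # e.g. the integers around the predictive mode
```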

6.
Bankruptcy prediction methods based on a semiparametric logit model are proposed for simple random (prospective) and case–control (choice‐based; retrospective) data. The unknown parameters and prediction probabilities in the model are estimated by the local likelihood approach, and the resulting estimators are analyzed through their asymptotic biases and variances. The semiparametric bankruptcy prediction methods using these two types of data are shown to be essentially equivalent. Thus our proposed prediction model can be directly applied to data sampled from the two important designs. One real data example and simulations confirm that our prediction method is more powerful than alternatives, in the sense of yielding smaller out‐of‐sample error rates. Copyright © 2007 John Wiley & Sons, Ltd.
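An illustrative local-likelihood fragment: a logistic regression refit at a target covariate value with Gaussian kernel weights, which is the generic idea behind local likelihood estimation (the bandwidth, data, and use of sklearn are our assumptions).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
x = rng.normal(size=400)
p = 1 / (1 + np.exp(-(0.5 * x + np.sin(x))))       # nonlinear true link
y = rng.binomial(1, p)                             # 1 = bankrupt, 0 = healthy

def local_logit_prob(x0, bandwidth=0.5):
    """Refit a weighted logit around x0; predict the local probability."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel weights
    model = LogisticRegression().fit(x.reshape(-1, 1), y, sample_weight=w)
    return model.predict_proba([[x0]])[0, 1]

print(local_logit_prob(0.0), local_logit_prob(1.5))
```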

7.
This paper examines the forecasting performance of disaggregated data with spatial dependency, applying it to forecasting electricity demand in Japan. We compare the performance of the spatial autoregressive ARMA (SAR-ARMA) model with that of the vector autoregressive (VAR) model from a Bayesian perspective. With regard to the log marginal likelihood and log predictive density, the VAR(1) model performed better than the SAR-ARMA(1,1) model. In the case of electricity demand in Japan, we can conclude that the VAR model with contemporaneous aggregation had better forecasting performance than the SAR-ARMA model. Copyright © 2011 John Wiley & Sons, Ltd.
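A hedged sketch of the frequentist counterpart of the comparison above: fitting a VAR(1) to a panel of regional demand series with statsmodels and producing forecasts; the Bayesian marginal-likelihood comparison itself is beyond this snippet, and the data are simulated placeholders.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
demand = rng.normal(size=(120, 3))       # 3 regions, 120 periods (placeholder)
fit = VAR(demand).fit(1)                 # VAR(1), the better model in the paper
print(fit.aic)                                       # in-sample criterion
print(fit.forecast(demand[-1:], steps=2))            # two-step-ahead forecast
```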

8.
A transformation which allows Cholesky decomposition to be used to evaluate the exact likelihood function of an ARIMA model with missing data has recently been suggested. This method is extended to allow calculation of finite sample predictions of future observations. The output from the exact likelihood evaluation may also be used to estimate missing series values. Copyright © 1999 John Wiley & Sons, Ltd.
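A generic sketch of the Cholesky device the abstract relies on: the exact Gaussian log-likelihood of the observed entries of a series is evaluated by decomposing the covariance submatrix for the non-missing positions. The MA(1)-style covariance used here is an illustrative assumption.

```python
import numpy as np

def gaussian_loglik_observed(y, cov):
    """Exact log-likelihood of the observed (non-NaN) part of y."""
    obs = ~np.isnan(y)
    y_o, S = y[obs], cov[np.ix_(obs, obs)]
    L = np.linalg.cholesky(S)                        # S = L L'
    z = np.linalg.solve(L, y_o)                      # whitened residuals
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return -0.5 * (y_o.size * np.log(2 * np.pi) + logdet + z @ z)

n, theta = 50, 0.4                                   # MA(1)-style covariance
cov = (1 + theta**2) * np.eye(n) + theta * (np.eye(n, k=1) + np.eye(n, k=-1))
y = np.random.default_rng(5).multivariate_normal(np.zeros(n), cov)
y[[7, 23]] = np.nan                                  # two missing observations
print(gaussian_loglik_observed(y, cov))
```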

9.
We study the effect of parameter and model uncertainty on the left‐tail of predictive densities and in particular on VaR forecasts. To this end, we evaluate the predictive performance of several GARCH‐type models estimated via Bayesian and maximum likelihood techniques. In addition to individual models, several combination methods are considered, such as Bayesian model averaging and (censored) optimal pooling for linear, log or beta linear pools. Daily returns for a set of stock market indexes are predicted over about 13 years from the early 2000s. We find that Bayesian predictive densities improve the VaR backtest at the 1% risk level for single models and for linear and log pools. We also find that the robust VaR backtest exhibited by linear and log pools is better than the backtest of single models at the 5% risk level. Finally, the equally weighted linear pool of Bayesian predictives tends to be the best VaR forecaster in a set of 42 forecasting techniques.
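A minimal sketch of the equally weighted linear pool the abstract singles out: predictive draws from the individual models are mixed with equal weight and the 1% VaR is read off the pooled distribution. The two Gaussian "models" stand in for the GARCH-type predictives of the paper.

```python
import numpy as np

rng = np.random.default_rng(6)
draws_m1 = rng.normal(0.0, 1.0, size=10_000)     # predictive draws, model 1
draws_m2 = rng.normal(0.0, 1.8, size=10_000)     # predictive draws, model 2
pool = np.concatenate([draws_m1, draws_m2])      # equal-weight linear pool
var_1pct = -np.quantile(pool, 0.01)              # 1% VaR, reported as a loss
print(var_1pct)
```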

10.
This paper shows how monthly data and forecasts can be used in a systematic way to improve the predictive accuracy of a quarterly macroeconometric model. The problem is formulated as a model pooling procedure (equivalent to non-recursive Kalman filtering) where a baseline quarterly model forecast is modified through ‘add-factors’ or ‘constant adjustments’. The procedure ‘automatically’ constructs these adjustments in a covariance-minimizing fashion to reflect the revised expectation of the quarterly model's forecast errors, conditional on the monthly information set. Results obtained using Federal Reserve Board models indicate the potential for significant reduction in forecast error variance through application of these procedures.
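A stylised sketch of the covariance-minimizing "add-factor": the quarterly model's expected forecast error conditional on monthly information, computed as a linear projection. All covariance matrices and values are assumed placeholders.

```python
import numpy as np

def add_factor(monthly_news, cov_qm, cov_mm):
    """E[quarterly error | monthly info] = Cov(q, m) Var(m)^{-1} m."""
    return cov_qm @ np.linalg.solve(cov_mm, monthly_news)

cov_qm = np.array([[0.6, 0.3]])        # cov(quarterly error, monthly surprises)
cov_mm = np.array([[1.0, 0.2],
                   [0.2, 1.0]])        # variance of the monthly surprises
m = np.array([0.8, -0.4])              # observed monthly surprises
baseline = 2.1                         # baseline quarterly model forecast
print(baseline + add_factor(m, cov_qm, cov_mm)[0])   # adjusted forecast
```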

11.
The most up-to-date annual average daily traffic (AADT) is always required for transport model development and calibration. However, current-year AADT data are not always available. Short-term traffic flow forecasting models can be used to predict traffic flows for the current year. In this paper, two non-parametric models, non-parametric regression (NPR) and Gaussian maximum likelihood (GML), are chosen for short-term traffic forecasting based on historical data collected for the annual traffic census (ATC) in Hong Kong. These models are adopted because they are more flexible and efficient in forecasting the daily vehicular flows in the Hong Kong ATC core stations (87 stations in total). The daily vehicular flows predicted by these models are then used to calculate the AADT of the current year, 1999. The overall prediction and comparison results show that the NPR model produces better forecasts than the GML model using the ATC data in Hong Kong. Copyright © 2006 John Wiley & Sons, Ltd.
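A small Nadaraya–Watson kernel regression sketch, one generic form of the NPR forecaster above: the predicted flow is a kernel-weighted average of historical flows observed under similar conditions. The bandwidth, features, and data are illustrative assumptions.

```python
import numpy as np

def npr_forecast(x_query, X_hist, y_hist, bandwidth=1.0):
    """Kernel-weighted average of flows observed under similar conditions."""
    d2 = np.sum((X_hist - x_query) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)           # Gaussian kernel weights
    return (w @ y_hist) / w.sum()

rng = np.random.default_rng(7)
X_hist = rng.uniform(0, 10, size=(500, 2))           # e.g. (day type, lag flow)
y_hist = 1000 + 50 * X_hist[:, 1] + rng.normal(0, 20, 500)
print(npr_forecast(np.array([3.0, 6.0]), X_hist, y_hist))
```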

12.
Primary delays are the driving force behind delay propagation, and predicting the number of affected trains (NAT) and the total time of affected trains (TTAT) due to a primary delay (PD) can provide reliable decision support for real-time train dispatching. In this paper, based on real operation data from 2015 to 2016 at several stations along the Wuhan–Guangzhou high-speed railway, the factors influencing NAT and TTAT were determined after analyzing the PD propagation mechanism. The eXtreme Gradient Boosting (XGBoost) algorithm was used to establish a NAT predictive model, and several machine learning methods were compared. The importance of different delay-influencing factors was investigated. Then, a TTAT predictive model using support vector regression (SVR) was established on top of the NAT predictive model. Results indicated that the XGBoost algorithm performed well for the NAT predictive model, and SVR was the optimal model for TTAT prediction under the verification index (i.e., the proportion of predictions deviating from the actual value by less than 1/2/3/4/5 min). Real operational data from 2018 were used to test the applicability of the NAT and TTAT models over time, and the findings suggest that these XGBoost- and SVR-based models remain applicable over time.
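A hedged two-stage pipeline mirroring the setup described above: an XGBoost regressor predicts NAT, and its prediction feeds an SVR model for TTAT. It requires the third-party xgboost package, and the features and data are simulated stand-ins for the Wuhan–Guangzhou operation records.

```python
import numpy as np
from xgboost import XGBRegressor       # third-party: pip install xgboost
from sklearn.svm import SVR

rng = np.random.default_rng(8)
X = rng.normal(size=(1000, 5))         # e.g. PD length, station, headway, ...
nat = np.maximum(0, 2 + X[:, 0] + rng.normal(size=1000)).round()
ttat = 4 * nat + rng.normal(size=1000)

nat_model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, nat)
X_ttat = np.column_stack([X, nat_model.predict(X)])  # NAT feeds the TTAT stage
ttat_model = SVR(kernel="rbf", C=10.0).fit(X_ttat, ttat)
print(ttat_model.predict(X_ttat[:3]))
```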

13.
This paper investigates the trade-off between timeliness and quality in nowcasting practices. This trade-off arises when the variable to be nowcast, such as gross domestic product (GDP), is quarterly, while the underlying panel data are monthly and contain both survey and macroeconomic data. These two categories of data have different properties regarding timeliness and quality: survey data are available promptly (but may possess less predictive power), while macroeconomic data possess more predictive power (but are not available promptly because of publication lags). In our empirical analysis, we use a modified dynamic factor model which incorporates three refinements of the standard dynamic factor model of Stock and Watson (Journal of Business and Economic Statistics, 2002, 20, 147–162), namely mixed frequencies, preselection and cointegration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on economic circumstances: survey data are more useful in tranquil times, and less so in times of turmoil.
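A stripped-down sketch in the spirit of the factor approach above: extract one principal-component factor from a monthly panel, average it to quarterly frequency, and bridge-regress quarterly GDP growth on it. The mixed-frequency machinery, preselection, and cointegration refinements are deliberately omitted, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(9)
panel = rng.normal(size=(120, 20))                 # 120 months, 20 indicators
panel -= panel.mean(axis=0)
_, _, vt = np.linalg.svd(panel, full_matrices=False)
factor = panel @ vt[0]                             # first PC as common factor
factor_q = factor.reshape(40, 3).mean(axis=1)      # monthly -> quarterly
gdp = 0.5 * factor_q + rng.normal(0, 0.2, 40)      # toy quarterly target
beta = np.polyfit(factor_q[:-1], gdp[:-1], 1)      # bridge regression
print(np.polyval(beta, factor_q[-1]))              # nowcast of latest quarter
```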

14.
This paper describes procedures for forecasting countries' output growth rates and the median of a set of output growth rates using hierarchical Bayesian (HB) models. The purpose of this paper is to show how the γ-shrinkage forecast of Zellner and Hong (1989) emerges from a hierarchical Bayesian model and to describe how the Gibbs sampler can be used to fit this model to yield possibly improved output growth rate and median output growth rate forecasts. The procedures described in this paper offer two primary methodological contributions to previous work on this topic: (1) the weights associated with widely used shrinkage forecasts are determined endogenously, and (2) the posterior predictive density of the future median output growth rate is obtained numerically, from which optimal point and interval forecasts are calculated. Using IMF data, we find that the HB median output growth rate forecasts outperform forecasts obtained from a variety of benchmark models. Copyright © 2001 John Wiley & Sons, Ltd.
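A compact sketch of the γ-shrinkage forecast that the hierarchical Bayes model generalizes: each country's forecast is pulled toward the cross-country mean. Here the shrinkage weight is fixed by hand, whereas the paper determines it endogenously via the Gibbs sampler.

```python
import numpy as np

def gamma_shrinkage(forecasts, gamma=0.6):
    """gamma weights the individual forecast; 1-gamma the pooled mean."""
    return gamma * forecasts + (1 - gamma) * forecasts.mean()

f = np.array([2.5, 1.0, 3.8, -0.5])    # per-country growth forecasts
print(gamma_shrinkage(f))               # forecasts pulled toward the mean
```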

15.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We approach the usual caveats of probabilistic forecast studies—small samples, limited models, and nonholistic validations—by performing a comprehensive comparison of 15 predictive schemes during a time period of over 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk-neutral densities outperform historical-based predictions in terms of information content. We find that the variance gamma model generates the highest out-of-sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH-based GJR-FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, or the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.
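A short sketch of the nonparametric Breeden–Litzenberger step mentioned above: the risk-neutral density is the discounted second derivative of call prices with respect to strike, approximated by finite differences. The Black–Scholes call prices below are synthetic inputs.

```python
import numpy as np
from scipy.stats import norm

S, r, T, sigma = 100.0, 0.01, 0.25, 0.2              # synthetic market inputs
K = np.linspace(60, 140, 161)                        # strike grid
d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
d2 = d1 - sigma * np.sqrt(T)
call = S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

dK = K[1] - K[0]
density = np.exp(r * T) * np.diff(call, 2) / dK**2   # f(K) = e^{rT} C''(K)
print(density.sum() * dK)                            # integrates to ~1
```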

16.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log‐likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy‐tailed densities such as the t or the mixture of normals. For this class of non‐Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.
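A hedged sketch of the Gaussian core of the approach above: the Airline model, ARIMA(0,1,1)(0,1,1)12, cast in state space form (via statsmodels) so that the Kalman filter delivers the exact log-likelihood. The outlier and break components with heavy-tailed densities are not reproduced here, and the series is simulated.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(10)
t = np.arange(144)                     # 12 years of monthly data (simulated)
y = 10 + 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, 144)

model = SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)
print(res.llf)                         # exact log-likelihood via Kalman filter
print(res.forecast(steps=12))          # one seasonal cycle ahead
```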

17.
Wind power production data at temporal resolutions of a few minutes exhibit successive periods with fluctuations of various dynamic nature and magnitude, which cannot be explained (so far) by the evolution of some explanatory variable. Our proposal is to capture this regime‐switching behaviour with an approach relying on Markov‐switching autoregressive (MSAR) models. An appropriate parameterization of the model coefficients is introduced, along with an adaptive estimation method allowing accommodation of long‐term variations in the process characteristics. The objective criterion to be recursively optimized is based on penalized maximum likelihood, with exponential forgetting of past observations. MSAR models are then employed for one‐step‐ahead point forecasting of 10 min resolution time series of wind power at two large offshore wind farms. They are favourably compared against persistence and autoregressive models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill. Copyright © 2010 John Wiley & Sons, Ltd.
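A basic Markov-switching autoregression sketch using statsmodels, the model family the abstract builds on; the paper's adaptive estimation with exponential forgetting is not reproduced, and the regime-switching series is simulated.

```python
import numpy as np
from statsmodels.tsa.regime_switching.markov_autoregression import (
    MarkovAutoregression)

rng = np.random.default_rng(11)
n = 600
y = np.zeros(n)
for t in range(1, n):                  # AR(1) with alternating noise regimes
    sd = 0.1 if (t // 150) % 2 == 0 else 0.5
    y[t] = 0.7 * y[t - 1] + rng.normal(0, sd)

model = MarkovAutoregression(y, k_regimes=2, order=1, switching_variance=True)
res = model.fit()
print(res.params)                      # per-regime AR and variance estimates
```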

18.
The power transformation of Box and Cox (1964) has been shown to be quite useful in short-term forecasting for the linear regression model with AR(1) dependence structure (see, for example, Lee and Lu, 1987, 1989). It is crucial to have good estimates of the power transformation and serial correlation parameters, because they form the basis for estimating other parameters and predicting future observations. The prediction of future observations is the main focus of this paper. We propose to estimate these two parameters by minimizing the mean squared prediction errors. These estimates and the corresponding predictions compare favourably, via real and simulated data, with those obtained by the maximum likelihood method. Similar results are also demonstrated in the repeated measurements setting.
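A rough sketch of the estimation idea above: choose the Box–Cox power λ and AR(1) coefficient ρ on a grid by minimizing one-step-ahead mean squared prediction error on the original scale. The mean-reverting one-step predictor and the grids are our simplifying assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(12)
y = np.exp(np.cumsum(rng.normal(0, 0.05, 200)) + 3)  # positive-valued series

best = (np.inf, None, None)
for lam in np.linspace(0.0, 1.0, 11):
    z = boxcox(y, lam)                               # power-transformed series
    for rho in np.linspace(0.1, 0.95, 18):
        pred_z = z.mean() + rho * (z[:-1] - z.mean())  # mean-reverting AR(1)
        pred_y = inv_boxcox(pred_z, lam)             # back to original scale
        mspe = np.mean((y[1:] - pred_y) ** 2)
        if mspe < best[0]:
            best = (mspe, lam, rho)
print(best)                                          # (MSPE, lambda, rho)
```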

19.
The vector multiplicative error model (vector MEM) is capable of analyzing and forecasting multidimensional non-negative valued processes. Usually its parameters are estimated by generalized method of moments (GMM) and maximum likelihood (ML) methods. However, these estimates can be heavily affected by outliers. To overcome this problem, this paper proposes an alternative approach, the weighted empirical likelihood (WEL) method. This method uses moment conditions as constraints, and outliers are detected automatically by performing k-means clustering on Oja depth values of the innovations. The performance of WEL is evaluated against that of the GMM and ML methods through extensive simulations in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10-minute returns of the S&P 500 index. The results from both the simulations and the S&P 500 volatility forecasts favour the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.
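An illustrative fragment of the outlier-flagging step: the paper clusters Oja depth values of innovations with k-means; to keep the sketch self-contained we substitute a simpler depth proxy (negative distance from the coordinatewise median) and downweight the low-depth cluster.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(13)
innov = rng.normal(size=(300, 2))
innov[::50] += 8.0                                   # planted additive outliers

# Depth proxy: negative distance from the coordinatewise median (the paper
# uses Oja depth instead).
depth = -np.linalg.norm(innov - np.median(innov, axis=0), axis=1)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(depth.reshape(-1, 1))
outlier_cluster = labels[np.argmin(depth)]           # cluster of lowest depth
weights = np.where(labels == outlier_cluster, 0.1, 1.0)
print(int((weights < 1).sum()), "innovations downweighted")
```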

20.
We propose a wavelet neural network (neuro-wavelet) model for the short-term forecast of stock returns from high-frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non-stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non-decimated wavelet-based multi-resolution decomposition of an exogenous signal showed consistently superior forecasting performance. Reasonable forecasting accuracy for the one-, three- and five-step-ahead horizons was achieved by the proposed model. The procedure used to build the neuro-wavelet model is reusable and can be applied to any high-frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.
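A hedged sketch of the input pipeline described above: a non-decimated (stationary) wavelet decomposition via the third-party PyWavelets package, whose coefficients feed a small neural network. sklearn's feedforward MLP stands in for the Jordan recurrent topology the paper found best, and the data are simulated placeholders.

```python
import numpy as np
import pywt                            # third-party: pip install PyWavelets
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(14)
n = 512                                # power of 2, as pywt.swt requires
returns = rng.normal(0, 0.01, n)       # placeholder high-frequency returns

coeffs = pywt.swt(returns, "db4", level=3)           # [(cA, cD)] per level
features = np.column_stack([c for pair in coeffs for c in pair])

X, y = features[:-1], returns[1:]                    # one-step-ahead target
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print(net.predict(features[-1:]))                    # next-step return forecast
```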
