Similar Documents
20 similar documents found (search time: 31 ms)
1.
Interaction of capital market participants is a complicated dynamic process. A stochastic model is proposed to describe the dynamics to predict short‐term stock price behaviors. Independent compound Poisson processes are introduced to describe the occurrences of market orders, limit orders and cancellations of limit orders, respectively. Based on high‐frequency observations of the limit order book, the maximum empirical likelihood estimator (MELE) is applied to estimate the parameters of the compound Poisson processes. Moreover, an analytical formula is derived to compute the probability distribution of the first‐passage time of a compound Poisson process. Based on this formula, the conditional probability of a price increase and the conditional distribution of the duration until the first change in mid‐price are obtained. A novel approach of short‐term stock price prediction is proposed and this methodology works reasonably well in the data analysis. Copyright © 2016 John Wiley & Sons, Ltd.
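The paper's analytical first‐passage formula is not reproduced in the abstract, but the quantity it computes can be sketched by Monte Carlo: simulate a compound Poisson process of order arrivals and count how often the cumulative jump size crosses a barrier before the horizon. The intensity, jump distribution and barrier below are illustrative assumptions, not the paper's estimates.

```python
import random

def first_passage_prob(lam, jump_dist, barrier, horizon, n_paths=20000, seed=0):
    # Monte Carlo estimate of P(compound Poisson path reaches `barrier`
    # within `horizon`); the paper derives this probability analytically.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        t, x = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)   # exponential inter-arrival times
            if t > horizon:
                break
            x += jump_dist(rng)         # jump size at each order arrival
            if x >= barrier:
                hits += 1
                break
    return hits / n_paths

# probability that cumulative order flow crosses 3 ticks within one time unit
p = first_passage_prob(lam=2.0, jump_dist=lambda r: r.choice([1, 2]),
                       barrier=3, horizon=1.0)
```

Because jumps are positive, first passage before the horizon is equivalent to the cumulative sum exceeding the barrier by the horizon, which keeps the simulation loop simple.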

2.
In this article, we propose a regression model for sparse high‐dimensional data from aggregated store‐level sales data. The modeling procedure includes two sub‐models of topic model and hierarchical factor regressions. These are applied in sequence to accommodate high dimensionality and sparseness and facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub‐sales for several topics by allocating each unit sale (“word” in text analysis) in a day (“document”) into a topic based on joint‐purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into smaller numbers of topics. Next, the market response regression model for the topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for original high‐dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic‐wise market responses according to the particular context, and (ii) it performs better than do conventional category regressions in both in‐sample and out‐of‐sample forecasts.

3.
For predicting forward default probabilities of firms, the discrete‐time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm‐specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete‐time varying‐coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out‐of‐sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
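The mechanics of a discrete‐time hazard forecast can be sketched as follows: a logit link maps firm covariates to a per‐period hazard, and the forward default probability for period t is the hazard at t times the probability of surviving all earlier periods. The covariates and coefficients below are hypothetical, and this sketch omits the paper's varying‐coefficient extension.

```python
from math import exp

def hazard(x, beta):
    # Discrete-time hazard via a logit link on firm covariates (sketch).
    z = sum(b * xi for b, xi in zip(beta, x))
    return 1 / (1 + exp(-z))

def forward_default_probs(cov_path, beta):
    # P(default in period t) = h_t * prod_{s<t} (1 - h_s).
    surv, probs = 1.0, []
    for x in cov_path:
        h = hazard(x, beta)
        probs.append(surv * h)
        surv *= 1 - h
    return probs

# three forecast periods; first covariate is an intercept (illustrative values)
probs = forward_default_probs([[1, 0.2], [1, 0.4], [1, 0.9]], beta=[-3.0, 1.0])
```

The probabilities are mutually exclusive by construction, so together with the terminal survival probability they sum to one.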

4.
A new forecasting method based on the concept of the profile predictive likelihood function is proposed for discrete‐valued processes. In particular, generalized autoregressive moving average (GARMA) models for Poisson distributed data are explored in detail. Highest density regions are used to construct forecasting regions. The proposed forecast estimates and regions are coherent. Large‐sample results are derived for the forecasting distribution. Numerical studies using simulations and two real data sets are used to establish the performance of the proposed forecasting method. Robustness of the proposed method to possible misspecifications in the model is also studied.
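A highest density forecasting region for count data is the smallest set of integer outcomes whose predictive probabilities reach a target coverage, which keeps the region coherent with the discrete support. A minimal sketch for a Poisson predictive distribution (the predictive mean here is an illustrative value, not taken from the paper):

```python
from math import exp, factorial

def poisson_pmf(k, mu):
    return exp(-mu) * mu ** k / factorial(k)

def highest_density_region(mu, coverage=0.9, kmax=100):
    # Greedily add counts in decreasing order of probability until the
    # accumulated mass reaches the target coverage: the smallest such set.
    probs = sorted(((poisson_pmf(k, mu), k) for k in range(kmax)), reverse=True)
    total, region = 0.0, []
    for p, k in probs:
        region.append(k)
        total += p
        if total >= coverage:
            break
    return sorted(region)

region = highest_density_region(mu=3.2, coverage=0.9)
```

Unlike a central interval, this construction never includes a low‐probability count just to keep the region contiguous, although for a unimodal distribution like the Poisson it does come out as a run of consecutive integers.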

5.
Most economic variables are released with a lag, making it difficult for policy‐makers to make an accurate assessment of current conditions. This paper explores whether observing Internet browsing habits can inform practitioners about aggregate consumer behavior in an emerging market. Using data on Google search queries, we introduce an index of online interest in automobile purchases in Chile and test whether it improves the fit and efficiency of nowcasting models for automobile sales. Despite relatively low rates of Internet usage among the population, we find that models incorporating our Google Trends Automotive Index outperform benchmark specifications in both in‐sample and out‐of‐sample nowcasts, provide substantial gains in information delivery times, and are better at identifying turning points in the sales data. Copyright © 2011 John Wiley & Sons, Ltd.

6.
This paper presents a simple empirical approach to modeling and forecasting market option prices using localized option regressions (LOR). LOR projects market option prices over localized regions of their state space and is robust to assumptions regarding the underlying asset dynamics (e.g. log‐normality) and volatility structure. Our empirical study using 3 years of daily S&P500 options shows that LOR yields smaller out‐of‐sample pricing errors (e.g. 32% 1‐day‐out) relative to an efficient benchmark from the literature and produces option prices free of the volatility smile. In addition to being an efficient and robust option‐modeling and valuation tool for large option books, LOR provides a simple‐to‐implement empirical benchmark for evaluating more complex risk‐neutral models. Copyright © 2007 John Wiley & Sons, Ltd.

7.
Forecasting category or industry sales is a vital component of a company's planning and control activities. Sales for most mature durable product categories are dominated by replacement purchases. Previous sales models which explicitly incorporate a component of sales due to replacement assume there is an age distribution for replacements of existing units which remains constant over time. However, there is evidence that changes in factors such as product reliability/durability, price, repair costs, scrapping values, styling and economic conditions will result in changes in the mean replacement age of units. This paper develops a model for such time‐varying replacement behaviour and empirically tests it in the Australian automotive industry. Both longitudinal census data and the empirical analysis of the replacement sales model confirm that there has been a substantial increase in the average aggregate replacement age for motor vehicles over the past 20 years. Further, much of this variation could be explained by real price increases and a linear temporal trend. Consequently, the time‐varying model significantly outperformed previous models both in terms of fitting and forecasting the sales data. Copyright © 2001 John Wiley & Sons, Ltd.

8.
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio‐economic sectors. In 2004, Freeland and McCabe introduced a Gaussian‐type forecasting approach based on a stationary correlation model, which appears to work well for stationary time series of low counts. In practice, however, it may happen that the time series of counts will be non‐stationary and also the series may contain over‐dispersed counts. To develop the forecasting functions for this type of non‐stationary over‐dispersed data, the paper provides an extension of the stationary correlation models for Poisson counts to the non‐stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas the existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.

9.
Tests of forecast encompassing are used to evaluate one‐step‐ahead forecasts of S&P Composite index returns and volatility. It is found that forecasts over the 1990s made from models that include macroeconomic variables tend to be encompassed by those made from a benchmark model which does not include macroeconomic variables. However, macroeconomic variables are found to add significant information to forecasts of returns and volatility over the 1970s. Often in empirical research on forecasting stock index returns and volatility, in‐sample information criteria are used to rank potential forecasting models. Here, none of the forecasting models for the 1970s that include macroeconomic variables are, on the basis of information criteria, preferred to the relevant benchmark specification. Thus, had investors used information criteria to choose between the models used for forecasting over the 1970s considered in this paper, the predictability that tests of encompassing reveal would not have been exploited. Copyright © 2005 John Wiley & Sons, Ltd.
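The core of a forecast encompassing regression can be sketched in a few lines: regress the benchmark's forecast error on the difference between the competing and benchmark forecasts, and test whether the slope is zero. A slope near zero says the competing forecast adds nothing; a slope near one says it dominates. The sketch below computes only the OLS slope, omitting the standard errors a formal test would require.

```python
def encompassing_slope(y, f1, f2):
    # Regress y - f1 on f2 - f1 without an intercept. A slope of zero
    # suggests forecast f1 encompasses f2; a slope of one the reverse.
    num = sum((yi - a) * (b - a) for yi, a, b in zip(y, f1, f2))
    den = sum((b - a) ** 2 for a, b in zip(f1, f2))
    return num / den

# toy check: when f2 equals the outcome exactly, the slope is one
slope = encompassing_slope(y=[1.0, 2.0, 3.0], f1=[0.0, 0.0, 0.0], f2=[1.0, 2.0, 3.0])
```

In practice the slope's significance is judged with heteroskedasticity‐robust standard errors, since forecast errors are rarely homoskedastic.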

10.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rate on the Japanese government bond market. The Nelson–Siegel type models in state‐space framework considerably outperform the benchmark simple time series forecast models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former is still comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common phenomenon of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
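The Nelson–Siegel curve that underlies this framework maps three latent factors (level, slope, curvature) to a yield at any maturity. A minimal sketch using the Diebold–Li parameterization, with illustrative factor values and the commonly used decay parameter (both assumptions, not the paper's estimates):

```python
from math import exp

def nelson_siegel(tau, beta0, beta1, beta2, lam=0.0609):
    # Yield at maturity tau (months): beta0 is the level, beta1 loads on the
    # slope term, beta2 on the curvature term (Diebold-Li parameterization).
    x = lam * tau
    slope = (1 - exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - exp(-x))

# illustrative factors: 5% level, inverted slope of -2%, mild curvature
y_2yr = nelson_siegel(tau=24, beta0=0.05, beta1=-0.02, beta2=0.01)
```

As maturity grows the yield converges to the level factor, and as maturity shrinks it approaches level plus slope, which is why the two loadings are read as long‐ and short‐end factors.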

11.
In this paper we propose and test a forecasting model on monthly and daily spot prices of five selected exchange rates. In doing so, we combine a novel smoothing technique (initially applied in signal processing) with a variable selection methodology and two regression estimation methodologies from the field of machine learning (ML). After the decomposition of the original exchange rate series using an ensemble empirical mode decomposition (EEMD) method into a smoothed and a fluctuation component, multivariate adaptive regression splines (MARS) are used to select the most appropriate variable set from a large set of explanatory variables that we collected. The selected variables are then fed into two distinct support vector regression (SVR) models that produce one‐period‐ahead forecasts for the two components. Neural networks (NN) are also considered as an alternative to SVR. The sum of the two forecast components is the final forecast of the proposed scheme. We show that the above implementation exhibits a superior in‐sample and out‐of‐sample forecasting ability when compared to alternative forecasting models. The empirical results provide evidence against the efficient market hypothesis for the selected foreign exchange markets. Copyright © 2015 John Wiley & Sons, Ltd.
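The structure of the scheme, decompose the series, forecast each component separately, and sum the component forecasts, can be sketched with stand‐ins: a moving average replaces EEMD, and a naive last‐value rule replaces the SVR/NN component forecasters. Everything here is illustrative scaffolding, not the paper's method.

```python
def decompose(series, window=3):
    # Split a series into a smoothed component (centered moving average)
    # and the residual fluctuation; a stand-in for the EEMD step.
    half = window // 2
    smooth = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        smooth.append(sum(series[lo:hi]) / (hi - lo))
    fluct = [x - s for x, s in zip(series, smooth)]
    return smooth, fluct

def naive_forecast(component):
    # Stand-in for the component-wise SVR/NN forecasters.
    return component[-1]

series = [1.10, 1.12, 1.11, 1.15, 1.14, 1.18]   # illustrative spot rates
smooth, fluct = decompose(series)
forecast = naive_forecast(smooth) + naive_forecast(fluct)
```

By construction the two components sum back to the original series at every point, so any bias the scheme introduces comes from the component forecasters, not the decomposition.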

12.
This paper considers forecasting count data from a multinomial Dirichlet distribution. The forecasting procedure implements hierarchical Bayes methods in order to develop a prior distribution for a new series of data. The methodology is applied to the redemption of cents-off promotional coupons. In a forecasting experiment, early forecasts of new series are similar to those from pooling all redemptions from previous coupon promotions. However, the hierarchical Bayes model provides realistic estimates of forecasting errors, while those for the pooled forecasts are consistently optimistic. As the current series evolves, the hierarchical Bayes forecasts adapt more rapidly and are more accurate than pooled forecasts.

13.
Is there a common model inherent in macroeconomic data? Macroeconomic theory suggests that market economies of various nations should share many similar dynamic patterns; as a result, individual country empirical models, for a wide variety of countries, often include the same variables. Yet, empirical studies often find important roles for idiosyncratic shocks in the differing macroeconomic performance of countries. We use forecasting criteria to examine the macrodynamic behaviour of 15 OECD countries in terms of a small set of familiar, widely used core economic variables, omitting country‐specific shocks. We find this small set of variables and a simple VAR ‘common model’ strongly support the hypothesis that many industrialized nations have similar macroeconomic dynamics. Copyright © 2005 John Wiley & Sons, Ltd.

14.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

15.
The TFT‐LCD (thin‐film transistor–liquid crystal display) industry is one of the key global industries with products that have high clock speed. In this research, the LCD monitor market is considered for an empirical study on hierarchical forecasting (HF). The proposed HF methodology consists of five steps. First, the three hierarchical levels of the LCD monitor market are identified. Second, several exogenously driven factors that significantly affect the demand for LCD monitors are identified at each level of product hierarchy. Third, the three forecasting techniques—regression analysis, transfer function, and simultaneous equations model—are combined to forecast future demand at each hierarchical level. Fourth, various forecasting approaches and disaggregating proportion methods are adopted to obtain consistent demand forecasts at each hierarchical level. Finally, the forecast errors with different forecasting approaches are assessed in order to determine the best forecasting level and the best forecasting approach. The findings show that the best forecast results can be obtained by using the middle‐out forecasting approach. These results could guide LCD manufacturers and brand owners on ways to forecast future market demands. Copyright © 2008 John Wiley & Sons, Ltd.
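The middle‐out approach the study favors can be sketched in a few lines: forecast at the middle level of the hierarchy, aggregate upward by summing, and disaggregate downward with fixed proportions. The group names and shares below are hypothetical, and real applications would estimate the shares from historical sales.

```python
def middle_out(middle_forecasts, children):
    # Middle-out hierarchical forecasting: sum middle-level forecasts to get
    # the top level, and split each middle forecast across its items using
    # fixed disaggregation proportions (assumed historical sales shares).
    top = sum(middle_forecasts.values())
    bottom = {item: middle_forecasts[group] * share
              for group, items in children.items()
              for item, share in items.items()}
    return top, bottom

top, bottom = middle_out(
    {"A": 100.0, "B": 50.0},                                # middle level
    {"A": {"a1": 0.6, "a2": 0.4}, "B": {"b1": 1.0}},        # item shares
)
```

Because both directions start from the same middle‐level numbers, the resulting forecasts are consistent across all three levels by construction, which is the "consistent demand forecasts" property the abstract refers to.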

16.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability also lets us incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. These comparisons are also confirmed by the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

17.
This paper develops a new diffusion model that incorporates the indirect network externality. The market with indirect network externalities is characterized by two‐way interactive effects between hardware and software products on their demands. Our model incorporates two‐way interactions in forecasting the diffusion of hardware products based on a simple but realistic assumption. The new model is parsimonious, easy to estimate, and does not require more data points than the Bass diffusion model. The new diffusion model was applied to forecast sales of DVD players in the United States and in South Korea, and to the sales of Digital TV sets in Australia. When compared to the Bass and NSRL diffusion models, the new model showed better performance in forecasting long‐term sales. Copyright © 2008 John Wiley & Sons, Ltd.
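The Bass model that serves as the benchmark here has a compact discrete‐time form: each period's new adopters come from innovation pressure p on the remaining market plus imitation pressure q scaled by the installed base. A minimal sketch with illustrative parameter values (the classic p ≈ 0.03, q ≈ 0.38 averages, not estimates from the paper):

```python
def bass_adoptions(p, q, m, periods):
    # Discrete-time Bass diffusion: new adopters each period come from
    # innovation (p) and imitation (q * N/m) acting on the remaining
    # untapped market (m - N), where N is the installed base.
    installed, sales = 0.0, []
    for _ in range(periods):
        n = (p + q * installed / m) * (m - installed)
        sales.append(n)
        installed += n
    return sales

# illustrative: market potential of 1,000 units over 20 periods
sales = bass_adoptions(p=0.03, q=0.38, m=1000, periods=20)
```

With q well above p the sales path rises to an interior peak and then declines, which is the S‐shaped cumulative adoption curve diffusion papers extend.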

18.
This paper addresses the issue of forecasting the term structure. We provide a unified state‐space modeling framework that encompasses different existing discrete‐time yield curve models. Within such a framework we analyze the impact of two modeling choices, namely the imposition of no‐arbitrage restrictions and the size of the information set used to extract factors, on forecasting performance. Using US yield curve data, we find that both no‐arbitrage and large information sets help in forecasting, but neither class of models uniformly dominates the other. No‐arbitrage models are more useful at shorter horizons for shorter maturities. Large information sets are more useful at longer horizons and longer maturities. We also find evidence for a significant feedback from yield curve models to macroeconomic variables that could be exploited for macroeconomic forecasting. Copyright © 2010 John Wiley & Sons, Ltd.

19.
Recent models for credit risk management make use of hidden Markov models (HMMs). HMMs are used to forecast quantiles of corporate default rates. Little research has been done on the quality of such forecasts if the underlying HMM is potentially misspecified. In this paper, we focus on misspecification in the dynamics and dimension of the HMM. We consider both discrete‐ and continuous‐state HMMs. The differences are substantial. Underestimating the number of discrete states has an economically significant impact on forecast quality. Generally speaking, discrete models underestimate the high‐quantile default rate forecasts. Continuous‐state HMMs, however, vastly overestimate high quantiles if the true HMM has a discrete state space. In the reverse setting the biases are much smaller, though still substantial in economic terms. We illustrate the empirical differences using US default data. Copyright © 2008 John Wiley & Sons, Ltd.

20.
We study the effect of parameter and model uncertainty on the left‐tail of predictive densities and in particular on VaR forecasts. To this end, we evaluate the predictive performance of several GARCH‐type models estimated via Bayesian and maximum likelihood techniques. In addition to individual models, several combination methods are considered, such as Bayesian model averaging and (censored) optimal pooling for linear, log or beta linear pools. Daily returns for a set of stock market indexes are predicted over about 13 years from the early 2000s. We find that Bayesian predictive densities improve the VaR backtest at the 1% risk level for single models and for linear and log pools. We also find that the robust VaR backtest exhibited by linear and log pools is better than the backtest of single models at the 5% risk level. Finally, the equally weighted linear pool of Bayesian predictives tends to be the best VaR forecaster in a set of 42 forecasting techniques.
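The equally weighted linear pool averages the component predictive CDFs, so its VaR is the alpha‐quantile of that averaged CDF, which can be found by bisection since the pooled CDF is monotone. The sketch below pools two Gaussian predictives with illustrative parameters; the paper's components are GARCH‐type predictive densities, not fixed normals.

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def pool_var(components, alpha=0.01, lo=-10.0, hi=10.0):
    # VaR of an equally weighted linear pool: bisect for the alpha-quantile
    # of the average of the component predictive CDFs.
    w = 1 / len(components)
    def cdf(x):
        return sum(w * norm_cdf(x, mu, s) for mu, s in components)
    for _ in range(80):
        mid = (lo + hi) / 2
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 1% VaR of a pool of two zero-mean predictives with different volatilities
var_99 = pool_var([(0.0, 1.0), (0.0, 2.0)], alpha=0.01)
```

Note the pooled quantile is dominated by the fatter component in the far tail, which is one reason linear pools can be more robust in VaR backtests than any single model.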

