Similar Articles
20 similar articles found
1.
We introduce a new methodology for forecasting, which we call signal diffusion mapping. Our approach accommodates features of real-world financial data which have historically been ignored by existing forecasting methodologies. Our method builds upon well-established and accepted methods from other areas of statistical analysis. We develop and adapt those models for use in forecasting. We also present tests of our model on data in which we demonstrate the efficacy of our approach. Copyright © 2015 John Wiley & Sons, Ltd.

2.
This study proposes a novel Markov regime-switching negative binomial generalized autoregressive conditional heteroskedasticity model for analyzing count data time series. We develop a likelihood-based method for parameter estimation and give the one-step-ahead forecasting algorithms for the mean, variance, and quantiles. An empirical analysis of both the U.S. initial public offering (IPO) and Chinese A-share IPO markets indicates that our method is very efficient in forecasting monthly IPO volumes and detecting hot/cold issue markets. The first-day IPO return is positively correlated with the IPO volume in a hot issue market but negatively correlated with the IPO volume in a cold issue market, in both the U.S. and Chinese IPO markets. However, the average first-day return in the previous hot issue market has a significant positive impact on the current IPO volume for only the U.S. IPO market. Our approach helps to more accurately model and understand the behavior of hot/cold IPO issue markets.
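The regime-switching count model above can be illustrated with a minimal sketch: a two-regime conditional-mean recursion (an INGARCH-style mean equation per regime) whose one-step-ahead forecasts are mixed by regime probabilities. All parameter values, regime labels, and the mixing probabilities below are illustrative assumptions, not the paper's estimates.

```python
# Hypothetical two-regime conditional-mean recursion for a count series,
# in the spirit of a Markov regime-switching count-GARCH model.

def regime_mean_path(y, omega, alpha, beta, lam0):
    """Conditional mean recursion: lam_t = omega + alpha*y[t-1] + beta*lam[t-1]."""
    lam = [lam0]
    for t in range(1, len(y) + 1):
        lam.append(omega + alpha * y[t - 1] + beta * lam[t - 1])
    return lam  # lam[t] is the mean for period t given data through t-1

def one_step_forecast(y, params, probs):
    """Mix regime-specific one-step-ahead means with regime probabilities."""
    means = []
    for omega, alpha, beta in params:
        # start each recursion at the regime's unconditional mean
        lam = regime_mean_path(y, omega, alpha, beta, lam0=omega / (1 - alpha - beta))
        means.append(lam[-1])
    return sum(p * m for p, m in zip(probs, means))

y = [3, 5, 8, 12, 20, 25]          # e.g. monthly IPO counts (made-up data)
hot = (2.0, 0.5, 0.3)              # assumed "hot market" regime parameters
cold = (0.5, 0.2, 0.3)             # assumed "cold market" regime parameters
forecast = one_step_forecast(y, [hot, cold], probs=[0.7, 0.3])
```

A full estimator would infer the regime probabilities and parameters by maximum likelihood, as the paper does; this sketch only shows how the regime-mixed mean forecast is assembled.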

3.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high-dimensional market data. This property allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

4.
In this paper we confirm the existence of nonlinear dynamics in a time series of airport arrivals. We subsequently propose alternative non-parametric forecasting techniques to be used in a travel forecasting problem, emphasizing the difference between the reconstruction and learning approach. We compare the results achieved in point prediction versus sign prediction. The reconstruction approach offers better results in sign prediction and the learning approach in point prediction. Copyright © 2015 John Wiley & Sons, Ltd.

5.
This paper studies some forms of LASSO-type penalties in time series to reduce the dimensionality of the parameter space as well as to improve out-of-sample forecasting performance. In particular, we propose a method that we call WLadaLASSO (weighted lag adaptive LASSO), which assigns not only different weights to each coefficient but also further penalizes coefficients of higher-lagged covariates. In our Monte Carlo implementation, the WLadaLASSO is superior in terms of covariate selection, parameter estimation precision and forecasting, when compared to both LASSO and adaLASSO, especially for a higher number of candidate lags and a stronger linear dependence between predictors. Empirical studies illustrate our approach for US risk premium and US inflation forecasting with good results. Copyright © 2016 John Wiley & Sons, Ltd.
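The core idea of a lag-weighted adaptive LASSO can be sketched with coordinate descent: each coefficient gets a penalty weight that grows with its lag index and shrinks with the magnitude of an initial estimate, so distant, weakly supported lags are driven to zero. The weight formula, data, and tuning values below are illustrative assumptions, not the paper's exact estimator.

```python
# Minimal coordinate-descent sketch of a lag-weighted adaptive lasso.
# Penalty weight w_j = lag_j / |b_init_j| mimics "penalize higher lags more".

def soft(z, g):
    """Soft-thresholding operator."""
    return max(abs(z) - g, 0.0) * (1.0 if z > 0 else -1.0)

def weighted_lasso(X, y, lags, b_init, lam, n_iter=200):
    n, p = len(X), len(X[0])
    w = [lags[j] / max(abs(b_init[j]), 1e-8) for j in range(p)]  # lag-adaptive weights
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual leaving coordinate j out
            r = [y[i] - sum(b[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            z = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            b[j] = soft(z, lam * w[j]) / norm
    return b

# Made-up design: column 0 is a relevant lag-1 regressor, column 1 a
# hypothetical lag-12 regressor that is irrelevant (y depends only on col 0).
X = [[1, 0.5], [2, -1], [-1, 0.3], [0.5, 2], [-2, -0.5], [1.5, 1]]
y = [2 * row[0] for row in X]
b = weighted_lasso(X, y, lags=[1, 12], b_init=[2.0, 0.1], lam=0.05)
```

With these assumed weights the lag-12 coefficient is fully shrunk to zero while the relevant coefficient is only mildly biased toward zero, which is the selection behavior the abstract describes.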

6.
We introduce a new strategy for the prediction of linear temporal aggregates; we call it 'hybrid' and study its performance using asymptotic theory. This scheme consists of carrying out model parameter estimation with data sampled at the highest available frequency and the subsequent prediction with data and models aggregated according to the forecasting horizon of interest. We develop explicit expressions that approximately quantify the mean square forecasting errors associated with the different prediction schemes and that take into account the estimation error component. These approximate estimates indicate that the hybrid forecasting scheme tends to outperform the so-called 'all-aggregated' approach and, in some instances, the 'all-disaggregated' strategy that is known to be optimal when model selection and estimation errors are neglected. Unlike other related approximate formulas existing in the literature, those proposed in this paper are totally explicit and require neither assumptions on the second-order stationarity of the sample nor Monte Carlo simulations for their evaluation. Copyright © 2014 John Wiley & Sons, Ltd.
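The hybrid scheme's estimate-high-frequency / predict-the-aggregate logic can be sketched for the simplest case, an AR(1) without intercept: fit the autoregressive coefficient on the full high-frequency sample, then forecast the next m-period linear aggregate directly. This is a minimal illustration of the idea, not the paper's general framework.

```python
def ar1_ols(x):
    """Least-squares AR(1) slope from the high-frequency sample (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def hybrid_aggregate_forecast(x, m):
    """Forecast the next m-period aggregate sum_{h=1..m} x_{T+h} by chaining
    the high-frequency AR(1) forecasts: sum over h of phi^h * x_T."""
    phi = ar1_ols(x)
    return sum(phi ** h for h in range(1, m + 1)) * x[-1]

# Deterministic toy sample with exact AR(1) decay phi = 0.5 (made-up data)
x = [1.0, 0.5, 0.25, 0.125, 0.0625]
f = hybrid_aggregate_forecast(x, m=3)   # quarterly aggregate from monthly data
```

The contrast with the 'all-aggregated' approach would be to first sum x into m-period blocks and fit a model to the short aggregated series, discarding the high-frequency information used here.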

7.
Estimation problems sometimes have inherent constraints which, when used, increase efficiency. When these constraints vary over time, the Kalman filter provides a convenient method of imposing them. This paper applies the Kalman filter to the problem of estimating state (provincial) populations given annual national population and national net arrivals, together with actual state populations in census years. An advantage with this approach is that the resulting projections can be evaluated by the provision of standard errors and the quality of one step ahead predictions. With our data the method seems to perform well for population projections, but poorly for net arrivals.
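The filtering setup can be illustrated with a scalar local-level sketch: a latent population evolves as a random walk and is observed only in census years, and the filter supplies both the projection and its variance (hence standard errors). The variances and the toy observation series below are illustrative assumptions, not the paper's data.

```python
def kalman_filter(obs, q, r, x0, p0):
    """Local-level Kalman filter; obs entries may be None (no census that year).
    q: state (random-walk) variance, r: observation variance."""
    x, p, path = x0, p0, []
    for y in obs:
        p = p + q                      # predict step: random-walk state
        if y is not None:              # update only in census years
            k = p / (p + r)            # Kalman gain
            x = x + k * (y - x)
            p = (1 - k) * p
        path.append((x, p))            # filtered mean and variance
    return path

# Census observations in years 1 and 5 only (made-up population figures)
path = kalman_filter([100.0, None, None, None, 112.0],
                     q=1.0, r=0.25, x0=100.0, p0=1.0)
```

Between censuses the variance grows by q each year, quantifying how the projection's standard error widens until the next census anchors it again.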

8.
This paper proposes a new approach to forecasting intermittent demand by considering the effects of external factors. We classify intermittent demand data into two parts—zero value and nonzero value—and fit nonzero values into a mixed zero-truncated Poisson model. All the parameters in this model are obtained by an EM algorithm, which treats external factors as independent variables in a logistic regression model and a log-linear regression model. We then calculate the probability of occurrence of zero value at each period and predict demand occurrence by comparing it with a critical value. When demand occurs, we use the weighted average of the mixed zero-truncated Poisson model as predicted nonzero demands, which are combined with predicted demand occurrences to form the final forecasting demand series. Two performance measures are developed to assess the forecasting methods. By presenting a case study of electric power material from the State Grid Shanghai Electric Power Company in China, we show that our approach provides greater accuracy in forecasting than the Poisson model, the hurdle shifted Poisson model, the hurdle Poisson model, and Croston's method.
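The two-part forecast can be sketched as follows: a logistic model of an external factor gives the probability that demand occurs, and when it does, the predicted size is the mean of a zero-truncated Poisson, lambda / (1 - exp(-lambda)). The coefficients, threshold, and single-component form below are illustrative assumptions (the paper uses a mixture fitted by EM).

```python
import math

def ztp_mean(lam):
    """Mean of a zero-truncated Poisson: lam / (1 - exp(-lam))."""
    return lam / (1.0 - math.exp(-lam))

def occurrence_prob(x, b0, b1):
    """Logistic probability that demand occurs, driven by external factor x
    (coefficients are hypothetical, not fitted values)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

def forecast_demand(x, b0, b1, lam, threshold=0.5):
    """Combine predicted occurrence with the truncated-Poisson size forecast."""
    p = occurrence_prob(x, b0, b1)
    return ztp_mean(lam) if p >= threshold else 0.0
```

For example, with assumed coefficients b0 = -1, b1 = 0.8 and lam = 2, a favorable external factor (x = 2) triggers a positive forecast of about 2.31 units, while x = 0 predicts no demand.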

9.
This paper is a critical review of exponential smoothing since the original work by Brown and Holt in the 1950s. Exponential smoothing is based on a pragmatic approach to forecasting which is shared in this review. The aim is to develop state-of-the-art guidelines for application of the exponential smoothing methodology. The first part of the paper discusses the class of relatively simple models which rely on the Holt-Winters procedure for seasonal adjustment of the data. Next, we review general exponential smoothing (GES), which uses Fourier functions of time to model seasonality. The research is reviewed according to the following questions. What are the useful properties of these models? What parameters should be used? How should the models be initialized? After the review of model-building, we turn to problems in the maintenance of forecasting systems based on exponential smoothing. Topics in the maintenance area include the use of quality control models to detect bias in the forecast errors, adaptive parameters to improve the response to structural changes in the time series, and two-stage forecasting, whereby we use a model of the errors or some other model of the data to improve our initial forecasts. Some of the major conclusions are: the parameter ranges and starting values typically used in practice are arbitrary and may detract from accuracy. The empirical evidence favours Holt's model for trends over that of Brown. A linear trend should be damped at long horizons. The empirical evidence favours the Holt-Winters approach to seasonal data over GES. It is difficult to justify GES in standard form: the equivalent ARIMA model is simpler and more efficient. The cumulative sum of the errors appears to be the most practical forecast monitoring device. There is no evidence that adaptive parameters improve forecast accuracy. In fact, the reverse may be true.
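The review's recommendation to damp a linear trend at long horizons can be shown concretely with Holt's damped-trend method: the h-step forecast multiplies the trend by phi + phi^2 + ... + phi^h rather than h, so the extrapolated trend flattens out. The starting values, smoothing constants, and data below are illustrative assumptions.

```python
def damped_holt(y, alpha, beta, phi, h):
    """Holt's linear trend with damping parameter phi (0 < phi < 1).
    The h-step forecast damps the trend, as recommended at long horizons."""
    level, trend = y[0], y[1] - y[0]        # simple (arbitrary) starting values
    for t in range(1, len(y)):
        prev = level
        level = alpha * y[t] + (1 - alpha) * (prev + phi * trend)
        trend = beta * (level - prev) + (1 - beta) * phi * trend
    damp = sum(phi ** i for i in range(1, h + 1))
    return level + damp * trend

# Made-up trending series; 3-step-ahead damped forecast
f = damped_holt([10, 12, 14, 16], alpha=0.5, beta=0.5, phi=0.9, h=3)
```

With phi = 1 this reduces to ordinary Holt linear trend; as h grows, the damped forecast converges to level + trend * phi / (1 - phi) instead of diverging linearly.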

10.
In this paper, we present a comparison between the forecasting performances of the normalization and variance stabilization method (NoVaS) and the GARCH(1,1), EGARCH(1,1) and GJR-GARCH(1,1) models. The aim of this study is to compare the out-of-sample forecasting performances of the models used throughout the study and to show that the NoVaS method is better than GARCH(1,1)-type models in the context of out-of-sample forecasting performance. We study the out-of-sample forecasting performances of GARCH(1,1)-type models and the NoVaS method based on the generalized error distribution, rather than the normal and Student's t distributions. Also, what makes the study different is the use of return series calculated both logarithmically and arithmetically in terms of forecasting performance. For comparing the out-of-sample forecasting performances, we focus on different datasets, such as the S&P 500 and the logarithmic and arithmetic BIST 100 return series. The key result of our analysis is that the NoVaS method delivers better out-of-sample forecasting performance than GARCH(1,1)-type models. The result can offer useful guidance in model building for out-of-sample forecasting purposes, aimed at improving forecasting accuracy.

11.
In this paper we develop a semi-parametric approach to model nonlinear relationships in serially correlated data. To illustrate the usefulness of this approach, we apply it to a set of hourly electricity load data. This approach takes into consideration the effect of temperature combined with those of time-of-day and type-of-day via nonparametric estimation. In addition, an ARIMA model is used to model the serial correlation in the data. An iterative backfitting algorithm is used to estimate the model. Post-sample forecasting performance is evaluated and comparative results are presented. Copyright © 2006 John Wiley & Sons, Ltd.

12.
Existing line-drawing algorithms for volume models obtain feature lines by tracing random seeds, which causes severe spatial and temporal continuity problems. This paper proposes a GPU-accelerated line-drawing algorithm for volume models that extracts all feature lines in every frame, completely eliminating discontinuities in the rendering. We first present a parallel feature-line extraction method that computes the feature-line segments within each cubic cell in the GPU's geometry processor. To solve the visibility-determination problem, we generate a depth map by splatting instead of the ray casting used in existing methods, obtaining a significant performance gain; an adaptive depth offset yields satisfactory visibility-test results. To store the large volume data in high-speed video memory, we cull irrelevant data and use compressed encoding, significantly reducing storage and bandwidth consumption without affecting the results. Experimental results show that our algorithm generates spatially and temporally continuous line drawings and renders an order of magnitude faster than existing CPU algorithms.

13.
14.
Forecasting for inventory items with lumpy demand is difficult because of infrequent nonzero demands with high variability. This article develops two methods to forecast lumpy demand: an optimally weighted moving average method and an intelligent pattern-seeking method. We compare them with a number of well-referenced methods typically applied over the last 30 years in forecasting intermittent or lumpy demand. The comparison is conducted over about 200,000 forecasts (using 1-day-ahead and 5-day-ahead review periods) for 24 series of actual product demands across four different error measures. One of the most important findings of our study is that the two non-traditional methods perform better overall than the traditional methods. We summarize results and discuss managerial implications. Copyright © 2011 John Wiley & Sons, Ltd.
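Among the traditional benchmarks for intermittent demand is Croston's method, which smooths nonzero demand sizes and inter-demand intervals separately and forecasts their ratio. A minimal sketch (the smoothing constant and demand series are illustrative assumptions, and at least one nonzero demand is assumed):

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: exponentially smooth nonzero
    demand sizes and the intervals between them; forecast = size / interval."""
    size = interval = None
    periods_since = 1
    for d in demand:
        if d > 0:
            if size is None:               # initialize at first nonzero demand
                size, interval = float(d), float(periods_since)
            else:
                size += alpha * (d - size)
                interval += alpha * (periods_since - interval)
            periods_since = 1
        else:
            periods_since += 1
    return size / interval

f = croston([0, 0, 6, 0, 0, 0, 4, 0, 2], alpha=0.2)
```

The per-period forecast stays flat between demand occurrences, which is exactly the behavior the article's weighted-moving-average and pattern-seeking alternatives aim to improve upon.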

15.
Large Bayesian vector autoregressions with the natural conjugate prior are now routinely used for forecasting and structural analysis. It has been shown that selecting the prior hyperparameters in a data-driven manner can often substantially improve forecast performance. We propose a computationally efficient method to obtain the optimal hyperparameters based on automatic differentiation, which is an efficient way to compute derivatives. Using a large US data set, we show that using the optimal hyperparameter values leads to substantially better forecast performance. Moreover, the proposed method is much faster than the conventional grid-search approach, and is applicable in high-dimensional optimization problems. The new method thus provides a practical and systematic way to develop better shrinkage priors for forecasting in a data-rich environment.

16.
We look at the problem of forecasting time series which are not normally distributed. An overall approach is suggested which works both on simulated data and on real data sets. The idea is intuitively attractive and has the considerable advantage that it can readily be understood by non-specialists. Our approach is based on ARMA methodology and our models are estimated via a likelihood procedure which takes into account the non-normality of the data. We examine in some detail the circumstances in which taking explicit account of the non-normality improves the forecasting process in a significant way. Results from several simulated and real series are included.

17.
This research forecasts peak call volume of a centralized after-hours call center for rural electric cooperatives to help the call center determine staffing levels. A Gaussian copula is used to capture the dependence among non-normal distributions. Using a centralized call center reduces costs by approximately 75% compared to having individual call centers at each cooperative. Adding cooperatives to the centralized call center is projected to further decrease costs per member. An out-of-sample forecasting exercise after the call center expanded validated the model's forecast that additional cooperatives could be added without a proportional increase in the peak number of calls. Copyright © 2011 John Wiley & Sons, Ltd.

18.
This paper proposes the use of the bias-corrected bootstrap for interval forecasting of an autoregressive time series with an arbitrary number of deterministic components. We use the bias-corrected bootstrap based on two alternative bias-correction methods: the bootstrap and an analytic formula based on asymptotic expansion. We also propose a new stationarity-correction method, based on stable spectral factorization, as an alternative to Kilian's method exclusively used in past studies. A Monte Carlo experiment is conducted to compare small-sample properties of prediction intervals. The results show that the bias-corrected bootstrap prediction intervals proposed in this paper exhibit desirable small-sample properties. It is also found that the bootstrap bias-corrected prediction intervals based on stable spectral factorization are tighter and more stable than those based on Kilian's stationarity-correction. The proposed methods are applied to interval forecasting for the number of tourist arrivals in Hong Kong. Copyright © 2010 John Wiley & Sons, Ltd.
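The bootstrap bias-correction step can be sketched for an AR(1): the least-squares coefficient is biased downward in small samples, so one re-estimates it on residual-bootstrap replicates and sets phi_bc = 2*phi_hat - mean(bootstrap estimates). The simple truncation below 1 stands in for a proper stationarity correction (the paper's stable spectral factorization is more refined), and the data are made up.

```python
import random

def ar1_fit(x):
    """Least-squares AR(1) slope (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def bias_corrected_ar1(x, n_boot=200, seed=0):
    """Bootstrap bias correction: phi_bc = 2*phi_hat - mean(bootstrap phi),
    crudely truncated below 1 in place of a full stationarity correction."""
    rng = random.Random(seed)
    phi = ar1_fit(x)
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    boots = []
    for _ in range(n_boot):
        xb = [x[0]]
        for _ in range(1, len(x)):
            xb.append(phi * xb[-1] + rng.choice(resid))  # residual resampling
        boots.append(ar1_fit(xb))
    phi_bc = 2 * phi - sum(boots) / n_boot
    return min(phi_bc, 0.99)

x = [1.0, 0.7, 0.8, 0.4, 0.5, 0.1, 0.3, -0.2, 0.1, 0.0, 0.2, -0.1]
phi_bc = bias_corrected_ar1(x)
```

Prediction intervals would then be built by bootstrapping forecast paths from the bias-corrected model rather than from the raw estimate.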

19.
Including disaggregate variables, or information extracted from them, in a forecasting model for an economic aggregate may improve forecasting accuracy. In this paper we suggest using the boosting method to select the disaggregate variables which are most helpful in predicting an aggregate of interest. We conduct a simulation study to investigate the variable selection ability of this method. To assess the forecasting performance, a recursive pseudo-out-of-sample forecasting experiment for six key euro area macroeconomic variables is conducted. The results suggest that using boosting to select relevant predictors is a feasible and competitive approach in forecasting an aggregate. Copyright © 2016 John Wiley & Sons, Ltd.
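Variable selection by boosting can be sketched with componentwise L2 boosting: at each step, fit every candidate disaggregate variable to the current residual, keep only the best one, and move its coefficient a small fraction of the way. Variables never selected keep a zero coefficient. The data, step count, and shrinkage value below are illustrative assumptions.

```python
def l2_boost(X, y, steps=50, nu=0.1):
    """Componentwise L2 boosting: each step picks the single predictor that
    most reduces the residual sum of squares and nudges its coefficient."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    resid = list(y)
    for _ in range(steps):
        best_j, best_gain, best_coef = 0, -1.0, 0.0
        for j in range(p):
            den = sum(X[i][j] ** 2 for i in range(n))
            coef = sum(X[i][j] * resid[i] for i in range(n)) / den
            gain = coef ** 2 * den        # RSS reduction from a full step on j
            if gain > best_gain:
                best_j, best_gain, best_coef = j, gain, coef
        b[best_j] += nu * best_coef       # shrunken update
        for i in range(n):
            resid[i] -= nu * best_coef * X[i][best_j]
    return b

# Made-up disaggregates: only column 0 actually drives the aggregate y
X = [[1, 0.1, -0.2], [2, -0.3, 0.1], [-1, 0.2, 0.3],
     [0.5, -0.1, -0.1], [-2, 0.05, 0.2], [1.5, 0.0, -0.3]]
y = [3 * row[0] for row in X]
b = l2_boost(X, y)
```

The nonzero entries of b identify the selected disaggregate variables; here only the genuinely relevant one is ever chosen.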

20.
The aim of this study was to forecast the Singapore gross domestic product (GDP) growth rate by employing the mixed-data sampling (MIDAS) approach using mixed and high-frequency financial market data from Singapore, and to examine whether the high-frequency financial variables could better predict the macroeconomic variables. We adopt different time-aggregating methods to handle the high-frequency data in order to match the sampling rate of lower-frequency data in our regression models. Our results showed that MIDAS regression using high-frequency stock return data produced a better forecast of GDP growth rate than the other models, and the best forecasting performance was achieved by using weekly stock returns. The forecasting result was further improved by performing intra-period forecasting.
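A standard way MIDAS regressions aggregate high-frequency lags into a low-frequency regressor is the exponential Almon lag polynomial, which compresses many weekly lags into one weighted sum with only two shape parameters. The theta values below are illustrative assumptions, not estimates from the study.

```python
import math

def almon_weights(k_lags, theta1, theta2):
    """Exponential Almon lag weights used in MIDAS regressions; they are
    positive and normalized to sum to one."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, k_lags + 1)]
    s = sum(raw)
    return [r / s for r in raw]

def midas_aggregate(high_freq, theta1=0.1, theta2=-0.05):
    """Weighted aggregate of, e.g., ~12 weekly stock returns for use as a
    single regressor in a quarterly GDP-growth model."""
    w = almon_weights(len(high_freq), theta1, theta2)
    return sum(wi * xi for wi, xi in zip(w, high_freq))
```

In a full MIDAS regression the thetas would be estimated jointly with the slope by nonlinear least squares; simple flat averaging, as one of the study's time-aggregation alternatives, is the special case theta1 = theta2 = 0.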
