Similar Documents
12 similar documents found (search time: 15 ms)
1.
Based on the concept of 'decomposition and ensemble', a novel ensemble forecasting approach is proposed for complex time series by coupling sparse representation (SR) and feedforward neural networks (FNN), i.e. the SR-based FNN approach. Three main steps are involved: data decomposition via SR, individual forecasting via FNN, and ensemble forecasting via a simple addition method. In particular, to capture the various coexisting hidden factors, SR, an effective decomposition tool with unique virtues of flexibility and generalization, is introduced to formulate an overcomplete dictionary covering diverse bases, e.g. an exponential basis for the main trend, a Fourier basis for cyclical (and seasonal) features, and a wavelet basis for transient actions, in contrast to techniques built on a single basis. Using the crude oil price (a typical complex time series) as sample data, the empirical study statistically confirms the superiority of the SR-based FNN method over other popular forecasting models and similar ensemble models built on other decomposition tools. Copyright © 2016 John Wiley & Sons, Ltd.
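A minimal sketch of the 'decomposition and ensemble' pipeline described above: build an overcomplete dictionary of exponential, Fourier, and crude wavelet-style atoms, sparse-code the series against it (here with orthogonal matching pursuit, one possible SR solver), forecast each recovered component with a small feedforward net, and sum the component forecasts. The toy series, dictionary sizes, and network settings are illustrative assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 240
t = np.arange(n) / n

# Toy "complex" series: trend + cycle + noise (stand-in for crude oil prices).
y = np.exp(1.5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t) + 0.1 * rng.standard_normal(n)

# Overcomplete dictionary: constant/exponential trend atoms, Fourier atoms,
# and crude Haar-style transient atoms.
atoms = [np.ones(n)] + [np.exp(a * t) for a in (0.5, 1.0, 1.5, 2.0)]
for k in range(1, 16):
    atoms += [np.sin(2 * np.pi * k * t), np.cos(2 * np.pi * k * t)]
for w in (16, 32):
    for s in range(0, n - w, w):
        a = np.zeros(n)
        a[s:s + w // 2], a[s + w // 2:s + w] = 1.0, -1.0
        atoms.append(a)
D = np.column_stack(atoms)

# Sparse representation: pick a few active atoms via OMP.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False).fit(D, y)
components = D * omp.coef_          # one scaled column per atom, mostly zeros

def fnn_forecast(x, lags=12):
    """One-step-ahead forecast of a component with a small feedforward net."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    net.fit(X, x[lags:])
    return net.predict(x[-lags:][None, :])[0]

# Ensemble step: simple addition of the component forecasts.
active = [c for c in components.T if np.any(c)]
forecast = sum(fnn_forecast(c) for c in active)
print(f"one-step-ahead ensemble forecast: {forecast:.3f}")
```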

2.
This paper examined the forecasting performance of disaggregated data with spatial dependency and applied it to forecasting electricity demand in Japan. We compared the performance of the spatial autoregressive ARMA (SAR-ARMA) model with that of the vector autoregressive (VAR) model from a Bayesian perspective. With regard to the log marginal likelihood and log predictive density, the VAR(1) model performed better than the SAR-ARMA(1,1) model. In the case of electricity demand in Japan, we can conclude that the VAR model with contemporaneous aggregation had better forecasting performance than the SAR-ARMA model. Copyright © 2011 John Wiley & Sons, Ltd.
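The VAR side of this comparison, which the paper found to perform better, is straightforward to reproduce in outline. A minimal sketch with statsmodels, assuming simulated two-region demand data in place of the Japanese panel; the Bayesian marginal-likelihood comparison and the SAR-ARMA competitor are not reproduced here.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
T = 200
# Two correlated "regional electricity demand" series (toy stand-ins).
A = np.array([[0.6, 0.2], [0.1, 0.7]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

res = VAR(y).fit(1)                         # VAR(1) estimated by OLS
fc = res.forecast(y[-res.k_ar:], steps=4)   # 4-step-ahead point forecasts
print(res.summary())
print("forecasts:\n", fc)
```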

3.
A similarity-based classification model is proposed whereby densities of positive and negative returns in a delay-embedded input space are estimated from a graphical representation of the data using an eigenvector centrality measure, and subsequently combined under Bayes' theorem to predict the probability of upward/downward movements. Application to directional forecasting of the daily close price of the Dow Jones Industrial Average over a 20-year out-of-sample period yields performance superior to random walk and logistic regression models, and on a par with that of multilayer perceptrons. A feature of the classifier is that it is parameter-free, parameters entering the model only via the measure used to determine pairwise similarity between data points. This allows intuitions about the nature of time series to be elegantly integrated into the model. The recursive nature of eigenvector centrality makes it better able to deal with sparsely populated input spaces than conventional approaches based on density estimation. Copyright © 2013 John Wiley & Sons, Ltd.
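A minimal sketch of the classifier's mechanics, under illustrative assumptions about the kernel width and embedding dimension: delay-embed the returns, compute the eigenvector centrality of a query point within the similarity graph of each class, and treat the two centralities as class-conditional densities in Bayes' theorem.

```python
import numpy as np

rng = np.random.default_rng(2)
r = rng.standard_normal(500) * 0.01              # toy daily returns
d = 5                                            # embedding dimension
X = np.column_stack([r[i:len(r) - d + i] for i in range(d)])
y = (r[d:] > 0).astype(int)                      # next-day direction

query, train_X, train_y = X[-1], X[:-1], y[:-1]

def centrality_score(q, pts, gamma=1e3):
    """Eigenvector centrality of the query node in the class similarity graph."""
    nodes = np.vstack([pts, q])
    d2 = ((nodes[:, None, :] - nodes[None, :, :]) ** 2).sum(-1)
    W = np.exp(-gamma * d2)                      # Gaussian pairwise similarity
    np.fill_diagonal(W, 0.0)
    v = np.ones(len(nodes))
    for _ in range(100):                         # power iteration
        v = W @ v
        v /= np.linalg.norm(v)
    return v[-1]                                 # centrality of the query node

s_up = centrality_score(query, train_X[train_y == 1])
s_dn = centrality_score(query, train_X[train_y == 0])
prior_up = train_y.mean()
p_up = s_up * prior_up / (s_up * prior_up + s_dn * (1 - prior_up))
print(f"P(up move) = {p_up:.3f}")
```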

4.
For predicting forward default probabilities of firms, the discrete-time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm-specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete-time varying-coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out-of-sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
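At its core, a discrete-time hazard model treats each firm-period as a Bernoulli trial whose default probability is a logistic function of the predictors, so the MLE reduces to a logistic-regression fit over the stacked panel. A minimal sketch with simulated data; the covariates are illustrative, and the paper's varying-coefficient extension would replace the constant betas below with smooth functions of macroeconomic variables.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_obs = 5000
X = rng.standard_normal((n_obs, 2))             # firm-specific predictors
true_beta = np.array([-3.0, 0.8, -0.5])         # intercept + slopes
eta = true_beta[0] + X @ true_beta[1:]
# Default indicator for the forward horizon, drawn from the logistic hazard.
y = (rng.random(n_obs) < 1 / (1 + np.exp(-eta))).astype(float)

def neg_loglik(beta):
    """Negative Bernoulli log-likelihood of the discrete-time hazard."""
    z = beta[0] + X @ beta[1:]
    return -(y * z - np.log1p(np.exp(z))).sum()

fit = minimize(neg_loglik, np.zeros(3), method="BFGS")
print("MLE:", np.round(fit.x, 3))               # should be near true_beta
```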

5.
We develop a method to extract periodic variations in a time series that are hidden in large non-periodic and stochastic variations. This method relies on folding the time series many times and allows direct visualization of a hidden periodic component without resorting to any fitting procedure. Applying this method to several large-cap stock time series in Europe, Japan and the USA yields a component with a periodicity of 1 year. Out-of-sample tests on these large-cap time series indicate that this periodic component is able to forecast long-term (decade) behavior. Copyright © 2016 John Wiley & Sons, Ltd.
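The folding operation itself is a one-liner. A minimal sketch, assuming a 252-trading-day candidate period: cutting the series into period-length windows and averaging them shrinks non-periodic noise by the square root of the number of folds while leaving any component at that period intact, so the hidden cycle becomes directly visible.

```python
import numpy as np

rng = np.random.default_rng(4)
period, n_folds = 252, 40                      # ~one trading year, 40 "years"
n = period * n_folds
t = np.arange(n)
# Weak annual cycle buried in much larger white noise.
x = 0.3 * np.sin(2 * np.pi * t / period) + rng.standard_normal(n)

# Fold: stack consecutive period-length windows and average across folds.
folded = x.reshape(n_folds, period).mean(axis=0)
print(f"raw std {x.std():.2f} -> folded std {folded.std():.2f}")
print("folded profile peaks near day", int(folded.argmax()), "of", period)
```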

6.
This paper considers the problem of forecasting high-dimensional time series. It employs a robust clustering approach to classify the component series: each series within a cluster is assumed to follow the same model, and the data are then pooled for estimation. The classification is model-based and robust to outlier contamination; the robustness is achieved by using the intrinsic mode functions of the Hilbert–Huang transform at lower frequencies, which are found to be insensitive to outliers. The paper also compares the out-of-sample forecast performance of the proposed method with several methods available in the literature, including vector autoregressive models with and without LASSO, group LASSO, principal component regression, and partial least squares. The proposed method is found to perform well in out-of-sample forecasting of the monthly unemployment rates of the 50 US states. Copyright © 2013 John Wiley & Sons, Ltd.
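A rough sketch of the cluster-then-pool idea. For brevity, a centered moving average stands in for the lower-frequency intrinsic mode functions (a real implementation would obtain IMFs from an empirical mode decomposition routine); the cluster count and the pooled AR(1) model are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
T, n_series = 120, 50
base = np.sin(2 * np.pi * np.arange(T) / 60)
group = rng.integers(0, 3, n_series)                 # 3 latent clusters
Y = np.array([(g + 1) * base + 0.5 * rng.standard_normal(T) for g in group])
Y[:5, 10] += 8.0                                     # contaminate a few series

def low_freq(x, w=12):
    """Centered moving average as a rough low-frequency signature."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Smoothing dilutes the outlier burst, so clustering stays robust.
signatures = np.array([low_freq(y) for y in Y])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(signatures)

# Pool series within each cluster and fit one AR(1) per cluster by OLS
# (series boundaries are ignored here for brevity).
for k in range(3):
    pooled = Y[labels == k].ravel()
    phi = np.dot(pooled[:-1], pooled[1:]) / np.dot(pooled[:-1], pooled[:-1])
    print(f"cluster {k}: {np.sum(labels == k)} series, pooled AR(1) coef {phi:.2f}")
```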

7.
Global CO2 emission forecasts span such a wide range as to yield little guidance for climate policy and analysis. But global per capita emissions appear to be better constrained than total emissions, which we argue has an economic justification. Trend stationarity of per capita emissions may therefore provide a means of characterizing the relative likelihood of global forecasts. On data spanning 1950 to 2009, a unit root hypothesis allowing for endogenous structural breaks is rejected, but adding the 2010 observation pushes the p-value slightly over 0.1. Since structural breaks cannot be detected at the end of a sample, this may simply indicate a power problem. Using Monte Carlo simulations we conclude that the lower half of a well-known suite of IPCC emission scenarios is more likely to occur than the upper half, and the top quartile is particularly difficult to justify. Copyright © 2013 John Wiley & Sons, Ltd.
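A minimal sketch of the two-step logic, with statsmodels' standard ADF test standing in for the endogenous-structural-break test used in the paper: test the unit root hypothesis against trend stationarity, then simulate trend-stationary paths by Monte Carlo to judge which scenario ranges are plausible. The data-generating process and AR(1) error dynamics are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(6)
T = 60                                               # annual data, ~1950-2009
t = np.arange(T)
y = 1.1 + 0.002 * t + 0.05 * rng.standard_normal(T)  # toy per capita emissions

stat, pvalue, *_ = adfuller(y, regression="ct")      # constant + linear trend
print(f"ADF statistic {stat:.2f}, p-value {pvalue:.3f}")

# If trend stationarity is retained, simulate forward paths around the
# estimated trend and read off scenario plausibility from the fan of paths.
b1, b0 = np.polyfit(t, y, 1)
resid = y - (b0 + b1 * t)
phi, sigma = 0.5, resid.std()                        # assumed error dynamics
H, n_sims = 40, 10_000
paths = np.empty((n_sims, H))
for s in range(n_sims):
    e = resid[-1]
    for h in range(H):
        e = phi * e + sigma * rng.standard_normal()
        paths[s, h] = b0 + b1 * (T + h) + e
print("90% interval at horizon 40:", np.percentile(paths[:, -1], [5, 95]).round(3))
```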

8.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in modeling market structure from sales count data. The model is built on the likelihood function induced by decomposing the sales count response variables according to products' competitiveness, conditioning on their sums, and augmenting them to higher levels through the Poisson-multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To handle over-dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and the predictive density. In an empirical application to weekly product sales time series from a store, we compare the proposed models accommodating over-dispersion with alternatives that ignore it, using several model selection criteria: in-sample fit, out-of-sample forecasting errors, and an information criterion. The results show that the proposed compound Poisson models work well and improve on models that do not account for over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
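The two over-dispersion building blocks are easy to illustrate in isolation. A minimal sketch with illustrative parameter values: weekly category totals drawn as gamma compound Poisson counts (whose variance exceeds the mean, unlike a plain Poisson), and product-level sales drawn as Dirichlet compound multinomial shares of each total.

```python
import numpy as np

rng = np.random.default_rng(7)
n_weeks, n_products = 104, 4
mu, shape = 200.0, 5.0                          # mean weekly category sales

# Gamma compound Poisson: mix the Poisson rate over a gamma, so Var > mean.
rates = rng.gamma(shape, mu / shape, size=n_weeks)
totals = rng.poisson(rates)
print(f"mean {totals.mean():.1f}, var {totals.var():.1f} (a plain Poisson would match)")

# Dirichlet compound multinomial shares of each weekly total across products.
alpha = np.array([8.0, 4.0, 2.0, 1.0])          # product competitiveness weights
shares = rng.dirichlet(alpha, size=n_weeks)
sales = np.array([rng.multinomial(n, p) for n, p in zip(totals, shares)])
print("weekly product sales, first 3 weeks:\n", sales[:3])
```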

9.
We propose a wavelet neural network (neuro-wavelet) model for short-term forecasting of stock returns from high-frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non-stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non-decimated wavelet-based multi-resolution decomposition of an exogenous signal showed consistently superior forecasting performance. The proposed model achieved reasonable forecasting accuracy for the one-, three-, and five-step-ahead horizons. The procedure used to build the neuro-wavelet model is reusable and can be applied to any high-frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.
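A minimal sketch of the pipeline's shape, with pywt's stationary wavelet transform providing the non-decimated decomposition and an sklearn MLP standing in for the Jordan recurrent net. Wavelet, level, and lag choices are illustrative, and a real study would avoid the look-ahead inherent in computing SWT coefficients over the full sample.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(8)
n = 512                                        # length divisible by 2**level
r = rng.standard_normal(n) * 0.001             # toy high-frequency returns

# Non-decimated wavelet decomposition: list of (approx, detail) per level,
# each the same length as the input. NOTE: swt filters are non-causal, so
# a forecasting study must compute coefficients on past data only.
coeffs = pywt.swt(r, "db4", level=3)
features = np.column_stack([c for pair in coeffs for c in pair])  # (n, 6)

lag = 1                                        # predict r[t+1] from features[t]
X, y = features[:-lag], r[lag:]
split = int(0.8 * len(X))
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(X[:split], y[:split])
pred = net.predict(X[split:])
print("out-of-sample MSE: %.2e" % np.mean((pred - y[split:]) ** 2))
```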

10.
This paper compares various ways of extracting macroeconomic information from a data-rich environment for forecasting the yield curve using the Nelson–Siegel model. Five issues in extracting factors from a large panel of macro variables are addressed: selection of a subset of the available information, incorporation of the forecast objective in constructing factors, specification of a multivariate forecast objective, data grouping before constructing factors, and selection of the number of factors in a data-driven way. Our empirical results show that each of these features helps to improve forecast accuracy, especially for the shortest and longest maturities. Factor-augmented methods perform well in relatively volatile periods, including the crisis period in 2008–2009, when simpler models do not suffice. The macroeconomic information is exploited best by partial least squares methods, with principal component methods ranking second best. Reductions in mean squared prediction errors of 20–30% are attained, compared to the Nelson–Siegel model without macro factors. Copyright © 2011 John Wiley & Sons, Ltd.
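The supervised-versus-unsupervised factor contrast at the heart of the result can be shown compactly: PLS constructs components to covary with the forecast target, while PCA ignores the target. A minimal sketch on a simulated macro panel with a single yield as target; the paper instead targets Nelson–Siegel factors across maturities.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)
T, n_macro = 200, 80
F = rng.standard_normal((T, 2))                # latent macro factors
X = F @ rng.standard_normal((2, n_macro)) + rng.standard_normal((T, n_macro))
y = F[:, 0] * 0.8 + 0.1 * rng.standard_normal(T)   # target: next-period yield

split = 150
# Supervised factors: PLS components built to covary with the yield target.
pls = PLSRegression(n_components=2).fit(X[:split], y[:split])
pls_mse = np.mean((pls.predict(X[split:]).ravel() - y[split:]) ** 2)

# Unsupervised factors: principal components, then a regression on top.
pcs = PCA(n_components=2).fit(X[:split])
lr = LinearRegression().fit(pcs.transform(X[:split]), y[:split])
pca_mse = np.mean((lr.predict(pcs.transform(X[split:])) - y[split:]) ** 2)

print(f"out-of-sample MSE  PLS: {pls_mse:.4f}  PCA: {pca_mse:.4f}")
```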

11.
A simple model of a particular socio-economic and technical environment proves useful in forecasting and planning. The specific application forecasts air traffic at King Abdulaziz International Airport (KAIA) in Jeddah, Saudi Arabia, through a number of explanatory variables. The purpose of the model is to explain and forecast the change in growth rates of passenger flow through the airport. The dynamics of passenger flow are linked to the dynamics of the oil-based economy of Saudi Arabia and the global economic and business environment.

12.
This paper is a counterfactual analysis investigating the consequences of the formation of a currency union for Canada and the USA: whether outputs would increase and prices decrease if these countries formed a currency union. We use a two-country cointegrated model to conduct the counterfactual analysis, where the conditional forecasts are generated under the Gaussian assumption. To deal with structural breaks and model uncertainty, conditional forecasts are generated from different models and estimation windows, and a model-averaging technique is used to combine them. We also examine the robustness of our results to parameter uncertainty using the wild bootstrap method. The results show that forming the currency union would probably boost the Canadian economy, whereas it would not have significant effects on US output or on Canadian and US price levels. Copyright © 2013 John Wiley & Sons, Ltd.
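Two pieces of the machinery, model averaging over estimation windows and the wild bootstrap, can be sketched on a univariate toy model. The AR(1) stands in for the paper's two-country cointegrated system; equal averaging weights and Rademacher bootstrap weights are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 200
y = np.zeros(T)
for t in range(1, T):                          # toy output series
    y[t] = 0.8 * y[t - 1] + rng.standard_normal()

def ar1_forecast(sample):
    """OLS AR(1) coefficient and one-step-ahead forecast."""
    phi = np.dot(sample[:-1], sample[1:]) / np.dot(sample[:-1], sample[:-1])
    return phi * sample[-1], phi

# Model averaging across estimation windows (equal weights for simplicity).
windows = [60, 100, 200]
avg_fc = np.mean([ar1_forecast(y[-w:])[0] for w in windows])

# Wild bootstrap: flip residual signs (Rademacher weights), rebuild the
# series, re-estimate, and collect the resulting forecasts.
phi_hat = ar1_forecast(y)[1]
resid = y[1:] - phi_hat * y[:-1]
boot_fc = []
for _ in range(1000):
    e = resid * rng.choice([-1.0, 1.0], size=resid.size)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_hat * yb[t - 1] + e[t - 1]
    boot_fc.append(ar1_forecast(yb)[0])
print(f"averaged forecast {avg_fc:.3f}, bootstrap 90% band "
      f"{np.percentile(boot_fc, [5, 95]).round(3)}")
```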
