Similar Articles
20 similar articles found
1.
Case‐based reasoning (CBR) is an effective and easily understandable method for solving real‐world problems. Business failure prediction (BFP) is a forecasting tool that helps people make more precise decisions, and CBR‐based BFP is a hot topic amid today's global financial crisis. Case representation is critical when forecasting business failure with CBR. This research describes a pioneering investigation of hybrid case representation that employs principal component analysis (PCA), a feature extraction method, along with stepwise multivariate discriminant analysis (MDA), a feature selection approach. In this process, sample cases are first represented with all available financial ratios, i.e., features. Next, stepwise MDA is used to select optimal features, producing a reduced case representation. Finally, PCA is employed to extract the final information representing the sample cases. All data signified by the hybrid case representation are recorded in a case library, and the k‐nearest‐neighbor algorithm is used to generate forecasts. We thus constructed a hybrid CBR (HCBR) by integrating hybrid case representation into the forecasting tool. We empirically tested the performance of HCBR with data collected for short‐term BFP of Chinese listed companies. Empirical results indicated that HCBR can produce more promising prediction performance than MDA, logistic regression, classical CBR, and support vector machines. Copyright © 2009 John Wiley & Sons, Ltd.
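As a rough illustration of the final retrieval step, the sketch below (NumPy only) extracts principal components via SVD and forecasts by k‐nearest‐neighbour majority vote. The firm sample, the "financial ratios", and all parameter values are invented stand‐ins, and the stepwise MDA stage is assumed to have already happened; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MDA-selected financial ratios: 100 firms, 5 ratios;
# class 1 = failed, 0 = healthy (real data would be from listed companies).
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# --- PCA feature extraction via SVD (keep 2 principal components) ---
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T          # hybrid case representation stored in the case library

# --- k-nearest-neighbour retrieval: majority vote among the k most similar cases ---
def knn_predict(query, cases, labels, k=5):
    d = np.linalg.norm(cases - query, axis=1)
    nearest = labels[np.argsort(d)[:k]]
    return int(np.round(nearest.mean()))

# Leave-one-out evaluation over the synthetic case library
pred = np.array([knn_predict(Z[i], np.delete(Z, i, 0), np.delete(y, i))
                 for i in range(len(y))])
accuracy = (pred == y).mean()
```

The retained number of components (2) and k (5) are arbitrary here; the paper tunes its representation on real financial ratios.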

2.
More and more ensemble models are used to forecast business failure. It is generally known that the performance of an ensemble relies heavily on the diversity between its base classifiers. To achieve diversity, this study uses kernel‐based fuzzy c‐means (KFCM) to organize firm samples and designs a hierarchical selective ensemble model for business failure prediction (BFP). First, three KFCM methods—Gaussian KFCM (GFCM), polynomial KFCM (PFCM), and hyper‐tangent KFCM (HFCM)—are employed to partition the financial data set into three data sets. A neural network (NN) is then adopted as the base classifier for BFP, and the three data sets derived from the three KFCM methods are used to build three classifier pools. Next, classifiers are fused by a two‐layer hierarchical selective ensemble method. In the first layer, classifiers are ranked by prediction accuracy and selectively integrated using stepwise forward selection. In the second layer, the three selective ensembles from the first layer are integrated again to reach the final verdict. This study uses financial data from Chinese listed companies for the empirical research and makes a comparative analysis against other ensemble models and all component models. The conclusion is that the two‐layer hierarchical selective ensemble forecasts business failure well.
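The first‐layer accuracy‐ranked stepwise forward selection can be sketched as follows. Randomly simulated classifier outputs stand in for the trained neural networks, and the pool size, skill levels, and acceptance rule are invented for illustration; only the rank‐then‐greedily‐add structure mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
y_val = rng.integers(0, 2, size=200)              # validation labels

# Stand-in classifier pool: each row is one classifier's validation
# predictions, correct with probability p (p varies per classifier).
pool = np.array([np.where(rng.random(200) < p, y_val, 1 - y_val)
                 for p in (0.9, 0.8, 0.75, 0.6, 0.55)])

def majority(preds):
    """Unweighted majority vote over a set of binary classifiers."""
    return (preds.mean(axis=0) >= 0.5).astype(int)

def selective_ensemble(pool, y):
    """Rank classifiers by accuracy, then add them one at a time,
    keeping each only if validation accuracy does not drop."""
    acc = (pool == y).mean(axis=1)
    order = np.argsort(acc)[::-1]                 # best classifier first
    chosen = [order[0]]
    best = acc[order[0]]
    for idx in order[1:]:
        cand_acc = (majority(pool[chosen + [idx]]) == y).mean()
        if cand_acc >= best:
            chosen.append(idx)
            best = cand_acc
    return chosen, best

chosen, ensemble_acc = selective_ensemble(pool, y_val)
```

By construction the selected ensemble is never less accurate on the validation set than the single best classifier, which is the point of selective (rather than full) fusion.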

3.
Artificial neural networks (ANNs) combined with signal decomposition methods are effective for long‐term streamflow time series forecasting. The ANN is a machine learning method widely used for streamflow time series, and it forecasts nonstationary series well without requiring physical analysis of complex, dynamic hydrological processes. Most studies take multiple factors that determine streamflow, such as rainfall, as inputs. In this study, a long‐term streamflow forecasting model that depends only on historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is learned by an ANN. Hybrid EMD‐ANN, EEMD‐ANN and DWT‐ANN models are developed for long‐term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash–Sutcliffe efficiency (NSE) indicate that the proposed EEMD‐ANN method performs better than the EMD‐ANN and DWT‐ANN models, especially in high‐flow forecasting.
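The decompose‐then‐forecast pattern can be sketched as below. A moving‐average trend/residual split is a deliberately crude stand‐in for EMD/EEMD/DWT, and a linear AR fit stands in for the ANN; the synthetic "streamflow", window, and lag choices are all invented.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(400)
# Synthetic daily streamflow: seasonal cycle plus noise
flow = 10 + 3 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 0.5, 400)

# --- Step 1: decompose into simple components (moving-average trend +
# residual; a crude stand-in for EMD/EEMD/DWT) ---
w = 25
trend = np.convolve(flow, np.ones(w) / w, mode="same")
resid = flow - trend

def ar_forecast(x, lags=3):
    """One-step-ahead linear AR fit (stand-in for the ANN in the paper)."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    y = x[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return x[-lags:] @ coef

# --- Step 2: forecast each component separately, then recombine ---
forecast = ar_forecast(trend) + ar_forecast(resid)
```

Forecasting each simpler component and summing is exactly the structure the hybrid models share; only the decomposition tool and the per‐component learner differ.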

4.
Based on the concept of 'decomposition and ensemble', a novel ensemble forecasting approach is proposed for complex time series by coupling sparse representation (SR) and a feedforward neural network (FNN), i.e. the SR‐based FNN approach. Three main steps are involved: data decomposition via SR, individual forecasting via FNN and ensemble forecasting via simple addition. In particular, to capture various coexisting hidden factors, the effective decomposition tool of SR, with its unique virtues of flexibility and generalization, is introduced to formulate an overcomplete dictionary covering diverse bases, e.g. an exponential basis for the main trend, a Fourier basis for cyclical (and seasonal) features and a wavelet basis for transient actions, unlike other techniques that rely on a single basis. Using the crude oil price (a typical complex time series) as sample data, the empirical study statistically confirms the superiority of the SR‐based FNN method over some other popular forecasting models and similar ensemble models (with other decomposition tools). Copyright © 2016 John Wiley & Sons, Ltd.
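The dictionary‐decomposition idea can be illustrated with the toy sketch below. The atoms, the synthetic "price", and the ridge‐regularized fit are invented stand‐ins; a real SR solver would enforce sparsity (e.g. via matching pursuit or lasso) and the paper's dictionary also includes wavelet atoms.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
t = np.arange(n)
# Synthetic "crude oil price": linear trend + cycle + noise
price = 0.02 * t + 2 * np.sin(2 * np.pi * t / 40) + rng.normal(0, 0.3, n)

# --- Overcomplete dictionary: trend atoms plus a few Fourier atoms ---
atoms = [np.ones(n), t / n]
for per in (20, 40, 80):
    atoms += [np.sin(2 * np.pi * t / per), np.cos(2 * np.pi * t / per)]
D = np.column_stack(atoms)                        # (n, 8) dictionary matrix

# Ridge-regularized fit as a simple stand-in for a sparse-coding solver
lam = 1e-3
coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ price)

components = D * coef                 # one extracted component per atom
recon_error = np.linalg.norm(D @ coef - price) / np.linalg.norm(price)
```

Each column of `components` would then be forecast individually (by an FNN in the paper) and the forecasts added back together.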

5.
This paper examined the forecasting performance of disaggregated data with spatial dependency and applied it to forecasting electricity demand in Japan. We compared the performance of the spatial autoregressive ARMA (SAR‐ARMA) model with that of the vector autoregressive (VAR) model from a Bayesian perspective. With regard to the log marginal likelihood and log predictive density, the VAR(1) model performed better than the SAR‐ARMA(1,1) model. For electricity demand in Japan, we can conclude that the VAR model with contemporaneous aggregation had better forecasting performance than the SAR‐ARMA model. Copyright © 2011 John Wiley & Sons, Ltd.

6.
Empirical mode decomposition (EMD)‐based ensemble methods have become increasingly popular in the research field of forecasting, substantially enhancing prediction accuracy. The key factor in this type of method is the multiscale decomposition that immensely mitigates modeling complexity. Accordingly, this study probes this factor and makes further innovations from a new perspective of multiscale complexity. In particular, this study quantitatively investigates the relationship between the decomposition performance and prediction accuracy, thereby developing (1) a novel multiscale complexity measurement (for evaluating multiscale decomposition), (2) a novel optimized EMD (OEMD) (considering multiscale complexity), and (3) a novel OEMD‐based forecasting methodology (using the proposed OEMD in multiscale analysis). With crude oil and natural gas prices as samples, the empirical study statistically indicates that the forecasting capability of EMD‐based methods is highly reliant on the decomposition performance; accordingly, the proposed OEMD‐based methods considering multiscale complexity significantly outperform the benchmarks based on typical EMDs in prediction accuracy.
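The abstract does not spell out the complexity measurement itself; permutation entropy is one standard way to quantify how complex each decomposed scale is, and the stdlib‐only sketch below (with invented example series) shows that idea, not the paper's specific measure.

```python
import math
import random
from itertools import permutations

def permutation_entropy(x, order=3):
    """Normalized permutation entropy: ~0 for a regular series, ~1 for a
    maximally complex ordinal structure (a common complexity proxy)."""
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(x) - order + 1):
        window = x[i:i + order]
        # Ordinal pattern: ranks of the values inside the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] += 1
    total = len(x) - order + 1
    h = -sum((c / total) * math.log(c / total) for c in counts.values() if c)
    return h / math.log(math.factorial(order))

random.seed(4)
smooth = list(range(100))                         # low-complexity component
noisy = [random.random() for _ in range(100)]     # high-complexity component
pe_low = permutation_entropy(smooth)
pe_high = permutation_entropy(noisy)
```

A decomposition that yields components with low (and well‐separated) complexity values is, in the spirit of the paper, easier to model and forecast.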

7.
The most up‐to‐date annual average daily traffic (AADT) is always required for transport model development and calibration. However, current‐year AADT data are not always available. Short‐term traffic flow forecasting models can be used to predict traffic flows for the current year. In this paper, two non‐parametric models, non‐parametric regression (NPR) and Gaussian maximum likelihood (GML), are chosen for short‐term traffic forecasting based on historical data collected for the annual traffic census (ATC) in Hong Kong. These models were chosen because they are flexible and efficient in forecasting daily vehicular flows at the 87 Hong Kong ATC core stations. The daily vehicular flows predicted by these models are then used to calculate the AADT for the current year, 1999. The overall prediction and comparison results show that the NPR model produces better forecasts than the GML model on the Hong Kong ATC data. Copyright © 2006 John Wiley & Sons, Ltd.

8.
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We approach the usual caveats of probabilistic forecasts studies—small samples, limited models, and nonholistic validations—by performing a comprehensive comparison of 15 predictive schemes during a time period of over 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy and forecasting errors. Using a new composite indicator, the integrated forecast score, we show that risk‐neutral densities outperform historical‐based predictions in terms of information content. We find that the variance gamma model generates the highest out‐of‐sample likelihood of observed prices and the lowest predictive errors, whereas the GARCH‐based GJR‐FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, or the nonparametric Breeden–Litzenberger formula yield biased predictions and are rejected in statistical tests.
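The integrated forecast score is the paper's own composite construction, but the standard building block beneath such density comparisons, the average log predictive density (log score), can be sketched with the stdlib alone; the Gaussian densities and all numbers below are invented.

```python
import math
import random

random.seed(5)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def log_score(outcomes, mu, sigma):
    """Average log predictive density: higher = better density forecast."""
    return sum(math.log(normal_pdf(x, mu, sigma)) for x in outcomes) / len(outcomes)

# Realized outcomes actually drawn from N(0, 1)
outcomes = [random.gauss(0.0, 1.0) for _ in range(2000)]

well_specified = log_score(outcomes, 0.0, 1.0)   # correctly centred density
biased = log_score(outcomes, 1.0, 1.0)           # mis-centred density
```

A mis‐centred (biased) predictive density earns a strictly lower log score on average, which is the mechanism behind rejecting biased candidates in statistical tests.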

9.
In this study, new variants of genetic programming (GP), namely gene expression programming (GEP) and multi‐expression programming (MEP), are utilized to build models for bankruptcy prediction. Generalized relationships are obtained to classify samples of 136 bankrupt and non‐bankrupt Iranian corporations based on their financial ratios. An important contribution of this paper is to identify effective predictive financial ratios based on an extensive review of the bankruptcy prediction literature and a sequential feature selection analysis. The predictive performance of the GEP and MEP forecasting methods is compared with that of traditional statistical methods and a generalized regression neural network. The proposed GEP and MEP models effectively classify bankrupt and non‐bankrupt firms and outperform the models developed using the other methods. Copyright © 2011 John Wiley & Sons, Ltd.

10.
This paper compares the estimation and forecasting ability of quasi‐maximum likelihood (QML) and support vector machines (SVMs) for financial data. The financial series are fitted to a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew‐t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so this paper utilizes the semi‐parametric SVM method and shows that it is more efficient than QML under the skewed Student's‐t distributed error. As the SVM is a kernel‐based technique, we further investigate its performance by applying a Gaussian kernel and a wavelet kernel separately. The results suggest that the SVM‐based method generally performs better than QML for both in‐sample and out‐of‐sample data. The outcomes also highlight that the wavelet kernel outperforms the Gaussian kernel, with lower forecasting error, better generalization capability and greater computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.
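For reference, the two kernel families can be written out directly. The sketch below uses the Gaussian (RBF) kernel and the commonly used Morlet‐based wavelet kernel; the abstract does not state which mother wavelet the paper adopts, and the bandwidths and test points here are arbitrary.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Standard RBF kernel: exp(-||x - y||^2 / (2 sigma^2))."""
    d = x - y
    return float(np.exp(-np.dot(d, d) / (2 * sigma ** 2)))

def wavelet_kernel(x, y, a=1.0):
    """Morlet-based wavelet kernel: prod_i cos(1.75 d_i/a) * exp(-d_i^2 / (2 a^2))."""
    d = (x - y) / a
    return float(np.prod(np.cos(1.75 * d) * np.exp(-d ** 2 / 2)))

x = np.array([0.2, -0.1, 0.4])
y = np.array([0.0, 0.3, 0.1])
k_gauss = gaussian_kernel(x, y)
k_wave = wavelet_kernel(x, y)
```

Both kernels are symmetric and equal 1 at zero distance; the oscillatory cosine factor is what lets the wavelet kernel represent locally oscillating structure that a purely monotone RBF cannot.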

11.
We investigate the predictive performance of various classes of value‐at‐risk (VaR) models in several dimensions—unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. By using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997–1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The Riskmetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)‐based models do better in the crisis period. Filtering often appears to be useful for some models, particularly for the EVT models, though it could be harmful for some other models. The CaViaR quantile regression models of Engle and Manganelli (2004) have shown some success in predicting the VaR risk measure for various periods, generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
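The empirical coverage criterion has a very direct implementation: count how often realized returns breach the VaR forecast and compare with the nominal level. The sketch below uses simulated heavy‐tailed returns and a deliberately simple historical‐quantile VaR; none of it reproduces the paper's models.

```python
import numpy as np

rng = np.random.default_rng(6)
# Heavy-tailed "daily returns" (Student-t, df = 5, scaled to ~1% moves)
returns = rng.standard_t(df=5, size=5000) * 0.01

# A naive 5% VaR forecast: the empirical 5% quantile of a training window
# (real models would update the forecast every day).
var_5 = np.quantile(returns[:2500], 0.05)

def empirical_coverage(returns, var_forecast):
    """Fraction of days the return fell below VaR; should be near nominal 5%."""
    return (returns < var_forecast).mean()

coverage = empirical_coverage(returns[2500:], var_5)
```

A model whose out‐of‐sample coverage deviates materially from 5% would be penalized under the coverage‐based comparisons the paper runs.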

12.
We propose a wavelet neural network (neuro‐wavelet) model for the short‐term forecast of stock returns from high‐frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non‐stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non‐decimated wavelet‐based multi‐resolution decomposition of an exogenous signal showed consistently superior forecasting performance. The proposed model achieved reasonable forecasting accuracy for the one‐, three‐ and five‐step‐ahead horizons. The procedure used to build the neuro‐wavelet model is reusable and can be applied to any high‐frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.

13.
The first purpose of this paper is to assess the short‐run forecasting capabilities of two competing financial duration models. The forecast performance of the Autoregressive Conditional Multinomial–Autoregressive Conditional Duration (ACM‐ACD) model is better than the Asymmetric Autoregressive Conditional Duration (AACD) model. However, the ACM‐ACD model is more complex in terms of the computational setting and is more sensitive to starting values. The second purpose is to examine the effects of market microstructure on the forecasting performance of the two models. The results indicate that the forecast performance of the models generally decreases as the liquidity of the stock increases, with the exception of the most liquid stocks. Furthermore, a simple filter of the raw data improves the performance of both models. Finally, the results suggest that both models capture the characteristics of the micro data very well with a minimum sample length of 20 days. Copyright © 2008 John Wiley & Sons, Ltd.

14.
This paper employs a non‐parametric method to forecast the high‐frequency Canadian/US dollar exchange rate. The introduction of a microstructure variable, order flow, substantially improves the predictive power of both linear and non‐linear models. The non‐linear models outperform random walk and linear models on a number of recursive out‐of‐sample forecasts. Two main criteria are applied to evaluate model performance: root mean squared error (RMSE) and the ability to predict the direction of exchange rate moves. The artificial neural network (ANN) model consistently achieves lower RMSE than the random walk and linear models across the various out‐of‐sample set sizes. Moreover, the ANN performs better than the other models in terms of the percentage of correctly predicted exchange rate changes. The empirical results suggest that an optimal ANN architecture is superior to the random walk and any linear competing model for high‐frequency exchange rate forecasting. Copyright © 2006 John Wiley & Sons, Ltd.
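The two evaluation criteria can be sketched directly; the tiny example series below are invented, not the paper's data.

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error of level forecasts."""
    a, f = np.asarray(actual), np.asarray(forecast)
    return float(np.sqrt(np.mean((a - f) ** 2)))

def directional_accuracy(actual, forecast):
    """Share of periods where the forecast change has the same sign
    as the realized change of the exchange rate."""
    return float((np.sign(np.diff(actual)) == np.sign(np.diff(forecast))).mean())

# Invented CAD/USD levels: realized vs. model forecasts
actual = [1.30, 1.31, 1.29, 1.32, 1.33]
model = [1.30, 1.32, 1.28, 1.31, 1.34]
err = rmse(actual, model)
hit = directional_accuracy(actual, model)
```

A model can do well on one criterion and poorly on the other, which is why the paper reports both RMSE and the percentage of correctly predicted directions.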

15.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning‐curve context. Copyright © 1999 John Wiley & Sons, Ltd.
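The decision aid in such studies is typically a fitted log‐linear learning curve, cost = a · units^b; a minimal NumPy fit on synthetic 80%‐curve data (the noise level and sample size are invented) looks like this:

```python
import numpy as np

# Synthetic production data following an 80% learning curve:
# unit cost = a * units^b with progress ratio 2^b = 0.8
a_true, b_true = 100.0, np.log2(0.8)
units = np.arange(1, 31)
rng = np.random.default_rng(7)
cost = a_true * units ** b_true * np.exp(rng.normal(0, 0.02, units.size))

# Fit log(cost) = log(a) + b * log(units) by ordinary least squares
A = np.column_stack([np.ones(units.size), np.log(units)])
(log_a, b_hat), *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)

learning_rate = 2 ** b_hat                 # fitted progress ratio, ~0.80 here
forecast_unit_40 = np.exp(log_a) * 40 ** b_hat   # curve projection (the "aid")
```

Projections like `forecast_unit_40` are what subjects in the study received as the decision aid and could accept, adjust, or override with their own judgement.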

16.
This paper assesses the informational content of alternative realized volatility estimators, the daily range and implied volatility in multi‐period out‐of‐sample Value‐at‐Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovation process and a Monte Carlo simulation approach to produce the multi‐period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures produce VaR forecasts that are precise in both statistical and regulatory terms across forecasting horizons, with implied volatility being especially accurate for monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune to microstructure noise bias and price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.

17.
We propose an ensemble of long short‐term memory (LSTM) neural networks for intraday stock predictions, using a large variety of technical analysis indicators as network inputs. The proposed ensemble operates in an online way, weighting the individual models in proportion to their recent performance, which allows us to deal with possible nonstationarities in an innovative way. The performance of the models is measured by the area under the receiver operating characteristic curve. We evaluate the predictive power of our model on several US large‐cap stocks and benchmark it against lasso and ridge logistic classifiers. The proposed model is found to perform better than the benchmark models and equally weighted ensembles.
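The online performance‐weighting scheme can be sketched as follows. Simulated binary direction predictions stand in for the LSTM pool, and rolling accuracy stands in for the paper's AUC‐based performance measure; the window length and skill levels are invented.

```python
import numpy as np

rng = np.random.default_rng(8)
T, M = 500, 3
y = rng.integers(0, 2, size=T)                    # direction of the next move
# Three stand-in models with accuracies ~0.75, 0.6, 0.5
preds = np.array([np.where(rng.random(T) < p, y, 1 - y)
                  for p in (0.75, 0.6, 0.5)])

window = 50
weights_history = []
ens = np.empty(T, dtype=int)
for t in range(T):
    lo = max(0, t - window)
    # Weight each model in proportion to its recent accuracy
    recent_acc = (preds[:, lo:t] == y[lo:t]).mean(axis=1) if t > 0 else np.ones(M)
    w = recent_acc / recent_acc.sum() if recent_acc.sum() > 0 else np.ones(M) / M
    ens[t] = int(np.dot(w, preds[:, t]) >= 0.5)   # weighted vote for step t
    weights_history.append(w)

final_w = weights_history[-1]
ens_acc = (ens == y).mean()
```

Because the weights are recomputed every step from a rolling window, a model that deteriorates under a regime change is automatically down‐weighted, which is how the ensemble copes with nonstationarity.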

18.
Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider one‐ to 10‐day‐ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate strong potential for the use of ensemble prediction in temperature density forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
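The quantile‐debiasing step can be illustrated in its simplest, intercept‐only form: the constant shift that minimizes the pinball (quantile) loss is the empirical τ‐quantile of the forecast errors. The sketch below uses fully synthetic temperatures and a deliberately biased raw ensemble quantile; the paper's quantile regression also uses the ensemble quantiles as regressors.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 4000
mu = rng.normal(20, 2, n)                 # day-to-day mean temperature level
y = mu + rng.normal(0, 1.5, n)            # realized temperature (deg C)
raw_q90 = mu + 0.5                        # under-dispersed ensemble 90% quantile

tau = 0.9
# Intercept-only quantile regression: the pinball-optimal constant shift
# is the empirical tau-quantile of the errors on the training half.
c = np.quantile(y[:2000] - raw_q90[:2000], tau)
debiased_q90 = raw_q90[2000:] + c

coverage_raw = (y[2000:] <= raw_q90[2000:]).mean()
coverage_debiased = (y[2000:] <= debiased_q90).mean()
```

The raw ensemble quantile covers far fewer than 90% of outcomes (ensembles are typically under‐dispersed), while the debiased quantile's coverage is pulled close to the nominal 90%.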

19.
A physically based model for ground‐level ozone forecasting is evaluated for Santiago, Chile. The model predicts the daily peak ozone concentration, with the daily rise of air temperature as the input variable; weekends and rainy days appear as interventions. This model was used to analyse historical data, using the Linear Transfer Function/Finite Impulse Response (LTF/FIR) formalism; the Simultaneous Transfer Function (STF) method was used to analyse several monitoring stations together. Model evaluation showed good forecasting performance across stations—for both low and high ozone impacts—with probability of detection (POD) values between 70% and 100%, Heidke skill scores between 40% and 70% and low false alarm rates (FAR). The model consistently outperforms a pure persistence forecast, and its performance was not sensitive to different implementation options. Performance degrades for two‐ and three‐day‐ahead forecasts, but remains acceptable for the purpose of developing an environmental warning system in Santiago. Copyright © 2002 John Wiley & Sons, Ltd.
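The verification scores quoted (POD, FAR, Heidke skill) all come from a standard 2×2 forecast contingency table; a self‐contained sketch with invented counts (note that FAR is computed here as the false alarm ratio, false alarms over forecast events, while some authors use the false alarm rate instead):

```python
def verification_scores(hits, misses, false_alarms, correct_neg):
    """Categorical forecast scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                  # probability of detection
    far = false_alarms / (hits + false_alarms)    # false alarm ratio
    n = hits + misses + false_alarms + correct_neg
    # Heidke skill score: proportion correct relative to random chance
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_neg + misses) * (correct_neg + false_alarms)) / n
    hss = (hits + correct_neg - expected) / (n - expected)
    return pod, far, hss

# Invented counts for, e.g., exceedances of an ozone alert threshold
pod, far, hss = verification_scores(hits=40, misses=10,
                                    false_alarms=15, correct_neg=135)
```

With these counts POD = 0.8 and HSS ≈ 0.68, i.e. within the ranges the evaluation reports; HSS = 0 would mean no skill beyond chance.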

20.
For forecasting nonstationary and nonlinear energy price time series, a novel adaptive multiscale ensemble learning paradigm is developed that incorporates ensemble empirical mode decomposition (EEMD), particle swarm optimization (PSO) and least squares support vector machines (LSSVM) with a kernel function prototype. First, extrema‐symmetry‐expansion EEMD, which effectively restrains mode mixing and end effects, is used to decompose the energy price into simple modes. Second, the fine‐to‐coarse reconstruction algorithm identifies the high‐frequency, low‐frequency and trend components. An autoregressive integrated moving average model predicts the high‐frequency components, while LSSVM forecasts the low‐frequency and trend components. At the same time, a universal kernel function prototype is introduced to compensate for the drawbacks of any single kernel function; it adaptively selects the optimal kernel function type and model parameters for the specific data using the PSO algorithm. Finally, the prediction results of all the components are aggregated into forecasts of the energy price time series. The empirical results show that, compared with popular prediction methods, the proposed method significantly improves the prediction accuracy of energy prices, with high accuracy in both level and directional predictions. Copyright © 2016 John Wiley & Sons, Ltd.
