Similar Literature
20 similar records found.
1.
In this study, new variants of genetic programming (GP), namely gene expression programming (GEP) and multi-expression programming (MEP), are utilized to build models for bankruptcy prediction. Generalized relationships are obtained to classify samples of 136 bankrupt and non-bankrupt Iranian corporations based on their financial ratios. An important contribution of this paper is to identify effective predictive financial ratios on the basis of an extensive review of the bankruptcy prediction literature and a sequential feature selection analysis. The predictive performance of the GEP and MEP forecasting methods is compared with the performance of traditional statistical methods and a generalized regression neural network. The proposed GEP and MEP models are effectively capable of classifying bankrupt and non-bankrupt firms and outperform the models developed using other methods. Copyright © 2011 John Wiley & Sons, Ltd.
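A minimal sketch of GP-based bankruptcy classification, assuming gplearn's SymbolicClassifier as a stand-in: gplearn implements classical tree-based GP rather than the GEP/MEP encodings of the paper, and the four "financial ratios" here are synthetic placeholders.

```python
import numpy as np
from gplearn.genetic import SymbolicClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 136
X = rng.normal(size=(n, 4))            # 4 hypothetical financial ratios
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
gp = SymbolicClassifier(population_size=500, generations=20,
                        function_set=('add', 'sub', 'mul', 'div'),
                        parsimony_coefficient=0.001, random_state=0)
gp.fit(X_tr, y_tr)
print("test accuracy:", round(accuracy_score(y_te, gp.predict(X_te)), 3))
print("evolved rule:", gp._program)    # the symbolic classification expression
```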

2.
Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors because it is difficult to establish a cause-effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy. Recent research indicates that auditors signal bankruptcy in only about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a new approach for dealing with the problem of apparent indiscernibility between objects in a set; it has achieved reported bankruptcy prediction accuracies ranging from 76% to 88% in two recent studies. These accuracy levels appear to be superior to auditor signalling rates; however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non-current data. This study advances research in this area by comparing rough set prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991 to 1997 time period. Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors which had both significant theoretical support and were present in multiple studies. These factors were expressed as variables, and data for the 11 variables were then obtained for 146 bankrupt United States public companies during the years 1991–1997. This sample was then matched in terms of size and industry to 145 non-bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables from the 11 possible predictive variables. The rough sets models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going-concern problems via opinion modifications for only 54% of the bankrupt companies. However, the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going-concern issues were included. In contrast with prior rough sets research, which suggested that rough sets theory offered significant predictive improvements for auditors, the rough sets models developed in this research did not provide any significant advantage in prediction accuracy over the actual auditors' methodologies. The current research results should be fairly robust since this study employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real-world bankruptcy/non-bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
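The core rough-sets machinery is the pair of lower and upper approximations of a decision class under an indiscernibility relation. A self-contained sketch with toy, hypothetical discretized ratios:

```python
from collections import defaultdict

def indiscernibility(objects, attrs):
    """Partition objects into equivalence classes by their values on attrs."""
    classes = defaultdict(set)
    for name, row in objects.items():
        classes[tuple(row[a] for a in attrs)].add(name)
    return list(classes.values())

def lower_upper(objects, attrs, target):
    """Lower/upper approximations of the target set under attrs."""
    lower, upper = set(), set()
    for block in indiscernibility(objects, attrs):
        if block <= target:       # block entirely inside the target class
            lower |= block
        if block & target:        # block overlaps the target class
            upper |= block
    return lower, upper

# Hypothetical firms with discretized condition attributes
firms = {
    "A": {"liquidity": "low",  "leverage": "high"},
    "B": {"liquidity": "low",  "leverage": "high"},
    "C": {"liquidity": "high", "leverage": "low"},
    "D": {"liquidity": "low",  "leverage": "low"},
}
bankrupt = {"A", "D"}
lo, up = lower_upper(firms, ["liquidity", "leverage"], bankrupt)
print("lower:", lo, "upper:", up)   # lower: {'D'}  upper: {'A', 'B', 'D'}
```

Firms in the boundary region (upper minus lower, here A and B) are exactly the indiscernible cases the theory is designed to expose.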

3.
This paper employs sequential minimal optimization (SMO) to develop a default prediction model for the US retail market. Principal components analysis is used for variable reduction. Four standard credit scoring techniques—naïve Bayes, logistic regression, recursive partitioning and artificial neural network—are compared to SMO, using a sample of 195 healthy firms and 51 distressed firms over five time periods between 1994 and 2002. The five techniques perform well in predicting default, particularly one year before financial distress. Furthermore, the predictions remain sound even five years before default. No single methodology has the absolute best classification ability, as model performance varies across time periods and variable groups. External influences have greater impacts on naïve Bayes than on the other techniques. In terms of similarity with Moody's ranking, SMO excelled over the other techniques in most of the time periods. Copyright © 2008 John Wiley & Sons, Ltd.
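A hedged sketch of the pipeline shape: PCA for variable reduction feeding an SVM trained with an SMO-type solver (scikit-learn's SVC uses libsvm's SMO), compared against logistic regression. The data are synthetic; only the 195/51 class imbalance mimics the paper's sample.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(246, 12))                     # 12 hypothetical ratios
y = np.r_[np.zeros(195), np.ones(51)].astype(int)  # 0 = healthy, 1 = distressed
X[y == 1] += 0.8                                   # inject some class separation

for name, clf in [("SMO/SVC", SVC(kernel="rbf")),
                  ("logit", LogisticRegression(max_iter=1000))]:
    pipe = make_pipeline(StandardScaler(), PCA(n_components=5), clf)
    print(name, round(cross_val_score(pipe, X, y, cv=5).mean(), 3))
```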

4.
This study analyzes the nonlinear relationships between accounting-based key performance indicators and the probability that a firm will go bankrupt. The analysis focuses particularly on young firms and examines whether these nonlinear relationships are affected by a firm's age. The analysis of nonlinear relationships between various predictors of bankruptcy and their interaction effects is based on a structured additive regression model and on a comprehensive data set on German firms. The results provide empirical evidence that a firm's age has a considerable effect on how accounting-based key performance indicators can be used to predict the likelihood that a firm will go bankrupt. More specifically, the results show that older firms and young firms differ with respect to the nonlinear effects of the equity ratio, the return on assets, and sales growth on their probability of bankruptcy.
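A sketch of the modeling idea, assuming pygam's LogisticGAM as a stand-in for a structured additive regression: smooth terms for each indicator plus a tensor interaction between the equity ratio and firm age. The data-generating process and variable names are invented for illustration.

```python
import numpy as np
from pygam import LogisticGAM, s, te

rng = np.random.default_rng(2)
n = 2000
equity_ratio = rng.uniform(0, 0.8, n)
roa = rng.normal(0.05, 0.1, n)
age = rng.integers(1, 40, n).astype(float)

# Toy DGP: nonlinear ROA effect, equity ratio matters more for young firms
logit = -1 + 8 * (roa - 0.05) ** 2 - 3 * equity_ratio * (age < 8) - equity_ratio
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.c_[equity_ratio, roa, age]
gam = LogisticGAM(s(0) + s(1) + te(0, 2)).fit(X, y)  # te(0, 2): equity x age
gam.summary()
```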

5.
This paper compares the predictive ability of ARIMA models in forecasting sales revenue. Comparisons were made at both the industry and firm levels. With respect to the form of the ARIMA model, a parsimonious model of the form (0, 1, 1)(0, 1, 1) was identified most frequently for firms and industries. This model was identified previously by Griffin and Watts for the earnings series, and by Moriarty and Adams for the sales series. As a parsimonious model, its predictive accuracy was quite good. However, predictive accuracy was also found to be a function of the industry. Of the eleven industry classifications, 'metals' had the lowest predictive accuracy using both firm-specific and industry-specific ARIMA models.
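Fitting the parsimonious (0, 1, 1)(0, 1, 1) model is a one-liner with statsmodels; here the seasonal period of 4 (quarterly sales) and the simulated series are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(3)
t = np.arange(80)
sales = 100 + 2 * t + 10 * np.sin(2 * np.pi * t / 4) + rng.normal(0, 3, 80)

model = SARIMAX(sales, order=(0, 1, 1), seasonal_order=(0, 1, 1, 4))
res = model.fit(disp=False)
print(res.summary().tables[1])           # the two MA coefficients
print("4-quarter forecast:", res.forecast(steps=4))
```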

6.
We propose a wavelet neural network (neuro-wavelet) model for the short-term forecast of stock returns from high-frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non-stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non-decimated wavelet-based multi-resolution decomposition of an exogenous signal showed consistently superior forecasting performance. Reasonable forecasting accuracy for the one-, three-, and five-step-ahead horizons was achieved by the proposed model. The procedure used to build the neuro-wavelet model is reusable and can be applied to any high-frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.
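A sketch of the decomposition-then-forecast pattern: pywt's stationary (non-decimated) wavelet transform supplies multi-resolution coefficients, and a feedforward MLP stands in for the paper's Jordan recurrent net. The series is simulated, and the sketch ignores the look-ahead/boundary effects a real application of the transform must handle.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
returns = rng.normal(0, 1, 512) * 0.01           # length divisible by 2**level
coeffs = pywt.swt(returns, 'db4', level=3)       # [(cA3, cD3), ..., (cA1, cD1)]
features = np.column_stack([c for pair in coeffs for c in pair])

X, y = features[:-1], returns[1:]                # one-step-ahead target
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X[:400], y[:400])
print("out-of-sample R^2:", round(net.score(X[400:], y[400:]), 3))
```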

7.
This paper applies the Kalman filtering procedure to estimate the persistent and transitory noise components of accounting earnings. Designating the transitory noise component separately (under a label such as extraordinary items) in financial reports should help users predict future earnings. If a firm has no foreknowledge of future earnings, managers can apply a filter to the firm's accounting earnings more efficiently than an interested user. If management has foreknowledge of earnings, application of a filtering algorithm can result in smoothed variables that convey information otherwise not available to users. Application of a filtering algorithm to a sample of firms revealed that a substantial number of firms exhibited a significant transitory noise component of earnings. Also, for those firms whose earnings exhibited a significant departure from the random walk process, the paper shows that filtering can be fruitfully applied to improve predictive ability.
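A minimal sketch of the persistent/transitory split using a local-level state-space model and the Kalman smoother: the random-walk level plays the persistent component and the observation noise the transitory one. statsmodels' UnobservedComponents is assumed; earnings are simulated.

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(5)
level = 10 + np.cumsum(rng.normal(0, 0.5, 60))   # persistent random-walk earnings
earnings = level + rng.normal(0, 1.0, 60)        # plus transitory noise

res = UnobservedComponents(earnings, level='local level').fit(disp=False)
persistent = res.smoothed_state[0]               # Kalman-smoothed level
transitory = earnings - persistent
print("estimated variances (irregular, level):", res.params)
```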

8.
Empirical studies in the area of sovereign debt have used statistical models individually to predict the probability of debt rescheduling. Unfortunately, researchers have made few efforts to test the reliability of these model predictions or to identify a superior prediction model among competing models. This paper tested the predictive abilities of neural network, OLS, and logit models regarding debt rescheduling by less developed countries (LDCs). All models predicted well out-of-sample. The results demonstrated consistent performance across all models, indicating that researchers and practitioners can rely on neural networks or on the traditional statistical models for useful predictions. Copyright © 2001 John Wiley & Sons, Ltd.

9.
It has been widely accepted that many financial and economic variables are nonlinear, and neural networks can model flexible linear or nonlinear relationships among variables. The present paper deals with an important issue: can the many studies in the finance literature evidencing predictability of stock returns by means of linear regression be improved by a neural network? We show that predictive accuracy can be improved by a neural network, and the results largely hold out-of-sample. Both the neural network and linear forecasts show significant market-timing ability. While the switching portfolio based on the linear forecasts outperforms the buy-and-hold market portfolio under all three transaction cost scenarios, the switching portfolio based on the neural network forecasts beats the market only if there is no transaction cost. Copyright © 1999 John Wiley & Sons, Ltd.
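The switching-portfolio logic can be stated in a few lines: hold the market when the forecast return exceeds the risk-free rate, otherwise hold the risk-free asset, charging a proportional cost on each switch. Forecasts, returns, and the cost level below are simulated assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 240
mkt = rng.normal(0.006, 0.04, T)          # monthly market returns
rf = np.full(T, 0.003)                    # risk-free rate
forecast = mkt + rng.normal(0, 0.03, T)   # noisy stand-in for model forecasts
cost = 0.005                              # one-way transaction cost

signal = (forecast > rf).astype(int)
switches = np.abs(np.diff(np.r_[0, signal]))          # 1 whenever we trade
port = np.where(signal == 1, mkt, rf) - cost * switches
print("switching (ann.):", round(port.mean() * 12, 4),
      " buy-and-hold (ann.):", round(mkt.mean() * 12, 4))
```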

10.
An improved classification device for bankruptcy forecasting is proposed. The proposed approach relies on mainstream classifiers whose inputs are obtained from a so-called multinorm analysis, instead of traditional indicators such as the ROA ratio and other accounting ratios. A battery of industry norms (computed using nonparametric quantile regressions) is obtained, and the deviations of each firm from this multinorm system are used as inputs for the classifiers. The approach is applied to predict bankruptcy on a representative sample of Spanish manufacturing firms. Results indicate that our proposal may significantly enhance predictive accuracy, in both linear and nonlinear classifiers. Copyright © 2013 John Wiley & Sons, Ltd.
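A sketch of the norm-deviation idea: estimate an industry norm by quantile regression and feed each firm's deviation from it to a downstream classifier. statsmodels' linear QuantReg stands in for the paper's nonparametric version; the variables are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
df = pd.DataFrame({"size": rng.uniform(1, 10, 300)})
df["roa"] = 0.02 * df["size"] + rng.normal(0, 0.05, 300)

norm = smf.quantreg("roa ~ size", df).fit(q=0.5)     # median industry norm
df["roa_deviation"] = df["roa"] - norm.predict(df)   # classifier input
print(df["roa_deviation"].describe())
```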

11.
This paper investigates the impact of financial market imperfections on the profitability of small and medium-sized enterprises (SMEs) using a unique panel data set of US SMEs spanning the period 1979–2017. The data set makes use of unique information on proxies of market imperfections pertaining to each firm in the sample. First, the findings document the statistical impact of those financial market imperfections on profitability. Moreover, a forecasting exercise illustrates the superiority of the model that explicitly includes those proxies.

12.
We extract information on relative shopping interest from Google search volume and provide a genuine and economically meaningful approach to directly incorporate these data into a portfolio optimization technique. By generating a firm ranking based on a Google search volume metric, we can predict future sales and thus generate excess returns in a portfolio exercise. The higher the (shopping) search volume for a firm, the higher we rank the company in the optimization process. For a sample of firms in the fashion industry, our results demonstrate that shopping interest exhibits predictive content that can be exploited in a real-time portfolio strategy yielding robust alphas of around 5.5%.
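The ranking step reduces to a few lines of pandas; the tickers and search-volume values below are hypothetical placeholders, and rank-proportional weights are one simple way (not necessarily the paper's) to tilt toward highly searched firms.

```python
import pandas as pd

sv = pd.Series({"FIRM_A": 85, "FIRM_B": 40, "FIRM_C": 70, "FIRM_D": 15},
               name="search_volume")
rank = sv.rank(ascending=True)        # higher search volume, higher rank
weights = rank / rank.sum()           # rank-proportional long-only weights
print(weights.sort_values(ascending=False))
```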

13.
We use an investment strategy based on firm-level capital structures. Investing in low-leverage firms yields abnormal returns of 4.43% per annum. If an investor holds a portfolio of low-leverage and low-market-to-book-ratio firms, abnormal returns increase to 16.18% per annum. A portfolio of low leverage and low market risk yields abnormal returns of 6.67%, and a portfolio of small firms with low leverage earns 5.37% per annum. We use the Fama-MacBeth (1973) methodology with modifications. We confirm that portfolios based on low leverage earn higher returns over longer investment horizons. Our results are robust to other risk factors and to the risk class of the firm. Copyright © 2011 John Wiley & Sons, Ltd.
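The Fama-MacBeth two-pass mechanic behind such tests: a cross-sectional regression of returns on characteristics each month, then a t-statistic on the time series of slope estimates. A minimal sketch on simulated data where low leverage earns a premium:

```python
import numpy as np

rng = np.random.default_rng(8)
T, N = 120, 200
leverage = rng.uniform(0, 1, (T, N))
ret = 0.01 - 0.004 * leverage + rng.normal(0, 0.05, (T, N))

slopes = []
for t in range(T):
    X = np.column_stack([np.ones(N), leverage[t]])        # monthly cross-section
    beta, *_ = np.linalg.lstsq(X, ret[t], rcond=None)
    slopes.append(beta[1])
slopes = np.array(slopes)
t_stat = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(T))
print(f"mean leverage premium: {slopes.mean():.4f}, t = {t_stat:.2f}")
```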

14.
This paper is concerned with modelling time series by single hidden layer feedforward neural network models. A coherent modelling strategy based on statistical inference is presented. Variable selection is carried out using simple existing techniques. The problem of selecting the number of hidden units is solved by sequentially applying Lagrange multiplier type tests, with the aim of avoiding the estimation of unidentified models. Misspecification tests are derived for evaluating an estimated neural network model. All the tests are entirely based on auxiliary regressions and are easily implemented. A small-sample simulation experiment is carried out to show how the proposed modelling strategy works and how the misspecification tests behave in small samples. Two applications to real time series, one univariate and the other multivariate, are considered as well. Sets of one-step-ahead forecasts are constructed and forecast accuracy is compared with that of other nonlinear models applied to the same series. Copyright © 2006 John Wiley & Sons, Ltd.
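The auxiliary-regression mechanics of such Lagrange multiplier (LM) tests are easy to sketch: regress the restricted model's residuals on the original regressors plus nonlinear auxiliary (Taylor-type) terms, and refer n times the R-squared to a chi-squared distribution. This is a generic illustration of the mechanic, not the exact statistic derived in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
n = 300
x = rng.normal(size=n)
y = 0.5 * x + 0.8 * np.tanh(2 * x) + rng.normal(0, 0.5, n)  # truly nonlinear

X0 = np.column_stack([np.ones(n), x])                 # restricted linear model
resid = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]

Xa = np.column_stack([X0, x**2, x**3])                # auxiliary Taylor terms
r = resid - Xa @ np.linalg.lstsq(Xa, resid, rcond=None)[0]
R2 = 1 - r.var() / resid.var()
lm = n * R2                                           # LM statistic, df = 2 extra terms
print("LM =", round(lm, 2), "p =", round(1 - stats.chi2.cdf(lm, df=2), 4))
```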

15.
This paper develops a dynamic simultaneous-equations model for analyzing and forecasting the account balances in the income statement of a firm. In the model, the income statement accounts play the role of dependent variables that are jointly determined and explained by three types of exogenous variables: non-controllable, performance, and controllable. The model is estimated by the three-stage least-squares method using annual data series for six firms over the period from 1936 or 1950 to 1981. The immediate, delayed, and cumulative impacts of an exogenous shock on the income accounts are analyzed, and the implications for managerial decisions and strategies are discussed. To serve as a standard of comparison for the dynamic interdependency model, the Box-Jenkins approach is also used to develop an autoregressive integrated moving average (ARIMA) model for each of the accounts. Assessing the forecasting performance of the dynamic model against a naive model and the ARIMA and Elliott-Uphoff models over the 1982–84 period beyond the estimation period, we conclude that the dynamic model is a better representation of the income statement accounts of the firm and increases forecasting accuracy.
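A sketch of a two-equation simultaneous system for income-statement accounts estimated by 3SLS, assuming the linearmodels package's IV3SLS and its bracket notation for endogenous regressors. The account and driver names (sales, cogs, gnp, wages) and the data are hypothetical stand-ins.

```python
import numpy as np
import pandas as pd
from linearmodels.system import IV3SLS

rng = np.random.default_rng(10)
n = 60
df = pd.DataFrame({"gnp": rng.normal(100, 10, n),
                   "wages": rng.normal(50, 5, n)})
df["sales"] = 2 * df["gnp"] + rng.normal(0, 5, n)
df["cogs"] = 0.6 * df["sales"] + 0.5 * df["wages"] + rng.normal(0, 3, n)

formula = {
    "sales": "sales ~ 1 + gnp",                   # exogenous drivers only
    "cogs": "cogs ~ 1 + wages + [sales ~ gnp]",   # sales endogenous, gnp instruments
}
res = IV3SLS.from_formula(formula, df).fit()
print(res)
```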

16.
Manpower forecasting has made significant contributions to human resource management. Due to difficulties in collecting the data required for appropriate analysis, most studies in the literature concentrate on forecasts for individual firms. This paper presents a regression model which utilizes the data of large firms to draw inferences about the demands of other firms. More specifically, a regression model showing the negative relationship between the rank of a firm and its associated demand is fitted to the data of a number of large manufacturing firms. The area under the regression line delineated by the y-axis is then an estimate of the total demand of the whole industry. Confidence intervals for the estimate can also be constructed. As an illustration, the demand for industrial management manpower in Taiwan is forecast by applying the proposed model.
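The area-under-the-line estimator has a closed form under a linear fit: with demand = a - b * rank, demand hits zero at rank a/b, so the triangular area (the total-demand estimate) is a^2 / (2b). A worked sketch with hypothetical demands for the ten largest firms:

```python
import numpy as np

rank = np.arange(1, 11)                                   # firms ranked 1..10
demand = np.array([95, 80, 74, 60, 55, 48, 40, 37, 30, 26])

slope, intercept = np.polyfit(rank, demand, 1)            # slope is negative
a, b = intercept, -slope
total = a**2 / (2 * b)                                    # area under the line
print(f"fit: demand = {a:.1f} - {b:.1f} * rank; total industry demand = {total:.0f}")
```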

17.
The leverage effect—the correlation between an asset's return and its volatility—has played a key role in forecasting and understanding volatility and risk. While it is a long-standing consensus that leverage effects exist and improve forecasts, empirical evidence puzzlingly fails to show that this effect exists for many individual stocks, mischaracterizing risk and therefore leading to poor predictive performance. We examine this puzzle, with the goal of improving density forecasts, by relaxing the assumption of linearity of the leverage effect. Nonlinear generalizations of the leverage effect are proposed within the Bayesian stochastic volatility framework in order to capture flexible leverage structures. Efficient Bayesian sequential computation is developed and implemented to estimate this effect in a practical, online manner. Examining 615 stocks that comprise the S&P 500 and Nikkei 225, we find that our proposed nonlinear leverage effect model improves predictive performance for 89% of all stocks compared to the conventional stochastic volatility model.
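To make the idea concrete, here is an illustrative simulation of a stochastic volatility process where the return-volatility correlation is a nonlinear (threshold) function of the return shock. The threshold form and parameter values are toy assumptions, not the paper's Bayesian sequential estimator.

```python
import numpy as np

rng = np.random.default_rng(11)
T, mu, phi, sig = 1000, -9.0, 0.97, 0.15

def leverage(eps):
    # stronger negative correlation after large negative return shocks
    return -0.7 if eps < -1.0 else -0.3

h = np.empty(T); r = np.empty(T); h[0] = mu
for t in range(T - 1):
    eps = rng.normal()
    r[t] = np.exp(h[t] / 2) * eps                     # return given log-variance h
    rho = leverage(eps)                                # nonlinear leverage function
    eta = rho * eps + np.sqrt(1 - rho**2) * rng.normal()
    h[t + 1] = mu + phi * (h[t] - mu) + sig * eta
r[-1] = np.exp(h[-1] / 2) * rng.normal()
print("corr(return, next vol change):", round(np.corrcoef(r[:-1], np.diff(h))[0, 1], 3))
```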

18.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high-dimensional market data; this ability lets us incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology by predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
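A sketch of the extraction step: partial least squares compresses a high-dimensional predictor block into a few latent scores that are then used for forecasting. sklearn's PLSRegression stands in for the functional PLS of the paper, and the simulated exchange-rate block is an assumption.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(12)
T, p = 300, 25
fx = rng.normal(size=(T, p))                        # 25 exchange-rate predictors
oil_ret = fx[:, :3].sum(axis=1) * 0.01 + rng.normal(0, 0.02, T)

pls = PLSRegression(n_components=2).fit(fx[:250], oil_ret[:250])
traces = pls.transform(fx)                          # low-dimensional trace-like scores
print("out-of-sample R^2:", round(pls.score(fx[250:], oil_ret[250:]), 3))
```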

19.
For predicting the forward default probabilities of firms, the discrete-time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM that replaces its constant coefficients on firm-specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete-time varying-coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel data sets to illustrate these two models. Using an expanding rolling-window approach, our empirical results confirm that DVFHM has better and more robust out-of-sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
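A discrete-time hazard with constant coefficients reduces to a logit on firm-period panel data whose label marks default a fixed horizon ahead; maximizing the logistic likelihood gives the MLE. A minimal sketch with hypothetical predictors and a four-period horizon:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(13)
n_obs, tau = 5000, 4                       # firm-periods, forward horizon tau
leverage = rng.uniform(0, 1, n_obs)
liquidity = rng.uniform(0, 1, n_obs)
p = 1 / (1 + np.exp(-(-4 + 3 * leverage - 2 * liquidity)))
default_in_tau = (rng.uniform(size=n_obs) < p).astype(int)   # default at t + tau

X = np.column_stack([leverage, liquidity])
hazard = LogisticRegression().fit(X, default_in_tau)         # logistic MLE
print("forward hazard loadings:", hazard.coef_.round(2))
```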

20.
More and more ensemble models are used to forecast business failure. It is generally known that the performance of an ensemble relies heavily on the diversity between its base classifiers. To achieve diversity, this study uses kernel-based fuzzy c-means (KFCM) to organize firm samples and designs a hierarchical selective ensemble model for business failure prediction (BFP). First, three KFCM methods—Gaussian KFCM (GFCM), polynomial KFCM (PFCM), and hyper-tangent KFCM (HFCM)—are employed to partition the financial data set into three data sets. A neural network (NN) is then adopted as the base classifier for BFP, and the three data sets derived from the three KFCM methods are used to build three classifier pools. Next, classifiers are fused by a two-layer hierarchical selective ensemble method. In the first layer, classifiers are ranked by their prediction accuracy, and the stepwise forward selection method is employed to selectively integrate classifiers according to their accuracy. In the second layer, the three selective ensembles from the first layer are integrated again to reach the final verdict. This study employs financial data from Chinese listed companies for the empirical research and makes a comparative analysis with other ensemble models and with all of the component models. We conclude that the two-layer hierarchical selective ensemble is good at forecasting business failure.
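A sketch of the stepwise forward selective-ensemble layer: rank base classifiers by validation accuracy, then add them one at a time, keeping each only if majority-vote accuracy improves. The base learners are small sklearn neural nets on synthetic data, and the KFCM partitioning step is omitted here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(14)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

pool = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=1500,
                      random_state=i).fit(X_tr, y_tr)
        for i, h in enumerate([4, 8, 16, 32, 64])]
pool.sort(key=lambda c: c.score(X_val, y_val), reverse=True)  # rank by accuracy

def vote_acc(members):
    votes = np.mean([c.predict(X_val) for c in members], axis=0) >= 0.5
    return (votes == y_val).mean()

ensemble = [pool[0]]
for clf in pool[1:]:                       # stepwise forward selection
    if vote_acc(ensemble + [clf]) > vote_acc(ensemble):
        ensemble.append(clf)
print("selected:", len(ensemble), "accuracy:", round(vote_acc(ensemble), 3))
```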
