Similar Literature
20 similar documents were retrieved.
1.
In this study, new variants of genetic programming (GP), namely gene expression programming (GEP) and multi‐expression programming (MEP), are utilized to build models for bankruptcy prediction. Generalized relationships are obtained to classify samples of 136 bankrupt and non‐bankrupt Iranian corporations based on their financial ratios. An important contribution of this paper is to identify the effective predictive financial ratios on the basis of an extensive review of the bankruptcy prediction literature and a sequential feature selection analysis. The predictive performance of the GEP and MEP forecasting methods is compared with the performance of traditional statistical methods and a generalized regression neural network. The proposed GEP and MEP models are effectively capable of classifying bankrupt and non‐bankrupt firms and outperform the models developed using other methods. Copyright © 2011 John Wiley & Sons, Ltd.
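The paper's GEP and MEP variants have no widely available reference implementation; as a rough sketch of the same family of techniques, gplearn's tree‐based genetic programming classifier can evolve a symbolic bankruptcy rule from financial ratios. The predictors, data, and parameter settings below are hypothetical placeholders, not the paper's setup.

```python
# A minimal sketch of evolving a symbolic bankruptcy classifier from
# financial ratios with standard tree-based GP (gplearn), standing in
# for the paper's GEP/MEP variants. Synthetic data throughout.
import numpy as np
from gplearn.genetic import SymbolicClassifier

rng = np.random.default_rng(0)
# Hypothetical predictors: e.g. ROA, current ratio, debt ratio, ...
X = rng.normal(size=(136, 5))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=136) < 0).astype(int)

gp = SymbolicClassifier(population_size=500, generations=20,
                        parsimony_coefficient=0.01, random_state=0)
gp.fit(X, y)
print(gp.score(X, y))   # in-sample classification accuracy
```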

2.
Auditors must assess their clients' ability to function as a going concern for at least the year following the financial statement date. The audit profession has been severely criticized for failure to ‘blow the whistle’ in numerous highly visible bankruptcies that occurred shortly after unmodified audit opinions were issued. Financial distress indicators examined in this study are one mechanism for making such assessments. This study measures and compares the predictive accuracy of an easily implemented two‐variable bankruptcy model originally developed using recursive partitioning on an equally proportioned data set of 202 firms. In this study, we test the predictive accuracy of this model, as well as previously developed logit and neural network models, using a realistically proportioned set of 14,212 firms' financial data covering the period 1981–1990. The previously developed recursive partitioning model had an overall accuracy for all firms ranging from 95 to 97%, which outperformed both the logit model at 93 to 94% and the neural network model at 86 to 91%. The recursive partitioning model predicted the bankrupt firms with 33–58% accuracy. A sensitivity analysis of recursive partitioning cutting points indicated that a newly specified model could achieve an all‐firm and a bankrupt‐firm predictive accuracy of approximately 85%. Auditors will be interested in the Type I and Type II error tradeoffs revealed in a detailed sensitivity table for this easily implemented model. Copyright © 2000 John Wiley & Sons, Ltd.
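A minimal sketch of the two ideas in this abstract, under assumed data: a shallow decision tree (recursive partitioning) on two financial ratios, with a sweep over probability cutting points to trace the Type I / Type II trade‐off. Variable meanings and thresholds are hypothetical.

```python
# Fit a depth-2 tree on two ratios, then vary the probability cutoff
# to show how Type I (missed bankrupt) and Type II (false alarm) errors
# trade off against each other. Synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))          # e.g. cash flow/debt, net income/assets
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n) < -2).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
p = tree.predict_proba(X)[:, 1]
for cut in (0.1, 0.3, 0.5):          # candidate cutting points
    pred = (p >= cut).astype(int)
    type1 = np.mean(pred[y == 1] == 0)   # bankrupt firm missed
    type2 = np.mean(pred[y == 0] == 1)   # healthy firm flagged
    print(f"cut={cut:.1f}  Type I={type1:.2f}  Type II={type2:.2f}")
```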

3.
An improved classification device for bankruptcy forecasting is proposed. The proposed approach relies on mainstream classifiers whose inputs are obtained from a so‐called multinorm analysis, instead of traditional indicators such as the ROA ratio and other accounting ratios. A battery of industry norms (computed by using nonparametric quantile regressions) is obtained, and the deviations of each firm from this multinorm system are used as inputs for the classifiers. The approach is applied to predict bankruptcy on a representative sample of Spanish manufacturing firms. Results indicate that our proposal may significantly enhance predictive accuracy, in both linear and nonlinear classifiers. Copyright © 2013 John Wiley & Sons, Ltd.
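A minimal sketch of the multinorm idea: estimate an industry norm for each ratio as a conditional quantile and feed each firm's deviation from the norm into a classifier. A linear quantile regression on firm size stands in for the paper's nonparametric version, and all data are synthetic.

```python
# Industry norms via median (q=0.5) quantile regression of each ratio
# on firm size; deviations from the norm become classifier inputs.
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 500
size = rng.normal(size=n)                    # log firm size (assumed covariate)
ratios = np.column_stack([0.3 * size + rng.normal(size=n) for _ in range(3)])
y = (ratios[:, 0] + rng.normal(size=n) < -1).astype(int)   # 1 = bankrupt

X_norm = sm.add_constant(size)
deviations = np.empty_like(ratios)
for j in range(ratios.shape[1]):
    median_fit = sm.QuantReg(ratios[:, j], X_norm).fit(q=0.5)
    deviations[:, j] = ratios[:, j] - median_fit.predict(X_norm)

clf = LogisticRegression().fit(deviations, y)
print(clf.score(deviations, y))
```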

4.
We extend Ohlson's (1995) model and examine the relationship between returns and residual income, incorporating analysts' earnings forecasts and other non‐earnings information variables from the balance sheet, namely default probability and the agency cost of a debt covenant contract. We further divide the sample based on bankruptcy (agency) costs, earnings components and growth opportunities of a firm to explore how these factors affect the returns–residual income link. We find that the relative predictive ability for contemporaneous stock price of models that consider other earnings and non‐earnings information is better than that of models without non‐earnings information. If the bankruptcy (agency) cost of a firm is higher, its information role in the firm's equity valuation becomes more important and the accuracy of price prediction is therefore higher. As for non‐earnings information, if bankruptcy (agency) cost is lower, the information role becomes more relevant, and the earnings response coefficient is hence higher. Moreover, the decomposition of unexpected residual income into permanent and transitory components conveys more information than the unexpected residual income alone. The permanent component has a larger impact than the transitory component in explaining abnormal returns. The market and industry properties and growth opportunity also have incremental explanatory power in valuation. Copyright © 2008 John Wiley & Sons, Ltd.
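As a minimal worked example of the residual income building block the study extends: residual income is RI_t = NI_t − r·BV_{t−1}, and an Ohlson‐style value is book value plus discounted expected residual income. The horizon is truncated (no terminal value), and the cost of equity, forecasts, and retention rate below are hypothetical.

```python
# Residual income valuation over a short, truncated forecast horizon.
r = 0.10                            # cost of equity (assumed)
bv = 100.0                          # current book value
ni_forecast = [12.0, 13.0, 14.0]    # analyst earnings forecasts (assumed)

price, book = bv, bv
for t, ni in enumerate(ni_forecast, start=1):
    ri = ni - r * book              # residual income for year t
    price += ri / (1 + r) ** t      # discount and add to book value
    book += ni * 0.5                # assume 50% earnings retention
print(round(price, 2))
```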

5.
Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors as the development of a cause–effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy is difficult. Recent research indicates that auditors only signal bankruptcy in about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a new approach for dealing with the problem of apparent indiscernibility between objects in a set that has had a reported bankruptcy prediction accuracy ranging from 76% to 88% in two recent studies. These accuracy levels appear to be superior to auditor signalling rates; however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non‐current data. This study advances research in this area by comparing rough set prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991 to 1997 time period. Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors which had both significant theoretical support and were present in multiple studies. These factors were expressed as variables and data on these 11 variables were then obtained for 146 bankrupt United States public companies during the years 1991–1997. This sample was then matched in terms of size and industry to 145 non‐bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables from the 11 possible predictive variables. The rough sets theory based models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going concern problems via opinion modifications for only 54% of the bankrupt companies. However, the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going concern issues were included. In contrast with prior rough sets theory research which suggested that rough sets theory offered significant bankruptcy predictive improvements for auditors, the rough sets models developed in this research did not provide any significant comparative advantage with regard to prediction accuracy over the actual auditors' methodologies. The current research results should be fairly robust since this rough sets theory based research employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real world bankruptcy/non‐bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
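The abstract does not reproduce the rough sets machinery itself, but its core is easy to sketch: partition firms into indiscernibility classes on discretized attributes, then take the lower approximation of the "bankrupt" concept (classes whose members are all bankrupt yield certain rules). The attribute encoding and toy data below are hypothetical.

```python
# Indiscernibility classes and lower/upper approximations of the
# "bankrupt" concept on discretized attribute profiles. Toy data.
from collections import defaultdict

# (discretized ratio levels ..., bankrupt flag)
firms = [((0, 1), 1), ((0, 1), 1), ((1, 0), 0), ((1, 1), 0), ((1, 1), 1)]

classes = defaultdict(list)
for attrs, bankrupt in firms:
    classes[attrs].append(bankrupt)

lower = [a for a, flags in classes.items() if all(flags)]  # certain rules
upper = [a for a, flags in classes.items() if any(flags)]  # possible rules
print("lower approximation:", lower)   # profiles certainly bankrupt
print("upper approximation:", upper)   # profiles possibly bankrupt
```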

6.
This paper applies the Kalman filtering procedure to estimate persistent and transitory noise components of accounting earnings. Designating the transitory noise component separately (under a label such as extraordinary items) in financial reports should help users predict future earnings. If a firm's management has no foreknowledge of future earnings, it can still apply a filter to the firm's accounting earnings more efficiently than an interested user. If management has foreknowledge of earnings, application of a filtering algorithm can result in smoothed variables that convey information otherwise not available to users. Application of a filtering algorithm to a sample of firms revealed that a substantial number of firms exhibited a significant transitory noise component of earnings. Also, for those firms whose earnings exhibited a significant departure from the random walk process, the paper shows that filtering can be fruitfully applied to improve predictive ability.
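A minimal sketch of the filtering idea: model earnings as a persistent level (random walk) plus transitory white noise, i.e. a local‐level model, and separate the two components with a scalar Kalman recursion. The shock variances are assumed known here, whereas the paper would estimate them; the data are synthetic.

```python
# Local-level Kalman filter: earnings = persistent level + transitory noise.
import numpy as np

rng = np.random.default_rng(3)
T, q, h = 40, 0.5, 2.0                  # q: level-shock var, h: noise var (assumed)
level = np.cumsum(rng.normal(scale=np.sqrt(q), size=T)) + 10
earnings = level + rng.normal(scale=np.sqrt(h), size=T)

mu, P = earnings[0], h                  # initial state and variance
persistent = np.empty(T)
for t in range(T):
    P_pred = P + q                      # predict (random-walk level)
    K = P_pred / (P_pred + h)           # Kalman gain
    mu = mu + K * (earnings[t] - mu)    # update with observed earnings
    P = (1 - K) * P_pred
    persistent[t] = mu

transitory = earnings - persistent      # estimated transitory noise component
print(np.round(transitory[:5], 2))
```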

7.
Building on recent and growing evidence that geographic location influences information diffusion, this paper examines the relation between a firm's location and the predictability of stock returns. We hypothesize that returns on a portfolio composed of firms located in central areas are more likely to follow a random walk than returns on a portfolio composed of firms located in remote areas. Using a battery of variance ratio tests, we find strong and robust support for our prediction. In particular, we show that the returns on a portfolio composed of the 500 largest urban firms follow a random walk; however, all variance ratio tests reject the random walk hypothesis for a portfolio that includes the 500 largest rural firms. Our results are robust to alternative definitions of a firm's location and portfolio formation.
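A minimal sketch of one test from such a battery, the Lo–MacKinlay variance ratio test: under a random walk, the variance of q‐period returns is q times the one‐period variance. This is the simplified homoskedastic statistic without small‐sample corrections, applied to synthetic returns.

```python
# Variance ratio VR(q) and its homoskedastic z-statistic.
import numpy as np

def variance_ratio(returns, q):
    T = len(returns)
    mu = returns.mean()
    var1 = np.sum((returns - mu) ** 2) / T
    rq = np.convolve(returns, np.ones(q), mode="valid")  # overlapping q-period returns
    varq = np.sum((rq - q * mu) ** 2) / (T * q)
    vr = varq / var1
    z = np.sqrt(T) * (vr - 1) / np.sqrt(2 * (2 * q - 1) * (q - 1) / (3 * q))
    return vr, z

rng = np.random.default_rng(4)
rw_returns = rng.normal(size=1000)      # i.i.d. returns -> random walk prices
print(variance_ratio(rw_returns, q=4))  # VR near 1, |z| small
```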

8.
We use an investment strategy based on firm‐level capital structures. Investing in low‐leverage firms yields abnormal returns of 4.43% per annum. If an investor holds a portfolio of low‐leverage and low‐market‐to‐book‐ratio firms, abnormal returns increase to 16.18% per annum. A portfolio of low leverage and low market risk yields abnormal returns of 6.67% and a portfolio of small firms with low leverage earns 5.37% per annum. We use the Fama‐MacBeth (1973) methodology with modifications. We confirm that portfolios based on low leverage earn higher returns over longer investment horizons. Our results are robust to other risk factors and the risk class of the firm. Copyright © 2011 John Wiley & Sons, Ltd.
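A minimal sketch of the Fama‐MacBeth (1973) two‐pass idea: run a cross‐sectional regression of returns on characteristics (here, leverage) each month, then average the monthly slopes and compute t‐statistics from their time series. All data are synthetic placeholders, not the paper's sample.

```python
# Fama-MacBeth: monthly cross-sectional slopes, then time-series inference.
import numpy as np

rng = np.random.default_rng(5)
months, firms = 120, 200
leverage = rng.uniform(0, 1, size=(months, firms))
returns = 0.01 - 0.02 * leverage + rng.normal(scale=0.05, size=(months, firms))

slopes = []
for t in range(months):
    X = np.column_stack([np.ones(firms), leverage[t]])
    beta, *_ = np.linalg.lstsq(X, returns[t], rcond=None)
    slopes.append(beta[1])              # monthly leverage premium
slopes = np.array(slopes)
t_stat = slopes.mean() / (slopes.std(ddof=1) / np.sqrt(months))
print(f"avg slope={slopes.mean():.4f}  t={t_stat:.2f}")  # negative: low leverage earns more
```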

9.
In the era of Basel II a powerful tool for bankruptcy prognosis is vital for banks. The tool must be precise but also easily adaptable to the bank's objectives regarding the trade‐off between false acceptances (Type I errors) and false rejections (Type II errors). We explore the suitability of smooth support vector machines (SSVM), and investigate how important factors such as the selection of appropriate accounting ratios (predictors), length of training period and structure of the training sample influence the precision of prediction. Moreover, we show that oversampling can be employed to control the trade‐off between error types, and we compare SSVM with both logistic and discriminant analysis. Finally, we illustrate graphically how different models can be used jointly to support the decision‐making process of loan officers. Copyright © 2008 John Wiley & Sons, Ltd.
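A minimal sketch of the oversampling mechanism, with scikit-learn's standard SVC standing in for the paper's smooth SVM variant: duplicating defaulters in the training set pushes the boundary toward fewer false acceptances at the cost of more false rejections. Data and factors are synthetic.

```python
# Oversample the default class at different factors and watch the
# Type I / Type II error trade-off move. Synthetic data; SVC stands
# in for SSVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 600
X = rng.normal(size=(n, 4))                          # accounting ratios
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) < -1.5).astype(int)  # 1 = default

for factor in (1, 3, 6):                             # oversampling factor
    idx = np.where(y == 1)[0]
    X_tr = np.vstack([X, np.repeat(X[idx], factor - 1, axis=0)])
    y_tr = np.concatenate([y, np.ones((factor - 1) * len(idx), dtype=int)])
    pred = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr).predict(X)
    type1 = np.mean(pred[y == 1] == 0)               # false acceptances
    type2 = np.mean(pred[y == 0] == 1)               # false rejections
    print(f"factor={factor}  Type I={type1:.2f}  Type II={type2:.2f}")
```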

10.
We extract information on relative shopping interest from Google search volume and provide a genuine and economically meaningful approach to directly incorporate these data into a portfolio optimization technique. By generating a firm ranking based on a Google search volume metric, we can predict future sales and thus generate excess returns in a portfolio exercise. The higher the (shopping) search volume for a firm, the higher we rank the company in the optimization process. For a sample of firms in the fashion industry, our results demonstrate that shopping interest exhibits predictive content that can be exploited in a real‐time portfolio strategy yielding robust alphas around 5.5%.
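A minimal sketch of the ranking step only: order firms by a search volume metric and tilt weights toward the top of the ranking. The paper embeds the ranking in a full optimization; a simple rank‐proportional weighting stands in here, and the firm names and volumes are hypothetical.

```python
# Rank firms by search volume; weight proportional to rank score.
import numpy as np

firms = ["A", "B", "C", "D", "E"]
search_volume = np.array([120.0, 45.0, 310.0, 80.0, 150.0])  # SVI proxy (assumed)

order = np.argsort(-search_volume)            # highest interest first
ranks = np.empty(len(firms), dtype=float)
ranks[order] = np.arange(len(firms), 0, -1)   # best firm gets largest score
weights = ranks / ranks.sum()
for f, w in sorted(zip(firms, weights), key=lambda p: -p[1]):
    print(f"{f}: {w:.2f}")
```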

11.
The implication of corporate bankruptcy prediction is important to financial institutions when making lending decisions. In related studies, many bankruptcy prediction models have been developed based on machine‐learning techniques. This paper presents a meta‐learning framework, which is composed of two‐level classifiers for bankruptcy prediction. The first‐level multiple classifiers perform the data reduction task by filtering out unrepresentative training data. Then, the outputs of the first‐level classifiers are utilized to create the second‐level single (meta) classifier. The experiments are based on five related datasets and the results show that the proposed meta‐learning framework provides higher prediction accuracy rates and lower Type I/II errors when compared with the stacked generalization classifier and three other widely used baselines: neural networks, decision trees, and logistic regression. Copyright © 2011 John Wiley & Sons, Ltd.
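A minimal sketch of the two‐level idea: first‐level classifiers flag training cases they misclassify out‐of‐fold (treated as unrepresentative), and the second‐level classifier is trained on the filtered set. The base learners, filtering rule, and meta learner below are reasonable stand‐ins, not the paper's exact configuration.

```python
# Level 1 filters training data; level 2 trains the meta classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Level 1: keep a case if at least one base learner classifies it
# correctly out-of-fold (assumed filtering rule).
votes = np.zeros(len(y))
for base in (DecisionTreeClassifier(random_state=0), GaussianNB()):
    votes += (cross_val_predict(base, X, y, cv=5) == y)
keep = votes >= 1

# Level 2: train the meta classifier on the reduced training set.
meta = LogisticRegression(max_iter=1000).fit(X[keep], y[keep])
print(f"kept {keep.sum()} of {len(y)} cases; accuracy={meta.score(X, y):.3f}")
```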

12.
Prediction of demand is a key component within supply chain management. Improved accuracy in forecasts directly affects all levels of the supply chain, reducing stock costs and increasing customer satisfaction. In many application areas, demand prediction relies on statistical software which provides an initial forecast subsequently modified by the expert's judgment. This paper outlines a new methodology based on state‐dependent parameter (SDP) estimation techniques to identify the nonlinear behaviour of such managerial adjustments. This non‐parametric SDP estimate is used as a guideline to propose a nonlinear model that corrects the bias introduced by the managerial adjustments. One‐step‐ahead forecasts of stock‐keeping unit sales sampled monthly from a manufacturing company are utilized to test the proposed methodology. The results indicate that adjustments introduce a nonlinear pattern, undermining accuracy. This understanding can be used to enhance the design of the forecasting support system in order to help forecasters towards more efficient judgmental adjustments. Copyright © 2010 John Wiley & Sons, Ltd.
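A minimal sketch of the diagnostic step: estimate, nonparametrically, how the experts' adjustment relates to the system forecast. A LOWESS smoother stands in for the paper's SDP estimation, and the data are synthetic, built so larger forecasts attract nonlinear over‐adjustment.

```python
# Nonparametric estimate of the adjustment-vs-forecast relationship.
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(7)
system_forecast = rng.uniform(10, 100, size=300)
adjustment = (0.002 * system_forecast ** 2 - 0.1 * system_forecast
              + rng.normal(scale=2, size=300))   # nonlinear judgmental bias

smooth = lowess(adjustment, system_forecast, frac=0.3)
print(smooth[:3])    # (forecast, fitted adjustment) pairs along the curve
```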

13.
This study examines whether simple measures of Canadian equity and housing price misalignments contain leading information about output growth and inflation. Previous authors have generally found that the information content of asset prices in general, and equity and housing prices in particular, is unreliable, in that these prices do not systematically predict future economic activity or inflation. However, earlier studies relied on simple linear relationships that would fail to pick up the potential nonlinear effects of asset price misalignments. Our results suggest that housing prices are useful for predicting GDP growth, even within a linear context. Meanwhile, both stock and housing prices can improve inflation forecasts, especially when using a threshold specification. These improvements in forecast performance are relative to the information contained in Phillips‐curve type indicators for inflation and IS‐curve type indicators for GDP growth. Copyright © 2008 John Wiley & Sons, Ltd.
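A minimal sketch of a threshold specification of the kind described: the misalignment term enters the inflation equation only beyond a threshold, and the threshold is chosen by grid search over the sum of squared residuals. The functional form and all series are hypothetical.

```python
# Grid-search a threshold above which the price gap affects inflation.
import numpy as np

rng = np.random.default_rng(8)
T = 300
misalign = rng.normal(size=T)                  # price gap from trend
inflation = 2.0 + 0.8 * np.maximum(misalign - 0.5, 0) + rng.normal(scale=0.3, size=T)

best = (np.inf, None)
for tau in np.linspace(-1, 1, 41):             # candidate thresholds
    x = np.maximum(misalign - tau, 0)
    X = np.column_stack([np.ones(T), x])
    beta, *_ = np.linalg.lstsq(X, inflation, rcond=None)
    ssr = np.sum((inflation - X @ beta) ** 2)
    if ssr < best[0]:
        best = (ssr, tau)
print(f"estimated threshold: {best[1]:.2f}")   # near the true 0.5
```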

14.
The unique institutions in Taiwan may add to our understanding of the effect of initial public offering (IPO) firm disclosures. Consistent with the notion of market mispricing, most of Taiwan's IPOs exhibited consecutive up‐limit hits followed by substantial price reversals. In this study, we decompose IPO underpricing into two components, pure underpricing and subsequent reversal, and explore the impact on these two anomaly measures of the 1991 mandate requiring IPO firms to include management forecasts in their prospectuses. Our results support the notion that disclosure regulations ameliorate investors' mispricing of the stocks. First, pure underpricing and reversal are significantly less (more) pronounced for post‐mandate (pre‐mandate) IPO stocks. In contrast, consistent with the cheap talk hypothesis, the pre‐mandate voluntary forecasters (non‐forecasters) appear to be more (less) underpriced. Second, the duration of underpricing for the post‐mandate (pre‐mandate) IPOs appears to be shorter (longer). Nevertheless, underpricing lasted relatively longer (shorter) for the pre‐mandate IPOs with (with no) voluntary disclosures. Copyright © 2009 John Wiley & Sons, Ltd.

15.
The use of linear error correction models based on stationarity and cointegration analysis, typically estimated with least squares regression, is a common technique for financial time series prediction. In this paper, the same formulation is extended to a nonlinear error correction model using the idea of a kernel‐based implicit nonlinear mapping to a high‐dimensional feature space in which linear model formulations are specified. Practical expressions for the nonlinear regression are obtained in terms of the positive definite kernel function by solving a linear system. The nonlinear least squares support vector machine model is designed within the Bayesian evidence framework that allows us to find appropriate trade‐offs between model complexity and in‐sample model accuracy. From straightforward primal–dual reasoning, the Bayesian framework allows us to derive error bars on the prediction in a similar way as for linear models and to perform hyperparameter and input selection. Starting from the results of the linear modelling analysis, the Bayesian kernel‐based prediction is successfully applied to out‐of‐sample prediction of an aggregated equity price index for the European chemical sector. Copyright © 2006 John Wiley & Sons, Ltd.
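A minimal sketch of the kernel trick at the heart of this model: the nonlinear regression is obtained by solving a linear system in the kernel matrix. Plain kernel ridge regression stands in here; the LS‐SVM formulation is a close cousin that adds a bias term and, in the paper, Bayesian tuning of the hyperparameters. Data are synthetic.

```python
# Kernel ridge regression: solve (K + lambda*I) alpha = y, predict with
# k(x_new, X) @ alpha.
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(9)
X = rng.uniform(-2, 2, size=(100, 1))            # e.g. error-correction term
y = np.sin(2 * X[:, 0]) + rng.normal(scale=0.1, size=100)

lam = 0.1                                        # regularization (assumed)
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # the linear system

X_new = np.array([[0.5]])
y_hat = rbf_kernel(X_new, X) @ alpha             # nonlinear prediction
print(y_hat[0], np.sin(1.0))                     # close to the true value
```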

16.
This study examines whether the evaluation of a bankruptcy prediction model should take into account the total cost of misclassification. For this purpose, we introduce and apply a validity measure in credit scoring that is based on the total cost of misclassification. Specifically, we use comprehensive data from the annual financial statements of a sample of German companies and analyze the total cost of misclassification by comparing a generalized linear model and a generalized additive model with regard to their ability to predict a company's probability of default. On the basis of these data, the validity measure we introduce shows that, compared to generalized linear models, generalized additive models can substantially reduce the extent of misclassification and the total cost it entails. The validity measure we introduce is informative and justifies the argument that generalized additive models should be preferred, although such models are more complex than generalized linear models. We conclude that to balance a model's validity and complexity, it is necessary to take into account the total cost of misclassification.
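A minimal sketch of a total‐cost comparison: score a plain logistic regression (GLM) against a spline‐augmented logistic regression standing in for the paper's GAM, with asymmetric costs for missed defaults versus false alarms. The data, cost ratio, and spline setup are hypothetical.

```python
# Total cost of misclassification = C_FN * misses + C_FP * false alarms.
import numpy as np
from sklearn.preprocessing import SplineTransformer
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(10)
n = 2000
x = rng.normal(size=(n, 1))
y = (np.abs(x[:, 0]) + rng.normal(scale=0.5, size=n) > 1.5).astype(int)  # nonlinear link

C_FN, C_FP = 10.0, 1.0        # a missed default costs 10x a false alarm (assumed)

def total_cost(model):
    pred = model.fit(x, y).predict(x)
    return (C_FN * np.sum((pred == 0) & (y == 1))
            + C_FP * np.sum((pred == 1) & (y == 0)))

glm = LogisticRegression()
gam_like = make_pipeline(SplineTransformer(n_knots=8), LogisticRegression())
print("GLM cost:", total_cost(glm), " GAM-like cost:", total_cost(gam_like))
```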

17.
This paper presents short‐ and long‐term composite leading indicators (CLIs) of underlying inflation for seven EU countries, namely Belgium, Germany, France, Italy, the Netherlands, Sweden and the UK. CLI and CPI reference series are calculated in terms of both growth rates and deviations from trend. The composite leading indicators are based on leading basic series, such as sources of inflation, series containing information on inflation expectations, and prices of intermediate goods and services. Neftci's decision rule approach has been applied to translate movements in the CLIs into a measure of the probability of a cyclical turning point, which enables the screening out of false turning point predictions. Finally, CLIs have been used to analyse the international coherence of price cycles. The past forecast performance of the CLIs of inflation raises hope that this forecast instrument can be useful in predicting future price movements. Copyright © 1999 John Wiley & Sons, Ltd.

18.
Manpower forecasting has made significant contributions to human resource management. Owing to the difficulty of collecting the data required for appropriate analysis, most studies in the literature concentrate on forecasts for individual firms. This paper presents a regression model which utilizes the data of large firms to draw inferences about the demand of other firms. More specifically, a regression model showing the negative relationship between the rank of a firm and its associated demand is fitted to the data of a number of large manufacturing firms. The area under the regression line delineated by the y-axis is then an estimate of the total demand of the whole industry. Confidence intervals for the estimate can also be constructed. As an illustration, the demand for industrial management manpower in Taiwan is forecast by applying the proposed model.
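A minimal sketch of the idea: fit a declining demand‐versus‐rank curve to the large firms that report data, then take the area under the fitted curve as the industry‐total estimate. An exponential decay form is assumed here so the integral is closed‐form; the paper's exact functional form may differ, and the data are synthetic.

```python
# Fit log(demand) = a + b*rank on the largest firms, then integrate
# exp(a + b*r) over r in [0, inf) as the industry-total estimate.
import numpy as np

rng = np.random.default_rng(11)
rank = np.arange(1, 21)                    # 20 largest firms
demand = 50 * np.exp(-0.15 * rank) * np.exp(rng.normal(scale=0.1, size=20))

b, a = np.polyfit(rank, np.log(demand), 1) # log-linear fit (slope b < 0)
total = np.exp(a) / (-b)                   # area under the fitted curve
print(f"fitted decay={b:.3f}, estimated industry total demand={total:.0f}")
```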

19.
For predicting forward default probabilities of firms, the discrete‐time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm‐specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete‐time varying‐coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out‐of‐sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
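A minimal sketch of a discrete‐time hazard model of the DFHM type: stack firm‐period observations, regress the default indicator on firm‐specific predictors with a logit link, and read off forward default probabilities. The varying‐coefficient extension (DVFHM) would let the slopes depend on macro variables. The panel below is synthetic.

```python
# Pooled logit on firm-period observations as a discrete-time hazard.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 5000                                    # firm-period observations
X = rng.normal(size=(n, 2))                 # e.g. leverage, profitability
p_true = 1 / (1 + np.exp(-(-4 + 1.2 * X[:, 0] - 0.8 * X[:, 1])))
default = rng.binomial(1, p_true)

res = sm.Logit(default, sm.add_constant(X)).fit(disp=0)
print(res.params)                           # recovers the hazard coefficients
new_firm = sm.add_constant(np.array([[1.0, -0.5]]), has_constant="add")
print(res.predict(new_firm))                # forward default probability
```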

20.
In this paper the forecast performance of nonlinear models relative to linear models is assessed by the conditional probability that the absolute forecast error of the nonlinear forecast is smaller than that of the linear forecast. The comparison probability is explicitly expressed and is shown to be an increasing function of the distance between the nonlinear and linear forecasts under certain conditions. This expression of the comparison probability may be useful not only in choosing between a more accurate predictor and a simpler one but also in explaining an odd phenomenon discussed by Pemberton. The forecast performance of a nonlinear model relative to a linear model is demonstrated to be sensitive to the forecast origin. A new forecast is thus proposed to improve the relative forecast performance of nonlinear models based on forecast origins. © 1997 John Wiley & Sons, Ltd.
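A minimal Monte Carlo sketch of the comparison probability studied here, P(|nonlinear error| < |linear error|), estimated at a given forecast origin by simulating the next observation. The data‐generating process and the two predictors below are hypothetical.

```python
# Estimate the comparison probability at one forecast origin by simulation.
import numpy as np

rng = np.random.default_rng(13)
x_origin = 1.5                                    # forecast origin
sims = 100_000
x_next = 0.9 * x_origin - 0.3 * x_origin ** 2 + rng.normal(size=sims)  # true DGP

f_nonlinear = 0.9 * x_origin - 0.3 * x_origin ** 2  # matches the DGP mean
f_linear = 0.6 * x_origin                           # misspecified linear predictor

p = np.mean(np.abs(x_next - f_nonlinear) < np.abs(x_next - f_linear))
print(f"P(nonlinear beats linear at origin {x_origin}): {p:.3f}")
```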
