Similar Articles
A total of 10 similar articles were found (search time: 15 ms).
1.
This paper constructs a financial distress prediction model that includes not only traditional financial variables but also several important corporate governance variables. Using data from Taiwan, the empirical results show that the best in-sample and out-of-sample prediction models combine the financial variables with the corporate governance variables. Moreover, prediction accuracy is higher for models using dynamic distress threshold values than for those using traditional threshold values. Most financial ratios, except for the debt ratio, are higher in financially sound companies than in financially distressed ones. With regard to the corporate governance variables, we find that CEO/chairman duality may not lead to the outbreak of financial distress, but managers' equity pledge ratios (shareholding ratios of board members and insiders) correlate positively (negatively) with financial distress.
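To make the setup concrete, here is a minimal Python sketch of a combined model of this kind: a logistic regression fitted to simulated firm data containing both financial ratios and governance variables. The column names, simulated coefficients, and the data itself are hypothetical stand-ins, and the paper's dynamic-threshold procedure is not reproduced.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical firm-level data: financial ratios plus governance variables.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "debt_ratio": rng.uniform(0.1, 0.9, n),
    "roa": rng.normal(0.05, 0.08, n),
    "current_ratio": rng.uniform(0.5, 3.0, n),
    "ceo_duality": rng.integers(0, 2, n),        # CEO also chairs the board
    "pledge_ratio": rng.uniform(0.0, 0.8, n),    # managers' equity pledge ratio
    "insider_holding": rng.uniform(0.0, 0.5, n), # board/insider shareholding
})
# Toy label roughly matching the signs reported: pledges raise distress risk,
# insider holdings lower it. Coefficients are invented for illustration.
z = -2 + 3 * df["debt_ratio"] + 2 * df["pledge_ratio"] - 2 * df["insider_holding"]
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-z))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(df, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("out-of-sample accuracy:", model.score(X_te, y_te))
```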

2.
Financial distress prediction (FDP) has been widely considered as a promising approach to reducing financial losses. While financial information comprises the traditional factors involved in FDP, nonfinancial factors have also been examined in recent studies. In light of this, the purpose of this study is to explore the integrated factors and multiple models that can improve the predictive performance of FDP models. This study proposes an FDP framework to reveal the financial distress features of listed Chinese companies, incorporating financial, management, and textual factors, and evaluating the prediction performance of multiple models in different time spans. To develop this framework, this study employs the wrapper-based feature selection method to extract valuable features, and then constructs multiple single classifiers, ensemble classifiers, and deep learning models in order to predict financial distress. The experiment results indicate that management and textual factors can supplement traditional financial factors in FDP, especially textual ones. This study also discovers that integrated factors collected 4 years prior to the predicted benchmark year enable a more accurate prediction, and the ensemble classifiers and deep learning models developed can achieve satisfactory FDP performance. This study makes a novel contribution as it expands the predictive factors of financial distress and provides new findings that can have important implications for providing early warning signals of financial risk.
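A minimal sketch of the wrapper-plus-ensemble pipeline described above, using scikit-learn. The synthetic features stand in for the integrated financial, management, and textual factors; the base classifier, the number of selected features, and the ensemble choice are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for integrated financial / management / textual features.
X, y = make_classification(n_samples=600, n_features=30, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Wrapper-based selection: greedily keep features that help a base classifier.
selector = SequentialFeatureSelector(LogisticRegression(max_iter=1000),
                                     n_features_to_select=10, cv=5)
selector.fit(X_tr, y_tr)

# Ensemble classifier trained on the selected feature subset.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
print("out-of-sample accuracy:", clf.score(selector.transform(X_te), y_te))
```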

3.
Empirical research finds that the traditional autoregressive integrated moving average (ARIMA) model deviates substantially when forecasting high-frequency financial time series. Taking advantage of improvements in the storage capacity and computing power available for high-frequency financial data, this paper combines the traditional ARIMA model with a deep learning model to forecast high-frequency financial time series. The hybrid preserves the theoretical basis of the traditional model, which characterizes the linear relationship, while the deep learning component characterizes the nonlinear relationship in the error term. Empirical studies on Monte Carlo numerical simulations and on China's CSI 300 index show that, compared with the ARIMA, support vector machine (SVM), long short-term memory (LSTM), and ARIMA-SVM models, the improved ARIMA model based on LSTM not only improves the forecasting accuracy of the single ARIMA model in both fitting and forecasting, but also reduces the computational complexity relative to using a single deep learning model alone. The improved ARIMA model based on deep learning thus enriches the set of models available for time series forecasting and provides an effective tool for high-frequency strategy design aimed at reducing the investment risks of stock index trading.
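The hybrid idea can be sketched as follows: fit an ARIMA model for the linear part, train an LSTM on the ARIMA residuals to capture the nonlinear part, and add the two one-step forecasts. This is a minimal illustration in Python (statsmodels plus PyTorch) on a toy series, not the paper's exact architecture or tuning.

```python
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

# Toy series standing in for high-frequency index data.
rng = np.random.default_rng(0)
n = 400
y = np.cumsum(rng.normal(0, 1, n)) + 0.5 * np.sin(np.arange(n) / 5)

# Step 1: ARIMA captures the linear structure.
arima = ARIMA(y, order=(1, 1, 1)).fit()
resid = arima.resid  # the nonlinear leftovers go to the LSTM

# Step 2: an LSTM learns to predict the next residual from a short window.
def windows(series, w=10):
    X = np.stack([series[i:i + w] for i in range(len(series) - w)])
    t = series[w:]
    return (torch.tensor(X, dtype=torch.float32).unsqueeze(-1),
            torch.tensor(t, dtype=torch.float32))

X, t = windows(resid)
lstm = nn.LSTM(input_size=1, hidden_size=16, batch_first=True)
head = nn.Linear(16, 1)
opt = torch.optim.Adam(list(lstm.parameters()) + list(head.parameters()), lr=0.01)
for _ in range(100):
    out, _ = lstm(X)
    pred = head(out[:, -1, :]).squeeze(-1)
    loss = nn.functional.mse_loss(pred, t)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Combined one-step forecast = ARIMA forecast + LSTM residual correction.
arima_fc = arima.forecast(1)[0]
with torch.no_grad():
    resid_fc = head(lstm(X[-1:])[0][:, -1, :]).item()
print("hybrid forecast:", arima_fc + resid_fc)
```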

4.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data but sceptical practitioners and statisticians are still raising the question of whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess the model's significance and the possibility that any short‐term successes that are reported might be due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
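A minimal sketch of the 'parametric sampling' idea: draw parameter vectors from the estimated asymptotic distribution of a fitted model and evaluate an arbitrary statistic on each draw. For brevity the fitted model here is an ordinary least squares regression, whose asymptotic covariance is available in closed form; the paper applies the same scheme to neural model parameters.

```python
import numpy as np

# Toy regression standing in for a fitted neural model.
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
beta_true = np.array([0.5, 2.0, 0.0, -1.0])
y = X @ beta_true + rng.normal(0, 1, n)

# Fit and estimate the asymptotic covariance of the parameters.
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)

# Parametric sampling: draw parameter vectors from their asymptotic
# distribution and evaluate an arbitrary statistic for each draw.
draws = rng.multivariate_normal(beta_hat, cov, size=2000)
relevance = np.abs(draws[:, 1])  # e.g., a "relevance" measure for variable 1
lo, hi = np.percentile(relevance, [2.5, 97.5])
print(f"95% interval for |beta_1|: [{lo:.3f}, {hi:.3f}]")
```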

5.
This study examines whether the evaluation of a bankruptcy prediction model should take into account the total cost of misclassification. For this purpose, we introduce and apply a validity measure in credit scoring that is based on the total cost of misclassification. Specifically, we use comprehensive data from the annual financial statements of a sample of German companies and analyze the total cost of misclassification by comparing a generalized linear model and a generalized additive model with regard to their ability to predict a company's probability of default. On the basis of these data, the validity measure we introduce shows that, compared to generalized linear models, generalized additive models can reduce substantially the extent of misclassification and the total cost that this entails. The validity measure we introduce is informative and justifies the argument that generalized additive models should be preferred, although such models are more complex than generalized linear models. We conclude that to balance a model's validity and complexity, it is necessary to take into account the total cost of misclassification.
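The validity measure itself is straightforward to write down: a cost-weighted count of missed defaults and false alarms. The sketch below defines it and applies it to two classifiers. Note that logistic regression stands in for the generalized linear model, gradient boosting is only a stand-in for the generalized additive model (a package such as pyGAM would be closer to the paper), and the 10:1 cost ratio is an illustrative assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def total_misclassification_cost(y_true, y_pred, c_miss=10.0, c_false=1.0):
    """Validity measure: cost-weighted sum of missed defaults and false alarms.
    The 10:1 cost ratio is an illustrative assumption."""
    misses = np.sum((y_true == 1) & (y_pred == 0))
    false_alarms = np.sum((y_true == 0) & (y_pred == 1))
    return c_miss * misses + c_false * false_alarms

# Imbalanced synthetic data standing in for default/non-default firms.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.9],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Logistic regression plays the GLM; gradient boosting is only a stand-in
# for the more flexible GAM.
for name, m in [("GLM-like", LogisticRegression(max_iter=1000)),
                ("flexible", GradientBoostingClassifier(random_state=0))]:
    m.fit(X_tr, y_tr)
    cost = total_misclassification_cost(y_te, m.predict(X_te))
    print(name, "total cost:", cost)
```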

6.
The purpose of this paper is to build an alternative method of bankruptcy prediction that accounts for some deficiencies in previous approaches that resulted in poor out‐of‐sample performances. Most of the traditional approaches suffer from restrictive presumptions and structural limitations and fail to reflect the panel properties of financial statements and/or the common macroeconomic influence. Extending the work of Shumway (2001), we present a duration model with time‐varying covariates and a baseline hazard function incorporating macroeconomic dependencies. Using the proposed model, we investigate how the hazard rates of listed companies in the Korea Stock Exchange (KSE) are affected by changes in the macroeconomic environment and by time‐varying covariate vectors that show unique financial characteristics of each company. We also investigate out‐of‐sample forecasting performances of the suggested model and demonstrate improvements produced by allowing temporal and macroeconomic dependencies. Copyright © 2008 John Wiley & Sons, Ltd.
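Following Shumway (2001), a duration model with time-varying covariates can be estimated as a logit over firm-year observations, with macroeconomic variables entering as covariates shared across firms in each year. The sketch below illustrates that discrete-time approximation on a simulated panel; all variables and coefficients are hypothetical, and the paper's baseline hazard specification is richer than this.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical firm-year panel: each surviving firm contributes one row per
# year until it fails; macro conditions are shared across firms in a year.
rng = np.random.default_rng(0)
macro = {y: rng.normal(3.0, 2.0) for y in range(2000, 2008)}  # e.g., GDP growth
rows = []
for firm in range(300):
    alive = True
    for year in range(2000, 2008):
        if not alive:
            break
        leverage = rng.uniform(0.1, 0.9)               # time-varying covariate
        z = -5 + 4 * leverage - 0.3 * macro[year]      # invented hazard index
        failed = rng.uniform() < 1 / (1 + np.exp(-z))  # failure this year?
        rows.append((firm, year, leverage, macro[year], int(failed)))
        alive = not failed

panel = pd.DataFrame(rows, columns=["firm", "year", "leverage",
                                    "gdp_growth", "failed"])
# Discrete-time hazard estimated as a logit on firm-years.
hazard = LogisticRegression(max_iter=1000)
hazard.fit(panel[["leverage", "gdp_growth"]], panel["failed"])
print("coefficients (leverage, gdp_growth):", hazard.coef_[0])
```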

7.
Predicting bank failures is important as it enables bank regulators to take timely actions to prevent bank failures or reduce the cost of rescuing banks. This paper compares the logit model and data mining models in the prediction of bank failures in the USA between 2002 and 2010 using levels and rates of change of 16 financial ratios based on a cross‐section sample. The models are estimated for the in‐sample period 2002–2009, while data for the year 2010 are used for out‐of‐sample tests. The results suggest that the logit model predicts bank failures in‐sample less precisely than data mining models, but produces fewer missed failures and false alarms out‐of‐sample.
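A sketch of the comparison design: fit both model types on the earlier years and count missed failures and false alarms on the held-out year. The data below are simulated placeholders for the 16 financial ratios, and the random forest is merely one representative data mining model.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Hypothetical bank-year data with 16 ratios; split by year as in the paper:
# fit on 2002-2009, test on 2010.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame(rng.normal(size=(n, 16)),
                  columns=[f"ratio_{i}" for i in range(16)])
df["year"] = rng.integers(2002, 2011, n)
df["failed"] = (df["ratio_0"] + rng.normal(0, 1, n) > 1.5).astype(int)

train, test = df[df["year"] <= 2009], df[df["year"] == 2010]
features = [f"ratio_{i}" for i in range(16)]

for name, m in [("logit", LogisticRegression(max_iter=1000)),
                ("random forest", RandomForestClassifier(random_state=0))]:
    m.fit(train[features], train["failed"])
    tn, fp, fn, tp = confusion_matrix(test["failed"],
                                      m.predict(test[features])).ravel()
    print(f"{name}: missed failures={fn}, false alarms={fp}")
```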

8.
We compare the predictive ability of Bayesian methods which deal simultaneously with model uncertainty and correlated regressors in the framework of cross‐country growth regressions. In particular, we assess methods with spike and slab priors combined with different prior specifications for the slope parameters in the slab. Our results indicate that moving away from Gaussian g‐priors towards Bayesian ridge, LASSO or elastic net specifications has clear advantages for prediction when dealing with datasets of (potentially highly) correlated regressors, a pervasive characteristic of the data used hitherto in the econometric literature. Copyright © 2015 John Wiley & Sons, Ltd.
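As a rough frequentist analogue of the priors being compared, the sketch below evaluates ridge, LASSO, and elastic net regressions on simulated, highly correlated regressors; the Bayesian spike-and-slab machinery itself is not reproduced, and the penalty strengths are arbitrary.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.model_selection import cross_val_score

# Correlated regressors, as in cross-country growth datasets: thirty
# observed variables driven by five latent factors plus noise.
rng = np.random.default_rng(0)
n, p = 100, 30
base = rng.normal(size=(n, 5))
X = base @ rng.normal(size=(5, p)) + 0.3 * rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:4] = [1.0, -0.5, 0.8, 0.3]  # only a few regressors truly matter
y = X @ beta + rng.normal(0, 1, n)

# Penalized regressions are the frequentist counterparts of the Bayesian
# ridge / LASSO / elastic-net slab priors compared in the paper.
for name, m in [("ridge", Ridge(alpha=1.0)),
                ("lasso", Lasso(alpha=0.1)),
                ("elastic net", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
    score = cross_val_score(m, X, y, cv=5, scoring="neg_mean_squared_error")
    print(f"{name}: CV MSE = {-score.mean():.3f}")
```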

9.
Scientific heritage can be found in every teaching and research institution, large or small, from universities to museums, from hospitals to secondary schools, from scientific societies to research laboratories. It is generally dispersed and vulnerable. Typically, these institutions lack the awareness, internal procedures, policies, or qualified staff to provide for its selection, preservation, and accessibility. Moreover, legislation that protects cultural heritage does not generally apply to the heritage of science. In this paper we analyse the main problems that make scientific heritage preservation so difficult to address. We discuss the concept and present existing preservation tools, including recent surveys, legislation, policies, and innovative institutional approaches. We briefly analyse two recent initiatives for the preservation of scientific heritage, at the Universities of Lisbon and Cambridge.

10.
Using data obtained with a dye marker and the gavage technique, the kinetics of gastrointestinal transit of different loads of sugar substitutes (maltitol, sorbitol) and sugar (sucrose) in the rat were analysed with a linear multicompartmental model, over a range of carbohydrate intake levels from realistic to non-physiologically high and using only a few experimental time points. The model gave detailed insight into intestinal propulsion and gastrocecal transit time. Rate constants of transport between the compartments investigated were determined; they showed characteristics that could be related to the substance and the dosage administered. Analyses of the gastrointestinal contents and calculations of the intestinal net water movement showed that the digestibility and absorption of the disaccharide sugar alcohol maltitol in the small gut depended inversely on the dose ingested. For all substances tested, the caloric availability in the small intestine was calculated. At a physiologically low level of maltitol intake, the results also indicated an insignificant calorie-saving effect in comparison to sucrose, an effect based mainly on the slow absorption rate of the maltitol cleavage product sorbitol.
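The kind of linear multicompartmental model described can be sketched as a chain of first-order transfer equations. The example below solves a hypothetical three-compartment chain (stomach, small intestine, cecum) with scipy; the compartment structure and rate constants are illustrative assumptions, not the paper's estimates.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical first-order transfer rates (1/h) between compartments.
k12, k23, k3out = 0.8, 0.5, 0.3

def transit(t, q):
    """Linear compartmental ODE: marker flows stomach -> intestine -> cecum."""
    q1, q2, q3 = q
    return [-k12 * q1,
            k12 * q1 - k23 * q2,
            k23 * q2 - k3out * q3]

# Start with the full marker dose in the stomach and follow it for 12 h.
sol = solve_ivp(transit, t_span=(0, 12), y0=[1.0, 0.0, 0.0],
                t_eval=np.linspace(0, 12, 7))
for t, q in zip(sol.t, sol.y.T):
    print(f"t={t:4.1f} h  stomach={q[0]:.2f}  intestine={q[1]:.2f}  cecum={q[2]:.2f}")
```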
