Similar Articles
20 similar articles found.
1.
This study analyzes the nonlinear relationships between accounting‐based key performance indicators and the probability that a firm will go bankrupt. The analysis focuses particularly on young firms and examines whether these nonlinear relationships are affected by a firm's age. The analysis of nonlinear relationships between various predictors of bankruptcy and their interaction effects is based on a structured additive regression model and on a comprehensive data set on German firms. The results provide empirical evidence that a firm's age has a considerable effect on how accounting‐based key performance indicators can be used to predict the likelihood that a firm will go bankrupt. More specifically, the results show that there are differences between older and young firms with respect to the nonlinear effects of the equity ratio, the return on assets, and sales growth on their probability of bankruptcy.
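As a rough illustration only (the study itself uses a structured additive regression with penalized splines), the sketch below approximates nonlinear, age-dependent KPI effects with quadratic terms and young-firm interactions in a plain logit; the data file, column names, and the five-year cutoff for "young" are hypothetical.

```python
# A minimal sketch (not the authors' code): crude stand-in for the paper's
# structured additive regression. Column names and "firms.csv" are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("firms.csv")                      # hypothetical firm-year data
df["young"] = (df["firm_age"] <= 5).astype(int)    # illustrative age cutoff

X = pd.DataFrame(index=df.index)
for kpi in ["equity_ratio", "roa", "sales_growth"]:
    X[kpi] = df[kpi]
    X[kpi + "_sq"] = df[kpi] ** 2                  # crude stand-in for a spline
    X[kpi + "_young"] = df[kpi] * df["young"]      # age-dependent slope
X["young"] = df["young"]
X = sm.add_constant(X)

logit = sm.Logit(df["bankrupt"], X).fit(disp=False)
print(logit.summary())
print("predicted bankruptcy probabilities:", logit.predict(X)[:5].values)
```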

2.
We propose an innovative approach to modeling and predicting the outcome of football matches, based on the Poisson autoregression with exogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen, and Rahbek (Journal of Empirical Finance, 2016, 38(B), 640–663). We show that this methodology is particularly suited to modeling the goal distribution of a football team and provides good forecast performance that can be exploited to develop a profitable betting strategy. This paper contributes to the strand of literature on Poisson‐based models by proposing a specification able to capture the main characteristics of the goal distribution. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probabilities of match outcomes, because they may also incorporate betting volumes or strategic price setting intended to exploit bettors' biases. The out‐of‐sample performance of the PARX model is better than that of the reference approach by Dixon and Coles (Applied Statistics, 1997, 46(2), 265–280). We also evaluate our approach with a simple betting strategy applied to English Premier League data for the 2013–2014, 2014–2015, and 2015–2016 seasons. The results show that the return from the betting strategy is larger than 30% in most of the cases considered and may even exceed 100% if we consider an alternative strategy based on a predetermined threshold, which makes it possible to exploit the inefficiency of the betting market.
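A minimal sketch, not the PARX implementation of Agosto et al.: a single-team Poisson autoregression fitted by maximum likelihood on simulated placeholder data, with one illustrative covariate (an away-game dummy, say).

```python
# Poisson autoregression sketch: lambda_t = omega + alpha*y_{t-1}
# + beta*lambda_{t-1} + gamma*x_t, estimated by maximum likelihood.
# Goal counts y and covariate x below are simulated placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_loglik(params, y, x):
    omega, alpha, beta, gamma = params
    lam = np.empty(len(y))
    lam[0] = y.mean()                              # initialise at the sample mean
    for t in range(1, len(y)):
        lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1] + gamma * x[t]
    lam = np.clip(lam, 1e-8, None)
    return -np.sum(y * np.log(lam) - lam - gammaln(y + 1))

rng = np.random.default_rng(0)
y = rng.poisson(1.4, size=200)                     # placeholder goal counts
x = rng.integers(0, 2, size=200)                   # placeholder covariate

res = minimize(neg_loglik, x0=[0.5, 0.1, 0.3, 0.0], args=(y, x),
               bounds=[(1e-6, None), (0, 1), (0, 1), (None, None)],
               method="L-BFGS-B")
print("estimated (omega, alpha, beta, gamma):", res.x)
```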

3.
Volatility models such as GARCH, although misspecified with respect to the data‐generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non‐linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non‐linear transformations of these forecasts will be biased estimators of the same non‐linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non‐linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non‐linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in‐sample and out‐of‐sample forecasts of three non‐linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
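A hedged, second-order (delta-method style) illustration of the bias the abstract describes; the paper's exact derivation may differ. With h_t an unconditionally unbiased forecast of the integrated variance IV_t and g a concave transform such as the square root:

```latex
% Second-order Taylor expansions around the common mean mu = E[h_t] = E[IV_t];
% notation is illustrative, not necessarily the paper's.
\mathbb{E}\bigl[g(h_t)\bigr] \approx g(\mu) + \tfrac{1}{2}\,g''(\mu)\,\operatorname{Var}(h_t),
\qquad
\mathbb{E}\bigl[g(IV_t)\bigr] \approx g(\mu) + \tfrac{1}{2}\,g''(\mu)\,\operatorname{Var}(IV_t),

\text{Bias} \;=\; \mathbb{E}\bigl[g(h_t)\bigr] - \mathbb{E}\bigl[g(IV_t)\bigr]
\;\approx\; \tfrac{1}{2}\,g''(\mu)\,\bigl[\operatorname{Var}(h_t) - \operatorname{Var}(IV_t)\bigr],
\qquad
g(x)=\sqrt{x}\ \Rightarrow\ g''(x) = -\tfrac{1}{4}\,x^{-3/2}.
```

This matches the abstract's statement that the bias depends on the concavity of g and on the volatilities of the forecast and of the integrated variance.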

4.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy‐to‐compute daily return intervals during the modeling, estimation and forecasting stage. They have to stand up to comparable point‐data models of the well‐known capital asset pricing model type—which employ single daily returns based on successive closing prices and might allow for GARCH effects—in a comprehensive out‐of‐sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point‐data competitors in the great majority of cases. This result does not only hold when one‐day‐ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point‐data volatility forecasting tools. Copyright © 2015 John Wiley & Sons, Ltd.
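As an illustrative stand-in for the interval regression models compared above (not the authors' interval-arithmetic estimator), the sketch below forecasts the next day's return interval by separately fitting AR(1) regressions to the interval's center and half-range; the data file and column names are hypothetical.

```python
# Hypothetical input: daily low/high returns per stock in "dax_intervals.csv".
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("dax_intervals.csv")        # hypothetical columns: r_low, r_high
center = (df["r_high"] + df["r_low"]) / 2
radius = (df["r_high"] - df["r_low"]) / 2

def ar1_fit(series):
    # Regress today's value on yesterday's value with an intercept.
    y = series[1:].values
    x = sm.add_constant(series[:-1].values)
    return sm.OLS(y, x).fit()

c_fit, r_fit = ar1_fit(center), ar1_fit(radius)
c_hat = c_fit.params[0] + c_fit.params[1] * center.values[-1]
r_hat = r_fit.params[0] + r_fit.params[1] * radius.values[-1]
print("one-day-ahead interval forecast:", (c_hat - r_hat, c_hat + r_hat))
```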

5.
Little Cottonwood Canyon Highway is a dead‐end, two‐lane road leading to Utah's Alta and Snowbird ski resorts. It is the only road access to these resorts and is heavily traveled during the ski season. Professional avalanche forecasters monitor this road throughout the ski season in order to make road closure decisions in the face of avalanche danger. Forecasters at the Utah Department of Transportation (UDOT) avalanche guard station at Alta have maintained an extensive daily winter database on explanatory variables relating to avalanche prediction. Whether or not an avalanche crosses the road is modeled in this paper via Bayesian additive tree methods. Utilizing daily winter data from 1995 to 2011, results show that using Bayesian tree analysis outperforms traditional statistical methods in terms of realized misclassification costs that take into consideration asymmetric losses arising from two types of error. Closing the road when an avalanche does not occur is an error harmful to resort owners, and not closing the road when one does may result in injury or death. Copyright © 2016 John Wiley & Sons, Ltd.
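A minimal sketch of the cost-sensitive decision problem described above, with a boosted-tree classifier standing in for the paper's Bayesian additive trees; the features, simulated data, and the asymmetric cost ratio are purely illustrative.

```python
# Choose a road-closure probability threshold that minimizes an asymmetric
# misclassification cost (missing an avalanche is costed far more than a
# needless closure). Data and costs below are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))             # e.g. new snow, wind, temperature, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]          # P(avalanche crosses the road)

COST_MISS, COST_FALSE_ALARM = 50.0, 1.0    # illustrative asymmetric losses
thresholds = np.linspace(0.01, 0.99, 99)
costs = [COST_MISS * np.mean((p < t) & (y_te == 1)) +
         COST_FALSE_ALARM * np.mean((p >= t) & (y_te == 0)) for t in thresholds]
best = thresholds[int(np.argmin(costs))]
print(f"cost-minimizing closure threshold: {best:.2f}")
```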

6.
This paper explains cross‐market variations in the degree of return predictability using the extreme bounds analysis (EBA). The EBA addresses model uncertainty in identifying robust determinant(s) of cross‐sectional return predictability. Additionally, the paper develops two profitable trading strategies based on the return predictability evidence. The results reveal that, among the 13 determinants of the cross‐sectional variation in return predictability, only the value of stock traded (a measure of liquidity) is found to have robust explanatory power by Leamer's (1985) EBA. However, Sala‐i‐Martin's (1997) EBA reports that the value of stock traded, gross domestic product (GDP) per capita, the level of information and communication technology (ICT) development, governance quality, and corruption perception are robust determinants. We further find that a strategy of buying (selling) the aggregate market portfolios of the countries with the highest positive (negative) return predictability statistic in the past 24 months generates statistically significant positive returns in the subsequent 3 to 12 months. At the individual country level, a trading rule of buying (selling) the respective country's aggregate market portfolio when the return predictability statistic turns positive (negative) outperforms the conventional buy‐and‐hold strategy for many countries.
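A compact sketch of Leamer-style extreme bounds analysis on simulated data (not the authors' code): the coefficient on a focus regressor is re-estimated over all small subsets of controls and judged robust if its extreme bounds share the same sign. Variable names are hypothetical.

```python
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 60
controls = {name: rng.normal(size=n) for name in
            ["gdp_pc", "ict", "governance", "corruption", "turnover"]}
z = rng.normal(size=n)                          # focus variable, e.g. value of stock traded
y = 0.4 * z + 0.3 * controls["gdp_pc"] + rng.normal(size=n)

bounds = []
names = list(controls)
for k in range(0, 4):                           # up to three controls at a time
    for subset in itertools.combinations(names, k):
        X = np.column_stack([z] + [controls[c] for c in subset])
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        b, se = fit.params[1], fit.bse[1]       # coefficient on the focus variable
        bounds += [b - 2 * se, b + 2 * se]

print("extreme bounds on the focus coefficient:", min(bounds), max(bounds))
print("robust by Leamer's criterion:", min(bounds) * max(bounds) > 0)
```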

7.
The aim of this study was to forecast the Singapore gross domestic product (GDP) growth rate by employing the mixed‐data sampling (MIDAS) approach using mixed‐ and high‐frequency financial market data from Singapore, and to examine whether the high‐frequency financial variables could better predict the macroeconomic variables. We adopt different time‐aggregating methods to handle the high‐frequency data in order to match the sampling rate of lower‐frequency data in our regression models. Our results show that MIDAS regression using high‐frequency stock return data produces a better forecast of the GDP growth rate than the other models, and the best forecasting performance is achieved by using weekly stock returns. The forecasting result is further improved by performing intra‐period forecasting.
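As a hedged illustration of the MIDAS idea (not the paper's exact specification), the sketch below regresses a quarterly target on exponential-Almon-weighted lags of a higher-frequency series, estimating the weights jointly by nonlinear least squares on simulated data.

```python
# MIDAS sketch: quarterly y regressed on 12 weekly observations per quarter,
# with exponential-Almon lag weights. Data below are simulated placeholders.
import numpy as np
from scipy.optimize import least_squares

def almon_weights(theta1, theta2, n_lags):
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

def residuals(params, y, X_hf):
    a, b, t1, t2 = params
    return y - (a + b * (X_hf @ almon_weights(t1, t2, X_hf.shape[1])))

rng = np.random.default_rng(3)
n_q, n_lags = 80, 12
X_hf = rng.normal(size=(n_q, n_lags))            # 12 weekly returns per quarter
y = 0.5 + 2.0 * (X_hf @ almon_weights(0.1, -0.05, n_lags)) + 0.1 * rng.normal(size=n_q)

fit = least_squares(residuals, x0=[0.0, 1.0, 0.0, -0.01], args=(y, X_hf))
print("estimated (intercept, slope, theta1, theta2):", fit.x)
```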

8.
This paper applies the GARCH‐MIDAS (mixed data sampling) model to examine whether information contained in macroeconomic variables can help to predict short‐term and long‐term components of the return variance. A principal component analysis is used to incorporate the information contained in different variables. Our results show that including low‐frequency macroeconomic information in the GARCH‐MIDAS model improves the prediction ability of the model, particularly for the long‐term variance component. Moreover, the GARCH‐MIDAS model augmented with the first principal component outperforms all other specifications, indicating that the constructed principal component can be considered as a good proxy of the business cycle. Copyright © 2013 John Wiley & Sons, Ltd.
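For reference, an illustrative GARCH-MIDAS decomposition in which the long-run component is driven by MIDAS-weighted lags of the first principal component of the macro variables; the notation is generic and may differ from the paper's exact specification.

```latex
% r_t: return; g_t: short-run GARCH(1,1) component; tau_t: long-run component;
% PC_{t-k}: lags of the first macro principal component; phi_k(omega): MIDAS weights.
r_t = \mu + \sqrt{\tau_t\, g_t}\;\varepsilon_t, \qquad
g_t = (1-\alpha-\beta) + \alpha\,\frac{(r_{t-1}-\mu)^2}{\tau_{t-1}} + \beta\, g_{t-1}, \qquad
\log \tau_t = m + \theta \sum_{k=1}^{K} \varphi_k(\omega)\, PC_{t-k}.
```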

9.
Using quantile regression, this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution, such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out‐of‐sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecast mean and variance are compared to forecasts based on quantile estimates and, in general, the latter yield the better performance. Copyright © 2015 John Wiley & Sons, Ltd.
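A minimal sketch, assuming statsmodels is available: returns are regressed on two simulated, hypothetically named state variables over a grid of quantiles, tracing out how the predictors shift different parts of the return distribution.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "dividend_yield": rng.normal(size=n),
    "term_spread": rng.normal(size=n),
})
# Simulated returns whose dispersion depends on the term spread.
df["ret"] = (0.2 * df["dividend_yield"]
             + (1 + 0.5 * df["term_spread"].clip(lower=0)) * rng.normal(size=n))

for q in [0.05, 0.25, 0.50, 0.75, 0.95]:
    fit = smf.quantreg("ret ~ dividend_yield + term_spread", df).fit(q=q)
    print(f"q={q:.2f}", fit.params.round(3).to_dict())
```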

10.
The short end of the yield curve incorporates essential information for forecasting central banks' decisions, but in a biased manner. This article proposes a new method to forecast the Fed's and the European Central Bank's decision rates by correcting the swap rates for their cyclical economic premium, using an affine term structure model. The corrected yields offer higher out‐of‐sample forecasting power than the yields themselves. They also deliver forecasts that are either comparable to or better than those obtained with a factor‐augmented vector autoregressive model, underlining the fact that yields are likely to contain at least as much information regarding monetary policy as a dataset composed of economic data series. Copyright © 2015 John Wiley & Sons, Ltd.

11.
In this paper, we present a comparison between the forecasting performances of the normalization and variance stabilization (NoVaS) method and the GARCH(1,1), EGARCH(1,1) and GJR‐GARCH(1,1) models. The aim of the study is to compare the out‐of‐sample forecasting performances of the models and to show that the NoVaS method is better than GARCH(1,1)‐type models in this respect. We study the out‐of‐sample forecasting performances of GARCH(1,1)‐type models and the NoVaS method based on the generalized error distribution, rather than the normal or Student's t distributions. A further distinguishing feature of the study is that forecasting performance is evaluated on return series calculated both logarithmically and arithmetically. For the comparison we consider several datasets, namely the S&P 500 and the logarithmic and arithmetic BIST 100 return series. The key result of our analysis is that the NoVaS method delivers better out‐of‐sample forecasting performance than GARCH(1,1)‐type models. This result can offer useful guidance in model building for out‐of‐sample forecasting purposes, aimed at improving forecasting accuracy.
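A minimal sketch of the GARCH(1,1) benchmark only (NoVaS itself is not reproduced here), assuming the arch package is installed: a GARCH(1,1) with generalized-error-distribution innovations is fitted to placeholder returns and used to produce a one-step-ahead variance forecast.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(5)
# Placeholder log returns (scaled to percent); replace with S&P 500 / BIST 100 data.
log_returns = 100 * rng.normal(scale=0.01, size=1500)

am = arch_model(log_returns, vol="GARCH", p=1, q=1, dist="ged")
res = am.fit(disp="off")
fcast = res.forecast(horizon=1)
print("one-step-ahead variance forecast:", float(fcast.variance.values[-1, 0]))
```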

12.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
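A rough sketch of a one-step-ahead FFNN forecaster in the spirit of the comparison above, assuming scikit-learn; the lag order, architecture, hold-out length, and the data file name are illustrative choices, not the authors'.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data source: replace with the annual Canadian lynx counts.
lynx = np.loadtxt("lynx.txt")
y = np.log10(lynx)                               # common transform for this series

p = 2                                            # lag order (illustrative)
X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
target = y[p:]

split = len(target) - 14                         # last 14 years held out
net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
net.fit(X[:split], target[:split])

pred = net.predict(X[split:])
print("one-step-ahead RMSE (log10 scale):",
      np.sqrt(np.mean((pred - target[split:]) ** 2)))
```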

13.
This paper assesses a new technique for producing high‐frequency data from lower‐frequency measurements, subject to the full set of identities within the data holding. The technique is assessed through a set of Monte Carlo experiments. The example used here is US gross domestic product (GDP), which is observed at quarterly intervals and is a flow variable rather than a stock. The problem of constructing an unobserved monthly GDP variable can be handled using state space modelling. The solution of the problem lies in finding a suitable state space representation. A Monte Carlo experiment is conducted to illustrate this concept and to identify which variant of the model gives the best monthly estimates. The results demonstrate that the simpler models do almost as well as more complex ones, and hence there may be little gain in return for the extra work of using a complex model. Copyright © 2001 John Wiley & Sons, Ltd.
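One common way to cast the monthly-from-quarterly problem in state space form uses a cumulator state that enforces the flow identity; the variant below is illustrative and not necessarily the paper's.

```latex
% y^m_t: unobserved monthly flow; c_t: cumulator state; y^q_T: observed quarterly total.
y^{m}_{t} = \phi\, y^{m}_{t-1} + \eta_t, \qquad
c_{t} = \delta_t\, c_{t-1} + y^{m}_{t}, \qquad
\delta_t = \begin{cases} 0, & t \text{ is the first month of a quarter},\\
                         1, & \text{otherwise}, \end{cases}
```

with the quarterly observation equation reading the cumulator only in the third month of each quarter, so that the identity "quarterly GDP equals the sum of its three monthly flows" holds by construction.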

14.
We propose a new class of limited information estimators built upon an explicit trade‐off between data fitting and a priori model specification. The estimators offer the researcher a continuum of estimators that range from an extreme emphasis on data fitting and robust reduced‐form estimation to the other extreme of exact model specification and efficient estimation. The approach used to generate the estimators illustrates why ULS often outperforms 2SLS‐PRRF even in the context of a correctly specified model, provides a new interpretation of 2SLS, and integrates Wonnacott and Wonnacott's (1970) least weighted variance estimators with other techniques. We apply the new class of estimators to Klein's Model I and generate forecasts. We find for this example that an emphasis on specification (as opposed to data fitting) produces better out‐of‐sample predictions. Copyright © 1999 John Wiley & Sons, Ltd.

15.
We show that contrasting results on trading volume's predictive role for short‐horizon reversals in stock returns can be reconciled by conditioning on different investor types' trading. Using unique trading data by investor type from Korea, we provide explicit evidence of three distinct mechanisms leading to contrasting outcomes: (i) informed buying: price increases accompanied by high institutional buying volume are less likely to reverse; (ii) liquidity selling: price declines accompanied by high institutional selling volume in institutional investor habitat are more likely to reverse; (iii) attention‐driven speculative buying: price increases accompanied by high individual buying volume in individual investor habitat are more likely to reverse. Our approach to predicting which mechanism will prevail improves reversal forecasts following return shocks: an augmented contrarian strategy utilizing our ex ante formulation increases short‐horizon reversal strategy profitability by 40–70% in the US and Korean stock markets.

16.
Based on a vector error correction model, we produce conditional euro area inflation forecasts. We use real‐time data on M3 and HICP, and include real GDP, the 3‐month EURIBOR and the 10‐year government bond yield as control variables. Real money growth and the term spread enter the system as stationary linear combinations. Missing and outlying values are replaced by model‐based estimates using all available data information. In general, the conditional inflation forecasts are consistent with the European Central Bank's assessment of liquidity conditions for future inflation prospects. The evaluation of inflation forecasts under different monetary scenarios reveals the importance of keeping track of the money growth rate, in particular at the end of 2005. Copyright © 2009 John Wiley & Sons, Ltd.
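A minimal sketch, assuming statsmodels: a small VECM with one cointegrating relation is fitted to simulated stand-ins for the series named above and used for an unconditional inflation forecast (the paper's forecasts are conditional on monetary scenarios, which is not reproduced here). Series names, the simulated data, and the rank and lag choices are placeholders.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(6)
n = 200
trend = np.cumsum(rng.normal(size=n))            # shared stochastic trend
data = pd.DataFrame({
    "hicp": trend + rng.normal(size=n),
    "m3": trend + rng.normal(size=n),
    "gdp": 0.5 * trend + rng.normal(size=n),
    "euribor_3m": rng.normal(size=n),
})

model = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="ci")
res = model.fit()
print("12-step-ahead HICP forecasts:\n", res.predict(steps=12)[:, 0])
```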

17.
A long‐standing puzzle to financial economists is the difficulty of outperforming the benchmark random walk model in out‐of‐sample contests. Using data from the USA over the period of 1872–2007, this paper re‐examines the out‐of‐sample predictability of real stock prices based on price–dividend (PD) ratios. The current research focuses on the significance of the time‐varying mean and nonlinear dynamics of PD ratios in the empirical analysis. Empirical results support the proposed nonlinear model of the PD ratio and the stationarity of the trend‐adjusted PD ratio. Furthermore, this paper rejects the non‐predictability hypothesis of stock prices statistically based on in‐ and out‐of‐sample tests and economically based on the criteria of expected real return per unit of risk. Copyright © 2011 John Wiley & Sons, Ltd.

18.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data, but sceptical practitioners and statisticians are still raising the question of whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess the model's significance and the possibility that any short‐term successes that are reported might be due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance, and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating the sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
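A minimal sketch of the 'parametric sampling' idea (not the authors' implementation): parameter vectors are drawn from the estimated asymptotic distribution of the fitted weights and a statistic of interest is recomputed for each draw; the point estimates, covariance matrix, and relevance measure below are all placeholders.

```python
import numpy as np

def relevance(theta):
    # Hypothetical statistic: contribution of the first input's weights.
    return np.sum(np.abs(theta[:3]))

rng = np.random.default_rng(7)
theta_hat = rng.normal(size=9)                    # placeholder fitted weights
cov_hat = 0.01 * np.eye(9)                        # placeholder asymptotic covariance

draws = rng.multivariate_normal(theta_hat, cov_hat, size=2000)
stats = np.array([relevance(th) for th in draws])
print("statistic:", relevance(theta_hat),
      "95% parametric-sampling interval:", np.percentile(stats, [2.5, 97.5]))
```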

19.
In this paper we develop a semi‐parametric approach to model nonlinear relationships in serially correlated data. To illustrate the usefulness of this approach, we apply it to a set of hourly electricity load data. This approach takes into consideration the effect of temperature combined with those of time‐of‐day and type‐of‐day via nonparametric estimation. In addition, an ARIMA model is used to model the serial correlation in the data. An iterative backfitting algorithm is used to estimate the model. Post‐sample forecasting performance is evaluated and comparative results are presented. Copyright © 2006 John Wiley & Sons, Ltd.
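A rough sketch of the backfitting idea on simulated data (not the authors' model): a lowess temperature effect and an ARIMA component for the serially correlated remainder are re-estimated in turn; calendar (time-of-day and type-of-day) effects are omitted for brevity.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
n = 500
temp = 15 + 10 * np.sin(np.linspace(0, 20, n)) + rng.normal(size=n)
z = np.zeros(n)                                    # AR(1) remainder
for t in range(1, n):
    z[t] = 0.7 * z[t - 1] + rng.normal()
load = 100 + 0.05 * (temp - 18) ** 2 + z           # U-shaped temperature effect

f_temp = np.zeros(n)
for _ in range(5):                                 # backfitting iterations
    # ARIMA step: model the remainder after removing the temperature effect.
    arima_res = ARIMA(load - f_temp, order=(1, 0, 1)).fit()
    # Smoothing step: strip the dynamic part and re-estimate f(temperature).
    partial = load - (arima_res.fittedvalues - (load - f_temp).mean())
    f_temp = lowess(partial, temp, frac=0.3, return_sorted=False)

print("temperature at which the estimated load effect is smallest:",
      round(float(temp[np.argmin(f_temp)]), 1))
```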

20.
This paper investigates the informational content of unconventional monetary policies and its effect on commodity markets, adopting a nonlinear approach for modeling volatility. The main question addressed is how the Bank of England's, Bank of Japan's, and European Central Bank's (ECB's) announcements concerning monetary easing affect two major commodities: gold and silver. Our empirical evidence, based on daily and high‐frequency data, suggests that relevant information causes ambiguous valuation adjustments as well as stabilization or destabilization effects. Specifically, there is strong evidence that the Japanese central bank strengthens the precious metal markets by increasing their returns and by causing stabilization effects, in contrast to the ECB, whose announcements have the opposite effect, mainly due to the heterogeneous expectations of investors within these markets. These asymmetries across central banks' effects on the gold and silver risk–return profiles imply that the informational content of the ECB's unconventional monetary easing runs counter to its stated mission, adding uncertainty to precious metals markets.

