Similar Documents
20 similar documents found
1.
This paper develops a method for modelling binary response data in a regression model with highly unbalanced class sizes. When the class sizes are highly unbalanced and the minority class represents a rare event, conventional regression analysis, i.e. logistic regression models, could underestimate the probability of the rare event. To overcome this drawback, we introduce a flexible skewed link function based on the quantile function of the generalized extreme value (GEV) distribution in a generalized additive model (GAM). The proposed model is known as the generalized extreme value additive (GEVA) regression model, and a modified version of the local scoring algorithm is suggested to estimate it. We apply the proposed model to a dataset on Italian small and medium enterprises (SMEs) to estimate the default probability of SMEs. Our proposal performs better than the logistic (linear or additive) model in terms of predictive accuracy. Copyright © 2015 John Wiley & Sons, Ltd.
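A minimal sketch of the GEV link idea, not the authors' code: the GEV cdf exp{-(1+ξη)^(-1/ξ)} is used as the response curve with a fixed shape ξ, and the coefficients are fitted by maximum likelihood. The paper's GEVA model is additive and estimated by a modified local scoring algorithm; the parametric linear-predictor case below, the toy data, and the fixed ξ are all simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def gev_cdf(eta, xi):
    """GEV cdf used as the response curve; xi -> 0 recovers the
    log-log (Gumbel) link exp(-exp(-eta))."""
    if abs(xi) < 1e-8:
        return np.exp(-np.exp(-eta))
    z = np.maximum(1.0 + xi * eta, 1e-12)   # support constraint 1 + xi*eta > 0
    return np.exp(-z ** (-1.0 / xi))

def neg_loglik(beta, X, y, xi):
    p = np.clip(gev_cdf(X @ beta, xi), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

# toy rare-event data (hypothetical): a few per cent positives
rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (rng.random(n) < gev_cdf(X @ np.array([-1.5, 0.8]), xi=-0.3)).astype(float)

fit = minimize(neg_loglik, x0=np.zeros(2), args=(X, y, -0.3), method="BFGS")
print("estimated beta:", fit.x)
```

The shape parameter controls the skewness of the link, which is what distinguishes it from the symmetric logit for rare events.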

2.
We assess Cartwright's models for probabilistic causality and, in particular, her models for EPR-like experiments of quantum mechanics. Our first objection is that, contrary to econometric linear models, her quasi-linear models do not allow for the unique estimation of parameters. We next argue that although, as Cartwright proves, Reichenbach's screening-off condition has only limited validity, her generalized condition is not empirically applicable. Finally, we show that her models for the EPR are mathematically incorrect and physically implausible.

3.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non‐linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non‐linearity in the unemployment series. Only recently have there been some developments in applying non‐linear models to estimate and forecast unemployment rates. A major concern of non‐linear modelling is the model specification problem; it is very hard to test all possible non‐linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non‐linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back‐propagation model and a generalized regression neural network model, to estimate and forecast post‐war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out‐of‐sample forecast results obtained by the ANN models with those obtained by several linear and non‐linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
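As a rough illustration of the ANN forecasting setup, and not the paper's models, the sketch below fits a small back-propagation network (an MLP) to lagged values of a series and evaluates it out of sample; the lag order, architecture, and simulated series are assumptions, and the generalized regression neural network is not shown.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def lag_matrix(series, p):
    """Embed a series into (X, y) pairs of p lags and the next value."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

# hypothetical monthly unemployment rates; replace with real data
rng = np.random.default_rng(1)
u = 6 + np.cumsum(rng.normal(0, 0.1, 300))

p, h = 12, 24                      # 12 lags, hold out the last 24 points
X, y = lag_matrix(u, p)
Xtr, ytr, Xte, yte = X[:-h], y[:-h], X[-h:], y[-h:]

ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(Xtr, ytr)                  # back-propagation training
rmse = np.sqrt(np.mean((ann.predict(Xte) - yte) ** 2))
print(f"out-of-sample RMSE: {rmse:.4f}")
```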

4.
In this paper, we detect and correct abnormal returns in 17 French stock returns and the French index CAC40 using the additive‐outlier detection method in GARCH models developed by Franses and Ghijsels (1999) and extended to innovative outliers by Charles and Darné (2005). We study the effects of outlying observations on several popular econometric tests. Moreover, we show that the parameters of the equation governing the volatility dynamics are biased when we do not take into account additive and innovative outliers. Finally, we show that the volatility forecast is better when the data are cleaned of outliers for several step‐ahead forecasts (short-, medium- and long-term), even if we consider a GARCH‐t process. Copyright © 2008 John Wiley & Sons, Ltd.
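The published procedure builds on Chen-Liu-type test statistics and distinguishes additive from innovative outliers; the sketch below illustrates only the simpler core step, assuming a hand-rolled GARCH(1,1) quasi-maximum-likelihood fit and a fixed critical value: flag observations whose standardized residuals are implausibly large.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_var(params, r):
    """Conditional variance recursion of a GARCH(1,1)."""
    omega, alpha, beta = params
    h = np.empty_like(r)
    h[0] = r.var()
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

def qml(params, r):
    """Negative Gaussian quasi-log-likelihood with a constraint penalty."""
    if np.any(params <= 0) or params[1] + params[2] >= 1:
        return 1e10
    h = garch11_var(params, r)
    return 0.5 * np.sum(np.log(h) + r ** 2 / h)

rng = np.random.default_rng(2)
r = rng.standard_normal(1000) * 0.01
r[500] += 0.08                      # planted additive outlier

fit = minimize(qml, x0=[1e-5, 0.05, 0.90], args=(r,), method="Nelder-Mead")
z = r / np.sqrt(garch11_var(fit.x, r))
outliers = np.where(np.abs(z) > 4)[0]   # simple threshold rule (C = 4)
print("flagged observations:", outliers, "estimates:", fit.x.round(5))
```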

5.
Based on the standard genetic programming (GP) paradigm, we introduce a new probability measure of time series' predictability. It is computed as a ratio of two fitness values (SSE) from GP runs. One value belongs to a subject series, while the other belongs to the same series after it is randomly shuffled. Theoretically, the measure is bounded between zero and 100, where zero characterizes stochastic processes while 100 typifies predictable ones. To evaluate its performance, we first apply it to experimental data. It is then applied to eight Dow Jones stock returns. This measure may reduce model search space and produce more reliable forecast models. Copyright © 1999 John Wiley & Sons, Ltd.
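A rough sketch of the measure's logic, under two loud assumptions: the GP fitness search is replaced here by an ordinary least-squares AR fit (purely as a stand-in), and the normalization 100·(1 − SSE(y)/SSE(shuffled y)) is one reading of "a ratio of two fitness values" bounded between zero and 100.

```python
import numpy as np

def sse_ar_fit(y, p=2):
    """Stand-in fitness: SSE of a least-squares AR(p) fit.
    (The paper uses the best SSE found by genetic programming.)"""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)]
                        + [np.ones(len(y) - p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return np.sum((target - X @ coef) ** 2)

def predictability(y, rng):
    """Assumed normalization: 100 * (1 - SSE(y) / SSE(shuffled y))."""
    shuffled = rng.permutation(y)
    return 100 * max(0.0, 1.0 - sse_ar_fit(y) / sse_ar_fit(shuffled))

rng = np.random.default_rng(3)
t = np.arange(500)
print(predictability(np.sin(0.2 * t), rng))          # predictable: near 100
print(predictability(rng.standard_normal(500), rng)) # stochastic: near 0
```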

6.
This paper presents a methodology for modelling and forecasting multivariate time series with linear restrictions using the constrained structural state‐space framework. The model has natural applications to forecasting time series of macroeconomic/financial identities and accounts. The explicit modelling of the constraints ensures that model parameters dynamically satisfy the restrictions among items of the series, leading to more accurate and internally consistent forecasts. It is shown that the constrained model offers superior forecasting efficiency. A testable identification condition for state space models is also obtained and applied to establish the identifiability of the constrained model. The proposed methods are illustrated on Germany's quarterly monetary accounts data. Results show significant improvement in the predictive efficiency of forecast estimators for the monetary account with an overall efficiency gain of 25% over unconstrained modelling. Copyright © 2002 John Wiley & Sons, Ltd.
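The paper builds the restrictions into the structural state-space model itself; a simpler, related device that conveys the idea is the minimum-variance projection of an unconstrained forecast onto a linear restriction Ax = b, sketched below with a hypothetical accounting identity.

```python
import numpy as np

def constrain(x_hat, P, A, b):
    """Project an unconstrained forecast x_hat (covariance P) onto the
    linear restriction A x = b; the standard minimum-variance projection
    from constrained Kalman filtering, a stand-in for the paper's model."""
    S = A @ P @ A.T
    K = P @ A.T @ np.linalg.inv(S)
    return x_hat - K @ (A @ x_hat - b)

# hypothetical accounting identity: item1 + item2 - total = 0
A = np.array([[1.0, 1.0, -1.0]])
b = np.array([0.0])
x_hat = np.array([4.1, 6.3, 10.0])        # violates the identity by 0.4
P = np.diag([1.0, 2.0, 0.5])

x_con = constrain(x_hat, P, A, b)
print(x_con, "identity residual:", A @ x_con - b)   # residual is exactly 0
```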

7.
Volatility models such as GARCH, although misspecified with respect to the data‐generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non‐linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non‐linear transformations of these forecasts will be biased estimators of the same non‐linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non‐linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non‐linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in‐sample and out‐of‐sample forecasts of three non‐linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
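One way to see the shape of such an approximation is a second-order Taylor expansion; the derivation below is a sketch consistent with the abstract's description, not necessarily the paper's exact expression.

```latex
% Let \hat{V} be an unconditionally unbiased forecast of the integrated
% variance V, both with mean \mu, and let g be the non-linear
% transformation of interest. Expanding g around \mu and taking
% expectations gives
\mathrm{Bias}
  = \mathbb{E}\!\left[g(\hat{V})\right] - \mathbb{E}\!\left[g(V)\right]
  \approx \tfrac{1}{2}\, g''(\mu)
    \left[\operatorname{Var}(\hat{V}) - \operatorname{Var}(V)\right].
% Example: g(v) = \sqrt{v} (the integrated standard deviation), so
% g''(v) = -\tfrac{1}{4} v^{-3/2} and
\mathrm{Bias} \approx -\tfrac{1}{8}\,\mu^{-3/2}
    \left[\operatorname{Var}(\hat{V}) - \operatorname{Var}(V)\right].
```

This matches the abstract's description: the bias depends on the variability of the forecast variance, the variability of the integrated variance, and the concavity of g.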

8.
Our paper challenges the conventional wisdom that the flat maximum inflicts the ‘curse of insensitivity’ on the modelling of judgement and decision processes. In particular, we argue that this widely demonstrated failure on the part of conventional statistical methods to differentiate between competing models has a useful role to play in the development of accessible and economical applied systems, since it allows a low cost choice between systems which vary in their cognitive demands on the user and in their ease of development and implementation. To illustrate our thesis, we take two recent applications of linear scoring models used for credit scoring and for the prediction of sudden infant death. The paper discusses the nature and determinants of the flat maximum as well as its role in applied cognition. Other sections mention certain unanswered questions about the development of linear scoring models and briefly describe competing formulations for prediction.
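A small simulation makes the flat maximum concrete: with positively intercorrelated predictors, crude equal weights predict nearly as well as the statistically optimal weights. The data-generating process below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 1000, 5
# positively intercorrelated predictors, as is typical in judgement tasks
common = rng.standard_normal(n)
X = 0.7 * common[:, None] + 0.3 * rng.standard_normal((n, k))
y = X @ rng.uniform(0.5, 1.5, k) + rng.standard_normal(n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # optimal linear weights
unit = np.ones(k)                               # crude equal weights

def r(pred, target):
    return np.corrcoef(pred, target)[0, 1]

print("OLS weights  r:", r(X @ beta, y))
print("unit weights r:", r(X @ unit, y))       # nearly as high: the flat maximum
```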

9.
A nonlinear geometric combination of statistical models is proposed as an alternative approach to the usual linear combination or mixture. Unlike the linear combination, the geometric model is closed under the regular exponential family of distributions, as we show. As a consequence, the distribution which results from the combination is unimodal and a single location parameter can be chosen for decision making. In the case of Student t‐distributions (of particular interest in forecasting), the geometric combination can be unimodal under a sufficient condition we have established. A comparative analysis between the geometric and linear combinations of predictive distributions from three Bayesian regression dynamic linear models, in a case of beer sales forecasting in Zimbabwe, shows the geometric model to consistently outperform its linear counterpart as well as its component models. Copyright © 2008 John Wiley & Sons, Ltd.
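For intuition, the geometric (log-linear) pool of normal densities is again a single normal, so the combination stays unimodal; the sketch below shows the closed form. The paper's results concern the regular exponential family and Student-t components; the Gaussian case here is only the simplest instance.

```python
import numpy as np

def geometric_pool_normal(mus, sigmas, w):
    """Geometric pool of normal densities, p(x) ∝ Π p_i(x)^{w_i}:
    the result is a single normal whose precision is the weighted
    sum of the component precisions."""
    prec = np.sum(w / sigmas**2)
    mu = np.sum(w * mus / sigmas**2) / prec
    return mu, np.sqrt(1.0 / prec)

mus = np.array([10.0, 14.0])
sigmas = np.array([1.0, 2.0])
w = np.array([0.5, 0.5])

print(geometric_pool_normal(mus, sigmas, w))
# a linear mixture 0.5*N(10,1) + 0.5*N(14,4), by contrast,
# need not be unimodal
```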

10.
The dynamic linear model (DLM) with additive Gaussian errors provides a useful statistical tool that is easily implemented because of the simplicity of updating a normal model that has a natural conjugate prior. If the model is not linear or if it does not have additive Gaussian errors, then numerical methods are usually required to update the distributions of the unknown parameters. If the dimension of the parameter space is small, numerical methods are feasible. However, as the number of unknown parameters increases, the numerical methods rapidly grow in complexity and cost. This article addresses the situation where a state dependent transformation of the observations follows the DLM, but a priori the appropriate transformation is not known. The Box-Cox family, which is indexed by a single parameter, illustrates the methodology. A prior distribution is constructed over a grid of points for the transformation parameter. For each value of the grid the relevant parameter estimates and forecasts are obtained for the transformed series. These quantities are then averaged over the current distribution of the transformation parameter. When a new observation becomes available, parallel Kalman filters are used to update the distributions of the unknown parameters and to compute the likelihood of the transformation parameter at each grid point. The distribution of the transformation parameter is then updated.
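A toy version of the grid scheme, assuming a local-level DLM in place of the article's model: each Box-Cox value λ gets its own Kalman filter, the predictive likelihood (including the transformation's Jacobian) scores each grid point, and the scores become posterior weights over λ.

```python
import numpy as np
from scipy.stats import norm

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1) / lam

def local_level_loglik(z, q=0.1, r=1.0):
    """Kalman-filter log-likelihood of a local-level DLM
    (a toy stand-in for the article's full model)."""
    m, C, ll = z[0], 1.0, 0.0
    for t in range(1, len(z)):
        R = C + q                      # prior variance of the state
        Q = R + r                      # one-step predictive variance
        ll += norm.logpdf(z[t], m, np.sqrt(Q))
        K = R / Q
        m, C = m + K * (z[t] - m), (1 - K) * R
    return ll

rng = np.random.default_rng(5)
y = np.exp(np.cumsum(rng.normal(0, 0.05, 200)) + 3)   # positive series

grid = np.linspace(-0.5, 1.0, 7)                      # grid for lambda
# the Jacobian term (lam - 1) * sum(log y) makes likelihoods comparable
logpost = np.array([local_level_loglik(boxcox(y, lam))
                    + (lam - 1) * np.sum(np.log(y)) for lam in grid])
w = np.exp(logpost - logpost.max())
print(dict(zip(np.round(grid, 2), np.round(w / w.sum(), 3))))
```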

11.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over‐dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density function of compound distributions, we propose a data augmentation approach for more efficient posterior computations in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series in a store, comparing the proposed over‐dispersed models with alternative models without over‐dispersion by several model selection criteria, including in‐sample fit, out‐of‐sample forecasting errors and an information criterion. The empirical results show that the proposed over‐dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
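A simulation sketch of the over-dispersion mechanism described above, with hypothetical parameter values: a gamma-mixed Poisson total (equivalently, negative binomial) is split across products by a Dirichlet-mixed multinomial, so the total's variance far exceeds its mean.

```python
import numpy as np

rng = np.random.default_rng(6)

def sample_week(mu, phi, alpha):
    """One week of sales: gamma-compound Poisson (negative binomial)
    total, split across products by a Dirichlet-compound multinomial."""
    lam = rng.gamma(shape=phi, scale=mu / phi)   # gamma mixing -> over-dispersion
    total = rng.poisson(lam)
    shares = rng.dirichlet(alpha)                # Dirichlet mixing on shares
    return rng.multinomial(total, shares)

mu, phi = 50.0, 2.0                  # mean weekly category sales, dispersion
alpha = np.array([8.0, 4.0, 2.0])    # competitiveness of three products

weeks = np.array([sample_week(mu, phi, alpha) for _ in range(2000)])
totals = weeks.sum(axis=1)
print("mean:", totals.mean().round(1), "variance:", totals.var().round(1))
# variance >> mean: a plain Poisson model would be badly misspecified
```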

12.
Predicting bank failures is important as it enables bank regulators to take timely actions to prevent bank failures or reduce the cost of rescuing banks. This paper compares the logit model and data mining models in the prediction of bank failures in the USA between 2002 and 2010 using levels and rates of change of 16 financial ratios based on a cross‐section sample. The models are estimated for the in‐sample period 2002–2009, while data for the year 2010 are used for out‐of‐sample tests. The results suggest that the logit model predicts bank failures in‐sample less precisely than data mining models, but produces fewer missed failures and false alarms out‐of‐sample.
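A schematic of the comparison on synthetic data (the paper uses 16 financial ratios for US banks, 2002-2010): a logit model versus one representative data mining model, scored by missed failures and false alarms out of sample. Data, split, and model choices below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(7)
n, k = 3000, 16                       # 16 financial ratios, as in the paper
X = rng.standard_normal((n, k))
y = (X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 1, n) > 1.5).astype(int)

split = int(0.8 * n)                  # stand-in for the 2002-2009 / 2010 split
for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X[:split], y[:split])
    cm = confusion_matrix(y[split:], model.predict(X[split:]))
    # cm[1, 0] = missed failures, cm[0, 1] = false alarms
    print(type(model).__name__, "missed:", cm[1, 0], "false alarms:", cm[0, 1])
```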

13.
In this paper the relative forecast performance of nonlinear models to linear models is assessed by the conditional probability that the absolute forecast error of the nonlinear forecast is smaller than that of the linear forecast. The comparison probability is explicitly expressed and is shown to be an increasing function of the distance between nonlinear and linear forecasts under certain conditions. This expression of the comparison probability is useful not only for deciding which predictor to use (the more accurate or the simpler forecast) but also for explaining an odd phenomenon discussed by Pemberton. The relative forecast performance of a nonlinear model to a linear model is demonstrated to be sensitive to its forecast origins. A new forecast is thus proposed to improve the relative forecast performance of nonlinear models based on forecast origins. © 1997 John Wiley & Sons, Ltd.
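The comparison probability is easy to estimate by Monte Carlo. The sketch below assumes the simplest setting, where the two forecasts differ by a fixed distance d and the shock is standard normal; consistent with the text, the probability increases with d.

```python
import numpy as np

def comparison_prob(e_nl, e_l):
    """Monte Carlo estimate of P(|nonlinear error| < |linear error|),
    the criterion used in the paper to compare forecasts."""
    return np.mean(np.abs(e_nl) < np.abs(e_l))

rng = np.random.default_rng(8)
eps = rng.standard_normal(100_000)       # future shock
# if the nonlinear forecast equals the conditional mean and the linear
# forecast is off by d, the two errors are eps and eps + d
for d in (0.2, 0.8, 2.0):
    print(f"d = {d}: P = {comparison_prob(eps, eps + d):.3f}")
# the probability grows with the distance d between the forecasts
```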

14.
When evaluating the launch of a new product or service, forecasts of the diffusion path and the effects of the marketing mix are critically important. Currently no unified framework exists to provide guidelines on the inclusion and specification of marketing mix variables in models of innovation diffusion. The objective of this research is to examine empirically the role of prices in diffusion models, in order to establish whether price can be incorporated effectively into the simpler time-series models. Unlike existing empirical research which examines the models' fit to historical data, we examine the predictive validity of alternative models. Only if the incorporation of prices improves the predictive performance of diffusion models can it be argued that these models have validity. A series of diffusion models which include prices are compared against a number of well-accepted diffusion models, including the Bass (1969) model, and more recently developed ‘flexible’ diffusion models. For short data series and long lead-time forecasting, the situation typical of practical applications, price rarely added to the forecasting capability of simpler time-series models. Copyright © 1998 John Wiley & Sons, Ltd.
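A sketch of one way price enters a diffusion model, in the spirit of the generalized Bass model where the Bass hazard is scaled by a price-change term; the discrete-time parameterization and the numbers below are assumptions, not the paper's specification.

```python
import numpy as np

def generalized_bass(p, q, m, price, beta):
    """Discrete-time Bass diffusion whose hazard is scaled by current
    price changes; p = innovation, q = imitation, m = market potential."""
    N, sales = 0.0, []
    for t in range(1, len(price)):
        x = 1.0 + beta * np.log(price[t] / price[t - 1])  # price effect
        f = (p + q * N / m) * (m - N) * x                 # Bass hazard
        sales.append(f)
        N += f
    return np.array(sales)

T = 20
price = 100 * 0.95 ** np.arange(T)     # 5% price decline per period
base = generalized_bass(0.03, 0.38, 1000, price, beta=0.0)
with_price = generalized_bass(0.03, 0.38, 1000, price, beta=-2.0)
print(base[:5].round(1))
print(with_price[:5].round(1))         # falling prices pull adoption forward
```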

15.
Value‐at‐Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts. Hence, every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE‐50 index, at the 95% and 99% levels. We employ a two‐stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive rejection with the tests, we perform a second stage filtering of the surviving models using subjective loss functions. This two‐stage model selection procedure does prove to be useful in choosing a VaR model, while only incompletely addressing the problem. These case studies give us some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
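A first-stage statistical accuracy test of the kind such a procedure uses is Kupiec's unconditional coverage test, sketched below; treating it as the paper's exact test battery would be an assumption.

```python
import numpy as np
from scipy.stats import chi2, norm

def kupiec_pvalue(returns, var_forecasts, coverage=0.99):
    """Kupiec unconditional-coverage likelihood-ratio test: does the
    VaR get violated about (1 - coverage) of the time?"""
    hits = returns < -var_forecasts        # VaR reported as a positive loss
    n, x = len(hits), hits.sum()
    p = 1 - coverage
    if x in (0, n):
        return np.nan
    phat = x / n
    lr = -2 * ((n - x) * np.log((1 - p) / (1 - phat)) + x * np.log(p / phat))
    return 1 - chi2.cdf(lr, df=1)

rng = np.random.default_rng(9)
r = rng.standard_normal(1000) * 0.01
var99 = np.full(1000, norm.ppf(0.99) * 0.01)   # normal VaR with known sigma
print("Kupiec p-value:", kupiec_pvalue(r, var99))
```

Models that survive this kind of test can then be ranked in a second stage by a loss function reflecting the user's preferences, as the abstract describes.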

16.
A hallmark of medieval astronomy is the replacement of Ptolemy’s linear precession with so-called models of trepidation, which were deemed necessary to account for divergences between parameters and data transmitted by Ptolemy and those found by later astronomers. Trepidation is commonly thought to have dominated European astronomy from the twelfth century to the Copernican Revolution, meeting its demise only in the last quarter of the sixteenth century thanks to the observational work of Tycho Brahe. The present article seeks to challenge this picture by surveying the extent to which Latin astronomers of the late Middle Ages expressed criticisms of trepidation models or rejected their validity in favour of linear precession. It argues that a readiness to abandon trepidation was more widespread prior to Brahe than hitherto realized and that it frequently came as the result of empirical considerations. This critical attitude towards trepidation reached an early culmination point with the work of Agostino Ricci (De motu octavae spherae, 1513), who demonstrated the theory’s redundancy with a penetrating analysis of the role of observational error in Ptolemy’s Almagest.

17.
Most non‐linear techniques give good in‐sample fits to exchange rate data but are usually outperformed by random walks or random walks with drift when used for out‐of‐sample forecasting. In the case of regime‐switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide some analytical results for the case of a simple switching model, the segmented trend model. It requires only a small misclassification, when forecasting which regime the world will be in, to lose any advantage from knowing the correct model specification. To illustrate this we discuss some results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations to the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that the mean square error is not an appropriate way to evaluate forecast performance for non‐linear models. Copyright © 1999 John Wiley & Sons, Ltd.
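A toy simulation of the paper's argument: under a segmented trend process, the true model's forecast beats the random walk only while the regime is classified correctly often enough; at a modest misclassification rate the advantage disappears. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 20000
trend = np.where(rng.random(T) < 0.5, 0.5, -0.5)   # regime trend (+/-)
e = rng.standard_normal(T)
dy = trend + e                                      # change under segmented trends

for miss in (0.0, 0.2, 0.4):
    # forecast the trend, misclassifying the regime with probability `miss`
    flip = rng.random(T) < miss
    fc = np.where(flip, -trend, trend)
    mse_model = np.mean((dy - fc) ** 2)
    mse_rw = np.mean(dy ** 2)                       # random walk: forecast no change
    print(f"miss rate {miss:.1f}: model MSE {mse_model:.3f} vs RW {mse_rw:.3f}")
# at miss = 0.4 the 'true' model already loses to the random walk
```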

18.
This paper provides extensions to the application of Markovian models in predicting US recessions. The proposed Markovian models, including the hidden Markov and Markov models, incorporate the temporal autocorrelation of binary recession indicators in a traditional but natural way. Considering interest rates and spreads, stock prices, monetary aggregates, and output as the candidate predictors, we examine the out‐of‐sample performance of the Markovian models in predicting the recessions 1–12 months ahead, through rolling window experiments as well as experiments based on the fixed full training set. Our study shows that the Markovian models are superior to the probit models in detecting a recession and capturing the recession duration. However, the rolling window method may sometimes affect the models' prediction reliability, as it could incorporate the economy's unsystematic adjustments and erratic shocks into the forecast. In addition, the interest rate spreads and output are the most efficient predictor variables in explaining business cycles.
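A stripped-down sketch of the Markov ingredient (without the predictor variables the paper conditions on): estimate a two-state transition matrix from a binary recession indicator and obtain the k-month-ahead recession probability from the k-step transition kernel. The indicator series below is made up.

```python
import numpy as np

def transition_matrix(s):
    """ML estimate of a two-state Markov chain from a 0/1 recession series."""
    P = np.zeros((2, 2))
    for a, b in zip(s[:-1], s[1:]):
        P[a, b] += 1
    return P / P.sum(axis=1, keepdims=True)

def k_step_prob(P, state, k):
    """P(recession in k months | current state), via the k-step kernel."""
    return np.linalg.matrix_power(P, k)[state, 1]

# hypothetical monthly indicator (1 = recession); replace with NBER dates
s = np.array([0]*60 + [1]*12 + [0]*70 + [1]*8 + [0]*50)
P = transition_matrix(s)
print("transition matrix:\n", P.round(3))
for k in (1, 6, 12):
    print(f"P(recession in {k}m | expansion) = {k_step_prob(P, 0, k):.3f}")
```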

19.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data, but sceptical practitioners and statisticians are still raising the question of whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess the model's significance and the possibility that any short‐term successes that are reported might be due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
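The 'parametric sampling' idea can be shown compactly: draw parameter vectors from the estimated asymptotic distribution N(θ̂, Σ̂) and propagate any statistic of interest through the draws. The sketch uses OLS as a stand-in for the neural model, since its asymptotic covariance has a textbook form.

```python
import numpy as np

rng = np.random.default_rng(11)
n, k = 400, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k - 1))])
y = X @ np.array([0.5, 1.0, -0.7]) + rng.standard_normal(n)

# fit and asymptotic covariance (OLS stands in for the neural model)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
sigma2 = resid @ resid / (n - k)
cov = sigma2 * np.linalg.inv(X.T @ X)

# parametric sampling: draw parameters from N(beta_hat, cov) and
# propagate an arbitrarily complex statistic through the draws
draws = rng.multivariate_normal(beta, cov, size=5000)
effect = draws[:, 1] / draws[:, 2]           # example non-linear statistic
lo, hi = np.percentile(effect, [2.5, 97.5])
print(f"95% interval for the statistic: [{lo:.3f}, {hi:.3f}]")
```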

20.
Croston's method is widely used to predict inventory demand when it is intermittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston's method and three related methods, and we show that any underlying model will be inconsistent with the properties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. Copyright © 2005 John Wiley & Sons, Ltd.
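For reference, a minimal sketch of Croston's method itself (standard textbook form; the smoothing constant and demand series are arbitrary): exponentially smooth the nonzero demand sizes and the intervals between them separately, and forecast the demand rate as their ratio.

```python
import numpy as np

def croston(y, alpha=0.1):
    """Croston's method: smooth nonzero demand sizes (z) and
    inter-demand intervals (p); the demand-rate forecast is z / p."""
    z = p = None
    q = 1                                # periods since the last demand
    for d in y:
        if d > 0:
            if z is None:
                z, p = d, q              # initialize at the first demand
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return z / p

demand = np.array([0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1, 0])
print(f"forecast demand per period: {croston(demand):.3f}")
```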

