Similar Literature
20 similar records found.
1.
This paper proposes a new approach to forecasting intermittent demand that accounts for the effects of external factors. We split the intermittent demand data into two parts, zero values and nonzero values, and fit the nonzero values with a mixed zero-truncated Poisson model. All parameters of this model are obtained by an EM algorithm that treats the external factors as independent variables in a logistic regression model and a log-linear regression model. We then calculate the probability of a zero value in each period and predict demand occurrence by comparing that probability with a critical value. When demand occurs, we use the weighted average of the mixed zero-truncated Poisson model as the predicted nonzero demand, and combine it with the predicted demand occurrences to form the final forecast demand series. Two performance measures are developed to assess the forecasting methods. In a case study of electric power material from the State Grid Shanghai Electric Power Company in China, we show that our approach forecasts more accurately than the Poisson model, the hurdle shifted Poisson model, the hurdle Poisson model, and Croston's method.
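As a rough illustration of the two-part structure described above (not the authors' EM-based mixed zero-truncated Poisson estimator), the sketch below fits a logistic regression for demand occurrence and a plain Poisson regression for nonzero demand sizes, then combines the two; the simulated data, the external factor and the 0.5 threshold standing in for the critical value are all illustrative assumptions.

```python
# Minimal two-part intermittent-demand sketch: occurrence via logistic regression,
# nonzero size via a Poisson GLM (the paper's mixed zero-truncated Poisson model
# and its EM estimation are simplified away here).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)                                   # illustrative external factor
occur = rng.binomial(1, 1 / (1 + np.exp(-(-1.0 + 0.8 * x))))
size = rng.poisson(np.exp(1.0 + 0.3 * x)) + 1            # nonzero demand size when demand occurs
demand = occur * size

X = sm.add_constant(x)
occ_fit = sm.GLM(occur, X, family=sm.families.Binomial()).fit()
nz = demand > 0
size_fit = sm.GLM(demand[nz], X[nz], family=sm.families.Poisson()).fit()

p_hat = occ_fit.predict(X)                               # P(demand occurs in period t)
mu_hat = size_fit.predict(X)                             # expected nonzero demand size
forecast = np.where(p_hat > 0.5, mu_hat, 0.0)            # 0.5 stands in for the paper's critical value
print(forecast[:10].round(2))
```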

2.
Forecasting methods are often evaluated by means of simulation studies. For intermittent demand items there are often very few non-zero observations, so it is hard to check any assumptions, because the statistical information is usually too weak to determine, for example, the distribution of a variable. It therefore seems important to verify forecasting methods on real data. The main aim of the article is an empirical verification of several forecasting methods applicable to intermittent demand. Some items are sold only in specific subperiods (in a given month of each year, for example), but most forecasting methods (such as Croston's method) give non-zero forecasts for all periods. For example, summer work clothes should have non-zero forecasts only for the summer months, yet many methods will usually provide non-zero forecasts for all months under consideration. This was the motivation for proposing and testing a new forecasting technique applicable to seasonal items. In the article six methods were applied to construct separate forecasting systems: Croston's, SBA (Syntetos-Boylan Approximation), TSB (Teunter, Syntetos, Babai), MA (Moving Average), SES (Simple Exponential Smoothing) and SESAP (Simple Exponential Smoothing for Analogous subPeriods). The latter method (SESAP) is the author's proposal, dedicated to companies facing the problem of seasonal items. Analogous subperiods are understood as the same subperiods in each year, for example the same months in each year. A data set from a real company, containing monthly time series for about nine thousand products, was used to apply all of the above forecasting procedures. Forecast accuracy was tested by means of both parametric and non-parametric measures: the scaled mean error and the scaled root mean squared error were used to check bias and efficiency, and the mean absolute scaled error and the shares of best forecasts were also estimated. The general conclusion is that in the analyzed company a forecasting system should be based on two methods, TSB and SESAP, with the latter applied only to seasonal items (products sold only in specific subperiods). It also turned out that Croston's method and SBA work worse than much simpler methods such as SES or MA. The presented analysis may be helpful for enterprises facing the problem of forecasting intermittent items, including seasonal intermittent items.
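For reference, the sketch below gives minimal one-step-ahead implementations of three of the methods compared in the study (Croston, SBA and TSB); the smoothing constants, initial values and toy demand series are assumptions, and the SESAP variant proposed in the paper is not reproduced.

```python
# One-step-ahead forecasts for intermittent demand: Croston, SBA and TSB.
import numpy as np

def croston(y, alpha=0.1, sba=False):
    """Croston's method; with sba=True applies the Syntetos-Boylan correction."""
    z, p, q = None, None, 1          # smoothed size, smoothed interval, periods since last demand
    f = np.full(len(y), np.nan)
    for t, d in enumerate(y):
        if z is not None:
            f[t] = (z / p) * (1 - alpha / 2 if sba else 1.0)
        if d > 0:
            if z is None:
                z, p = d, q                      # initialize at first observed demand
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return f

def tsb(y, alpha=0.1, beta=0.1):
    """TSB: smooths demand probability every period and size only when demand occurs."""
    prob = 0.5                                              # crude initial demand probability
    z = float(np.mean(y[y > 0])) if np.any(y > 0) else 1.0  # crude initial size
    f = np.full(len(y), np.nan)
    for t, d in enumerate(y):
        f[t] = prob * z
        prob = prob + beta * ((d > 0) - prob)
        if d > 0:
            z = z + alpha * (d - z)
    return f

y = np.array([0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 1, 0, 0, 0, 5])
print(croston(y), croston(y, sba=True), tsb(y), sep="\n")
```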

3.
Theil's method can be applied to judgemental forecasts to remove systematic errors. However, under conditions of change the method can reduce forecast accuracy by correcting for biases that no longer apply. In these circumstances it may be worth applying an adaptive correction model that attaches greater weight to more recent observations. This paper reports on the application of Theil's original method and a discounted weighted regression form of Theil's method (DWR) to the judgemental extrapolations made by 100 subjects in an experiment. Extrapolations were made for stationary and non-stationary series at both low and high noise levels. The results suggest that DWR can lead to significant improvements in accuracy where the underlying time-series signal becomes more discernible over time or where the signal is subject to change. Theil's method appears to be most effective when a series has a high level of noise. However, while Theil's corrections seriously reduced the accuracy of judgemental extrapolations for some series, the DWR method performed well under a wide range of conditions and never significantly degraded the original forecasts. © 1997 by John Wiley & Sons, Ltd.
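To make the correction concrete, the following sketch applies a Theil-style bias correction (regressing actuals on judgemental forecasts and using the fitted line to adjust a new forecast) alongside a discounted weighted regression variant that down-weights older observations; the discount factor, the simulated data and the example forecast are assumptions rather than the paper's settings.

```python
# Theil-style correction of judgemental forecasts: actual_t = a + b * forecast_t + e_t.
# The DWR variant weights observations by beta**age so that recent errors dominate.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
forecasts = rng.normal(100, 10, n)
actuals = 5.0 + 0.9 * forecasts + rng.normal(0, 5, n)    # systematically biased judgemental forecasts

X = sm.add_constant(forecasts)
theil = sm.OLS(actuals, X).fit()                         # Theil's original correction

beta = 0.95                                              # assumed discount factor
weights = beta ** np.arange(n - 1, -1, -1)               # most recent observation gets weight 1
dwr = sm.WLS(actuals, X, weights=weights).fit()          # discounted weighted regression variant

new_forecast = 110.0                                     # hypothetical next judgemental forecast
print("Theil-corrected:", float(theil.predict([[1.0, new_forecast]])[0]))
print("DWR-corrected:  ", float(dwr.predict([[1.0, new_forecast]])[0]))
```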

4.
A number of researchers have developed models that use test market data to generate forecasts of a new product's performance. However, most of these models have ignored the effects of marketing covariates. In this paper we examine what impact these covariates have on a model's forecasting performance and explore whether their presence enables us to reduce the length of the model calibration period (i.e., shorten the duration of the test market). We develop from first principles a set of models that enable us to systematically explore the impact of various model ‘components’ on forecasting performance, and we also examine how the length of the test market affects forecasting performance. We find that it is critically important to capture consumer heterogeneity, and that the inclusion of covariate effects can improve forecast accuracy, especially for models calibrated on fewer than 20 weeks of data. Copyright © 2003 John Wiley & Sons, Ltd.

5.
We develop a model to forecast the Federal Open Market Committee's (FOMC's) interest-rate-setting behavior within the nonstationary discrete choice framework of Hu and Phillips (2004). We find that if the model selection criterion is strictly empirical, correcting for nonstationarity is extremely important, whereas it may not be an issue if one has an a priori model. Evaluating an array of models in terms of their out-of-sample forecasting ability, we find that those favored by the in-sample criteria perform worst, while theory-based models perform best. The best model for forecasting the FOMC's behavior is a forward-looking Taylor rule model. Copyright © 2008 John Wiley & Sons, Ltd.
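For orientation, the textbook backward-looking Taylor rule is shown below; the paper's preferred specification is forward-looking and embedded in a discrete choice model, so this is only the conventional illustrative form with the usual 0.5 weights.

```python
# Textbook Taylor rule: i_t = r* + pi_t + 0.5*(pi_t - pi*) + 0.5*gap_t.
def taylor_rate(inflation, output_gap, r_star=2.0, pi_star=2.0):
    """Suggested nominal policy rate (percent), using the conventional 0.5/0.5 weights."""
    return r_star + inflation + 0.5 * (inflation - pi_star) + 0.5 * output_gap

# Example: 3% inflation and a 1% positive output gap imply a 6% policy rate.
print(taylor_rate(3.0, 1.0))
```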

6.
This study examines whether the evaluation of a bankruptcy prediction model should take into account the total cost of misclassification. For this purpose, we introduce and apply a validity measure for credit scoring that is based on the total cost of misclassification. Specifically, we use comprehensive data from the annual financial statements of a sample of German companies and analyze the total cost of misclassification by comparing a generalized linear model and a generalized additive model with regard to their ability to predict a company's probability of default. On the basis of these data, the validity measure we introduce shows that, compared to generalized linear models, generalized additive models can substantially reduce the extent of misclassification and the total cost that it entails. The validity measure is informative and supports the argument that generalized additive models should be preferred, even though they are more complex than generalized linear models. We conclude that balancing a model's validity and complexity requires taking the total cost of misclassification into account.
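The core of a cost-based validity measure can be written down directly: weight false negatives and false positives by their respective costs and evaluate the model at the cost-minimizing cutoff. The sketch below does this for a logistic (GLM-type) probability-of-default model on simulated data; the cost ratio, the predictors and the data are assumptions, and the paper's GAM comparison is not reproduced.

```python
# Total cost of misclassification for a PD model: cost = c_fn*FN + c_fp*FP,
# evaluated at the cutoff that minimizes that total cost.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000
X = rng.normal(size=(n, 3))                              # illustrative financial ratios
default = rng.binomial(1, 1 / (1 + np.exp(-(X @ np.array([1.2, -0.8, 0.5]) - 2.0))))

pd_hat = LogisticRegression().fit(X, default).predict_proba(X)[:, 1]

c_fn, c_fp = 10.0, 1.0                                   # a missed default is assumed 10x as costly

def total_cost(p, y, cutoff):
    pred = (p >= cutoff).astype(int)
    fn = np.sum((pred == 0) & (y == 1))                  # defaults classified as non-defaults
    fp = np.sum((pred == 1) & (y == 0))                  # non-defaults classified as defaults
    return c_fn * fn + c_fp * fp

cutoffs = np.linspace(0.01, 0.99, 99)
costs = [total_cost(pd_hat, default, c) for c in cutoffs]
best = cutoffs[int(np.argmin(costs))]
print(f"cost-minimizing cutoff {best:.2f}, total cost {min(costs):.0f}")
```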

7.
At the time of Heinrich Hertz's premature death in 1894, he was regarded as one of the leading scientists of his generation. However, the posthumous publication of his treatise on the foundations of physics, Principles of Mechanics, presents a curious historical situation. Although Hertz's book was widely praised and admired, it was also met with a general sense of dissatisfaction. Almost all of Hertz's contemporaries criticized Principles for the lack of any plausible way to construct a mechanism from the “hidden masses” that are particularly characteristic of Hertz's framework. This issue seemed especially glaring given the expectation that Hertz's work might lead to a model of the underlying workings of the ether. In this paper I seek an explanation for why Hertz seemed so unperturbed by the difficulties of constructing such a mechanism. In arriving at this explanation, I explore how the development of Hertz's image-theory of representation framed the project of Principles. The image-theory brings with it an austere view of the “essential content” of mechanics, requiring only a kind of structural isomorphism between symbolic representations and target phenomena. I argue that bringing this into view makes clear why Hertz felt no need to work out the kinds of mechanisms that many of his readers looked for. Furthermore, I argue that a crucial role of Hertz's hypothesis of hidden masses has been widely overlooked. Far from acting as a proposal for the underlying structure of the ether, I show that Hertz's hypothesis ruled out knowledge of such underlying structure.

8.
The increasing amount of attention paid to longevity risk and funding for old age has created the need for precise mortality models and accurate mortality forecasts. Orthogonal polynomials have been widely used in technical fields, and there have also been applications in mortality modeling. In this paper we adopt a flexible functional-form approach using two-dimensional Legendre orthogonal polynomials to fit and forecast mortality rates. Unlike some existing mortality models in the literature, the proposed model does not impose any restrictions on the age, time or cohort structure of the data and thus allows different model designs for different countries' mortality experience. We conduct an empirical study using male mortality data from a range of developed countries and explore the possibility of using age-time effects to capture cohort effects in the underlying mortality data. We find that, for some countries, cohort dummies still need to be incorporated into the model. Moreover, when comparing the proposed model with well-known mortality models in the literature, we find that our model provides a comparable fit with a much smaller number of parameters. Based on 5-year-ahead mortality forecasts, we conclude that the proposed model improves the overall accuracy of future mortality projections. Copyright © 2016 John Wiley & Sons, Ltd.
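As an illustration of the functional-form idea (not the authors' exact specification), the sketch below fits a two-dimensional Legendre polynomial surface to log mortality rates on a synthetic age-year grid by ordinary least squares; the polynomial degrees and the simulated rates are assumptions.

```python
# Fit log m(x,t) with a 2-D Legendre polynomial basis over rescaled age and year.
import numpy as np
from numpy.polynomial import legendre as L

ages = np.arange(50, 91)
years = np.arange(1970, 2011)
A, Y = np.meshgrid(ages, years, indexing="ij")

# Synthetic Gompertz-like surface with a linear time trend, plus noise.
log_m = (-9.0 + 0.09 * (A - 50) - 0.015 * (Y - 1970)
         + 0.05 * np.random.default_rng(3).normal(size=A.shape))

# Rescale age and year to [-1, 1], the natural domain of Legendre polynomials.
a = 2 * (A.ravel() - ages.min()) / (ages.max() - ages.min()) - 1
t = 2 * (Y.ravel() - years.min()) / (years.max() - years.min()) - 1

deg = (3, 2)                                   # illustrative polynomial degrees in age and time
V = L.legvander2d(a, t, deg)                   # design matrix of Legendre basis products
coef, *_ = np.linalg.lstsq(V, log_m.ravel(), rcond=None)

fitted = (V @ coef).reshape(A.shape)
print("number of parameters:", coef.size)
print("in-sample RMSE:", np.sqrt(np.mean((fitted - log_m) ** 2)).round(4))
```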

9.
In this paper, we argue that, contra Strevens (2013), understanding in the sciences is sometimes partially constituted by the possession of abilities; hence, it is not (in such cases) exhausted by the understander's bearing a particular psychological or epistemic relationship to some set of structured propositions. Specifically, the case will be made that one does not really understand why a modeled phenomenon occurred unless one has the ability to actually work through (meaning run and grasp at each step) a model simulation of the underlying dynamic.

10.
11.
The difficulty of modelling inflation and the importance of discovering the underlying data-generating process of inflation are reflected in an extensive literature on inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies for forecasting US inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric least absolute shrinkage and selection operator (LASSO) and the machine-learning support vector regression (SVR) method. SVR has not previously been used in inflation forecasting with the term spread as a regressor. We use a long monthly dataset spanning the period 1871:1–2015:3 that covers the entire history of inflation in the US economy. For comparison purposes we also use ordinary least squares regression models as a benchmark. In order to evaluate the contribution of the term spread to inflation forecasting in different time periods, we measure the out-of-sample forecasting performance of all models using rolling window regressions. Across various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the estimation method. We therefore conclude that term spread models are not more accurate than autoregressive models for inflation forecasting. Copyright © 2016 John Wiley & Sons, Ltd.
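A minimal version of the rolling-window comparison can be sketched with scikit-learn's Lasso and SVR applied to lagged inflation plus a term-spread regressor; the window length, lag order, hyperparameters and simulated data are assumptions and do not reproduce the paper's 1871-2015 dataset.

```python
# Rolling-window one-step-ahead inflation forecasts with LASSO and SVR,
# using lagged inflation plus the term spread as predictors.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(4)
T = 400
spread = rng.normal(1.0, 0.5, T)
infl = np.zeros(T)
for t in range(1, T):                      # toy AR(1) inflation process with a small spread effect
    infl[t] = 0.02 + 0.7 * infl[t - 1] + 0.1 * spread[t - 1] + rng.normal(0, 0.05)

lags = 3
X = np.column_stack([np.roll(infl, k) for k in range(1, lags + 1)] + [np.roll(spread, 1)])
X, y = X[lags:], infl[lags:]

window = 120                               # assumed rolling estimation window
models = {"lasso": Lasso(alpha=0.001), "svr": SVR(kernel="rbf", C=1.0, epsilon=0.01)}
errors = {name: [] for name in models}
for start in range(len(y) - window - 1):
    tr = slice(start, start + window)
    for name, m in models.items():
        m.fit(X[tr], y[tr])
        pred = m.predict(X[start + window].reshape(1, -1))[0]
        errors[name].append(y[start + window] - pred)

for name, e in errors.items():
    print(name, "out-of-sample RMSE:", np.sqrt(np.mean(np.square(e))).round(4))
```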

12.
Value-at-Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts, so every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE-50 index, at the 95% and 99% levels. We employ a two-stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive rejection by the tests, we perform a second-stage filtering of the surviving models using subjective loss functions. This two-stage model selection procedure proves useful in choosing a VaR model, while only incompletely addressing the problem. These case studies give us some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
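The first-stage idea of screening VaR models for statistical accuracy can be illustrated with Kupiec's unconditional coverage (proportion-of-failures) test; the paper does not spell out which tests it applies, so this is a generic example, and the simulated returns and the normal-model VaR are assumptions.

```python
# Kupiec proportion-of-failures backtest for a 99% VaR forecast.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
returns = rng.standard_t(df=5, size=1000) * 0.01            # heavy-tailed toy returns
var_99 = -np.quantile(rng.normal(0, 0.01, 100000), 0.01)    # VaR from a (mis-specified) normal model

p = 0.01
n = len(returns)
exceptions = int(np.sum(returns < -var_99))
pi_hat = exceptions / n

# Likelihood-ratio statistic for H0: true exception rate equals p (chi-square with 1 df).
lr = -2 * ((n - exceptions) * np.log(1 - p) + exceptions * np.log(p)
           - (n - exceptions) * np.log(1 - pi_hat) - exceptions * np.log(pi_hat))
print(f"exceptions: {exceptions}/{n}, LR = {lr:.2f}, p-value = {1 - chi2.cdf(lr, df=1):.3f}")
```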

13.
Johansen's test for cointegration is applied to Litterman's original six-variable Bayesian vector autoregression (BVAR) model to obtain vector error correction mechanism (VECM) and Bayesian error correction (BECM) versions of the model. The Brock, Dechert, and Scheinkman (BDS) test for independence, from the nonlinear dynamics literature, is then applied to the error structures of each estimated equation of the BECM and VECM models, plus two BVAR versions of the model. The results show that none of the models produce independent and identically distributed (IID) errors for all six equations. However, the BDS results suggest eliminating the Bayesian prior from the BECM model, given that the univariate VECM errors are IID in five equations, compared to only two or three equations under the three Bayesian restricted models. These results, combined with previous evidence on the superior forecasting performance of BECM over ECM models, suggest future experimentation with less restrictive BVAR priors, BECM models corrected for heteroscedasticity, or hybrid specifications based on the nonlinear dynamics literature.
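Both of the test statistics mentioned above are available in statsmodels; the sketch below runs the Johansen trace test on a small simulated system and the BDS test on one equation's residuals. The three-variable toy data and lag orders are assumptions and do not reproduce Litterman's six-variable model.

```python
# Johansen cointegration trace test and BDS independence test on VAR residuals.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import bds
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(6)
T = 300
common = np.cumsum(rng.normal(size=T))                 # shared stochastic trend
data = np.column_stack([
    common + rng.normal(scale=1.0, size=T),
    0.5 * common + rng.normal(scale=1.0, size=T),
    np.cumsum(rng.normal(size=T)),                     # independent random walk
])

jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:     ", jres.lr1.round(2))
print("95% critical values:  ", jres.cvt[:, 1].round(2))

resid = VAR(np.diff(data, axis=0)).fit(maxlags=1).resid[:, 0]
stat, pval = bds(resid, max_dim=2)
print("BDS statistic:", np.round(stat, 3), "p-value:", np.round(pval, 3))
```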

14.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data, but sceptical practitioners and statisticians still ask whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for testing for misspecified models and for the statistical significance of the estimated parameters, which makes it difficult to assess a model's significance and raises the possibility that any reported short-term successes are due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance, and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating the sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation demonstrates how the methodology can be applied to real-life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted with multiple linear regression. The results indicate the presence of non-linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
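The parametric sampling idea can be illustrated generically: draw parameter vectors from the estimated asymptotic normal distribution of the fitted model's parameters and recompute the statistic of interest for each draw. The tiny one-hidden-unit network, the assumed covariance matrix and the sensitivity statistic below are illustrative assumptions, not the authors' exact scheme.

```python
# Parametric sampling: approximate the sampling distribution of a statistic
# (here, the average sensitivity of a tiny neural model to input x1)
# by drawing parameters from N(theta_hat, Sigma_hat).
import numpy as np

rng = np.random.default_rng(7)

def net(X, theta):
    """One hidden unit with tanh activation: f(x) = b + w_out * tanh(x @ w_in)."""
    w_in, w_out, b = theta[:2], theta[2], theta[3]
    return b + w_out * np.tanh(X @ w_in)

# Pretend these came from estimation: point estimate and asymptotic covariance (assumed values).
theta_hat = np.array([0.8, 0.1, 1.5, 0.2])
sigma_hat = np.diag([0.05, 0.05, 0.1, 0.02]) ** 2

X = rng.normal(size=(500, 2))
eps = 1e-3

def sensitivity_x1(theta):
    Xp = X.copy()
    Xp[:, 0] += eps                                   # finite-difference perturbation of x1
    return np.mean((net(Xp, theta) - net(X, theta)) / eps)

draws = rng.multivariate_normal(theta_hat, sigma_hat, size=2000)
stats = np.array([sensitivity_x1(th) for th in draws])
lo, hi = np.percentile(stats, [2.5, 97.5])
print(f"sensitivity to x1: point {sensitivity_x1(theta_hat):.3f}, 95% band [{lo:.3f}, {hi:.3f}]")
```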

15.
An underlying assumption in Multivariate Singular Spectrum Analysis (MSSA) is that the time series are governed by a linear recurrent continuation. However, in the presence of a structural break, multiple series can be transferred from one homogeneous state to another over a comparatively short time, breaking this assumption. As a consequence, forecasting performance can degrade significantly. In this paper, we propose a state-dependent model that incorporates the movement of states into the linear recurrent formula, called the State-Dependent Multivariate SSA (SD-MSSA) model. The proposed model's reliability in the presence of a structural break is examined in an empirical analysis covering both synthetic and real data. Comparison with the standard MSSA, BVAR, VAR and VECM models shows that the proposed model significantly outperforms all of them.
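For orientation, the sketch below carries out the basic univariate SSA steps that MSSA builds on: embedding the series into a trajectory matrix, taking its SVD, and reconstructing the signal from the leading components by diagonal averaging. It is not the SD-MSSA model itself; the window length, rank and toy series are assumptions.

```python
# Basic SSA: embedding, SVD, and rank-r reconstruction by diagonal averaging.
import numpy as np

def ssa_reconstruct(y, L, r):
    N = len(y)
    K = N - L + 1
    X = np.column_stack([y[i:i + L] for i in range(K)])    # L x K trajectory (Hankel) matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :r] * s[:r]) @ Vt[:r]                       # rank-r approximation of the trajectory matrix
    rec = np.zeros(N)                                      # diagonal averaging back to a series
    counts = np.zeros(N)
    for j in range(K):
        rec[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return rec / counts

t = np.arange(200)
y = np.sin(2 * np.pi * t / 24) + 0.02 * t + np.random.default_rng(8).normal(0, 0.3, 200)
trend_plus_cycle = ssa_reconstruct(y, L=48, r=4)           # assumed window length and rank
print(trend_plus_cycle[:5].round(3))
```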

16.
This paper presents a comparative analysis of the sources of error in forecasts for the UK economy published over a recent four-year period by four independent groups. The analysis rests on the archive of the original forecasts, together with all their accompanying assumptions and adjustments, held at the ESRC Macroeconomic Modelling Bureau. A method of decomposing observed forecast errors so as to distinguish the contributions of forecaster and model is set out; the impact of treating future expectations in a ‘model-consistent’ or ‘rational’ manner is specifically considered. The results show that the forecaster's adjustments make a substantial contribution to forecast performance, a good part of which comes from adjustments that bring the model on track at the start of the forecast period. The published ex-ante forecasts are usually superior to pure model-based ex-post forecasts, whose performance indicates some misspecification of the underlying models.

17.
In natural kind debates, Boyd's famous Homeostatic Property Cluster (HPC) theory is often misconstrued in two ways: not only is it taken to provide a normative standard for natural kinds, but it is also taken to require that the homeostatic mechanisms underlying nomological property clusters be uniform. My argument for the illegitimacy of both overgeneralizations, on systematic as well as exegetical grounds, is based on the misconstrued view's failure to account for functional kinds in science. I illustrate the combination of these two misconstruals with recent entries into the natural kind debate about emotions. Finally, I examine and reject Stich's “Kornblith-Devitt method” as a potential justification of these misconstruals.

18.
The physiologist Claude Bernard was an important nineteenth-century methodologist of the life sciences. Here I place his thought in the context of the history of the vera causa standard, arguably the dominant epistemology of science in the eighteenth and early nineteenth centuries. Its proponents held that in order for a cause to be legitimately invoked in a scientific explanation, the cause must be shown by direct evidence to exist and to be competent to produce the effects ascribed to it. Historians of scientific method have argued that in the course of the nineteenth century the vera causa standard was superseded by a more powerful consequentialist epistemology, which also admitted indirect evidence for the existence and competence of causes. The prime example of this is the luminiferous ether, which was widely accepted, in the absence of direct evidence, because it entailed verified observational consequences and, in particular, successful novel predictions. According to the received view, the vera causa standard's demand for direct evidence of existence and competence came to be seen as an impracticable and needless restriction on the scope of legitimate inquiry into the fine structure of nature. The Mill-Whewell debate has been taken to exemplify this shift in scientific epistemology, with Whewell's consequentialism prevailing over Mill's defense of the older standard. However, Bernard's reflections on biological practice challenge the received view. His methodology marked a significant extension of the vera causa standard that made it both powerful and practicable. In particular, Bernard emphasized the importance of detection procedures in establishing the existence of unobservable entities. Moreover, his sophisticated notion of controlled experimentation permitted inferences about competence even in complex biological systems. In the life sciences, the vera causa standard began to flourish precisely around the time of its alleged abandonment.

19.
Social situations, the object of the social sciences, are complex and unique: they contain so many variable aspects that they cannot be reproduced, and it is even difficult to experience two situations that are alike in many respects. The past experience that serves as social scientists' background knowledge for intervening in an existing situation is poor compared to what a traditional epistemologist would consider ideal. One way of dealing with the variable and insufficient background of social scientists is by means of models. But, then, how should we characterize social scientific models? This paper examines Otto Neurath's scientific utopianism as an attempt to deal with this problem. Neurath proposes that social scientists work with utopias: broad imaginative plans that coordinate a multitude of features of a social situation. This notion can be used in current debates in philosophy of science because utopias, in Neurath's sense, are comparable to models and nomological machines in Nancy Cartwright's conception. A model-based view of science emphasizes that scientists learn from the repeated operation of such abstract entities, just as they learn from the repetition of experiments in a laboratory. Hence this approach suggests an approximation between the natural and the social sciences, as well as between science and utopian literature. This is exemplified by an analysis of the literary dystopia We, written by Yevgeny Zamyatin, showing that reasoning from and debating about utopian writings, even if fictional and pessimistic, creates phenomena of valuation, which are fundamental for constituting a background of experiences in the social sciences.

20.
A unified method to detect and handle innovational and additive outliers, and permanent and transient level changes, has been presented by R. S. Tsay. N. S. Balke found that the presence of level changes may lead to misidentification and loss of test power, and suggested augmenting Tsay's procedure with an additional disturbance search based on a white-noise model. While Tsay allows level changes to be either permanent or transient, Balke considers only the former type. Based on simulated series with transient level changes, this paper investigates how Balke's white-noise model performs both when transient change is omitted from the model specification and when it is included. Our findings indicate that the alleged misidentification of permanent level changes may be influenced by the restrictions imposed by Balke. When these restrictions are removed, Balke's procedure outperforms Tsay's in detecting changes in the data-generating process. Copyright © 2000 John Wiley & Sons, Ltd.
