Similar documents
20 similar documents found.
1.
A parsimonious method of exponential smoothing is introduced for time series generated from a combination of local trends and local seasonal effects. It is compared with the additive version of the Holt–Winters method of forecasting on a standard collection of real time series. Copyright © 2001 John Wiley & Sons, Ltd.
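For reference, a minimal sketch of the additive Holt–Winters recursions that serve as the comparison benchmark; the initialisation scheme and the smoothing parameters alpha, beta and gamma are illustrative assumptions, not values taken from the paper.

```python
def holt_winters_additive(y, m, alpha=0.2, beta=0.1, gamma=0.1, h=4):
    """Additive Holt-Winters: level + trend + additive seasonal component.
    y: list of observations (needs at least two full seasons), m: season length."""
    level = sum(y[:m]) / m
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / m ** 2   # average per-period change between seasons
    season = [y[i] - level for i in range(m)]
    for t in range(m, len(y)):
        prev_level = level
        level = alpha * (y[t] - season[t % m]) + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * season[t % m]
    n = len(y)
    # h-step-ahead forecasts: extrapolated trend plus the matching seasonal index
    return [level + k * trend + season[(n + k - 1) % m] for k in range(1, h + 1)]
```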

2.
The problem of medium‐ to long‐term sales forecasting raises a number of requirements that must be suitably addressed in the design of the employed forecasting methods. These include long forecasting horizons (up to 52 periods ahead), a high number of quantities to be forecasted, which limits the possibility of human intervention, frequent introduction of new articles (for which no past sales are available for parameter calibration) and withdrawal of running articles. The problem has been tackled by use of a damped‐trend Holt–Winters method as well as feedforward multilayer neural networks (FMNNs) applied to sales data from two German companies. Copyright © 2005 John Wiley & Sons, Ltd.

3.
The paper is devoted to robust modifications of exponential smoothing for time series with outliers or long-tailed distributions. Classical exponential smoothing applied to such time series is sensitive to the presence of outliers or long-tailed distributions and may give inadequate smoothing and forecasting results. First, simple and double exponential smoothing in the L1 norm (i.e. based on the least absolute deviations) are discussed in detail. Then, general exponential smoothing is made robust, replacing the least squares approach by M-estimation in such a way that the recursive character of the final formulas is preserved. The paper gives simple algorithmic procedures which preserve advantageous features of classical exponential smoothing and, in addition, which are less sensitive to outliers. Robust versions are compared numerically with classical ones.
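A minimal sketch of the general idea of robustified exponential smoothing: the one-step-ahead error is passed through a bounded Huber-type psi function before it updates the level, so a single outlier has limited influence. This only illustrates the M-estimation idea; the recursive formulas, scale updating and parameter choices of the paper are not reproduced here, and the fixed scale below is an assumption.

```python
def robust_ses(y, alpha=0.3, k=1.5, scale=1.0):
    """Simple exponential smoothing with a Huber-clipped update: outliers have bounded influence."""
    def huber(e, c):
        return max(-c, min(c, e))              # bounded psi function
    level = y[0]
    for obs in y[1:]:
        error = obs - level                    # one-step-ahead forecast error
        level = level + alpha * scale * huber(error / scale, k)
    return level                               # also the one-step-ahead forecast
```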

4.
Three general classes of state space models are presented, using the single source of error formulation. The first class is the standard linear model with homoscedastic errors, the second retains the linear structure but incorporates a dynamic form of heteroscedasticity, and the third allows for non‐linear structure in the observation equation as well as heteroscedasticity. These three classes provide stochastic models for a wide variety of exponential smoothing methods. We use these classes to provide exact analytic (matrix) expressions for forecast error variances that can be used to construct prediction intervals one or multiple steps ahead. These formulas are reduced to non‐matrix expressions for 15 state space models that underlie the most common exponential smoothing methods. We discuss relationships between our expressions and previous suggestions for finding forecast error variances and prediction intervals for exponential smoothing methods. Simpler approximations are developed for the more complex schemes and their validity examined. The paper concludes with a numerical example using a non‐linear model. Copyright © 2005 John Wiley & Sons, Ltd.
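For the simplest member of the first (linear, homoscedastic) class, the single-source-of-error local level model underlying simple exponential smoothing, the h-step-ahead forecast error variance reduces to the non-matrix expression sigma^2 * (1 + (h − 1) * alpha^2). A minimal sketch of a prediction interval built from it, assuming Gaussian errors and a 95% level for illustration:

```python
import math

def ses_prediction_interval(point_forecast, sigma2, alpha, h, z=1.96):
    """95% prediction interval for simple exponential smoothing in its
    single-source-of-error local level form: Var(e_h) = sigma2 * (1 + (h - 1) * alpha**2)."""
    half_width = z * math.sqrt(sigma2 * (1.0 + (h - 1) * alpha ** 2))
    return point_forecast - half_width, point_forecast + half_width
```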

5.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This dimension‐reduction ability also allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables consisting of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

6.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi‐step‐ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m‐step‐ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well, both in terms of the accuracy of the results and in the ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
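A minimal sketch of the kind of numerical recursion such a procedure rests on: for an illustrative exponential autoregressive model of order one with Gaussian errors, the m-step-ahead predictive density is built by repeatedly integrating the transition density against the current predictive density on a grid (trapezoidal quadrature). The model, grid and quadrature rule are assumptions for illustration, not the algorithm of the paper.

```python
import numpy as np

def mstep_predictive_density(y_last, m, sigma=1.0, grid=np.linspace(-12.0, 12.0, 481)):
    """Approximate m-step-ahead predictive density of an exponential AR(1):
    y_t = (0.5 + 0.9 * exp(-y_{t-1}**2)) * y_{t-1} + e_t,  e_t ~ N(0, sigma^2)."""
    f = lambda x: (0.5 + 0.9 * np.exp(-x ** 2)) * x          # illustrative nonlinear mean function

    def normal_pdf(x, mu):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    density = normal_pdf(grid, f(y_last))                    # exact 1-step-ahead density
    for _ in range(m - 1):
        # Chapman-Kolmogorov step: p_{k+1}(y) = integral of p(y | x) * p_k(x) dx
        transition = normal_pdf(grid[:, None], f(grid)[None, :])   # rows: y, columns: x
        density = np.trapz(transition * density[None, :], grid, axis=1)
    return grid, density
```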

7.
‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large‐scale comparison of time series forecasting methods including the Bayesian. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts Bayesian with other time series methods.
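A minimal sketch of the model-weighting step described above, under the simplifying assumption of Gaussian one-step predictive densities: each model's probability is updated by Bayes' theorem from how well it predicted the new observation, and the final forecast is the probability-weighted sum of the conditional forecasts.

```python
import math

def bayes_model_average(prior, pred_means, pred_vars, y_new, next_forecasts):
    """One step of a multi-process scheme: update model probabilities by Bayes' theorem
    from each model's one-step predictive density at y_new, then combine the
    conditional forecasts for the next period with the posterior weights."""
    def normal_pdf(x, mu, var):
        return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2.0 * math.pi * var)

    posterior = [w * normal_pdf(y_new, mu, v) for w, mu, v in zip(prior, pred_means, pred_vars)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]                        # posterior model probabilities
    combined = sum(p * f for p, f in zip(posterior, next_forecasts))  # weighted final forecast
    return posterior, combined
```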

8.
Hierarchical time series arise in various fields such as manufacturing and services when the products or services can be hierarchically structured. “Top-down” and “bottom-up” forecasting approaches are often used for forecasting such hierarchical time series. In this paper, we develop a new hybrid approach (HA) with step-size aggregation for hierarchical time series forecasting. The new approach is a weighted average of the two classical approaches, with the weights chosen optimally for all the series at each level of the hierarchy so as to minimize the variance of the forecast errors. Selecting the weights independently for all the series at each level, however, makes the HA forecasts inconsistent when aggregated across the hierarchy. To address this issue, we introduce a step-size aggregate factor that represents the relationship between the forecasts of two consecutive levels of the hierarchy. The key advantage of the proposed HA is that it inherently captures the structure of the hierarchy by combining the two hierarchical approaches, rather than forecasting all the series at each level independently. We demonstrate the performance of the new approach by applying it to the monthly data of the ‘Industrial’ category of the M3-Competition as well as to Pakistan energy consumption data.
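A minimal sketch of the basic combination for a two-level hierarchy: bottom-up uses the child forecasts directly, top-down splits the total forecast by historical proportions, and the hybrid is a weighted average of the two. The single weight w is a free illustrative parameter; it stands in for, but is not, the variance-minimizing, step-size-adjusted weights developed in the paper.

```python
import numpy as np

def hybrid_child_forecasts(child_forecasts, total_forecast, hist_proportions, w=0.5):
    """Weighted average of bottom-up (the child forecasts themselves) and top-down
    (the total forecast split by historical proportions) for one parent node."""
    bottom_up = np.asarray(child_forecasts, dtype=float)
    top_down = total_forecast * np.asarray(hist_proportions, dtype=float)  # proportions sum to 1
    return w * bottom_up + (1.0 - w) * top_down
```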

9.
Value‐at‐risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting a portfolio's VaR. We employ the Gram‐Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms other estimates that are generated by both constant and time‐varying higher‐moments models. Copyright © 2009 John Wiley & Sons, Ltd.
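A minimal sketch of how a VaR quantile can be read off a Gram–Charlier expanded density: the standard normal density is augmented with Hermite-polynomial terms in skewness and excess kurtosis, and the alpha-quantile is located by numerical integration. The clipping of negative density values and the fixed grid are pragmatic illustrative choices, not part of the paper's estimation scheme.

```python
import numpy as np

def gram_charlier_var(mu, sigma, skew, ex_kurt, alpha=0.01):
    """alpha-quantile (VaR) of returns whose standardized density is the Gram-Charlier
    expansion f(z) = phi(z) * [1 + skew/6 * He3(z) + ex_kurt/24 * He4(z)]."""
    z = np.linspace(-10.0, 10.0, 20001)
    phi = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    he3 = z ** 3 - 3.0 * z                       # Hermite polynomials
    he4 = z ** 4 - 6.0 * z ** 2 + 3.0
    density = phi * (1.0 + skew / 6.0 * he3 + ex_kurt / 24.0 * he4)
    density = np.clip(density, 0.0, None)        # the raw expansion can dip below zero
    cdf = np.cumsum(density) * (z[1] - z[0])
    cdf /= cdf[-1]                               # renormalize after clipping
    z_alpha = z[np.searchsorted(cdf, alpha)]
    return mu + sigma * z_alpha                  # VaR as a return quantile
```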

10.
The versatility of one‐dimensional discrete wavelet analysis, combined with wavelet and Burg extensions, for forecasting financial time series with distinctive properties is illustrated with market data. Any time series of financial assets may be decomposed into simpler signals, called approximations and details, in the framework of one‐dimensional discrete wavelet analysis. The simplified signals are recomposed after extension. The final output is the forecasted time series, which is compared to observed data. Results show the pertinence of adding spectrum analysis to the battery of tools used by econometricians and quantitative analysts for the forecasting of economic or financial time series.

11.
This article studies Man and Tiao's (2006) low‐order autoregressive fractionally integrated moving‐average (ARFIMA) approximation to Tsai and Chan's (2005b) limiting aggregate structure of the long‐memory process. In matching the autocorrelations, we demonstrate that the approximation works well, especially for larger d values. When computing autocorrelations over long lags for larger d values, the exact formula may run into numerical problems; the ARFIMA(0, d, d−1) model then provides a useful alternative that computes the autocorrelations as a very close approximation. In forecasting future aggregates, we demonstrate that the ARFIMA(0, d, d−1) model performs very close to the exact aggregate structure. In practice, this provides a justification for the use of a low‐order ARFIMA model in predicting future aggregates of a long‐memory process. Copyright © 2008 John Wiley & Sons, Ltd.
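For reference, in the pure fractional-noise case ARFIMA(0, d, 0) the autocorrelations have the closed form rho_k = Γ(k + d)Γ(1 − d) / (Γ(k − d + 1)Γ(d)); the sketch below evaluates it on the log-gamma scale to avoid the numerical problems at long lags mentioned above. The moving-average part of the ARFIMA(0, d, d−1) approximation is not reproduced here.

```python
import numpy as np
from scipy.special import gammaln

def arfima_0d0_acf(d, max_lag):
    """Autocorrelations of ARFIMA(0, d, 0) fractional noise (0 < d < 0.5),
    evaluated via log-gamma to remain stable at very long lags."""
    k = np.arange(1, max_lag + 1)
    log_rho = gammaln(k + d) + gammaln(1.0 - d) - gammaln(k - d + 1.0) - gammaln(d)
    return np.exp(log_rho)
```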

12.
We introduce a long‐memory autoregressive conditional Poisson (LMACP) model to model highly persistent time series of counts. The model is applied to forecasting quoted bid–ask spreads, a key parameter in stock trading operations. It is shown that the LMACP nicely captures salient features of bid–ask spreads like the strong autocorrelation and the discreteness of observations. We discuss theoretical properties of LMACP models and evaluate rolling‐window forecasts of quoted bid–ask spreads for stocks traded at NYSE and NASDAQ. We show that Poisson time series models significantly outperform forecasts from AR, ARMA, ARFIMA, ACD and FIACD models. The economic significance of our results is supported by the evaluation of a trade schedule: scheduling trades according to the spread forecasts, we realize cost savings of up to 14% of spread transaction costs. Copyright © 2013 John Wiley & Sons, Ltd.
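A minimal sketch of the intensity recursion of a plain, short-memory ACP(1,1) model, the building block that the LMACP extends with a long-memory (fractionally integrated) lag structure; the parameter values are illustrative, and the long-memory extension itself is not reproduced here.

```python
def acp_intensities(counts, omega=0.1, a=0.3, b=0.6):
    """Conditional intensities of an ACP(1,1) model:
    y_t | past ~ Poisson(lambda_t),  lambda_t = omega + a * y_{t-1} + b * lambda_{t-1}."""
    lam = omega / (1.0 - a - b)        # start from the unconditional mean (requires a + b < 1)
    path = [lam]
    for y_prev in counts:
        lam = omega + a * y_prev + b * lam
        path.append(lam)               # path[-1] is the one-step-ahead forecast intensity
    return path
```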

13.
In this paper we present an intelligent decision‐support system based on neural network technology for model selection and forecasting. While most of the literature on the application of neural networks in forecasting addresses the use of neural network technology as an alternative forecasting tool, limited research has focused on its use for selection of forecasting methods based on time‐series characteristics. In this research, a neural network‐based decision support system is presented as a method for forecast model selection. The neural network approach provides a framework for directly incorporating time‐series characteristics into the model‐selection phase. Using a neural network, a forecasting group is initially selected for a given data set, based on a set of time‐series characteristics. Then, using an additional neural network, a specific forecasting method is selected from a pool of three candidate methods. The results of training and testing of the networks are presented along with conclusions. Copyright © 1999 John Wiley & Sons, Ltd.

14.
This study establishes a benchmark for short‐term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by the k‐nearest neighbors method for 1 week ahead; a vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy that times sales based on the price forecasts could increase the net profit of a salmon farmer by around 7%.
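A minimal sketch of the k-nearest-neighbours idea for one-step-ahead forecasting: embed the price series into lag vectors and average the successors of the most similar historical patterns. The use of scikit-learn, the lag length and the value of k are assumptions for illustration, not the customized implementation benchmarked in the study.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def knn_one_step_forecast(prices, n_lags=4, k=5):
    """Forecast the next value of a series by k-NN regression on its own lag vectors."""
    y = np.asarray(prices, dtype=float)
    # rows of X are consecutive windows of n_lags values; targets are the values that followed
    X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
    targets = y[n_lags:]
    model = KNeighborsRegressor(n_neighbors=k).fit(X, targets)
    query = y[-n_lags:].reshape(1, -1)           # the most recent window of prices
    return float(model.predict(query)[0])
```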

15.
This paper presents short‐term forecasting methods applied to electricity consumption in Brazil. The focus is on comparing the results obtained with two distinct approaches: dynamic non‐linear models and econometric models. The first method, which we propose, is based on structural statistical models for multiple time series analysis and forecasting. It involves non‐observable components of locally linear trends for each individual series and a shared multiplicative seasonal component described by dynamic harmonics. The second method, adopted by the electricity power utilities in Brazil, consists of extrapolation of past data and is based on statistical relations of the simple or multiple regression type. To illustrate the proposed methodology, a numerical application with real data is considered. The data represent the monthly industrial electricity consumption in Brazil from the three main power utilities: Eletropaulo, Cemig and Light, situated in the major energy‐consuming states, Sao Paulo, Rio de Janeiro and Minas Gerais, respectively, in the Brazilian Southeast region. The chosen time period, January 1990 to September 1994, corresponds to an economically unstable period just before the beginning of the Brazilian Privatization Program. Implementation of the algorithms considered in this work was made via the statistical software S‐PLUS. Copyright © 1999 John Wiley & Sons, Ltd.
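A minimal sketch of a comparable univariate structural specification, assuming statsmodels is available: a local linear trend plus a trigonometric (harmonic) seasonal component fitted to a single monthly consumption series. The shared multiplicative seasonal component of the paper's multivariate formulation is not reproduced here, and the number of harmonics is an illustrative choice.

```python
import statsmodels.api as sm

def structural_forecast(y_monthly, horizon=12):
    """Local linear trend plus trigonometric monthly seasonality, fitted by maximum likelihood."""
    model = sm.tsa.UnobservedComponents(
        y_monthly,
        level='local linear trend',                      # non-observable locally linear trend
        freq_seasonal=[{'period': 12, 'harmonics': 4}],  # seasonal component built from harmonics
    )
    result = model.fit(disp=False)
    return result.forecast(steps=horizon)
```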

16.
This paper proposes and implements a new methodology for forecasting time series, based on bicorrelations and cross‐bicorrelations. It is shown that the forecasting technique arises as a natural extension of, and as a complement to, existing univariate and multivariate non‐linearity tests. The formulations are essentially modified autoregressive or vector autoregressive models respectively, which can be estimated using ordinary least squares. The techniques are applied to a set of high‐frequency exchange rate returns, and their out‐of‐sample forecasting performance is compared to that of other time series models. Copyright © 2001 John Wiley & Sons, Ltd.
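A minimal sketch of the univariate form of such a model: an autoregression augmented with lagged cross-product (bicorrelation-type) terms, which remains linear in its coefficients and can therefore be estimated by ordinary least squares. The lag order and the set of product terms are illustrative choices, not the exact specification of the paper.

```python
import numpy as np

def fit_bicorrelation_ar(y, p=2):
    """OLS fit of y_t = c + sum_i a_i * y_{t-i} + sum_{i<=j} b_ij * y_{t-i} * y_{t-j} + e_t."""
    y = np.asarray(y, dtype=float)
    rows, targets = [], []
    for t in range(p, len(y)):
        lags = y[t - p:t][::-1]                               # y_{t-1}, ..., y_{t-p}
        products = [lags[i] * lags[j] for i in range(p) for j in range(i, p)]
        rows.append(np.concatenate(([1.0], lags, products)))  # constant, AR terms, cross products
        targets.append(y[t])
    coefficients, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coefficients
```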

17.
There is growing interest in exploring potential forecast gains from the nonlinear structure of multivariate threshold autoregressive (MTAR) models. A least squares‐based statistical test has been proposed in the literature. However, previous studies on univariate time series analysis show that classical nonlinearity tests are often not robust to additive outliers. The outlier problem is expected to pose similar difficulties for multivariate nonlinearity tests. In this paper, we propose a new and robust MTAR‐type nonlinearity test, and derive the asymptotic null distribution of the test statistic. A Monte Carlo experiment is carried out to compare the power of the proposed test with that of the least squares‐based test under the influence of additive time series outliers. The results indicate that the proposed method is preferable to the classical test when observations are contaminated by outliers. Finally, we provide illustrative examples by applying the statistical tests to two real datasets. Copyright © 2015 John Wiley & Sons, Ltd.

18.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

19.
The purpose of this paper is to apply the Box–Jenkins methodology to ARIMA models and to determine why, in empirical tests, the post-sample forecasting accuracy of such models is generally found to be worse than that of much simpler time series methods. The paper concludes that the major problem is the way of making the series stationary in its mean (i.e. the method of differencing) proposed by Box and Jenkins. If alternative approaches are used to remove and extrapolate the trend in the data, ARMA models outperform the models selected through the Box–Jenkins methodology. In addition, it is shown that applying ARMA models to seasonally adjusted data slightly improves post-sample accuracy while simplifying the use of ARMA models. It is also confirmed that transformations slightly improve post-sample forecasting accuracy, particularly for long forecasting horizons. Finally, it is demonstrated that AR(1), AR(2) and ARMA(1,1) models can produce more accurate post-sample forecasts than those found through the application of the Box–Jenkins methodology. © 1997 John Wiley & Sons, Ltd.
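A minimal sketch of the alternative route described above, assuming statsmodels is available: remove a deterministic trend by regression instead of differencing, fit an ARMA(1,1) to the detrended series, and add the extrapolated trend back to the ARMA forecasts. The linear trend is one illustrative choice of trend removal, not necessarily the one used in the paper.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def detrend_arma_forecast(y, horizon=12):
    """Fit ARMA(1,1) to linearly detrended data and re-add the extrapolated trend."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)            # linear trend by least squares
    detrended = y - (intercept + slope * t)
    arma = ARIMA(detrended, order=(1, 0, 1)).fit()    # ARMA(1,1) on the detrended series
    t_future = np.arange(len(y), len(y) + horizon)
    return intercept + slope * t_future + arma.forecast(steps=horizon)
```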

20.
Mortality forecasting is important for life insurance policies, as well as in other areas. Current techniques for forecasting mortality in the USA involve the use of the Lee–Carter model, which is primarily applied without regard to cause of death. A method for forecasting mortality based on neural networks is proposed. A comparative analysis is carried out between the Lee–Carter model, a linear trend model and the proposed method. The results confirm that the neural network approach performs better than the Lee–Carter and linear trend models, within 5% error. Furthermore, mortality rates and life expectancy were formulated for individuals with a specific cause of death based on prevalence data. The rates are further broken down into the respective stages (for cancer) based on the individual's diagnosis. This approach therefore allows life expectancy to be calculated based on an individual's state of health. Copyright © 2008 John Wiley & Sons, Ltd.
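For context, a minimal sketch of the Lee–Carter benchmark itself, assuming a matrix of central death rates by age and year is available: log-rates are decomposed as log m(x, t) = a(x) + b(x) k(t), with (b, k) taken from the leading singular vectors and k extrapolated as a random walk with drift.

```python
import numpy as np

def lee_carter_forecast(rates, horizon=10):
    """Lee-Carter via SVD. rates: array of central death rates, shape (ages, years).
    Returns forecast log mortality rates of shape (ages, horizon)."""
    log_m = np.log(rates)
    a_x = log_m.mean(axis=1)                             # average age pattern
    centered = log_m - a_x[:, None]
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()                        # age sensitivities, normalized to sum to 1
    k_t = s[0] * Vt[0, :] * U[:, 0].sum()                # mortality index, rescaled to match
    drift = (k_t[-1] - k_t[0]) / (len(k_t) - 1)          # random walk with drift for k_t
    k_future = k_t[-1] + drift * np.arange(1, horizon + 1)
    return a_x[:, None] + b_x[:, None] * k_future[None, :]
```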
