Similar Documents
20 similar documents found (search time: 31 ms)
1.
This paper is a critical review of exponential smoothing since the original work by Brown and Holt in the 1950s. Exponential smoothing is based on a pragmatic approach to forecasting which is shared in this review. The aim is to develop state-of-the-art guidelines for application of the exponential smoothing methodology. The first part of the paper discusses the class of relatively simple models which rely on the Holt-Winters procedure for seasonal adjustment of the data. Next, we review general exponential smoothing (GES), which uses Fourier functions of time to model seasonality. The research is reviewed according to the following questions. What are the useful properties of these models? What parameters should be used? How should the models be initialized? After the review of model-building, we turn to problems in the maintenance of forecasting systems based on exponential smoothing. Topics in the maintenance area include the use of quality control models to detect bias in the forecast errors, adaptive parameters to improve the response to structural changes in the time series, and two-stage forecasting, whereby we use a model of the errors or some other model of the data to improve our initial forecasts. Some of the major conclusions are as follows: the parameter ranges and starting values typically used in practice are arbitrary and may detract from accuracy. The empirical evidence favours Holt's model for trends over that of Brown. A linear trend should be damped at long horizons. The empirical evidence favours the Holt-Winters approach to seasonal data over GES. It is difficult to justify GES in standard form: the equivalent ARIMA model is simpler and more efficient. The cumulative sum of the errors appears to be the most practical forecast monitoring device. There is no evidence that adaptive parameters improve forecast accuracy. In fact, the reverse may be true.
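The conclusion that a linear trend should be damped at long horizons can be sketched with a minimal damped-trend version of Holt's method (illustrative code, not from the paper; all parameter values and the initialization are arbitrary choices for this sketch):

```python
def holt_damped(y, alpha=0.3, beta=0.1, phi=0.9, h=3):
    """Holt's linear trend method with a damped trend (phi < 1).
    phi = 1 recovers the ordinary (undamped) Holt method.
    Returns the h-step-ahead forecasts from the end of the series."""
    level, trend = y[0], y[1] - y[0]              # simple initialization
    for obs in y[2:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    # h-step forecast: level + (phi + phi^2 + ... + phi^h) * trend
    return [level + sum(phi ** j for j in range(1, i + 1)) * trend
            for i in range(1, h + 1)]
```

With phi < 1 the forecast increments shrink geometrically, so long-horizon forecasts level off instead of extrapolating the trend indefinitely.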

2.
A large number of statistical forecasting procedures for univariate time series have been proposed in the literature. These range from simple methods, such as the exponentially weighted moving average, to more complex procedures such as Box–Jenkins ARIMA modelling and Harrison–Stevens Bayesian forecasting. This paper sets out to show the relationship between these various procedures by adopting a framework in which a time series model is viewed in terms of trend, seasonal and irregular components. The framework is then extended to cover models with explanatory variables. From the technical point of view the Kalman filter plays an important role in allowing an integrated treatment of these topics.
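The unifying role of the Kalman filter can be illustrated with the local level model, whose steady-state update reproduces exponentially weighted moving average forecasts (a generic textbook sketch, not the paper's own formulation; the diffuse prior variance is an arbitrary choice):

```python
def local_level_filter(y, q, r):
    """Kalman filter for the local level model:
        y_t  = mu_t + eps_t,      eps_t ~ N(0, r)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, q)
    Returns the filtered level estimates and the final gain.  In the
    steady state, mu <- mu + k * (y - mu) is exactly an exponentially
    weighted moving average with smoothing parameter k."""
    mu, p = y[0], 1e6                 # diffuse initialization
    estimates = []
    for obs in y:
        p = p + q                     # predict: add state noise variance
        k = p / (p + r)               # Kalman gain
        mu = mu + k * (obs - mu)      # update toward the observation
        p = (1 - k) * p
        estimates.append(mu)
    return estimates, k
```

The signal-to-noise ratio q/r alone determines the steady-state gain, which is why the EWMA appears as a special case of this state-space treatment.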

3.
The parsimonious method of exponentially weighted regression (EWR) is attractive but limited in application because it depends upon just one discount factor. This paper generalizes the EWR approach to a method called discount weighted estimation (DWE), which allows distinct model components to have different associated discount factors. The method includes EWR as a special case. The general non-limiting recurrence relationships will be useful in practice, especially when practitioners wish to specify prior information, to intervene with subjective judgement and to derive estimates and forecasts sequentially based upon limited data. Two theorems extend the important EWR limiting results of Dobbie and McKenzie to DWE. The latter permits the derivation of a large class of known processes for which DWE is optimal. The method is illustrated by two applications, one of which uses the famous international airline passenger data. This allows a comparison with the ICI MULDO system, which uses a particular two-discount-factor forecasting method. A companion paper extends the discount methods to Bayesian forecasting, Kalman filtering and state space modelling.
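A rough sketch of one DWE-style update with a separate discount factor per state component follows the general discount-filter idea only; the notation, the unit observation variance and the local-linear-trend example are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def dwe_update(m, C, y_obs, F, G, discounts):
    """One discount weighted estimation (DWE) step for state estimate m
    with uncertainty matrix C.  Each component i has its own discount
    factor discounts[i]; equal discounts recover exponentially weighted
    regression (EWR) as a special case.  The observation variance is
    fixed at 1 for simplicity (an assumption of this sketch)."""
    D = np.diag(1.0 / np.sqrt(np.asarray(discounts, dtype=float)))
    a = G @ m                           # propagate the estimate
    R = D @ (G @ C @ G.T) @ D           # discount: inflate per component
    q = F @ R @ F + 1.0                 # one-step forecast variance
    k = R @ F / q                       # adaptive gain vector
    e = y_obs - F @ a                   # one-step forecast error
    return a + k * e, R - np.outer(k, F @ R)

# local linear trend: the level is observed, the trend enters via G
F = np.array([1.0, 0.0])
G = np.array([[1.0, 1.0], [0.0, 1.0]])
m, C = np.zeros(2), np.eye(2) * 1e4     # vague prior
for y in [3.0] * 60:                    # constant series: level 3, trend 0
    m, C = dwe_update(m, C, y, F, G, discounts=[0.9, 0.95])
```

Giving the trend a higher discount factor than the level lets it evolve more slowly, which is exactly the flexibility that a single-discount EWR cannot express.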

4.
Bayesian methods for assessing the accuracy of dynamic financial value‐at‐risk (VaR) forecasts have not been considered in the literature. Such methods are proposed in this paper. Specifically, Bayes factor analogues of popular frequentist tests for independence of violations from, and for correct coverage of, a time series of dynamic quantile forecasts are developed. To evaluate the relevant marginal likelihoods, analytic integration methods are utilized when possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for VaR in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
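For the coverage part, a beta-binomial Bayes factor can be written down in closed form. This is a textbook construction in the spirit of the paper, not the authors' exact test; the Beta(1, 1) prior under the alternative is an arbitrary default:

```python
from math import lgamma, log, exp

def log_beta(a, b):
    """Log of the beta function via log-gamma."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf_correct_coverage(x, n, p0, a=1.0, b=1.0):
    """Bayes factor for correct unconditional VaR coverage.
    H0: violation probability equals the nominal level p0.
    H1: violation probability ~ Beta(a, b) (uniform by default).
    x is the number of violations observed in n periods.  Returns
    BF01; values above 1 favour correct coverage."""
    log_m0 = x * log(p0) + (n - x) * log(1 - p0)          # H0 likelihood
    log_m1 = log_beta(a + x, b + n - x) - log_beta(a, b)  # H1 marginal
    return exp(log_m0 - log_m1)
```

With 250 trading days and a 1% VaR, roughly 2 to 3 violations are expected, so violation counts far above that drive the Bayes factor toward the alternative.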

5.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

6.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat‐tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first‐order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value‐at‐risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.

7.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability also allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

8.
Adaptive exponential smoothing methods allow a smoothing parameter to change over time, in order to adapt to changes in the characteristics of the time series. However, these methods have tended to produce unstable forecasts and have performed poorly in empirical studies. This paper presents a new adaptive method, which enables a smoothing parameter to be modelled as a logistic function of a user‐specified variable. The approach is analogous to that used to model the time‐varying parameter in smooth transition models. Using simulated data, we show that the new approach has the potential to outperform existing adaptive methods and constant parameter methods when the estimation and evaluation samples both contain a level shift or both contain an outlier. An empirical study, using the monthly time series from the M3‐Competition, gave encouraging results for the new approach. Copyright © 2004 John Wiley & Sons, Ltd.
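The idea of a smoothing parameter modelled as a logistic function of a user-specified variable can be sketched as follows (illustrative only; the paper's choice of driver variable and its estimation of the logistic coefficients are not reproduced here, and g0, g1 are hypothetical defaults):

```python
from math import exp

def adaptive_ses(y, driver, g0=0.0, g1=1.0):
    """Simple exponential smoothing with a time-varying smoothing
    parameter: alpha_t is a logistic function of a user-specified
    driver variable, as in smooth transition models.  driver supplies
    one value per observation.  Returns the one-step-ahead forecasts."""
    level = y[0]
    forecasts = []
    for obs, v in zip(y, driver):
        forecasts.append(level)
        alpha = 1.0 / (1.0 + exp(-(g0 + g1 * v)))   # alpha in (0, 1)
        level = level + alpha * (obs - level)
    return forecasts
```

Large driver values push alpha toward 1, so forecasts track the latest observation (useful after a level shift); large negative values push alpha toward 0, freezing the level (which protects against outliers).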

9.
Many publications on tourism forecasting have appeared during the past twenty years. The purpose of this article is to organize and summarize that scattered literature. General conclusions are also drawn from the studies to help those wishing to develop tourism forecasts of their own. The forecasting techniques discussed include time series models, econometric causal models, the gravity model and expert-opinion techniques. The major conclusions are that time series models are the simplest and least costly (and therefore most appropriate for practitioners); the gravity model is best suited to handle international tourism flows (and will be most useful to governments and tourism agencies); and expert-opinion methods are useful when data are unavailable. Further research is needed on the use of economic indicators in tourism forecasting, on the development of attractivity and emissiveness indexes for use in gravity and econometric models and on empirical comparisons among the different methods.

10.
The increase in oil price volatility in recent years has raised the importance of forecasting it accurately for valuing and hedging investments. The paper models and forecasts the crude oil exchange‐traded funds (ETF) volatility index, which has been used in recent years as an important alternative measure to track and analyze the volatility of future oil prices. Analysis of the oil volatility index suggests that it presents features similar to those of the daily market volatility index, such as long memory, which is modeled using well‐known heterogeneous autoregressive (HAR) specifications and new extensions that are based on net and scaled measures of oil price changes. The aim is to improve the forecasting performance of the traditional HAR models by including predictors that capture the impact of oil price changes on the economy. The performance of the new proposals and benchmarks is evaluated with the model confidence set (MCS) and the Generalized‐AutoContouR (G‐ACR) tests in terms of point forecasts and density forecasting, respectively. We find that including the leverage in the conditional mean or variance of the basic HAR model increases its predictive ability. Furthermore, when considering density forecasting, the best models are a conditional heteroskedastic HAR model that includes a scaled measure of oil price changes, and a HAR model with errors following an exponential generalized autoregressive conditional heteroskedasticity specification. In both cases, we consider a flexible distribution for the errors of the conditional heteroskedastic process.
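The basic HAR specification regresses volatility on daily, weekly and monthly averages of its own past. A bare-bones sketch of the generic HAR regression (without the oil-price extensions the paper proposes; the synthetic series is purely illustrative):

```python
import numpy as np

def har_design(rv):
    """Build HAR regressors from a (realized) volatility series: an
    intercept, the daily lag, the weekly (5-day) mean and the monthly
    (22-day) mean.  Returns (X, y) aligned for least squares."""
    rv = np.asarray(rv, dtype=float)
    rows, target = [], []
    for t in range(22, len(rv)):
        rows.append([1.0,
                     rv[t - 1],              # daily component
                     rv[t - 5:t].mean(),     # weekly component
                     rv[t - 22:t].mean()])   # monthly component
        target.append(rv[t])
    return np.array(rows), np.array(target)

# fit by ordinary least squares on a synthetic volatility proxy
rng = np.random.default_rng(0)
series = 1.0 + 0.1 * np.abs(rng.standard_normal(300))
X, y = har_design(series)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The three horizons approximate long memory with a parsimonious regression, which is why HAR models are so widely used as volatility-forecasting baselines.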

11.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.

12.
We propose a nonlinear time series model where both the conditional mean and the conditional variance are asymmetric functions of past information. The model is particularly useful for analysing financial time series where it has been noted that there is an asymmetric impact of good news and bad news on volatility (risk) transmission. We introduce a coherent framework for testing asymmetries in the conditional mean and the conditional variance, separately or jointly. To this end we derive both a Wald and a Lagrange multiplier test. Some of the new asymmetric model's moment properties are investigated. Detailed empirical results are given for the daily returns of the composite index of the New York Stock Exchange. There is strong evidence of asymmetry in both the conditional mean and the conditional variance functions. In a genuine out‐of‐sample forecasting experiment the performance of the best fitted asymmetric model, having asymmetries in both conditional mean and conditional variance, is compared with an asymmetric model for the conditional mean, and with no‐change forecasts. This is done both in terms of conditional mean forecasting as well as in terms of risk forecasting. Finally, the paper presents some evidence of asymmetries in the index stock returns of the Group of Seven (G7) industrialized countries. Copyright © 2004 John Wiley & Sons, Ltd.

13.
In this paper we aim to improve existing empirical exchange rate models by accounting for uncertainty with respect to the underlying structural representation. Within a flexible Bayesian framework, our modeling approach assumes that different regimes are characterized by commonly used structural exchange rate models, with transitions across regimes being driven by a Markov process. We assume a time-varying transition probability matrix with transition probabilities depending on a measure of the monetary policy stance of the central bank at home and in the USA. We apply this model to a set of eight exchange rates against the US dollar. In a forecasting exercise, we show that model evidence varies over time, and a model approach that takes this empirical evidence seriously yields more accurate density forecasts for most currency pairs considered.

14.
Methods of time series forecasting are proposed which can be applied automatically. However, they are not rote formulae, since they are based on a flexible philosophy which can provide several models for consideration. In addition, it provides diverse diagnostics for qualitatively and quantitatively estimating how well one can forecast a series. The models considered are called ARARMA models (or ARAR models) because the model fitted to a long memory time series Y(t) is based on sophisticated time series analysis of AR (or ARMA) schemes (short memory models) fitted to residuals Ỹ(t) obtained by parsimonious 'best lag' non-stationary autoregression. Both long-range and short-range forecasts are provided by an ARARMA model. Section 1 explains the philosophy of our approach to time series model identification. Sections 2 and 3 attempt to relate our approach to some standard approaches to forecasting; exponential smoothing methods are developed from the point of view of prediction theory (Section 2) and extended (Section 3). ARARMA models are introduced (Section 4). Methods of ARARMA model fitting are outlined (Sections 5 and 6). Since 'the proof of the pudding is in the eating', the methods proposed are illustrated (Section 7) using the classic example of international airline passengers.

15.
We consider the problem of forecasting a stationary time series when there is an unknown mean break close to the forecast origin. Based on the intercept‐correction methods suggested by Clements and Hendry (1998) and Bewley (2003), a hybrid approach is introduced, where the break and break point are treated in a Bayesian fashion. The hyperparameters of the priors are determined by maximizing the marginal density of the data. The distributions of the proposed forecasts are derived. Different intercept‐correction methods are compared using simulation experiments. Our hybrid approach compares favorably with both the uncorrected and the intercept‐corrected forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
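The classic intercept correction that the hybrid approach builds on simply shifts the model forecast by (a fraction of) the most recent observed forecast error; the Bayesian treatment of the break and break point is not reproduced in this sketch:

```python
def intercept_corrected_forecast(model_forecast, last_actual,
                                 last_forecast, weight=1.0):
    """Intercept correction: shift the model forecast by weight times
    the most recent forecast error, so that a mean break near the
    forecast origin is not carried into future forecast errors.
    weight = 1 is the full Clements-Hendry-style put-back-on-track
    correction; weight = 0 leaves the forecast uncorrected."""
    return model_forecast + weight * (last_actual - last_forecast)
```

A Bayesian hybrid can be read as replacing the fixed weight with a data-determined, posterior-weighted break estimate.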

16.
The specification choices of vector autoregressions (VARs) in forecasting are often not straightforward, as they are complicated by various factors. To deal with model uncertainty and better utilize multiple VARs, this paper adopts the dynamic model averaging/selection (DMA/DMS) algorithm, in which forecasting models are updated and switch over time in a Bayesian manner. In an empirical application to a pool of Bayesian VAR (BVAR) models whose specifications include level and difference, along with differing lag lengths, we demonstrate that specification‐switching VARs are flexible and powerful forecast tools that yield good performance. In particular, they beat the overall best BVAR in most cases and are comparable to or better than the individual best models (for each combination of variable, forecast horizon, and evaluation metrics) for medium‐ and long‐horizon forecasts. We also examine several extensions in which forecast model pools consist of additional individual models in partial differences as well as all level/difference models, and/or time variations in VAR innovations are allowed, and discuss the potential advantages and disadvantages of such specification choices. Copyright © 2016 John Wiley & Sons, Ltd.
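The DMA weight recursion alternates a forgetting step with a Bayes update of model probabilities. A generic sketch follows; the forgetting-factor value and the power-flattening form are common conventions in the DMA literature, not necessarily this paper's exact scheme:

```python
from math import exp

def dma_weights(log_scores, forgetting=0.99):
    """Recursive model probabilities for dynamic model averaging: at
    each date the previous weights are flattened with a forgetting
    factor (raised to a power below 1 and renormalized), then updated
    by each model's one-step predictive likelihood.  log_scores is a
    list of per-period lists of predictive log-likelihoods, one entry
    per model; returns the final weight vector."""
    k = len(log_scores[0])
    w = [1.0 / k] * k                       # equal initial weights
    for period in log_scores:
        flat = [wi ** forgetting for wi in w]
        s = sum(flat)
        flat = [f / s for f in flat]        # forgetting step
        upd = [f * exp(ls) for f, ls in zip(flat, period)]
        s = sum(upd)
        w = [u / s for u in upd]            # Bayes update
    return w
```

Dynamic model selection (DMS) simply forecasts with the single model carrying the largest current weight instead of averaging.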

17.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using possibly misspecified models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other ‘standard’ combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
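One common way to turn SIC values into combination weights is to exponentiate the negative half-differences of the criteria (a standard "information criterion weights" construction; the paper's exact combination scheme may differ):

```python
from math import log, exp

def sic_combination_weights(sse_list, n, k_list):
    """Turn per-model Schwarz criterion values into combination
    weights: SIC = n * log(SSE / n) + k * log(n), then each model is
    weighted by exp(-(SIC - min SIC) / 2), normalized to sum to one.
    sse_list : in-sample sum of squared errors per model
    n        : common estimation sample size
    k_list   : number of estimated parameters per model"""
    sics = [n * log(sse / n) + k * log(n)
            for sse, k in zip(sse_list, k_list)]
    best = min(sics)
    raw = [exp(-(s - best) / 2.0) for s in sics]   # subtract min for stability
    total = sum(raw)
    return [r / total for r in raw]
```

Because the SIC penalty grows with log(n), these weights concentrate on parsimonious models faster than AIC-based weights as the sample grows.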

18.
The purpose of this paper is to apply the Box–Jenkins methodology to ARIMA models and determine the reasons why, in empirical tests, it is found that the post-sample forecasting accuracy of such models is generally worse than that of much simpler time series methods. The paper concludes that the major problem is the way of making the series stationary in its mean (i.e. the method of differencing) that has been proposed by Box and Jenkins. If alternative approaches are utilized to remove and extrapolate the trend in the data, ARMA models outperform the models selected through the Box–Jenkins methodology. In addition, it is shown that applying ARMA models to seasonally adjusted data slightly improves post-sample accuracy while simplifying the use of ARMA models. It is also confirmed that transformations slightly improve post-sample forecasting accuracy, particularly for long forecasting horizons. Finally, it is demonstrated that AR(1), AR(2) and ARMA(1,1) models can produce more accurate post-sample forecasts than those found through the application of the Box–Jenkins methodology. © 1997 John Wiley & Sons, Ltd.
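The alternative trend handling, fitting and extrapolating a deterministic trend instead of differencing, can be sketched with a small helper (illustrative only, not the paper's procedure; the residuals would then be modelled with an ARMA scheme):

```python
import numpy as np

def linear_detrend(y):
    """Fit a straight line by least squares; return the residuals and
    a function that extrapolates the fitted trend h steps beyond the
    sample.  Modelling the residuals with an ARMA scheme is the
    alternative to differencing discussed above."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    b, a = np.polyfit(t, y, 1)              # slope, intercept
    resid = y - (a + b * t)
    return resid, lambda h: a + b * (len(y) - 1 + h)
```

A forecast is then the extrapolated trend plus the ARMA forecast of the residuals, rather than the cumulated forecasts of a differenced series.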

19.
Artificial neural network modelling has recently attracted much attention as a new technique for estimation and forecasting in economics and finance. The chief advantages of this new approach are that such models can usually find a solution for very complex problems, and that they are free from the assumption of linearity that is often adopted to make the traditional methods tractable. In this paper we compare the performance of Back‐Propagation Artificial Neural Network (BPN) models with the traditional econometric approaches to forecasting the inflation rate. Of the traditional econometric models we use a structural reduced‐form model, an ARIMA model, a vector autoregressive model, and a Bayesian vector autoregression model. We compare each econometric model with a hybrid BPN model which uses the same set of variables. Dynamic forecasts are compared for three different horizons: one, three and twelve months ahead. Root mean squared errors and mean absolute errors are used to compare quality of forecasts. The results show the hybrid BPN models are able to forecast as well as all the traditional econometric methods, and to outperform them in some cases. Copyright © 2000 John Wiley & Sons, Ltd.

20.
In this paper, we apply Bayesian inference to model and forecast intraday trading volume, using autoregressive conditional volume (ACV) models, and we evaluate the quality of volume point forecasts. In the empirical application, we focus on the analysis of both in‐ and out‐of‐sample performance of Bayesian ACV models estimated for 2‐minute trading volume data for stocks quoted on the Warsaw Stock Exchange in Poland. We calculate two types of point forecasts, using either expected values or medians of predictive distributions. We conclude that, in general, all considered models generate significantly biased forecasts. We also observe that the considered models significantly outperform such benchmarks as the naïve or rolling means forecasts. Moreover, in terms of root mean squared forecast errors, point predictions obtained within the ACV model with exponential distribution emerge superior compared to those calculated in structures with more general innovation distributions, although in many cases this characteristic turns out to be statistically insignificant. On the other hand, when comparing mean absolute forecast errors, the median forecasts obtained within the ACV models with Burr and generalized gamma distribution are found to be statistically better than other forecasts.
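A first-order ACV recursion treats expected volume the way GARCH treats conditional variance. A minimal sketch of the conditional-expectation recursion (parameter names are generic, and the initialization at the first observation is an assumption of this sketch):

```python
def acv_expected_volume(x, omega, alpha, beta):
    """Conditional expected volume under a first-order autoregressive
    conditional volume (ACV) recursion, the volume analogue of GARCH:
        v_t = omega + alpha * x_{t-1} + beta * v_{t-1},
    where x is the observed volume series and v its conditional
    expectation.  Returns the one-step-ahead expected volumes,
    initialized at the first observation."""
    v = [x[0]]
    for obs in x[:-1]:
        v.append(omega + alpha * obs + beta * v[-1])
    return v
```

The innovation distribution (exponential, Burr, generalized gamma) then scales these conditional expectations, which is exactly where the point-forecast comparisons in the paper differ.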


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号