Similar Articles
20 similar articles found (search time: 156 ms)
1.
Time-series data are often contaminated with outliers due to the influence of unusual and non-repetitive events. Forecast accuracy in such situations is reduced due to (1) a carry-over effect of the outlier on the point forecast and (2) a bias in the estimates of model parameters. Hillmer (1984) and Ledolter (1989) studied the effect of additive outliers on forecasts. It was found that forecast intervals are quite sensitive to additive outliers, but that point forecasts are largely unaffected unless the outlier occurs near the forecast origin. In such a situation the carry-over effect of the outlier can be quite substantial. In this study, we investigate the issues of forecasting when outliers occur near or at the forecast origin. We propose a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage. One focus of this study is the carry-over effect of outliers on forecasts. Four types of outliers are considered: innovational outlier, additive outlier, temporary change, and level shift. The effects of misidentifying an outlier type are examined. The performance of the outlier detection procedure is studied for cases where outliers are near the end of the series. In such cases, we demonstrate that statistical procedures may not be able to effectively determine the outlier types due to insufficient information. Some strategies are recommended to reduce potential difficulties caused by incorrectly detected outlier types. These findings may serve as a justification for forecasting in conjunction with judgment. Two real examples are employed to illustrate the issues discussed.
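The carry-over mechanism described above is easy to see in a stylized setting. The sketch below (an illustration of the carry-over effect only, not the Chen–Liu procedure) places an additive outlier at the forecast origin of an AR(1) model; the h-step point forecast is shifted by exactly phi**h times the outlier size, so the distortion decays geometrically but dominates at short horizons.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series y_t = phi * y_{t-1} + e_t.  This is an
# illustration of the carry-over effect only, not the Chen-Liu procedure.
phi, n = 0.8, 200
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# Contaminate the forecast origin (the last observation) with an
# additive outlier of size 10.
y_ao = y.copy()
y_ao[-1] += 10.0

# h-step-ahead point forecasts from each origin: phi**h * y_n.
h = np.arange(1, 6)
fc_clean = phi ** h * y[-1]
fc_ao = phi ** h * y_ao[-1]

# The carry-over effect is exactly phi**h * 10: geometric decay, but
# large at short horizons.
carry_over = fc_ao - fc_clean
print(carry_over)
```

This is why an unadjusted outlier at the origin matters far more for one- or two-step forecasts than for long horizons.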

2.
It is investigated whether euro area variables can be forecast better based on synthetic time series for the pre‐euro period or by using just data from Germany for the pre‐euro period. Our forecast comparison is based on quarterly data for the period 1970Q1–2003Q4 for 10 macroeconomic variables. The years 2000–2003 are used as the forecasting period. A range of different univariate forecasting methods is applied. Some of them are based on linear autoregressive models, and we also use some nonlinear or time‐varying coefficient models. It turns out that most variables which have a similar level for Germany and the euro area, such as prices, can be better predicted based on German data, while aggregated European data are preferable for forecasting variables which need considerable adjustments in their levels when joining German and European Monetary Union (EMU) data. These results suggest that for variables which have a similar level for Germany and the euro area it may be reasonable to consider the German pre‐EMU data for studying economic problems in the euro area. Copyright © 2008 John Wiley & Sons, Ltd.

3.
Forecasts are routinely revised, and these revisions are often the subject of informal analysis and discussion. This paper argues (1) that forecast revisions are analyzed because they help forecasters and forecast users to evaluate forecasts and forecasting procedures and (2) that these analyses can be sharpened by using the forecasting model to systematically express its forecast revision as the sum of components identified with specific subsets of new information, such as data revisions and forecast errors. An algorithm for this purpose is explained and illustrated.

4.
The purpose of this paper is to suggest that the maximum (or minimum) of a number of primary forecasts may make a valuable addition to the forecasting accuracy of a combination of forecasts. Such forecasts are readily computable. Theoretical results are presented for two unbiased forecasts with correlated normally distributed errors, showing that the maximum (minimum) of two forecasts can have a smaller error variance than either of the primary forecasts and the forecast error can have low correlation with the primary errors. Empirical results are obtained for two different sets of forecasts available in the literature, and it is observed that a combination forecast including the maximum and/or minimum has attractive forecasting properties.
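The variance claim can be checked by simulation. The sketch below (illustrative values only; the paper's empirical forecast sets are not reproduced here) draws two unbiased forecast errors with strong negative correlation and verifies the standard bivariate-normal result Var(max) = 1 − (1 − ρ)/π, which falls well below the variance of either primary forecast when ρ is strongly negative.

```python
import numpy as np

rng = np.random.default_rng(42)

# Two unbiased forecasts of the same quantity whose errors are standard
# normal with correlation rho (illustrative simulation; the paper's
# empirical forecast sets are not reproduced here).
n, rho = 200_000, -0.9
cov = [[1.0, rho], [rho, 1.0]]
e1, e2 = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Taking the true value as 0, the error of the "max" combination
# forecast is max(e1, e2).
e_max = np.maximum(e1, e2)

# Standard result for the maximum of a standard bivariate normal pair:
# Var(max) = 1 - (1 - rho) / pi, well below Var(e1) = Var(e2) = 1 when
# rho is strongly negative.
var_theory = 1.0 - (1.0 - rho) / np.pi
print(e_max.var(), var_theory)
```

Note that the max forecast is biased upward even though its error variance shrinks, which is why the paper treats it as an addition to a combination rather than a replacement for the primary forecasts.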

5.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are implemented on returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods to forecast volatility, ranging from simple techniques to time-varying techniques based on the past performance of the single models and regression techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
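The contrast between simple and performance-based combination schemes can be sketched generically (this is not the paper's exact specification; the data, weights, and parameter values below are illustrative assumptions): two simulated volatility forecasts are combined with equal weights and with inverse-MSE weights estimated from past performance.

```python
import numpy as np

rng = np.random.default_rng(3)

# A generic combination sketch (not the paper's exact schemes): combine
# two simulated volatility forecasts with equal weights and with
# inverse-MSE weights estimated from past performance.
n = 1000
true_vol = np.abs(rng.normal(1.0, 0.2, n))
f1 = true_vol + rng.normal(0.0, 0.10, n)   # relatively accurate model
f2 = true_vol + rng.normal(0.0, 0.30, n)   # noisier model

mse1 = np.mean((f1 - true_vol) ** 2)
mse2 = np.mean((f2 - true_vol) ** 2)
w1 = (1 / mse1) / (1 / mse1 + 1 / mse2)    # weight proportional to 1/MSE

combo_eq = 0.5 * f1 + 0.5 * f2             # simple technique
combo_w = w1 * f1 + (1 - w1) * f2          # performance-based technique

mse_eq = np.mean((combo_eq - true_vol) ** 2)
mse_w = np.mean((combo_w - true_vol) ** 2)
print(mse_eq, mse_w)
```

When one model is clearly more accurate, the performance-based weights tilt toward it and reduce the combination's MSE relative to the equal-weight scheme.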

6.
This paper finds that the yield curve forecasts real gross domestic product growth in the USA well, compared to professional forecasters and time series models. Past studies offer differing arguments concerning growth lags, structural breaks, and ultimately the ability of the yield curve to forecast economic growth. This paper finds such results to be dependent on the estimation and forecasting techniques employed. By allowing various interest rates to act as explanatory variables and various window sizes for the out-of-sample forecasts, significant forecasts can be found for many window sizes. These seemingly good forecasts may face issues, including persistent forecasting errors. However, by using statistical learning algorithms, such issues can be cured to some extent. The overall result suggests that, by carefully choosing the window sizes, interest rate data, and learning algorithms, many outperforming forecasts can be produced for all lags from one quarter to three years, although some may be worse than others due to the irreducible noise of the data.

7.
The judgmental modification of quantitative forecasts has been increasingly adopted in the production of agricultural commodity outlook information. Such modifications allow current period information to be incorporated into the forecast value, and ensure that the forecast is realistic in the context of current industry trends. This paper investigates the potential value of this approach in production forecasting in the Australian lamb industry. Several individual and composite econometric models were used to forecast a lamb-slaughtering series, with a selected forecast given to a panel of lamb industry specialists for consideration and modification. The results demonstrate that this approach offers considerable accuracy advantages in the short-term forecasting of livestock market variables, such as slaughtering, whose values can be strongly influenced by current industry conditions.

8.
While much research related to forecasting return volatility does so in a univariate setting, this paper includes proxies for information flows to forecast intra‐day volatility for the IBEX 35 futures market. The belief is that volume or the number of transactions conveys important information about the market that may be useful in forecasting. Our results suggest that augmenting a variety of GARCH‐type models with these proxies leads to improved forecasts across a range of intra‐day frequencies. Furthermore, our results present an interesting picture whereby the PARCH model generally performs well at the highest frequencies and shorter forecasting horizons, whereas the component model performs well at lower frequencies and longer forecast horizons. Both models attempt to capture long memory; the PARCH model allows for exponential decay in the autocorrelation function, while the component model captures trend volatility, which dominates over a longer horizon. These characteristics are likely to explain the success of each model. Copyright © 2013 John Wiley & Sons, Ltd.

9.
In this paper an investigation is made of the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to calculate aggregate forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one‐step‐ahead forecasts are biased. This is likely to be the case in practice if frequently employed forecasting methods are used to generate a large number of individual forecasts. In the paper, examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy to supplement the information yielded by the accuracy measures. This supplement includes relevant boxplots of measures of individual forecasting success. This tool is simple but helpful, as the graphical display has the potential to indicate forecast deterioration that can be masked by one or both of the aggregate metrics. The procedures are illustrated with data representing sales of food items. Copyright © 2005 John Wiley & Sons, Ltd.

10.
Although both direct multi‐step‐ahead forecasting and iterated one‐step‐ahead forecasting are popular methods for predicting future values of a time series, it is not clear that the direct method is superior in practice, even though from a theoretical perspective it has lower mean squared error (MSE). A given model can be fitted according to either a multi‐step or a one‐step forecast error criterion, and we show here that discrepancies in performance between direct and iterative forecasting arise chiefly from the method of fitting and are dictated by the nuances of the model's misspecification. We derive new formulas for quantifying iterative forecast MSE, and present a new approach for assessing asymptotic forecast MSE. Finally, the direct and iterative methods are compared on a retail series, which illustrates the strengths and weaknesses of each approach. Copyright © 2015 John Wiley & Sons, Ltd.
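The relationship between the two methods can be illustrated on a correctly specified model, where the direct h-step coefficient and the iterated one should agree, so that (as the abstract notes) discrepancies arise mainly under misspecification. A minimal sketch with illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)

# For a correctly specified AR(1), the direct h-step coefficient should
# agree with the iterated one, phi**h; the discrepancies discussed above
# arise mainly when the model is misspecified.  Illustrative values only.
phi, n, h = 0.7, 50_000, 2
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# Iterated: fit the one-step regression of y_t on y_{t-1}, then iterate.
phi_hat = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])

# Direct: regress y_{t+h} on y_t, i.e. fit with an h-step loss.
beta_hat = np.dot(y[:-h], y[h:]) / np.dot(y[:-h], y[:-h])

print(phi_hat ** h, beta_hat)  # both close to phi**h = 0.49
```

Under misspecification (say, fitting an AR(1) to AR(2) data) the two estimates target different pseudo-true values, which is the source of the performance gap the paper analyzes.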

11.
Robust versions of the exponential and Holt–Winters smoothing methods for forecasting are presented. They are suitable for forecasting univariate time series in the presence of outliers. The robust exponential and Holt–Winters smoothing methods are presented as recursive updating schemes that apply the standard technique to pre‐cleaned data. Both the update equation and the selection of the smoothing parameters are robustified. A simulation study compares the robust and classical forecasts. The presented method is found to have good forecast performance for time series with and without outliers, as well as for fat‐tailed time series and under model misspecification. The method is illustrated using real data incorporating trend and seasonal effects. Copyright © 2009 John Wiley & Sons, Ltd.
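The pre-cleaning idea can be sketched as follows (a simplified illustration, not the authors' full method: the running scale recursion and all parameter values here are ad hoc assumptions, and the robust selection of smoothing parameters is omitted). One-step residuals are clipped at k times a scale estimate before the standard update, so a single outlier barely moves the level.

```python
import numpy as np

def robust_ses(y, alpha=0.2, k=2.0, lam=0.1):
    """Simple exponential smoothing on pre-cleaned data: one-step
    residuals are clipped at k times a running scale estimate before the
    standard level update.  A minimal sketch of the pre-cleaning idea;
    parameter values are ad hoc and the paper's robust selection of
    smoothing parameters is omitted."""
    level = y[0]
    scale = 1.0  # crude initial scale for the residuals
    for obs in y[1:]:
        r = obs - level
        r_clipped = np.clip(r, -k * scale, k * scale)
        # update the scale estimate, itself robustified by the clipping
        scale = np.sqrt(lam * min(r ** 2, (k * scale) ** 2) + (1 - lam) * scale ** 2)
        level = level + alpha * r_clipped  # standard SES update on cleaned data
    return level

# A single huge outlier barely moves the robust level.
clean = np.full(100, 10.0)
dirty = clean.copy()
dirty[50] = 1000.0
print(robust_ses(clean), robust_ses(dirty))
```

Classical SES with the same alpha would jump by alpha times the full outlier (here about 198) at the contaminated step; the clipped update caps that jump at alpha times k times the scale.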

12.
The TFT‐LCD (thin‐film transistor–liquid crystal display) industry is one of the key global industries with products that have high clock speed. In this research, the LCD monitor market is considered for an empirical study on hierarchical forecasting (HF). The proposed HF methodology consists of five steps. First, the three hierarchical levels of the LCD monitor market are identified. Second, several exogenously driven factors that significantly affect the demand for LCD monitors are identified at each level of the product hierarchy. Third, the three forecasting techniques—regression analysis, transfer function, and simultaneous equations model—are combined to forecast future demand at each hierarchical level. Fourth, various forecasting approaches and disaggregating proportion methods are adopted to obtain consistent demand forecasts at each hierarchical level. Finally, the forecast errors with different forecasting approaches are assessed in order to determine the best forecasting level and the best forecasting approach. The findings show that the best forecast results can be obtained by using the middle‐out forecasting approach. These results could guide LCD manufacturers and brand owners on ways to forecast future market demands. Copyright © 2008 John Wiley & Sons, Ltd.

13.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of the out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.

14.
This paper is a critical review of exponential smoothing since the original work by Brown and Holt in the 1950s. Exponential smoothing is based on a pragmatic approach to forecasting which is shared in this review. The aim is to develop state-of-the-art guidelines for application of the exponential smoothing methodology. The first part of the paper discusses the class of relatively simple models which rely on the Holt-Winters procedure for seasonal adjustment of the data. Next, we review general exponential smoothing (GES), which uses Fourier functions of time to model seasonality. The research is reviewed according to the following questions: What are the useful properties of these models? What parameters should be used? How should the models be initialized? After the review of model-building, we turn to problems in the maintenance of forecasting systems based on exponential smoothing. Topics in the maintenance area include the use of quality control models to detect bias in the forecast errors, adaptive parameters to improve the response to structural changes in the time series, and two-stage forecasting, whereby we use a model of the errors or some other model of the data to improve our initial forecasts. Some of the major conclusions are as follows: the parameter ranges and starting values typically used in practice are arbitrary and may detract from accuracy. The empirical evidence favours Holt's model for trends over that of Brown. A linear trend should be damped at long horizons. The empirical evidence favours the Holt-Winters approach to seasonal data over GES. It is difficult to justify GES in standard form: the equivalent ARIMA model is simpler and more efficient. The cumulative sum of the errors appears to be the most practical forecast monitoring device. There is no evidence that adaptive parameters improve forecast accuracy. In fact, the reverse may be true.
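The damped-trend recommendation can be made concrete: with a damping parameter 0 < φ < 1, the trend's contribution to the h-step forecast is a geometric sum that levels off rather than growing linearly without bound. The values below are illustrative, not from the review.

```python
# Damped-trend forecasts: with damping parameter 0 < phi < 1 the h-step
# forecast adds (phi + phi**2 + ... + phi**h) * trend, so the trend's
# contribution levels off instead of growing without bound.  Parameter
# values below are illustrative only.
def damped_forecast(level, trend, phi, h):
    damp = sum(phi ** i for i in range(1, h + 1))  # geometric damping sum
    return level + damp * trend

level, trend, phi = 100.0, 2.0, 0.9
path = [damped_forecast(level, trend, phi, h) for h in (1, 10, 100)]
print(path)  # approaches level + trend * phi / (1 - phi) = 118 from below
```

An undamped linear trend would instead forecast 102, 120, and 300 at the same horizons, which is exactly the long-horizon overshoot the review warns against.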

15.
When quantitative models are used for short-term multi-item sales forecasts, it is possible that the managers who use such forecasts may disagree with at least some of the estimates obtained, and wish to change them so that they become more consistent with their own (subjective) evaluation of the marketplace. This study reports on an analysis of the effectiveness of judgemental revision of sales forecasts over six quarterly forecasting periods. The results give general support for the practice of forecast manipulation as a means of improving forecasting accuracy. It is also observed that the effectiveness of revision activity varies across different time periods.

16.
This paper addresses the issue of forecasting individual items within a product line, where each line includes several independent but closely related products. The purpose of the research was to reduce the overall forecasting burden by developing and assessing schemes for disaggregating forecasts of a total product line to the related individual items. Measures were developed to determine appropriate disaggregation methodologies and to compare the forecast accuracy of individual product forecasts versus disaggregated totals. Several of the procedures used were based upon extensions of combination-of-forecasts research and applied to disaggregations of total forecasts of product lines. The objective was to identify situations when it was advantageous to produce disaggregated forecasts, and if advantageous, which method of disaggregation to utilize. This involved identification of the general conceptual characteristics within a set of product line data that might cause a disaggregation method to produce relatively accurate forecasts. These conceptual characteristics provided guidelines for forecasters on how to select a disaggregation method and under what conditions a particular method is applicable.

17.
This paper presents expressions for the variance of the forecast error for arbitrary lead times for both the additive and multiplicative Holt-Winters seasonal forecasting models. It is shown that even when the smoothing constants are chosen to have values between zero and one, when the period is greater than four, the variance may not be finite for some values of the smoothing constants. In addition, the regions where the variance becomes infinite are almost the same for both models. These results are of importance for practitioners, who may choose values for the smoothing constants arbitrarily, or by searching on the unit cube for values which minimize the sum of the squared errors when fitting the model to a data set. It is also shown that the variance of the forecast error for the multiplicative model is nonstationary and periodic.

18.
The specification choices of vector autoregressions (VARs) in forecasting are often not straightforward, as they are complicated by various factors. To deal with model uncertainty and better utilize multiple VARs, this paper adopts the dynamic model averaging/selection (DMA/DMS) algorithm, in which forecasting models are updated and switch over time in a Bayesian manner. In an empirical application to a pool of Bayesian VAR (BVAR) models whose specifications include level and difference, along with differing lag lengths, we demonstrate that specification‐switching VARs are flexible and powerful forecast tools that yield good performance. In particular, they beat the overall best BVAR in most cases and are comparable to or better than the individual best models (for each combination of variable, forecast horizon, and evaluation metrics) for medium‐ and long‐horizon forecasts. We also examine several extensions in which forecast model pools consist of additional individual models in partial differences as well as all level/difference models, and/or time variations in VAR innovations are allowed, and discuss the potential advantages and disadvantages of such specification choices. Copyright © 2016 John Wiley & Sons, Ltd.

19.
Forecast regions are a common way to summarize forecast accuracy. They usually consist of an interval symmetric about the forecast mean. However, symmetric intervals may not be appropriate forecast regions when the forecast density is not symmetric and unimodal. With many modern time series models, such as those which are non-linear or have non-normal errors, the forecast densities are often asymmetric or multimodal. The problem of obtaining forecast regions in such cases is considered and it is proposed that highest-density forecast regions be used. A graphical method for presenting the results is discussed.
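A highest-density region can be approximated directly from forecast draws. The sketch below is a generic histogram-based approximation (an assumption of this illustration, not the paper's method): keep the densest histogram cells until the desired coverage is reached. For a bimodal forecast density the resulting region excludes the low-density valley that a symmetric interval would cover.

```python
import numpy as np

rng = np.random.default_rng(7)

# Draws from a bimodal "forecast density": a symmetric interval about the
# mean would cover the empty valley between the modes; a highest-density
# region (HDR) does not.  Histogram-based sketch, not the paper's method.
draws = np.concatenate([rng.normal(-3, 0.5, 5000), rng.normal(3, 0.5, 5000)])

def hdr_cells(x, coverage=0.9, bins=400):
    """Approximate HDR: keep the densest histogram cells until the
    requested coverage is reached; True marks cells inside the region."""
    counts, edges = np.histogram(x, bins=bins)
    order = np.argsort(counts)[::-1]                  # densest cells first
    cum = np.cumsum(counts[order]) / counts.sum()
    k = np.searchsorted(cum, coverage) + 1
    mask = np.zeros(bins, dtype=bool)
    mask[order[:k]] = True
    return mask, edges

mask, edges = hdr_cells(draws)
centers = 0.5 * (edges[:-1] + edges[1:])
print(mask[np.argmin(np.abs(centers))])  # is the valley near 0 in the HDR?
```

Here the 90% HDR is a union of two disjoint intervals around the modes, which is precisely the situation where a single symmetric interval misrepresents forecast uncertainty.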

20.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. Error correction models (ECMs) perform better over medium‐term time horizons for series with no moving‐average terms. The results suggest a need to distinguish between ‘sequential' and ‘synchronous' forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号