Similar Literature
20 similar documents found (search time: 31 ms)
1.
Artificial neural network (ANN) combined with signal decomposition methods is effective for long‐term streamflow time series forecasting. ANN is a machine learning method widely used for streamflow time series, and it performs well in forecasting nonstationary series without requiring physical analysis of complex, dynamic hydrological processes. Most studies take multiple factors that determine the streamflow, such as rainfall, as inputs. In this study, a long‐term streamflow forecasting model depending only on the historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is analyzed by ANN. Hybrid models EMD‐ANN, EEMD‐ANN and DWT‐ANN are developed in this study for long‐term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash–Sutcliffe efficiency (NSE) indicate that the proposed EEMD‐ANN method performs better than the EMD‐ANN and DWT‐ANN models, especially in high flow forecasting.
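To make the three cited performance measures concrete, the following is a minimal NumPy sketch of RMSE, MAPE and NSE as they are usually defined; the sample arrays and variable names are hypothetical illustrations, not data from the paper.

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2))

def mape(obs, sim):
    """Mean absolute percentage error (in %); assumes no zero observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.mean(np.abs((obs - sim) / obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical observed vs. forecast daily streamflow
observed = np.array([120.0, 95.0, 310.0, 280.0, 150.0])
forecast = np.array([110.0, 100.0, 290.0, 300.0, 140.0])
print(rmse(observed, forecast), mape(observed, forecast), nse(observed, forecast))
```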

2.
Singular spectrum analysis (SSA) is a powerful nonparametric method in the area of time series analysis that has shown its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is the forecast of new observations. When using SSA for time series forecasting there are several alternative algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, the forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal set of m (< L−1) coefficients in the linear combination of the recurrent SSA. Our results support the idea of using this new parsimonious recurrent forecasting model instead of the standard recurrent SSA forecasting model.
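The following is a minimal NumPy sketch of the standard recurrent SSA forecast described above (each new value written as a linear combination of the previous L−1 reconstructed values), not the authors' parsimonious m-coefficient variant; the window length L, number of eigentriples r, and the test series are assumptions for illustration.

```python
import numpy as np

def ssa_recurrent_forecast(y, L, r, steps=1):
    """Standard recurrent SSA forecast: reconstruct the series from the first r
    eigentriples, then extend it with the linear recurrence of order L-1."""
    y = np.asarray(y, float)
    N = len(y)
    K = N - L + 1
    # Trajectory (Hankel) matrix, L x K, with X[i, j] = y[i + j]
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-r approximation of the trajectory matrix
    Xr = (U[:, :r] * s[:r]) @ Vt[:r, :]
    # Diagonal averaging (Hankelization) back to a series of length N
    rec = np.array([np.mean(Xr[::-1, :].diagonal(k)) for k in range(-L + 1, K)])
    # Recurrent coefficients from the last components of the leading eigenvectors
    P = U[:, :r]
    pi = P[-1, :]                        # last coordinates of the eigenvectors
    nu2 = np.sum(pi ** 2)
    R = (P[:-1, :] @ pi) / (1.0 - nu2)   # L-1 coefficients of the recurrence
    # Iterate the recurrence to produce forecasts
    series = list(rec)
    for _ in range(steps):
        series.append(np.dot(R, series[-(L - 1):]))
    return np.array(series[N:])

# Hypothetical usage on a noisy periodic series
t = np.arange(200)
y = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(200)
print(ssa_recurrent_forecast(y, L=48, r=2, steps=12))
```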

3.
In this paper, an optimized multivariate singular spectrum analysis (MSSA) approach is proposed to find leading indicators of cross‐industry relations between 24 monthly, seasonally unadjusted industrial production (IP) series for the German, French, and UK economies. Both recurrent and vector forecasting algorithms of horizontal MSSA (HMSSA) are considered. The results from the proposed multivariate approach are compared with those obtained via the optimized univariate singular spectrum analysis (SSA) forecasting algorithm to determine the statistical significance of each outcome. The data are rigorously tested for normality, the seasonal unit root hypothesis, and structural breaks. The results are presented such that users can not only identify the most appropriate model based on the aim of the analysis, but also easily identify the leading indicators for each IP variable in each country. Our findings show that, for all three countries, forecasts from the proposed MSSA algorithm outperform the optimized SSA algorithm in over 70% of cases. Accordingly, this new approach succeeds in identifying leading indicators and is a viable option for selecting the SSA choices L and r, which minimizes a loss function.

4.
In recent years the singular spectrum analysis (SSA) technique has been further developed and applied to many practical problems. The aim of this research is to extend and apply the SSA method, using the UK Industrial Production series. The performance of the SSA and multivariate SSA (MSSA) techniques was assessed by applying them to eight series measuring the monthly seasonally unadjusted industrial production for the main sectors of the UK economy. The results are compared with those obtained using the autoregressive integrated moving average and vector autoregressive models. We also develop the concept of a causal relationship between two time series based on SSA techniques. We introduce several criteria which characterize this causality. The criteria and tests are based on the forecasting accuracy and predictability of the direction of change. The proposed tests are then applied and examined using the UK industrial production series. Copyright © 2012 John Wiley & Sons, Ltd.

5.
The paper develops an oil price forecasting technique which is based on the present value model of rational commodity pricing. The approach suggests shifting the forecasting problem to the marginal convenience yield, which can be derived from the cost‐of‐carry relationship. In a recursive out‐of‐sample analysis, forecast accuracy at horizons within one year is checked by the root mean squared error as well as the mean error and the frequency of a correct direction‐of‐change prediction. For all criteria employed, the proposed forecasting tool outperforms the approach of using futures prices as direct predictors of future spot prices. Vis‐à‐vis the random‐walk model, it does not significantly improve forecast accuracy but provides valuable statements on the direction of change. Copyright © 2007 John Wiley & Sons, Ltd.

6.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning‐curve context. Copyright © 1999 John Wiley & Sons, Ltd.
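The "fitted learning curve" decision aid mentioned above can be illustrated with the standard log-linear learning curve, hours = a·units^b, fitted by least squares in log space and projected forward; the data, the 80% learning rate, and all names below are hypothetical, not from the study.

```python
import numpy as np

# Hypothetical unit-hour observations for successive cumulative units produced
units = np.arange(1, 21)                                              # cumulative output 1..20
hours = 100.0 * units ** -0.322 * np.exp(0.05 * np.random.randn(20))  # ~80% curve plus noise

# Fit the log-linear learning curve  hours = a * units**b  by OLS in log space
b, log_a = np.polyfit(np.log(units), np.log(hours), 1)
a = np.exp(log_a)

# Project the curve over a forecasting horizon (the "decision aid" projections)
future_units = np.arange(21, 41)
projection = a * future_units ** b
print(f"learning rate ~ {2 ** b:.2%} (80% means each doubling of output cuts hours to 80%)")
print(projection[:5])
```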

7.
Estimation of the value at risk (VaR) requires prediction of the future volatility. Whereas this is a simple task in ARCH and related models, it becomes much more complicated in stochastic volatility (SV) processes, where the volatility is a function of a latent, unobservable variable. In-sample (present and past values) and out-of-sample (future values) predictions of that unobservable variable are thus necessary. This paper proposes singular spectrum analysis (SSA), which is a fully nonparametric technique that can be used for both purposes. A combination of traditional forecasting techniques and SSA is also considered to estimate the VaR. Their performance is assessed in an extensive Monte Carlo study and in an application to a daily series of S&P 500 returns.
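As a reminder of how a volatility forecast feeds into a VaR figure, here is a minimal sketch assuming conditionally Gaussian returns; the volatility value, confidence level and position size are hypothetical and the Gaussian assumption is an illustration, not the paper's SV setting.

```python
from scipy.stats import norm

def value_at_risk(sigma_forecast, mu=0.0, alpha=0.05, position=1.0):
    """One-period VaR at level alpha under (conditionally) normal returns:
    the loss threshold exceeded with probability alpha."""
    return -position * (mu + sigma_forecast * norm.ppf(alpha))

# Hypothetical: predicted daily volatility of 1.8% on a 1,000,000 position
print(value_at_risk(sigma_forecast=0.018, alpha=0.05, position=1_000_000))
```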

8.
Although both direct multi‐step‐ahead forecasting and iterated one‐step‐ahead forecasting are two popular methods for predicting future values of a time series, it is not clear that the direct method is superior in practice, even though from a theoretical perspective it has lower mean squared error (MSE). A given model can be fitted according to either a multi‐step or a one‐step forecast error criterion, and we show here that discrepancies in performance between direct and iterative forecasting arise chiefly from the method of fitting, and are dictated by the nuances of the model's misspecification. We derive new formulas for quantifying iterative forecast MSE, and present a new approach for assessing asymptotic forecast MSE. Finally, the direct and iterative methods are compared on a retail series, which illustrates the strengths and weaknesses of each approach. Copyright © 2015 John Wiley & Sons, Ltd.
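The distinction between the two fitting criteria can be illustrated with a minimal AR(1)-style sketch: the same h-step forecast is produced either by iterating a one-step least-squares fit or by regressing y[t+h] directly on y[t]. Everything below (no intercept, a well-specified AR(1) data-generating process) is a simplifying assumption for illustration, not the paper's setting.

```python
import numpy as np

def iterated_vs_direct(y, h):
    """h-step forecasts from (a) an AR(1) fitted on the one-step criterion and
    iterated h times, and (b) a direct regression of y[t+h] on y[t]."""
    y = np.asarray(y, float)
    x, y1 = y[:-1], y[1:]
    phi_one = np.dot(x, y1) / np.dot(x, x)       # one-step least-squares fit
    xh, yh = y[:-h], y[h:]
    phi_dir = np.dot(xh, yh) / np.dot(xh, xh)    # direct h-step least-squares fit
    last = y[-1]
    return phi_one ** h * last, phi_dir * last   # iterated vs. direct forecast

# Hypothetical AR(1) data
rng = np.random.default_rng(0)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()
print(iterated_vs_direct(y, h=4))
```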

9.
Recent research suggests that non-linear methods cannot improve the point forecasts of high-frequency exchange rates. These studies have used standard forecasting criteria such as smallest mean squared error (MSE) and smallest mean absolute error (MAE). It is, however, premature to conclude from this evidence that non-linear forecasts of high-frequency financial returns are economically or statistically insignificant. We prove a proposition which implies that the standard forecasting criteria are not necessarily well suited for assessing the economic value of predictions of non-linear processes, where the predicted value and the prediction error may not be independently distributed. Applying a simple non-linear forecasting procedure to 15 daily exchange rate series, we find that although, when compared to simple random walk forecasts, all the non-linear forecasts give a higher MSE and MAE, when applied in a simple trading strategy these forecasts result in a higher mean return. It is also shown that the ranking of portfolio payoffs based on forecasts from a random walk, and linear and non-linear models, is closely related to a non-parametric test of market timing.

10.
Most non‐linear techniques give good in‐sample fits to exchange rate data but are usually outperformed by random walks or random walks with drift when used for out‐of‐sample forecasting. In the case of regime‐switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide some analytical results for the case of a simple switching model, the segmented trend model. It requires only a small misclassification, when forecasting which regime the world will be in, to lose any advantage from knowing the correct model specification. To illustrate this we discuss some results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations to the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that the mean square error is not an appropriate way to evaluate forecast performance for non‐linear models. Copyright © 1999 John Wiley & Sons, Ltd.

11.
This study investigates the forecasting performance of the GARCH(1,1) model by adding an effective covariate. Based on the assumption that many volatility predictors are available to help forecast the volatility of a target variable, this study shows how to construct a covariate from these predictors and plug it into the GARCH(1,1) model. This study presents a method of building a covariate such that it contains the maximum possible amount of information from the predictors for forecasting volatility. The loading of the covariate constructed by the proposed method is simply the eigenvector of a matrix. The proposed method enjoys the advantages of easy implementation and interpretation. Simulations and empirical analysis verify that the proposed method performs better than other methods for forecasting the volatility, and the results are quite robust to model misspecification. Specifically, the proposed method reduces the mean square error of the GARCH(1,1) model by 30% for forecasting the volatility of the S&P 500 Index. The proposed method is also useful in improving the volatility forecasting of several GARCH‐family models and for forecasting the value‐at‐risk. Copyright © 2013 John Wiley & Sons, Ltd.
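The sketch below shows only the general mechanics of a covariate-augmented GARCH(1,1) variance recursion; the principal-component style loading is used as a stand-in for the paper's specific eigenvector-based construction, and all parameter values, data and function names are hypothetical.

```python
import numpy as np

def garch11_x_variance(returns, covariate, omega, alpha, beta, gamma):
    """Conditional variance of a GARCH(1,1) recursion augmented with an exogenous
    covariate: sigma2[t] = omega + alpha*eps[t-1]**2 + beta*sigma2[t-1] + gamma*x[t-1].
    Parameters are fixed here for illustration; in practice they are estimated
    by maximum likelihood."""
    eps = np.asarray(returns, float) - np.mean(returns)
    x = np.asarray(covariate, float)
    sigma2 = np.empty(len(eps))
    sigma2[0] = np.var(eps)
    for t in range(1, len(eps)):
        sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1] + gamma * x[t - 1]
    return sigma2

# A leading-eigenvector (principal-component style) loading combines several
# hypothetical volatility predictors into one covariate.
rng = np.random.default_rng(0)
predictors = np.abs(rng.standard_normal((1000, 5)))        # hypothetical predictor panel
w = np.linalg.eigh(np.cov(predictors.T))[1][:, -1]          # leading eigenvector as loading
covariate = predictors @ w
returns = 0.01 * rng.standard_normal(1000)
print(garch11_x_variance(returns, covariate, omega=1e-6, alpha=0.05, beta=0.90, gamma=1e-6)[-5:])
```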

12.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat‐tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first‐order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value‐at‐risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.

13.
We develop a small model for forecasting inflation for the euro area using quarterly data over the period June 1973 to March 1999. The model is used to provide inflation forecasts from June 1999 to March 2002. We compare the forecasts from our model with those derived from six competing forecasting models, including autoregressions, vector autoregressions and Phillips‐curve based models. A considerable gain in forecasting performance is demonstrated using a relative root mean squared error criterion and the Diebold–Mariano test to make forecast comparisons. Copyright © 2006 John Wiley & Sons, Ltd.
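The Diebold–Mariano comparison cited above can be sketched as follows, assuming squared-error loss and one-step forecasts (so the loss differential is treated as serially uncorrelated; longer horizons would need a HAC variance). The error series below are simulated placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2):
    """Diebold-Mariano statistic for equal predictive accuracy under squared-error
    loss, for one-step forecasts; returns the statistic and a two-sided p-value."""
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    pvalue = 2 * (1 - norm.cdf(abs(dm)))
    return dm, pvalue

# Hypothetical forecast errors from two competing inflation models
rng = np.random.default_rng(1)
errors_a = rng.normal(0, 0.30, 60)
errors_b = rng.normal(0, 0.35, 60)
print(diebold_mariano(errors_a, errors_b))
```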

14.
This paper evaluates the performance of conditional variance models using high‐frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH (1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is weakened, however, by the non‐normality of mean‐adjusted returns, whose normality is assumed in conditional variance models. Nevertheless, the optimum frequency remained the same even in the case of different models (EGARCH and PARCH) and a different error distribution (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations or EGARCH and PARCH models would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.
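A realized-volatility benchmark of the kind used above can be sketched as the sum of squared intraday log returns at a chosen sampling interval; the 30-minute choice, the pandas resampling approach, and the simulated price path are assumptions for illustration.

```python
import numpy as np
import pandas as pd

def realized_variance(prices, freq="30min"):
    """Daily realized variance: resample intraday prices to the given frequency,
    take log returns, and sum their squares within each trading day."""
    p = prices.resample(freq).last().dropna()
    r = np.log(p).diff().dropna()
    return (r ** 2).groupby(r.index.date).sum()

# Hypothetical one-day path of 1-minute prices
idx = pd.date_range("2024-01-02 09:30", periods=390, freq="1min")
prices = pd.Series(100 * np.exp(np.cumsum(0.0002 * np.random.randn(len(idx)))), index=idx)
print(realized_variance(prices, freq="30min"))
```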

15.
It has been suggested that a major problem for window selection when we estimate models for forecasting is to empirically determine the timing of the break. However, if the window choice between post‐break or full sample is based on mean square forecast error ratios, it is difficult to understand why such a problem arises since break detectability and these ratios seem to have the same determinants. This paper analyses this issue first for the expected values in conditional models and then by Monte Carlo simulations for more general cases. Results show similar behaviour between rejection frequencies and the ratios but only for break tests that do not take into account forecasting error covariances, as is the case with mean square forecast error measures. Moreover, the asymmetric shape of the frequency distribution of the ratios could help us to better grasp empirical problems. An illustration using actual data is given. Copyright © 2011 John Wiley & Sons, Ltd.

16.
In this paper we compare several multi‐period volatility forecasting models, specifically from MIDAS and HAR families. We perform our comparisons in terms of out‐of‐sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra‐daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.
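The HAR family referred to above can be sketched with the standard HAR-RV regression of next-day realized variance on its daily, weekly (5-day) and monthly (22-day) averages; the regressor could equally be a jump-robust measure such as bipower variation, as discussed. The realized-variance series and lag conventions below are assumptions for illustration.

```python
import numpy as np

def har_design(rv):
    """Build HAR-RV regressors (constant, daily, 5-day and 22-day averages of
    realized variance) aligned with the next-day target."""
    rv = np.asarray(rv, float)
    rows, target = [], []
    for t in range(22, len(rv) - 1):
        rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()])
        target.append(rv[t + 1])
    return np.array(rows), np.array(target)

# Hypothetical realized-variance series; OLS gives the HAR coefficients
rv = np.abs(np.random.randn(600)) * 1e-4
X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# One-day-ahead forecast from the regressors observed on the final day
t = len(rv) - 1
x_new = np.array([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()])
print(beta, x_new @ beta)
```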

17.
In this paper, we present a comparison between the forecasting performances of the normalization and variance stabilization method (NoVaS) and the GARCH(1,1), EGARCH(1,1) and GJR‐GARCH(1,1) models. The aim of this study is to compare the out‐of‐sample forecasting performances of the models used throughout the study and to show that the NoVaS method is better than GARCH(1,1)‐type models in this respect. We study the out‐of‐sample forecasting performances of GARCH(1,1)‐type models and the NoVaS method based on the generalized error distribution, rather than the normal and Student's t‐distributions. The study is also distinguished by its use of return series calculated both logarithmically and arithmetically when comparing forecasting performance. For the comparison of out‐of‐sample forecasting performances, we focus on different datasets, namely the S&P 500 and the logarithmic and arithmetic BIST 100 return series. The key result of our analysis is that the NoVaS method delivers better out‐of‐sample forecasting performance than GARCH(1,1)‐type models. The result can offer useful guidance in model building for out‐of‐sample forecasting purposes, aimed at improving forecasting accuracy.

18.
Forecasting methods are often evaluated by means of simulation studies. For intermittent demand items there are often very few non-zero observations, so it is hard to check any assumptions, because the statistical information is often too weak to determine, for example, the distribution of a variable. Therefore, it seems important to verify the forecasting methods on the basis of real data. The main aim of the article is an empirical verification of several forecasting methods applicable in the case of intermittent demand. Some items are sold only in specific subperiods (in a given month in each year, for example), but most forecasting methods (such as Croston's method) give non-zero forecasts for all periods. For example, summer work clothes should have non-zero forecasts only for summer months, and many methods will usually provide non-zero forecasts for all months under consideration. This was the motivation for proposing and testing a new forecasting technique which can be applicable to seasonal items. In the article six methods were applied to construct separate forecasting systems: Croston's, SBA (Syntetos–Boylan Approximation), TSB (Teunter, Syntetos, Babai), MA (Moving Average), SES (Simple Exponential Smoothing) and SESAP (Simple Exponential Smoothing for Analogous subPeriods). The latter method (SESAP) is the authors' proposal, designed for companies facing the problem of seasonal items. By analogous subperiods we mean the same subperiods in each year, for example, the same months in each year. A data set from a real company was used to apply all the above forecasting procedures. That data set contained monthly time series for about nine thousand products. Forecast accuracy was tested by means of both parametric and non-parametric measures. The scaled mean and the scaled root mean squared error were used to check biasedness and efficiency. Also, the mean absolute scaled error and the shares of best forecasts were estimated. The general conclusion is that in the analyzed company a forecasting system should be based on two forecasting methods: TSB and SESAP, but the latter method should be applied only to seasonal items (products sold only in specific subperiods). It also turned out that Croston's and SBA methods work worse than much simpler methods, such as SES or MA. The presented analysis might be helpful for enterprises facing the problem of forecasting intermittent items (and seasonal intermittent items as well).
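Croston's method and the SBA correction mentioned above can be sketched as follows; SESAP is the authors' own proposal and is not reproduced here. The smoothing constant, initialization choices and demand series are assumptions for illustration.

```python
import numpy as np

def croston(demand, alpha=0.1, sba=False):
    """Croston's intermittent-demand forecast: exponentially smooth the non-zero
    demand sizes (z) and the intervals between them (p) separately; the per-period
    forecast is z/p, multiplied by (1 - alpha/2) for the SBA variant."""
    demand = np.asarray(demand, float)
    nz = np.flatnonzero(demand)
    if nz.size == 0:
        return 0.0
    z = demand[nz[0]]        # smoothed non-zero demand size
    p = nz[0] + 1.0          # smoothed inter-demand interval
    prev = nz[0]
    for t in nz[1:]:
        z = z + alpha * (demand[t] - z)
        p = p + alpha * ((t - prev) - p)
        prev = t
    forecast = z / p
    return forecast * (1 - alpha / 2) if sba else forecast

# Hypothetical monthly demand for a slow-moving item
demand = [0, 0, 3, 0, 0, 0, 2, 0, 5, 0, 0, 4]
print(croston(demand), croston(demand, sba=True))
```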

19.
This paper presents a comparative analysis of linear and mixed models for short‐term forecasting of a real data series with a high percentage of missing data. Data are the series of significant wave heights registered at regular periods of three hours by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models and the mixed models have a linear component and a non‐linear seasonal component. The non‐linear component is estimated by a non‐parametric regression of data versus time. Short‐term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behaviour. Copyright © 1999 John Wiley & Sons, Ltd.

20.
It has been acknowledged that wavelets can constitute a useful tool for forecasting in economics. Through a wavelet multi‐resolution analysis, a time series can be decomposed into different timescale components and a model can be fitted to each component to improve the forecast accuracy of the series as a whole. Up to now, the literature on forecasting with wavelets has mainly focused on univariate modelling. On the other hand, in a context of growing data availability, a line of research has emerged on forecasting with large datasets. In particular, the use of factor‐augmented models has become quite widespread in the literature and among practitioners. The aim of this paper is to bridge the two strands of the literature. A wavelet approach for factor‐augmented forecasting is proposed and put to the test for forecasting GDP growth for the major euro area countries. The results show that the forecasting performance is enhanced when wavelets and factor‐augmented models are used together. Copyright © 2010 John Wiley & Sons, Ltd.
