Similar Documents
1.
The forecasting capabilities of feed-forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non-Gaussian characteristics. To compare the forecasting accuracy of an FFNN model with that of an alternative model, Pitman's test is employed to ascertain whether one model forecasts significantly better than another when generating one-step-ahead forecasts. Moreover, the residual-fit spread plot is utilized in a novel fashion in this paper to visually compare the out-of-sample forecasts of two alternative forecasting models. Finally, the forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
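For readers who want to replicate the pairwise comparison described above, Pitman's test for one-step-ahead forecast errors reduces to testing whether the sum and the difference of the paired errors are uncorrelated, which (for zero-mean errors) implies equal error variances. The sketch below is a generic illustration of that test, not code from the paper; the t approximation and the toy error series are assumptions.

```python
import numpy as np
from scipy import stats

def pitman_test(e1, e2):
    """Pitman's test for equal forecast-error variances, based on the
    correlation between the sum and the difference of paired errors."""
    e1, e2 = np.asarray(e1, float), np.asarray(e2, float)
    s, d = e1 + e2, e1 - e2
    r, _ = stats.pearsonr(s, d)                 # r != 0 <=> unequal variances
    n = len(e1)
    t = r * np.sqrt((n - 2) / (1 - r ** 2))     # t statistic with n - 2 df
    p = 2 * stats.t.sf(abs(t), df=n - 2)
    return r, t, p

# toy usage: errors from two one-step-ahead forecasters on the same hold-out set
rng = np.random.default_rng(0)
e_ffnn = rng.normal(0.0, 1.0, 100)              # hypothetical FFNN errors
e_arma = rng.normal(0.0, 1.5, 100)              # hypothetical ARMA errors
print(pitman_test(e_ffnn, e_arma))
```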

2.
Using a structural time-series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Of specific importance is whether the Henderson moving-average procedure distorts the underlying time-series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment processes used by various statistical agencies, this study hopes to address the dearth of literature on 'trending' procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by 'detrending' the trended forecasts and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of the significance of the difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to a deterioration in forecasting accuracy for Australian macroeconomic variables on most occasions, though the conclusions are very different between the one-step-ahead and multi-step-ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.

3.
M-competition studies provide a set of stylized recommendations to enhance forecast reliability. However, no single method dominates across series, leading to consideration of the relationship between selected data characteristics and the reliability of alternative forecast methods. This study conducts an analysis of predictive accuracy in relation to Internet bandwidth loads. Extrapolation techniques that perform best in M-competitions perform relatively poorly in predicting Internet bandwidth loads. Such performance is attributed to Internet bandwidth data exhibiting considerably less structure than M-competition data. Copyright © 2005 John Wiley & Sons, Ltd.

4.
In this paper we present an intelligent decision-support system based on neural network technology for model selection and forecasting. While most of the literature on the application of neural networks in forecasting addresses the use of neural network technology as an alternative forecasting tool, limited research has focused on its use for selection of forecasting methods based on time-series characteristics. In this research, a neural network-based decision support system is presented as a method for forecast model selection. The neural network approach provides a framework for directly incorporating time-series characteristics into the model-selection phase. Using a neural network, a forecasting group is initially selected for a given data set, based on a set of time-series characteristics. Then, using an additional neural network, a specific forecasting method is selected from a pool of three candidate methods. The results of training and testing of the networks are presented along with conclusions. Copyright © 1999 John Wiley & Sons, Ltd.

5.
This paper analyzes the relative performance of multi-step AR forecasting methods in the presence of breaks and data revisions. Our Monte Carlo simulations indicate that the type and timing of the break affect the relative accuracy of the methods. The iterated autoregressive method typically produces more accurate point and density forecasts than the alternative multi-step AR methods in unstable environments, especially if the parameters are subject to small breaks. This result holds regardless of whether data revisions add news or reduce noise. Empirical analysis of real-time US output and inflation series shows that the alternative multi-step methods only episodically improve upon the iterated method.
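The two multi-step strategies compared in this paper can be illustrated with a small ordinary-least-squares sketch: the iterated method fits a one-step AR(p) and feeds its own forecasts back in, while the direct method regresses the target at horizon h on lags dated t and earlier. This is only a generic illustration of the two estimators, not the paper's Monte Carlo design; the AR(2) data-generating process and the lag order are assumptions.

```python
import numpy as np

def iterated_forecast(y, p, h):
    """Fit a one-step AR(p) by OLS, then iterate the fitted map h times."""
    T = len(y)
    X = np.column_stack([np.ones(T - p)] + [y[p - k:T - k] for k in range(1, p + 1)])
    beta = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    hist = list(y)
    for _ in range(h):
        recent = np.array(hist[-1:-p - 1:-1])        # most recent p values, newest first
        hist.append(beta[0] + beta[1:] @ recent)     # feed forecasts back in
    return hist[-1]

def direct_forecast(y, p, h):
    """Regress y_t on a constant and lags dated t-h, ..., t-h-p+1 (direct h-step)."""
    T = len(y)
    X = np.column_stack([np.ones(T - h - p + 1)] +
                        [y[p - 1 - k:T - h - k] for k in range(p)])
    beta = np.linalg.lstsq(X, y[p + h - 1:], rcond=None)[0]
    return beta[0] + beta[1:] @ y[-1:-p - 1:-1]

rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(2, 500):                              # toy stationary AR(2)
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
print(iterated_forecast(y, p=2, h=4), direct_forecast(y, p=2, h=4))
```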

6.
It has been acknowledged that wavelets can constitute a useful tool for forecasting in economics. Through a wavelet multi-resolution analysis, a time series can be decomposed into different timescale components, and a model can be fitted to each component to improve the forecast accuracy of the series as a whole. Up to now, the literature on forecasting with wavelets has mainly focused on univariate modelling. On the other hand, in a context of growing data availability, a line of research has emerged on forecasting with large datasets. In particular, the use of factor-augmented models has become quite widespread in the literature and among practitioners. The aim of this paper is to bridge the two strands of the literature. A wavelet approach for factor-augmented forecasting is proposed and put to the test for forecasting GDP growth for the major euro area countries. The results show that the forecasting performance is enhanced when wavelets and factor-augmented models are used together. Copyright © 2010 John Wiley & Sons, Ltd.
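The decompose-then-forecast idea behind the wavelet approach can be sketched as follows: each multi-resolution component is modelled separately and the component forecasts are summed. The sketch below is a simplified univariate illustration, assuming the PyWavelets package (pywt) and an AR(1) per component; the paper's actual specification additionally augments each component model with estimated factors, which is not shown here.

```python
import numpy as np
import pywt  # PyWavelets (assumed available)

def mra_components(y, wavelet="db4", level=3):
    """Additive multi-resolution components (smooth plus details, coarse to fine)."""
    coeffs = pywt.wavedec(y, wavelet, level=level, mode="periodization")
    comps = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(keep, wavelet, mode="periodization")[:len(y)])
    return comps                                   # components sum back to the series

def ar1_forecast(x):
    """One-step AR(1) forecast by OLS, fitted to a single component."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    b = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    return b[0] + b[1] * x[-1]

y = np.cumsum(np.random.default_rng(2).normal(size=256))   # toy stand-in series
print(sum(ar1_forecast(c) for c in mra_components(y)))     # recombined component forecasts
```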

7.
This paper considers the problem of forecasting high-dimensional time series. It employs a robust clustering approach to perform classification of the component series. Each series within a cluster is assumed to follow the same model and the data are then pooled for estimation. The classification is model-based and robust to outlier contamination. The robustness is achieved by using the intrinsic mode functions of the Hilbert-Huang transform at lower frequencies. These functions are found to be robust to outlier contamination. The paper also compares out-of-sample forecast performance of the proposed method with several methods available in the literature. The other forecasting methods considered include vector autoregressive models with/without LASSO, group LASSO, principal component regression, and partial least squares. The proposed method is found to perform well in out-of-sample forecasting of the monthly unemployment rates of 50 US states. Copyright © 2013 John Wiley & Sons, Ltd.

8.
Time series of categorical data are not a widely studied research topic. In particular, there is no available work on the Bayesian analysis of categorical time series processes. With the objective of filling that gap, in the present paper we consider the problem of Bayesian analysis, including Bayesian forecasting, for time series of categorical data, which is modelled by Pegram's mixing operator, applicable to both ordinal and nominal data structures. In particular, we consider Pegram's operator-based autoregressive process for the analysis. Real datasets on infant sleep status are analysed for illustration. We also illustrate that Bayesian forecasting is more accurate than the corresponding frequentist approach when forecasting a large time gap ahead. Copyright © 2016 John Wiley & Sons, Ltd.
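Pegram's mixing operator gives the categorical AR(1) a very simple conditional distribution: with probability phi the previous category is repeated, otherwise a fresh draw is taken from the marginal distribution. The sketch below simulates that process and computes the plug-in one-step forecast distribution; the Bayesian estimation used in the paper is not shown, and the marginal probabilities and dependence parameter are toy values.

```python
import numpy as np

def simulate_pegram_ar1(n, pi, phi, rng=None):
    """Pegram AR(1): P(X_t = j | X_{t-1}) = phi * 1{X_{t-1} = j} + (1 - phi) * pi_j."""
    rng = np.random.default_rng() if rng is None else rng
    cats = np.arange(len(pi))
    x = np.empty(n, dtype=int)
    x[0] = rng.choice(cats, p=pi)
    for t in range(1, n):
        x[t] = x[t - 1] if rng.random() < phi else rng.choice(cats, p=pi)
    return x

def one_step_forecast_probs(x_last, pi, phi):
    """Conditional (plug-in) forecast distribution for the next category."""
    probs = (1 - phi) * np.asarray(pi, float)
    probs[x_last] += phi
    return probs

pi, phi = [0.5, 0.3, 0.2], 0.6                 # toy marginal distribution and dependence
x = simulate_pegram_ar1(300, pi, phi, np.random.default_rng(3))
print(x[-1], one_step_forecast_probs(x[-1], pi, phi))
```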

9.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time-series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements upon standard time-series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
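A minimal version of the diffusion-index idea described above: extract a few principal-component factors from a standardized panel and regress the h-step-ahead target on the current factors plus its own current value. The sketch assumes a Stock-Watson-style principal-components estimator and uses a random panel as a stand-in for the UK data set; it is not the paper's exact model.

```python
import numpy as np

def estimated_factors(X, r):
    """Principal-component factor estimates from a T x N panel (Stock-Watson style)."""
    Z = (X - X.mean(0)) / X.std(0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    share = (s[:r] ** 2).sum() / (s ** 2).sum()        # variance explained by r factors
    return U[:, :r] * np.sqrt(len(Z)), share

def diffusion_index_forecast(y, F, h):
    """Regress y_{t+h} on current factors and y_t, then forecast from the last observation."""
    X = np.column_stack([np.ones(len(y) - h), F[:-h], y[:-h]])
    b = np.linalg.lstsq(X, y[h:], rcond=None)[0]
    return b @ np.concatenate([[1.0], F[-1], [y[-1]]])

rng = np.random.default_rng(4)
panel = rng.normal(size=(200, 80))                     # random stand-in for ~80 macro series
y = panel[:, 0]                                        # stand-in target variable
F, share = estimated_factors(panel, r=6)
print(f"variance share explained by 6 factors: {share:.2f}")
print("h=1 forecast:", diffusion_index_forecast(y, F, h=1))
```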

10.
Bayesian methods for assessing the accuracy of dynamic financial value-at-risk (VaR) forecasts have not been considered in the literature. Such methods are proposed in this paper. Specifically, Bayes factor analogues are developed of the popular frequentist tests for independence of violations and for correct coverage of a time series of dynamic quantile forecasts. To evaluate the relevant marginal likelihoods, analytic integration methods are utilized when possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, showing that they compare favourably both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for VaR in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
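As an illustration of the Bayes factor approach, the unconditional-coverage (Kupiec-style) test has a simple analytic analogue: compare the binomial marginal likelihood of the violation series at the nominal VaR level with its marginal likelihood under a Beta prior. The sketch below assumes a uniform Beta(1, 1) prior and a toy violation series; the independence and joint tests developed in the paper are not shown.

```python
import numpy as np
from scipy.special import betaln

def bayes_factor_coverage(violations, alpha, a=1.0, b=1.0):
    """Bayes factor analogue of the unconditional-coverage VaR backtest.

    H0: violation probability equals the nominal level alpha.
    H1: violation probability ~ Beta(a, b).
    Returns BF_01 (values above 1 favour correct coverage)."""
    v = np.asarray(violations, dtype=int)              # 1 = VaR exceedance, 0 = none
    n, x = len(v), int(v.sum())
    log_m0 = x * np.log(alpha) + (n - x) * np.log1p(-alpha)
    log_m1 = betaln(a + x, b + n - x) - betaln(a, b)   # binomial coefficient cancels
    return float(np.exp(log_m0 - log_m1))

rng = np.random.default_rng(5)
hits = rng.random(500) < 0.01                          # toy violation series at a 1% VaR level
print(bayes_factor_coverage(hits, alpha=0.01))
```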

11.
For forecasting nonstationary and nonlinear energy price time series, a novel adaptive multiscale ensemble learning paradigm incorporating ensemble empirical mode decomposition (EEMD), particle swarm optimization (PSO) and least squares support vector machines (LSSVM) with a kernel function prototype is developed. Firstly, extrema-symmetry-expansion EEMD, which can effectively restrain mode mixing and end effects, is used to decompose the energy price into simple modes. Secondly, by using the fine-to-coarse reconstruction algorithm, the high-frequency, low-frequency and trend components are identified. Furthermore, an autoregressive integrated moving average (ARIMA) model is applied to predict the high-frequency components, while LSSVM is suited to forecasting the low-frequency and trend components. At the same time, a universal kernel function prototype is introduced to make up for the drawbacks of a single kernel function; it can adaptively select the optimal kernel function type and model parameters for the specific data using the PSO algorithm. Finally, the prediction results of all the components are aggregated into the forecast values of the energy price time series. The empirical results show that, compared with popular prediction methods, the proposed method can significantly improve the prediction accuracy of energy prices, with high accuracy in both level and directional predictions. Copyright © 2016 John Wiley & Sons, Ltd.
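A heavily simplified sketch of the decomposition-ensemble workflow: decompose the price series into modes, forecast each mode with a model suited to its timescale, and sum the component forecasts. It assumes the PyEMD package for EEMD and substitutes a plain AR(1) for ARIMA and scikit-learn's SVR for LSSVM; the extrema-symmetry expansion, the fine-to-coarse reconstruction and the PSO-based kernel selection of the paper are all omitted.

```python
import numpy as np
from PyEMD import EEMD           # assumed: the PyEMD (EMD-signal) package
from sklearn.svm import SVR      # plain SVR as a stand-in for LSSVM

def ar1_forecast(x):
    """One-step AR(1) forecast by OLS (stand-in for ARIMA on high-frequency modes)."""
    X = np.column_stack([np.ones(len(x) - 1), x[:-1]])
    b = np.linalg.lstsq(X, x[1:], rcond=None)[0]
    return b[0] + b[1] * x[-1]

def svr_forecast(x, lags=3):
    """One-step SVR forecast on lagged values (stand-in for LSSVM)."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return SVR(kernel="rbf").fit(X, x[lags:]).predict(x[-lags:].reshape(1, -1))[0]

rng = np.random.default_rng(6)
price = 60 + np.cumsum(rng.normal(0.05, 1.0, 400))       # toy energy-price series
imfs = EEMD(trials=50).eemd(price)                       # modes ordered fine to coarse
half = len(imfs) // 2
forecast = sum(ar1_forecast(c) if i < half else svr_forecast(c)
               for i, c in enumerate(imfs))              # aggregate component forecasts
print(forecast)                                          # (EEMD reconstruction error ignored)
```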

12.
Forecasting methods are often evaluated by means of simulation studies. For intermittent demand items there are often very few non-zero observations, so it is hard to check any assumptions, because the statistical information is often too weak to determine, for example, the distribution of a variable. Therefore, it seems important to verify forecasting methods on the basis of real data. The main aim of the article is an empirical verification of several forecasting methods applicable to intermittent demand. Some items are sold only in specific subperiods (in a given month of each year, for example), but most forecasting methods (such as Croston's method) give non-zero forecasts for all periods. For example, summer work clothes should have non-zero forecasts only for summer months, yet many methods will usually provide non-zero forecasts for all months under consideration. This was the motivation for proposing and testing a new forecasting technique applicable to seasonal items. In the article six methods were applied to construct separate forecasting systems: Croston's, SBA (Syntetos-Boylan Approximation), TSB (Teunter, Syntetos, Babai), MA (Moving Average), SES (Simple Exponential Smoothing) and SESAP (Simple Exponential Smoothing for Analogous subPeriods). The latter method (SESAP) is the author's proposal, dedicated to companies facing the problem of seasonal items. Analogous subperiods are understood as the same subperiods in each year, for example the same months in each year. A data set from a real company was used to apply all the above forecasting procedures. That data set contained monthly time series for about nine thousand products. The forecast accuracy was tested by means of both parametric and non-parametric measures. The scaled mean and the scaled root mean squared error were used to check biasedness and efficiency. Also, the mean absolute scaled error and the shares of best forecasts were estimated. The general conclusion is that in the analyzed company a forecasting system should be based on two forecasting methods: TSB and SESAP, but the latter method should be applied only to seasonal items (products sold only in specific subperiods). It also turned out that Croston's and SBA methods work worse than much simpler methods, such as SES or MA. The presented analysis might be helpful for enterprises facing the problem of forecasting intermittent items (and seasonal intermittent items as well).
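Three of the intermittent-demand methods compared in the article can be stated in a few lines each; the sketch below gives generic implementations of Croston's method, the SBA correction and TSB. The smoothing constants and initialization choices are assumptions, and SESAP (the author's seasonal proposal) is not reproduced here.

```python
import numpy as np

def croston(demand, alpha=0.1, variant="croston"):
    """Croston-type one-step forecast for intermittent demand.
    variant="sba" applies the Syntetos-Boylan bias correction (1 - alpha/2)."""
    d = np.asarray(demand, dtype=float)
    nz = np.flatnonzero(d)
    z = d[nz[0]]                    # smoothed demand size (initialized at first demand)
    p = float(nz[0]) + 1.0          # smoothed inter-demand interval (initialization choice)
    q = 0.0                         # periods since the last demand
    for t in range(nz[0] + 1, len(d)):
        q += 1.0
        if d[t] > 0:                # updates happen only when demand occurs
            z += alpha * (d[t] - z)
            p += alpha * (q - p)
            q = 0.0
    f = z / p
    return f * (1 - alpha / 2) if variant == "sba" else f

def tsb(demand, alpha=0.1, beta=0.1):
    """TSB (Teunter-Syntetos-Babai): the demand probability is updated every period."""
    d = np.asarray(demand, dtype=float)
    prob, size = float(d[0] > 0), d[d > 0][0]
    for t in range(1, len(d)):
        prob += beta * (float(d[t] > 0) - prob)
        if d[t] > 0:
            size += alpha * (d[t] - size)
    return prob * size

demand = np.array([0, 0, 3, 0, 0, 0, 2, 0, 5, 0, 0, 4])
print(croston(demand), croston(demand, variant="sba"), tsb(demand))
```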

13.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub-optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out-of-sample forecasting performance of various linear and GARCH-type models of volatility is compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.

14.
Although both direct multi-step-ahead forecasting and iterated one-step-ahead forecasting are popular methods for predicting future values of a time series, it is not clear that the direct method is superior in practice, even though from a theoretical perspective it has lower mean squared error (MSE). A given model can be fitted according to either a multi-step or a one-step forecast error criterion, and we show here that discrepancies in performance between direct and iterative forecasting arise chiefly from the method of fitting and are dictated by the nuances of the model's misspecification. We derive new formulas for quantifying iterative forecast MSE, and present a new approach for assessing asymptotic forecast MSE. Finally, the direct and iterative methods are compared on a retail series, which illustrates the strengths and weaknesses of each approach. Copyright © 2015 John Wiley & Sons, Ltd.
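The point that the fitting criterion, rather than the forecasting recursion alone, drives the direct-versus-iterated comparison can be illustrated by fitting the same AR(1) either to the one-step or to the h-step sum of squared iterated forecast errors. The sketch below does exactly that under a deliberately misspecified AR(2) data-generating process; the loss function, optimizer and toy process are assumptions, not the paper's derivations.

```python
import numpy as np
from scipy.optimize import minimize

def hstep_sse(params, y, h):
    """Sum of squared h-step iterated forecast errors for an AR(1) with intercept."""
    c, phi = params
    sse = 0.0
    for t in range(len(y) - h):
        f = y[t]
        for _ in range(h):                 # iterate the one-step map h times
            f = c + phi * f
        sse += (y[t + h] - f) ** 2
    return sse

rng = np.random.default_rng(7)
y = np.zeros(300)
for t in range(2, 300):                    # true process is AR(2), so the AR(1) is misspecified
    y[t] = 0.6 * y[t - 1] - 0.25 * y[t - 2] + rng.normal()

fit_1 = minimize(hstep_sse, x0=[0.0, 0.3], args=(y, 1)).x
fit_4 = minimize(hstep_sse, x0=[0.0, 0.3], args=(y, 4)).x
print("AR(1) fitted to the 1-step criterion:", fit_1)
print("AR(1) fitted to the 4-step criterion:", fit_4)
```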

15.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi-step-ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is also difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m-step-ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well, both in terms of the accuracy of the results and in its ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
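The core of such a numerical procedure is the Chapman-Kolmogorov recursion evaluated on a grid: start from the one-step Gaussian predictive density and repeatedly integrate it against the transition density. The sketch below illustrates this for a toy nonlinear AR(1) with Gaussian errors using simple trapezoidal integration; the paper's accuracy checks and the specific models it covers are not reproduced.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import trapezoid

def mstep_density(g, sigma, x0, m, grid):
    """Approximate m-step-ahead predictive density of y_t = g(y_{t-1}) + e_t,
    e_t ~ N(0, sigma^2), via the Chapman-Kolmogorov recursion on a grid."""
    p = norm.pdf(grid, loc=g(x0), scale=sigma)                  # one-step density
    for _ in range(m - 1):
        trans = norm.pdf(grid[:, None], loc=g(grid)[None, :], scale=sigma)
        p = trapezoid(trans * p[None, :], grid, axis=1)         # integrate out the state
    return p

g = lambda x: 0.8 * x * np.exp(-0.1 * x ** 2)                   # toy nonlinear AR(1) map
grid = np.linspace(-6.0, 6.0, 601)
dens = mstep_density(g, sigma=1.0, x0=1.5, m=3, grid=grid)
mean = trapezoid(grid * dens, grid)                             # predictive mean
var = trapezoid((grid - mean) ** 2 * dens, grid)                # predictive variance
print(mean, var)
```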

16.
We investigate the prediction of Italian industrial production and first specify a model based on electricity consumption, showing that the cubic trend in such a model mostly captures the evolution over time of the electricity coefficient, which can be well approximated by a smooth transition model, with no gains in predictive power. We also analyse the performance of models based on data from two different business surveys. According to the standard statistics of forecasting accuracy, the linear energy-based model is not outperformed by any other model, nor by a combination of forecasts. However, a more comprehensive set of evaluation criteria sheds light on the relative merit of each individual model. A modelling strategy which makes full use of all available information is proposed. Copyright © 2000 John Wiley & Sons, Ltd.

17.
In this paper, we use Google Trends data for exchange rate forecasting in the context of a broad literature review that ties exchange rate movements to macroeconomic fundamentals. The sample covers 11 OECD countries' exchange rates for the period from January 2004 to June 2014. In out-of-sample forecasting of monthly returns on exchange rates, our findings indicate that the Google Trends search query data do a better job than the structural models in predicting the true direction of changes in nominal exchange rates. We also observe that Google Trends-based forecasts are better at picking up the direction of the changes in the monthly nominal exchange rates after the Great Recession era (2008-2009). Based on the Clark and West inference procedure for testing equal predictive accuracy, we find that the relative performance of Google Trends-based exchange rate predictions against the null of a random walk model is no worse than that of the purchasing power parity model. On the other hand, although the monetary model fundamentals could beat the random walk null in only one out of 11 currency pairs, with Google Trends predictors we found evidence of better performance for five currency pairs. We believe that these findings necessitate further research in this area to investigate the extra value one can get from Google search query data.
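The Clark-West comparison used in the paper adjusts the mean-squared-prediction-error difference for the noise that the larger nested model adds under the random walk null. A generic sketch is given below, with a plain rather than HAC standard error and toy forecast series; these simplifications are my own assumptions.

```python
import numpy as np
from scipy import stats

def clark_west(y, f_small, f_large):
    """Clark-West test of equal predictive accuracy for nested models.
    f_small: forecasts from the parsimonious null (random walk);
    f_large: forecasts from the larger model (e.g. augmented with Google Trends)."""
    y, f_small, f_large = (np.asarray(a, float) for a in (y, f_small, f_large))
    adj = (y - f_small) ** 2 - ((y - f_large) ** 2 - (f_small - f_large) ** 2)
    t = adj.mean() / (adj.std(ddof=1) / np.sqrt(len(adj)))   # plain (non-HAC) standard error
    return t, stats.norm.sf(t)                               # one-sided p-value

rng = np.random.default_rng(8)
ret = rng.normal(0.0, 0.02, 120)                 # toy monthly exchange-rate returns
rw = np.zeros_like(ret)                          # random walk implies a zero return forecast
alt = np.r_[0.0, 0.3 * ret[:-1]]                 # toy "augmented" forecast using lagged returns
print(clark_west(ret, rw, alt))
```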

18.
This paper proposes and implements a new methodology for forecasting time series, based on bicorrelations and cross-bicorrelations. It is shown that the forecasting technique arises as a natural extension of, and as a complement to, existing univariate and multivariate non-linearity tests. The formulations are essentially modified autoregressive or vector autoregressive models respectively, which can be estimated using ordinary least squares. The techniques are applied to a set of high-frequency exchange rate returns, and their out-of-sample forecasting performance is compared to that of other time series models. Copyright © 2001 John Wiley & Sons, Ltd.
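A bicorrelation-augmented autoregression can be written as an ordinary AR model with additional lagged cross-product regressors y_{t-i} * y_{t-j}, estimable by OLS. The sketch below shows that construction for a univariate series; the exact terms included and the multivariate (cross-bicorrelation) version in the paper may differ, and the lag pairs here are arbitrary choices.

```python
import numpy as np

def bicorr_ar_forecast(y, p=2, pairs=((1, 2),)):
    """AR(p) augmented with lagged cross-product terms y_{t-i} * y_{t-j},
    estimated by OLS; returns the one-step-ahead forecast."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    maxlag = max(p, max(max(pr) for pr in pairs))
    X = np.array([[1.0]
                  + [y[t - k] for k in range(1, p + 1)]
                  + [y[t - i] * y[t - j] for i, j in pairs]
                  for t in range(maxlag, T)])
    b = np.linalg.lstsq(X, y[maxlag:], rcond=None)[0]
    x_new = np.array([1.0]
                     + [y[T - k] for k in range(1, p + 1)]
                     + [y[T - i] * y[T - j] for i, j in pairs])
    return float(b @ x_new)

rng = np.random.default_rng(9)
r = rng.normal(0.0, 1.0, 1000)       # stand-in for a high-frequency FX return series
print(bicorr_ar_forecast(r, p=2, pairs=((1, 2), (1, 3))))
```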

19.
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio-economic sectors. Freeland and McCabe (2004) introduced a forecasting approach based on a Gaussian-type stationary correlation model, which appears to work well for stationary time series of low counts. In practice, however, the time series of counts may be non-stationary and the series may also contain over-dispersed counts. To develop forecasting functions for this type of non-stationary over-dispersed data, the paper provides an extension of the stationary correlation models for Poisson counts to non-stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.

20.
Artificial neural network (ANN) models combined with signal decomposition methods are effective for long-term streamflow time series forecasting. The ANN is a machine learning method widely used for streamflow time series, and it performs well in forecasting nonstationary time series without the need for physical analysis of complex and dynamic hydrological processes. Most studies take multiple factors determining the streamflow, such as rainfall, as inputs. In this study, a long-term streamflow forecasting model depending only on the historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is analyzed by an ANN. Hybrid EMD-ANN, EEMD-ANN and DWT-ANN models are developed in this study for long-term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash-Sutcliffe efficiency (NSE) indicate that the proposed EEMD-ANN method performs better than the EMD-ANN and DWT-ANN models, especially in high flow forecasting.
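The three performance measures quoted above are easy to compute directly; the sketch below gives generic implementations of RMSE, MAPE and the Nash-Sutcliffe efficiency. The toy observed and simulated flows are placeholders, and MAPE assumes no zero observations.

```python
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((np.asarray(obs, float) - np.asarray(sim, float)) ** 2)))

def mape(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.mean(np.abs((obs - sim) / obs)) * 100)   # assumes obs has no zeros

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

obs = np.array([12.0, 30.0, 55.0, 41.0, 18.0])   # toy observed daily streamflow
sim = np.array([14.0, 27.0, 60.0, 39.0, 20.0])   # toy forecast
print(rmse(obs, sim), mape(obs, sim), nse(obs, sim))
```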

