Similar documents
20 similar documents found (search time: 296 ms)
1.
It often occurs that no model may be exactly right, and that different portions of the data may favour different models. The purpose of this paper is to propose a new procedure for the detection of regime switches between stationary and nonstationary processes in economic time series and to show its usefulness in economic forecasting. In the proposed procedure, time series observations are divided into several segments, and a stationary or nonstationary autoregressive model is fitted to each segment. The goodness of fit of the global model composed of these local models is evaluated using the corresponding information criterion, and the division which minimizes the information criterion defines the best model. Simulation and forecasting results show the efficacy and limitations of the proposed procedure. Copyright © 2005 John Wiley & Sons, Ltd.
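The segmentation idea in the abstract above can be sketched numerically. This is a minimal illustration assuming AR(1) local models, least-squares fitting, and AIC as the information criterion; the paper's actual model class and criterion may differ, and the function names (`fit_ar1_aic`, `best_split`) are hypothetical:

```python
import numpy as np

def fit_ar1_aic(x):
    """Fit an AR(1) model by least squares and return its AIC."""
    y, X = x[1:], x[:-1]
    phi = np.dot(X, y) / np.dot(X, X)
    resid = y - phi * X
    sigma2 = np.mean(resid ** 2)
    n = len(y)
    # Gaussian log-likelihood up to a constant; 2 parameters (phi, sigma2)
    return n * np.log(sigma2) + 2 * 2

def best_split(x, min_seg=20):
    """Pick the split minimizing the summed AIC of two local AR(1) fits,
    compared against the single unsegmented model."""
    best_ic, best_s = fit_ar1_aic(x), None
    for s in range(min_seg, len(x) - min_seg):
        ic = fit_ar1_aic(x[:s]) + fit_ar1_aic(x[s:])
        if ic < best_ic:
            best_ic, best_s = ic, s
    return best_ic, best_s

rng = np.random.default_rng(0)
# First half near-nonstationary (phi = 0.99), second half stationary (phi = 0.3)
x1, x2 = np.zeros(100), np.zeros(100)
for t in range(1, 100):
    x1[t] = 0.99 * x1[t - 1] + rng.normal()
    x2[t] = 0.30 * x2[t - 1] + rng.normal()
x = np.concatenate([x1, x2])
ic, split = best_split(x)
print(ic, split)
```

In the paper the search is over several segments and both stationary and nonstationary local models; the two-segment AR(1) case above only shows the IC-minimization mechanics.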

2.
Compared with point forecasting, interval forecasting is believed to be more effective and helpful in decision making, as it provides more information about the data generation process. Based on the well-established “linear and nonlinear” modeling framework, a hybrid model is proposed by coupling the vector error correction model (VECM) with artificial intelligence models which consider the cointegration relationship between the lower and upper bounds (Coin-AIs). VECM is first employed to fit the original time series with the residual error series modeled by Coin-AIs. Using pork price as a research sample, the empirical results statistically confirm the superiority of the proposed VECM-CoinAIs over other competing models, which include six single models and six hybrid models. This result suggests that considering the cointegration relationship is a workable direction for improving the forecast performance of the interval-valued time series. Moreover, with a reasonable data transformation process, interval forecasting is proven to be more accurate than point forecasting.

3.
In this paper we discuss procedures for overcoming some of the problems involved in fitting autoregressive integrated moving average forecasting models to time series data, when the possibility of incorporating an instantaneous power transformation of the data into the analysis is contemplated. The procedures are illustrated using series of quarterly observations on corporate earnings per share.
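The "instantaneous power transformation" referred to above is commonly taken to be the Box–Cox transform. A minimal sketch of choosing the transformation parameter by profile likelihood, under a simplifying i.i.d. Gaussian assumption rather than the full ARIMA likelihood the paper contemplates; the data values are illustrative:

```python
import numpy as np

def boxcox(x, lam):
    """Instantaneous power (Box-Cox) transform; lam near 0 gives the log transform."""
    return np.log(x) if abs(lam) < 1e-8 else (x ** lam - 1) / lam

def profile_loglik(x, lam):
    """Profile log-likelihood of lambda for i.i.d. Gaussian transformed data,
    including the Jacobian term of the transformation."""
    z = boxcox(x, lam)
    n = len(x)
    return -0.5 * n * np.log(np.var(z)) + (lam - 1) * np.sum(np.log(x))

# Positive series with roughly multiplicative growth (illustrative values)
x = np.array([1.2, 1.5, 2.1, 2.9, 4.0, 5.6, 7.9, 11.0, 15.4, 21.6])
lams = np.linspace(-1, 1, 81)
best_lam = lams[np.argmax([profile_loglik(x, l) for l in lams])]
print(best_lam)
```

In a full treatment lambda would be estimated jointly with the ARIMA parameters; the grid search above only shows the shape of the selection problem.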

4.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they represent nicely the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

5.
Empirical research has found that the traditional autoregressive integrated moving average (ARIMA) model shows large deviations when forecasting high-frequency financial time series. With improvements in the storage capacity and computing power available for high-frequency financial data, this paper combines the traditional ARIMA model with a deep learning model to forecast high-frequency financial time series. The combination not only preserves the theoretical basis of the traditional model and characterizes the linear relationship, but also characterizes the nonlinear relationship of the error term using the deep learning model. Empirical studies of a Monte Carlo numerical simulation and of China's CSI 300 index show that, compared with ARIMA, support vector machine (SVM), long short-term memory (LSTM) and ARIMA-SVM models, the improved ARIMA model based on LSTM not only improves on the accuracy of the single ARIMA model in both fitting and forecasting, but also has lower computational complexity than a single deep learning model. The improved ARIMA model based on deep learning thus enriches the set of models for time series forecasting and provides an effective tool for high-frequency strategy design, reducing the investment risks of stock index trading.

6.
‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large-scale comparison of time series forecasting methods including the Bayesian. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts the Bayesian approach with other time series methods.
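The multiprocess weighting step described above (evaluate each model by Bayes' theorem, then form a weighted sum of the conditional forecasts) can be sketched as follows; the two Gaussian "models" and all their parameters are purely illustrative stand-ins for the disjoint model class of Harrison and Stevens:

```python
import numpy as np
from math import exp, pi, sqrt

def normal_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Two candidate models for the next observation: "no change" vs "level shift"
priors = np.array([0.9, 0.1])
mus = np.array([10.0, 15.0])   # each model's one-step point forecast
sigma = 1.0

y = 14.2  # observed data point, retrospectively evaluated

# Bayes' theorem: posterior model probability ∝ prior × likelihood
likes = np.array([normal_pdf(y, m, sigma) for m in mus])
posts = priors * likes
posts /= posts.sum()

# Final forecast: posterior-weighted sum of the conditional forecasts
forecast = float(np.dot(posts, mus))
print(posts, forecast)
```

An observation near 15 shifts nearly all posterior mass to the level-shift model, so the weighted forecast moves toward that model's conditional forecast.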

7.
This paper proposes a new forecasting method in which the cointegration rank switches at unknown times. In this method, time series observations are divided into several segments, and a cointegrated vector autoregressive model is fitted to each segment. The goodness of fit of the global model, consisting of local models with different cointegration ranks, is evaluated using the information criterion (IC). The division that minimizes the IC defines the best model. The results of an empirical application to the US term structure of interest rates and a Monte Carlo simulation suggest the efficacy as well as the limitations of the proposed method. Copyright © 2010 John Wiley & Sons, Ltd.

8.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information; even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term time horizons for series with no moving average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
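The rolling-window, multi-step out-of-sample evaluation described above can be sketched generically; here an AR(1) stands in for the ARIMA and ECM models actually compared in the paper, and `rolling_forecasts` is a hypothetical helper name:

```python
import numpy as np

def rolling_forecasts(x, window, horizon, fit, predict):
    """Re-estimate on each rolling window and record the h-step-ahead error."""
    errs = []
    for start in range(0, len(x) - window - horizon + 1):
        train = x[start:start + window]
        params = fit(train)
        fcst = predict(params, train, horizon)
        errs.append(x[start + window + horizon - 1] - fcst)
    return np.array(errs)

# Simple AR(1) fit/predict as a placeholder model
def ar1_fit(train):
    y, X = train[1:], train[:-1]
    return np.dot(X, y) / np.dot(X, X)

def ar1_predict(phi, train, h):
    return (phi ** h) * train[-1]

rng = np.random.default_rng(1)
x = np.zeros(300)
for t in range(1, 300):
    x[t] = 0.8 * x[t - 1] + rng.normal()

rmse_1 = np.sqrt(np.mean(rolling_forecasts(x, 100, 1, ar1_fit, ar1_predict) ** 2))
rmse_40 = np.sqrt(np.mean(rolling_forecasts(x, 100, 40, ar1_fit, ar1_predict) ** 2))
print(rmse_1, rmse_40)
```

As expected for a stationary AR(1), accuracy deteriorates as the horizon grows from one to forty steps, which is the kind of horizon sensitivity the paper documents.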

9.
Artificial neural networks (ANNs) combined with signal decomposition methods are effective for long‐term streamflow time series forecasting. The ANN is a machine learning method widely used for streamflow time series; it performs well in forecasting nonstationary time series without requiring physical analysis of complex, dynamic hydrological processes. Most studies take multiple factors that determine streamflow, such as rainfall, as inputs. In this study, a long‐term streamflow forecasting model depending only on historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is modelled by an ANN. Hybrid models EMD‐ANN, EEMD‐ANN and DWT‐ANN are developed for long‐term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash–Sutcliffe efficiency (NSE) indicate that the proposed EEMD‐ANN method performs better than the EMD‐ANN and DWT‐ANN models, especially in high‐flow forecasting.
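The three performance measures named in the abstract above are standard and easy to state in code; the observed/simulated flow values below are illustrative only:

```python
import numpy as np

def rmse(obs, sim):
    """Root mean square error, in the units of the flow series."""
    return np.sqrt(np.mean((obs - sim) ** 2))

def mape(obs, sim):
    """Mean absolute percentage error (obs must be nonzero)."""
    return 100 * np.mean(np.abs((obs - sim) / obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 is no better than
    forecasting the mean of the observations."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = np.array([120.0, 95.0, 310.0, 210.0, 150.0])  # illustrative daily flows
sim = np.array([110.0, 100.0, 290.0, 220.0, 160.0])
print(rmse(obs, sim), mape(obs, sim), nse(obs, sim))
```

Because NSE normalizes by the variance of the observations, it is the measure most sensitive to the high-flow periods the abstract highlights.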

10.
An underlying assumption in Multivariate Singular Spectrum Analysis (MSSA) is that the time series are governed by a linear recurrent continuation. However, in the presence of a structural break, multiple series can be transferred from one homogeneous state to another over a comparatively short time, breaking this assumption. As a consequence, forecasting performance can degrade significantly. In this paper, we propose a state-dependent model that incorporates the movement of states into the linear recurrent formula, called a State-Dependent Multivariate SSA (SD-MSSA) model. The proposed model is examined for its reliability in the presence of a structural break by conducting an empirical analysis covering both synthetic and real data. Comparison with the standard MSSA, BVAR, VAR and VECM models shows that the proposed model significantly outperforms all of them.

11.
Dynamic model averaging (DMA) is used extensively for the purpose of economic forecasting. This study extends the framework of DMA by introducing adaptive learning from model space. In the conventional DMA framework all models are estimated independently, and hence the information in the other models is left unexploited. In order to exploit this information in the estimation of the individual time‐varying parameter models, this paper proposes not only averaging over the forecasts but also dynamically averaging over the time‐varying parameters. This is done by approximating the mixture of individual posteriors with a single posterior, which is then used in the upcoming period as the prior for each of the individual models. The relevance of this extension is illustrated in three empirical examples involving forecasting US inflation, US consumption expenditures, and five major US exchange rate returns. In all applications adaptive learning from model space delivers improvements in out‐of‐sample forecasting performance.
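The conventional DMA recursion that this study extends (a forgetting step followed by a Bayesian update of the model probabilities) can be sketched as follows; the forgetting factor and the predictive densities are illustrative, and the paper's additional parameter-space averaging step is not shown:

```python
import numpy as np

def dma_weights(pred_dens, alpha=0.99):
    """Recursively update model probabilities with forgetting factor alpha.
    pred_dens[t, k] is model k's one-step predictive density at time t."""
    K = pred_dens.shape[1]
    w = np.full(K, 1.0 / K)
    path = []
    for dens in pred_dens:
        # Forgetting step: raise to alpha < 1, flattening weights toward uniform
        w = w ** alpha
        w /= w.sum()
        # Update step: multiply by each model's predictive density, renormalize
        w = w * dens
        w /= w.sum()
        path.append(w.copy())
    return np.array(path)

# Model 2 assigns higher predictive density every period, so its weight should rise
dens = np.tile([0.2, 0.4], (30, 1))
path = dma_weights(dens)
print(path[-1])
```

The forgetting factor keeps the recursion responsive: even after many periods favouring one model, its weight never becomes permanently locked at one.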

12.
Wind power production data at temporal resolutions of a few minutes exhibit successive periods with fluctuations of various dynamic nature and magnitude, which cannot be explained (so far) by the evolution of some explanatory variable. Our proposal is to capture this regime‐switching behaviour with an approach relying on Markov‐switching autoregressive (MSAR) models. An appropriate parameterization of the model coefficients is introduced, along with an adaptive estimation method allowing accommodation of long‐term variations in the process characteristics. The objective criterion to be recursively optimized is based on penalized maximum likelihood, with exponential forgetting of past observations. MSAR models are then employed for one‐step‐ahead point forecasting of 10 min resolution time series of wind power at two large offshore wind farms. They are favourably compared against persistence and autoregressive models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill. Copyright © 2010 John Wiley & Sons, Ltd.

13.
This paper presents a methodology for modelling and forecasting multivariate time series with linear restrictions using the constrained structural state‐space framework. The model has natural applications to forecasting time series of macroeconomic/financial identities and accounts. The explicit modelling of the constraints ensures that model parameters dynamically satisfy the restrictions among items of the series, leading to more accurate and internally consistent forecasts. It is shown that the constrained model offers superior forecasting efficiency. A testable identification condition for state space models is also obtained and applied to establish the identifiability of the constrained model. The proposed methods are illustrated on Germany's quarterly monetary accounts data. Results show significant improvement in the predictive efficiency of forecast estimators for the monetary account with an overall efficiency gain of 25% over unconstrained modelling. Copyright © 2002 John Wiley & Sons, Ltd.

14.
Success in forecasting using mathematical/statistical models requires that the models be open to intervention by the user. In practice, a model is only one component of a forecasting system, which also includes the users/forecasters as integral components. Interaction between the user and the model is necessary to adequately cater for events and changes that go beyond the existing form of the model. In this paper we consider Bayesian forecasting models open to interventions, of essentially any form, to incorporate subjective information made available to the user. We discuss principles of intervention and derive theoretical results that provide the means to formally incorporate feedforward interventions into Bayesian models. Two example time series are considered to illustrate why and when such interventions may be necessary to sustain predictive performance.

15.
With the development of artificial intelligence, deep learning is widely used in the field of nonlinear time series forecasting. Practice has shown that deep learning models achieve higher forecasting accuracy than traditional linear econometric models and machine learning models. To further improve the forecasting accuracy of financial time series, we propose the WT-FCD-MLGRU model, which combines wavelet transform, filter cycle decomposition and multilag neural networks. Four major stock indices are chosen to test the forecasting performance of traditional econometric, machine learning and deep learning models. According to the results of the empirical analysis, deep learning models perform better than the traditional econometric model, the autoregressive integrated moving average, and the machine learning model SVR. Moreover, our proposed model has the minimum forecasting error in stock index prediction.

16.
We propose a wavelet neural network (neuro‐wavelet) model for the short‐term forecast of stock returns from high‐frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non‐stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non‐decimated wavelet‐based multi‐resolution decomposition of an exogenous signal showed a consistent superior forecasting performance. Reasonable forecasting accuracy for the one‐, three‐ and five‐step‐ahead horizons was achieved by the proposed model. The procedure used to build the neuro‐wavelet model is reusable and can be applied to any high‐frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.

17.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rate on the Japanese government bond market. The Nelson–Siegel type models in state‐space framework considerably outperform the benchmark simple time series forecast models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former is still comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common phenomenon of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.

18.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state‐space models. Out‐of‐sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean‐squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.

19.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum of variables, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over‐dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density function of compound distributions, we propose a data augmentation approach for more efficient posterior computations in terms of the generated augmented variables, particularly for generating forecasts and predictive density. We present the empirical application using weekly product sales time series in a store to compare the proposed models accommodating over‐dispersion with alternative models without over‐dispersion by several model selection criteria, including in‐sample fit, out‐of‐sample forecasting errors and information criterion. The empirical results show that the proposed over‐dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.

20.
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio‐economic sectors. Freeland and McCabe (2004) introduced a Gaussian‐type stationary correlation model‐based forecasting approach, which appears to work well for stationary time series of low counts. In practice, however, it may happen that the time series of counts will be non‐stationary and also the series may contain over‐dispersed counts. To develop the forecasting functions for this type of non‐stationary over‐dispersed data, the paper provides an extension of the stationary correlation models for Poisson counts to the non‐stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas the existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.
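The over-dispersion that motivates the move from Poisson to negative binomial counts is easy to illustrate by simulation. The parameters below are illustrative: a negative binomial with mean mu and index r has variance mu + mu^2/r, which always exceeds its mean, while a Poisson variable's variance equals its mean:

```python
import numpy as np

rng = np.random.default_rng(2)

# Negative binomial parameterized by index r and success probability p,
# chosen so that the mean is mu and the variance is mu + mu^2 / r
mu, r = 4.0, 2.0
p = r / (r + mu)
nb = rng.negative_binomial(r, p, size=10000)
po = rng.poisson(mu, size=10000)

print(nb.mean(), nb.var())   # sample variance well above the sample mean
print(po.mean(), po.var())   # sample variance close to the sample mean
```

A variance-to-mean ratio well above one in the observed counts is the usual signal that a Poisson-based forecasting model is inadequate.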
