Similar Literature
20 similar articles retrieved.
1.
We propose a wavelet neural network (neuro‐wavelet) model for the short‐term forecast of stock returns from high‐frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non‐stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non‐decimated wavelet‐based multi‐resolution decomposition of an exogenous signal showed consistently superior forecasting performance. Reasonable forecasting accuracy for the one‐, three‐ and five‐step‐ahead horizons was achieved by the proposed model. The procedure used to build the neuro‐wavelet model is reusable and can be applied to any high‐frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.
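As a rough, hypothetical sketch of the neuro‐wavelet idea (not the authors' exact pipeline), the snippet below decomposes a synthetic return series with a non‐decimated (stationary) wavelet transform via `pywt` and feeds the coefficients to a small feedforward network from scikit‐learn as a stand‐in for the paper's Jordan net; all data and parameter choices are illustrative.

```python
# Sketch: wavelet-coefficient features for short-term return forecasting.
# Assumptions: synthetic returns, pywt's stationary wavelet transform, and an
# MLPRegressor standing in for the paper's Jordan recurrent network.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
returns = rng.standard_normal(1024) * 0.01           # placeholder high-frequency returns

level = 3
coeffs = pywt.swt(returns, "db4", level=level)        # non-decimated (stationary) DWT
# Stack approximation and detail coefficients as candidate inputs.
# NOTE: for a real forecast the transform would have to be computed causally
# on past data only; this full-sample decomposition is a simplification.
features = np.column_stack([c for pair in coeffs for c in pair])

lag = 1                                               # one-step-ahead target
X, y = features[:-lag], returns[lag:]
split = int(0.8 * len(y))

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:split], y[:split])
pred = net.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"one-step-ahead RMSE on the held-out segment: {rmse:.5f}")
```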

2.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they represent nicely the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

3.
Micro‐founded dynamic stochastic general equilibrium (DSGE) models appear to be particularly suited to evaluating the consequences of alternative macroeconomic policies. Recently, increasing efforts have been undertaken by policymakers to use these models for forecasting, although this proved to be problematic due to estimation and identification issues. Hybrid DSGE models have become popular for dealing with some of the model misspecifications and the trade‐off between theoretical coherence and empirical fit, thus allowing them to compete in terms of predictability with VAR models. However, DSGE and VAR models are still linear and they do not consider time variation in parameters that could account for inherent nonlinearities and capture the adaptive underlying structure of the economy in a robust manner. This study conducts a comparative evaluation of the out‐of‐sample predictive performance of many different specifications of DSGE models and various classes of VAR models, using datasets for the real GDP, the harmonized CPI and the nominal short‐term interest rate series in the euro area. Simple and hybrid DSGE models were implemented, including DSGE‐VAR and factor‐augmented DSGE, and tested against standard, Bayesian and factor‐augmented VARs. Moreover, a new state‐space time‐varying VAR model is presented. The total period spanned from 1970:Q1 to 2010:Q4 with an out‐of‐sample testing period of 2006:Q1–2010:Q4, which covers the global financial crisis and the EU debt crisis. The results of this study can be useful in conducting monetary policy analysis and macro‐forecasting in the euro area. Copyright © 2016 John Wiley & Sons, Ltd.

4.
The simplicity of the standard diffusion index model of Stock and Watson has certainly contributed to its success among practitioners, resulting in a growing body of literature on factor‐augmented forecasts. However, as pointed out by Bai and Ng, the ranked factors considered in the forecasting equation depend neither on the variable to be forecast nor on the forecasting horizon. We propose a refinement of the standard approach that retains the computational simplicity while coping with this limitation. Our approach consists of generating a weighted average of all the principal components, the weights depending both on the eigenvalues of the sample correlation matrix and on the covariance between the estimated factor and the targeted variable at the relevant horizon. This ‘targeted diffusion index’ approach is applied to US data and the results show that it considerably outperforms the standard approach in forecasting several major macroeconomic series. Moreover, the improvement is more significant in the final part of the forecasting evaluation period. Copyright © 2009 John Wiley & Sons, Ltd.
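A minimal numpy sketch of how a 'targeted diffusion index' of this kind could be formed, with factor weights depending on both the eigenvalues and the horizon‐h covariance with the target; the synthetic panel, the exact weighting rule and the forecasting regression below are assumptions for illustration, not the authors' specification.

```python
# Sketch: targeted diffusion index (weighted average of principal components).
# Assumptions: synthetic predictor panel X and target y; weights proportional to
# eigenvalue x covariance(factor_t, y_{t+h}), one reading of the abstract.
import numpy as np

rng = np.random.default_rng(1)
T, N, h = 200, 30, 3                      # sample size, predictors, forecast horizon
X = rng.standard_normal((T, N))
y = X[:, :3].sum(axis=1) + rng.standard_normal(T)    # target driven by a few series

Z = (X - X.mean(0)) / X.std(0)            # standardize the panel
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
F = Z @ eigvec                            # estimated factors (principal components)

# Weight each factor by its eigenvalue times its covariance with y at horizon h.
cov_h = np.array([np.cov(F[:-h, j], y[h:])[0, 1] for j in range(N)])
w = eigval * cov_h
index = F @ (w / np.abs(w).sum())         # the targeted diffusion index

# Forecasting regression: y_{t+h} = a + b * index_t + error.
a, b = np.polyfit(index[:-h], y[h:], 1)[::-1]
print("h-step-ahead forecast from the last observation:", a + b * index[-1])
```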

5.
This article applies two novel techniques to forecast the value of US manufacturing shipments over the period 1956–2000: wavelets and support vector machines (SVM). Wavelets have become increasingly popular in the fields of economics and finance in recent years, whereas SVM has emerged as a more user‐friendly alternative to artificial neural networks. These two methodologies are compared with two well‐known time series techniques: multiplicative seasonal autoregressive integrated moving average (ARIMA) and unobserved components (UC). Based on forecasting accuracy and encompassing tests, and forecast combination, we conclude that UC and ARIMA generally outperform wavelets and SVM. However, in some cases the latter provide valuable forecasting information that is not contained in the former. Copyright © 2008 John Wiley & Sons, Ltd.

6.
We develop a semi‐structural model for forecasting inflation in the UK in which the New Keynesian Phillips curve (NKPC) is augmented with a time series model for marginal cost. By combining structural and time series elements we hope to reap the benefits of both approaches, namely the relatively better forecasting performance of time series models in the short run and a theory‐consistent economic interpretation of the forecast coming from the structural model. In our model we consider the hybrid version of the NKPC and use an open‐economy measure of marginal cost. The results suggest that our semi‐structural model performs better than a random‐walk forecast and most of the competing models (conventional time series models and strictly structural models) only in the short run (one quarter ahead) but it is outperformed by some of the competing models at medium and long forecast horizons (four and eight quarters ahead). In addition, the open‐economy specification of our semi‐structural model delivers more accurate forecasts than its closed‐economy alternative at all horizons. Copyright © 2014 John Wiley & Sons, Ltd.
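For reference, the hybrid New Keynesian Phillips curve mentioned above is commonly written in the textbook form below, with inflation driven by expected and lagged inflation and by real marginal cost; the paper's open‐economy marginal‐cost measure and exact specification may differ.

```latex
% Hybrid NKPC (standard textbook form; gamma_f, gamma_b, lambda are coefficients to estimate)
\pi_t = \gamma_f \,\mathbb{E}_t \pi_{t+1} + \gamma_b \,\pi_{t-1} + \lambda \, mc_t + \varepsilon_t
```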

7.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi‐step‐ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is also difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m‐step‐ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well both in terms of the accuracy of the results and in its ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
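As an illustration of this type of numerical procedure (heavily simplified, and not the paper's algorithm), the sketch below propagates the m‐step‐ahead predictive density of a toy nonlinear AR(1) model with Gaussian innovations on a grid via the Chapman–Kolmogorov recursion; the conditional‐mean function `g` and all settings are placeholders.

```python
# Sketch: numerical m-step-ahead predictive density for a nonlinear AR(1)
#   y_t = g(y_{t-1}) + e_t,  e_t ~ N(0, sigma^2),
# propagated on a grid with the Chapman-Kolmogorov recursion (illustrative only).
import numpy as np
from scipy.stats import norm

def g(y):                                   # placeholder nonlinear conditional-mean function
    return 0.8 * y / (1.0 + 0.5 * y ** 2)

sigma = 1.0
grid = np.linspace(-8, 8, 801)
dx = grid[1] - grid[0]

y_now = 1.5                                 # forecast origin
density = norm.pdf(grid, loc=g(y_now), scale=sigma)    # 1-step-ahead predictive density

m = 5
for _ in range(m - 1):                      # steps 2 ... m
    # p_{k+1}(y) = integral of N(y; g(x), sigma^2) * p_k(x) dx, approximated on the grid
    kernel = norm.pdf(grid[:, None], loc=g(grid)[None, :], scale=sigma)
    density = kernel @ density * dx

mean = np.sum(grid * density) * dx
var = np.sum((grid - mean) ** 2 * density) * dx
print(f"{m}-step-ahead predictive mean {mean:.3f}, variance {var:.3f}")
```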

8.
Artificial neural networks (ANNs) combined with signal decomposition methods are effective for long‐term streamflow time series forecasting. The ANN is a machine learning method widely used for streamflow time series, and it performs well in forecasting nonstationary series without requiring a physical analysis of complex and dynamic hydrological processes. Most studies take as inputs multiple factors that determine the streamflow, such as rainfall. In this study, a long‐term streamflow forecasting model that depends only on historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and the discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is modelled by an ANN. Hybrid EMD‐ANN, EEMD‐ANN and DWT‐ANN models are developed for long‐term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash–Sutcliffe efficiency (NSE) indicate that the proposed EEMD‐ANN method performs better than the EMD‐ANN and DWT‐ANN models, especially in high‐flow forecasting.
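The three performance measures cited above are standard; a short sketch of how they are typically computed, on placeholder observed and forecast streamflow arrays, is given below.

```python
# Sketch: the three streamflow-forecast metrics mentioned above, on placeholder data.
import numpy as np

obs = np.array([120.0, 95.0, 210.0, 180.0, 150.0])    # observed streamflow (placeholder)
sim = np.array([110.0, 100.0, 190.0, 185.0, 140.0])   # forecast streamflow (placeholder)

rmse = np.sqrt(np.mean((sim - obs) ** 2))                                # root mean square error
mape = np.mean(np.abs((sim - obs) / obs)) * 100.0                        # mean absolute % error
nse = 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)   # Nash-Sutcliffe efficiency

print(f"RMSE={rmse:.2f}, MAPE={mape:.2f}%, NSE={nse:.3f}")
```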

9.
This paper proposes and implements a new methodology for forecasting time series, based on bicorrelations and cross‐bicorrelations. It is shown that the forecasting technique arises as a natural extension of, and as a complement to, existing univariate and multivariate non‐linearity tests. The formulations are essentially modified autoregressive or vector autoregressive models respectively, which can be estimated using ordinary least squares. The techniques are applied to a set of high‐frequency exchange rate returns, and their out‐of‐sample forecasting performance is compared to that of other time series models. Copyright © 2001 John Wiley & Sons, Ltd.
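A toy reading of the approach, in which an autoregression is augmented with lagged cross‐product (bicorrelation‐type) terms and estimated by ordinary least squares; the lag order, the choice of product terms and the synthetic data are assumptions for illustration only.

```python
# Sketch: AR model augmented with lagged product terms (bicorrelation-style),
# estimated by OLS with numpy. Lags and data are placeholders.
import numpy as np

rng = np.random.default_rng(2)
r = rng.standard_normal(500) * 0.001        # placeholder high-frequency FX returns
p = 2                                       # autoregressive order

rows, target = [], []
for t in range(p, len(r) - 1):
    lags = [r[t - i] for i in range(p)]                    # r_t, r_{t-1}
    products = [r[t - i] * r[t - j]                        # lagged cross products
                for i in range(p) for j in range(i, p)]
    rows.append([1.0] + lags + products)
    target.append(r[t + 1])

X, y = np.asarray(rows), np.asarray(target)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)               # OLS estimate

# Build the regressor row at the end of the sample and forecast the next return.
last = [1.0] + [r[-1 - i] for i in range(p)] + \
       [r[-1 - i] * r[-1 - j] for i in range(p) for j in range(i, p)]
print("one-step-ahead forecast:", np.asarray(last) @ beta)
```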

10.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non‐linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non‐linearity in the unemployment series. Only recently have there been some developments in applying non‐linear models to estimate and forecast unemployment rates. A major concern of non‐linear modelling is the model specification problem; it is very hard to test all possible non‐linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non‐linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back‐propagation model and a generalized regression neural network model, to estimate and forecast post‐war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out‐of‐sample forecast results obtained by the ANN models with those obtained by several linear and non‐linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.

11.
This paper concentrates on comparing the estimation and forecasting ability of quasi‐maximum likelihood (QML) and support vector machines (SVM) for financial data. The financial series are fitted to a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew‐t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution shows departure from normality, so the current paper utilizes the semi‐parametric SVM method and shows that it is more efficient than QML under the skewed Student's‐t distributed error. As the SVM is a kernel‐based technique, we further investigate its performance by applying separately a Gaussian kernel and a wavelet kernel. The results suggest that the SVM‐based method generally performs better than QML for both in‐sample and out‐of‐sample data. The outcomes also highlight the fact that the wavelet kernel outperforms the Gaussian kernel, with lower forecasting error, better generalization capability and greater computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.
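To make the model family concrete, the sketch below filters an APARCH(1,1) volatility recursion and maximizes a Gaussian quasi‐likelihood with scipy on synthetic returns; the paper's skew‐t innovations and the SVM‐based estimator are not reproduced, so this is only a simplified baseline.

```python
# Sketch: APARCH(1,1) volatility recursion and Gaussian quasi-maximum likelihood.
#   sigma_t^delta = omega + alpha*(|r_{t-1}| - gamma*r_{t-1})**delta + beta*sigma_{t-1}^delta
# The skew-t innovation and the SVM estimator discussed in the abstract are not shown.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
r = rng.standard_normal(500) * 0.01         # placeholder daily returns

def neg_loglik(params, r):
    omega, alpha, gamma, beta, delta = params
    sig_d = np.empty(len(r))                # holds sigma_t ** delta
    sig_d[0] = np.mean(np.abs(r)) ** delta
    for t in range(1, len(r)):
        sig_d[t] = omega + alpha * (abs(r[t - 1]) - gamma * r[t - 1]) ** delta \
                   + beta * sig_d[t - 1]
    sigma = sig_d ** (1.0 / delta)
    # Gaussian quasi-likelihood (negative log-likelihood up to the usual constants)
    return 0.5 * np.sum(np.log(2 * np.pi) + 2 * np.log(sigma) + (r / sigma) ** 2)

x0 = np.array([1e-5, 0.05, 0.0, 0.90, 2.0])
bounds = [(1e-8, None), (1e-6, 1.0), (-0.99, 0.99), (1e-6, 0.999), (0.5, 3.0)]
fit = minimize(neg_loglik, x0, args=(r,), bounds=bounds, method="L-BFGS-B")
print("QML estimates (omega, alpha, gamma, beta, delta):", fit.x.round(4))
```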

12.
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time‐series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements upon standard time‐series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
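The statement that six factors explain about half of the panel's variability corresponds to a routine principal‐components calculation; a generic sketch on a synthetic panel (the UK dataset itself is not reproduced) is shown below.

```python
# Sketch: how many principal-component factors explain 50% of a macro panel's variance.
# The panel here is synthetic; the paper's ~80 UK series are not reproduced.
import numpy as np

rng = np.random.default_rng(4)
T, N = 160, 80
common = rng.standard_normal((T, 6)) @ rng.standard_normal((6, N))   # 6 latent factors
X = common + rng.standard_normal((T, N))                              # plus idiosyncratic noise

Z = (X - X.mean(0)) / X.std(0)                                        # standardize the panel
eigval = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
share = np.cumsum(eigval) / eigval.sum()                              # cumulative variance share
k = int(np.argmax(share >= 0.5)) + 1
print(f"{k} factors explain {share[k - 1]:.1%} of the panel variance")
```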

13.
While in speculative markets forward prices could be regarded as natural predictors for future spot rates, empirically, forward prices often fail to indicate ex ante the direction of price movements. In terms of forecasting, the random walk approximation of speculative prices has been established to provide ‘naive’ predictors that are most difficult to outperform by both purely backward‐looking time series models and more structural approaches processing information from forward markets. We empirically assess the implicit predictive content of forward prices by means of wavelet‐based prediction of two foreign exchange (FX) rates and the price of Brent oil quoted either in US dollars or euros. Essentially, wavelet‐based predictors are smoothed auxiliary (padded) time series quotes that are added to the sample information beyond the forecast origin. We compare wavelet predictors obtained from padding with constant prices (i.e. random walk predictors) and forward prices. For the case of FX markets, padding with forward prices is more effective than padding with constant prices, and, moreover, respective wavelet‐based predictors outperform purely backward‐looking time series approaches (ARIMA). For the case of Brent oil quoted in US dollars, wavelet‐based predictors do not signal predictive content of forward prices for future spot prices. Copyright © 2016 John Wiley & Sons, Ltd.
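A stylized, hypothetical version of the padding idea: the sample is extended beyond the forecast origin with either the last spot price (a random‐walk pad) or a forward quote, a wavelet approximation of the padded series is computed with `pywt`, and its value at the padded position is read off as the predictor. The padding values, wavelet choice and data are illustrative and not taken from the paper.

```python
# Sketch: wavelet-based prediction by padding the series beyond the forecast origin.
# The pad is either the last spot price (random-walk pad) or a forward quote
# (hypothetical value below); the price series itself is synthetic.
import numpy as np
import pywt

rng = np.random.default_rng(5)
spot = 100 + np.cumsum(rng.standard_normal(255))   # placeholder price history
forward_quote = spot[-1] + 0.4                     # hypothetical forward price

def wavelet_predictor(series, pad_value, wavelet="db4", level=4):
    padded = np.append(series, pad_value)          # extend beyond the forecast origin
    coeffs = pywt.wavedec(padded, wavelet, level=level)
    # Keep only the approximation: zero out the detail coefficients, then reconstruct.
    smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], wavelet)
    return smooth[:len(padded)][-1]                # smoothed value at the padded position

print("random-walk pad predictor:", wavelet_predictor(spot, spot[-1]))
print("forward-price pad predictor:", wavelet_predictor(spot, forward_quote))
```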

14.
This paper performs a large‐scale forecast evaluation exercise to assess the performance of different models for the short‐term forecasting of GDP, resorting to large datasets from ten European countries. Several versions of factor models are considered and cross‐country evidence is provided. The forecasting exercise is performed in a simulated real‐time context, which takes account of publication lags in the individual series. In general, we find that factor models perform best and models that exploit monthly information outperform models that use purely quarterly data. However, the improvement over the simpler, quarterly models remains contained. Copyright © 2009 John Wiley & Sons, Ltd.

15.
Reid (1972) was among the first to argue that the relative accuracy of forecasting methods changes according to the properties of the time series. Comparative analyses of forecasting performance such as the M‐Competition tend to support this argument. The issue addressed here is the usefulness of statistics summarizing the data available in a time series in predicting the relative accuracy of different forecasting methods. Nine forecasting methods are described and the literature suggesting summary statistics for choice of forecasting method is summarized. Based on this literature and further argument, a set of these statistics is proposed for the analysis. These statistics are used as explanatory variables in predicting the relative performance of the nine methods using a set of simulated time series with known properties. The results are then evaluated on observed data sets: the M‐Competition data and the Fildes Telecommunications data. The general conclusion is that the summary statistics can be used to select a good forecasting method (or set of methods) but not necessarily the best. Copyright © 2000 John Wiley & Sons, Ltd.

16.
In this paper, we compare the forecasting performance of the normalization and variance stabilization (NoVaS) method with that of the GARCH(1,1), EGARCH(1,1) and GJR‐GARCH(1,1) models. The aim of this study is to compare the out‐of‐sample forecasting performance of these models and to show that the NoVaS method outperforms GARCH(1,1)‐type models out of sample. We study the out‐of‐sample forecasting performance of the GARCH(1,1)‐type models and the NoVaS method based on the generalized error distribution, in contrast to the normal and Student's t‐distributions. A further distinguishing feature of the study is the comparison of forecasting performance for return series calculated both logarithmically and arithmetically. For the out‐of‐sample comparison, we focus on different datasets, such as the S&P 500 and the logarithmic and arithmetic BIST 100 return series. The key result of our analysis is that the NoVaS method delivers better out‐of‐sample forecasting performance than GARCH(1,1)‐type models. This result can offer useful guidance in model building for out‐of‐sample forecasting purposes, aimed at improving forecasting accuracy.
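For orientation, the basic 'simple NoVaS' transformation, which normalizes each return by an equal‐weight average of the most recent squared returns in the spirit of Politis's original proposal, can be written in a few lines; the tuned NoVaS used for forecasting and the GED‐based comparison in the paper are not reproduced here.

```python
# Sketch: simple NoVaS (normalization and variance stabilization) transform with
# equal weights on the last p+1 squared returns, following Politis's basic recipe.
# The tuned NoVaS and the GARCH-type comparisons in the abstract are not shown.
import numpy as np

rng = np.random.default_rng(6)
ret = rng.standard_t(df=4, size=750) * 0.01    # placeholder fat-tailed daily returns
p = 10                                         # number of lagged squared returns

w = np.full(p + 1, 1.0 / (p + 1))              # equal weights on X_t^2 ... X_{t-p}^2
novas = np.empty(len(ret) - p)
for t in range(p, len(ret)):
    local_var = np.dot(w, ret[t - p:t + 1] ** 2)   # local variance estimate
    novas[t - p] = ret[t] / np.sqrt(local_var)     # variance-stabilized return

print("kurtosis before:", float(np.mean(ret ** 4) / np.mean(ret ** 2) ** 2))
print("kurtosis after :", float(np.mean(novas ** 4) / np.mean(novas ** 2) ** 2))
```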

17.
We propose a simple and flexible framework for forecasting the joint density of asset returns. The multinormal distribution is augmented with a polynomial in (time‐varying) non‐central co‐moments of assets. We estimate the coefficients of the polynomial via the method of moments for a carefully selected set of co‐moments. In an extensive empirical study, we compare the proposed model with a range of other models widely used in the literature. Employing both recently proposed and standard techniques to evaluate multivariate forecasts, we conclude that the augmented joint density provides highly accurate forecasts of the ‘negative tail’ of the joint distribution. Copyright © 2010 John Wiley & Sons, Ltd.

18.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable to, or better than, those of benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

19.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state‐space models. Out‐of‐sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean‐squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.

20.
In this paper, we first extract factors from a monthly dataset of 130 macroeconomic and financial variables. These extracted factors are then used to construct a factor‐augmented qualitative vector autoregressive (FA‐Qual VAR) model to forecast industrial production growth, inflation, the Federal funds rate, and the term spread based on a pseudo out‐of‐sample recursive forecasting exercise over an out‐of‐sample period of 1980:1 to 2014:12, using an in‐sample period of 1960:1 to 1979:12. Short‐, medium‐, and long‐run horizons of 1, 6, 12, and 24 months ahead are considered. The forecast from the FA‐Qual VAR is compared with that of a standard VAR model, a Qual VAR model, and a factor‐augmented VAR (FAVAR). In general, we observe that the FA‐Qual VAR tends to perform significantly better than the VAR, Qual VAR and FAVAR (barring some exceptions relative to the latter). In addition, we find that the Qual VARs are also well equipped to forecast the probability of recessions when compared to probit models.
