Similar Literature
20 similar documents retrieved
1.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting or nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.
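For readers who want the mechanics, here is a minimal sketch of the factor‐MIDAS idea described above: principal‐component factors are extracted from a monthly panel, and the quarterly target is regressed on the within‐quarter monthly factor observations (an unrestricted MIDAS regression). All data, dimensions, and the two‐factor choice are illustrative placeholders, not the paper's Korean dataset or estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
T_m, N = 240, 190           # 240 months, 190 monthly indicators (placeholders)
X = rng.standard_normal((T_m, N))

# Factor estimation by principal components, as in the recursive PC approach.
Xs = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
r = 2                       # one or two factors were "MSFE-best" when forecasting
F = U[:, :r] * s[:r]        # T_m x r estimated factors

# Unrestricted MIDAS: quarterly y on the quarter's three monthly factor values.
T_q = T_m // 3
y = rng.standard_normal(T_q)     # placeholder quarterly GDP growth
Z = F.reshape(T_q, 3 * r)        # stack the three months within each quarter
Z = np.column_stack([np.ones(T_q), Z])
beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
nowcast = Z[-1] @ beta           # fitted value for the latest quarter
print(nowcast)
```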

2.
In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (Journal of Econometrics 2008; 146 : 304–317) find an improvement in selecting indicators according to the forecast variable prior to factor estimation (targeted predictors). In particular, they propose using the LARS‐EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the pre‐selection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. We estimate on the reduced dataset the dynamic factor model of Giannone et al. (Journal of Monetary Economics 2008; 55 : 665–676) and Doz et al. (Journal of Econometrics 2011; 164 : 188–205), which is particularly suitable for the very short‐term forecast of GDP. A pseudo real‐time evaluation on French data shows the potential of our approach. Copyright © 2013 John Wiley & Sons, Ltd.
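A hedged sketch of the targeted‐predictors pre‐selection step: an elastic net (standing in for the LARS‐EN algorithm) removes irrelevant indicators before factors are extracted from the reduced panel. The simulated data and the static principal components below are stand‐ins for the paper's French dataset and dynamic factor model.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(1)
T, N = 120, 80
X = rng.standard_normal((T, N))
y = X[:, :5] @ rng.standard_normal(5) + 0.5 * rng.standard_normal(T)

# Pre-selection: shrink coefficients of irrelevant indicators to exactly zero.
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y)
keep = np.flatnonzero(enet.coef_ != 0)     # surviving "targeted" predictors
print(f"{keep.size} of {N} indicators retained")

# The dynamic factor model would then be estimated on the reduced panel only;
# here static principal-component factors serve as a simple stand-in.
Xr = X[:, keep]
Xr = (Xr - Xr.mean(0)) / Xr.std(0)
U, s, _ = np.linalg.svd(Xr, full_matrices=False)
factors = U[:, :2] * s[:2]
print(factors.shape)
```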

3.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
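The following sketch contrasts the two weighting schemes compared above: an equal‐weight average of model forecasts versus time‐varying weights. Here the time‐varying weights are built from inverse discounted squared errors, which is one common choice and an assumption of this illustration, not necessarily the paper's exact scheme; the "forecasts" are simulated.

```python
import numpy as np

rng = np.random.default_rng(2)
T, M = 100, 3                  # 100 periods, 3 candidate AR-type models
y = rng.standard_normal(T)
# Simulated forecasts centered on the truth with different error magnitudes.
f = y[:, None] + rng.standard_normal((T, M)) * np.array([0.8, 1.0, 1.2])

delta = 0.95                   # discount factor on past performance
w = np.full(M, 1 / M)
err2 = np.ones(M)
e_eq, e_tv = [], []
for t in range(T):
    e_eq.append(y[t] - f[t].mean())        # equal-weight combination error
    e_tv.append(y[t] - f[t] @ w)           # time-varying-weight combination error
    err2 = delta * err2 + (y[t] - f[t]) ** 2
    w = (1 / err2) / (1 / err2).sum()      # inverse discounted-MSE weights
print(np.mean(np.square(e_eq)), np.mean(np.square(e_tv)))
```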

4.
A modeling approach to real‐time forecasting that allows for data revisions is presented. In this approach, an observed time series is decomposed into stochastic trend, data revision, and observation noise in real time. It is assumed that the first difference of the stochastic trend is specified as an AR model, and that the data revision, obtained only for the latest part of the time series, is also specified as an AR model. The proposed method is applicable to data sets with only a single vintage. Empirical applications to real‐time forecasting of quarterly time series of US real GDP and its eight components illustrate the usefulness of the proposed approach. Copyright © 2007 John Wiley & Sons, Ltd.
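A minimal Kalman‐filter sketch of the decomposition: the observation is a stochastic trend whose first difference follows an AR(1), plus noise; the data‐revision component is omitted for brevity. The parameter values are illustrative, not estimates from the US GDP application.

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, q, r = 120, 0.6, 0.05, 0.5
y = np.cumsum(0.2 + 0.3 * rng.standard_normal(T)) + rng.standard_normal(T)

F = np.array([[1.0, phi], [0.0, phi]])        # state: [trend, trend growth]
Q = q * np.array([[1.0, 1.0], [1.0, 1.0]])
H = np.array([1.0, 0.0])
x, P = np.array([y[0], 0.0]), np.eye(2)
trend = np.empty(T)
for t in range(T):
    x, P = F @ x, F @ P @ F.T + Q             # predict
    v = y[t] - H @ x                          # innovation
    S = H @ P @ H + r
    K = P @ H / S
    x, P = x + K * v, P - np.outer(K, H @ P)  # update
    trend[t] = x[0]
one_step_forecast = (F @ x)[0]                # real-time trend forecast
print(one_step_forecast)
```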

5.
6.
In multivariate time series, estimation of the covariance matrix of observation innovations plays an important role in forecasting, as it enables computation of standardized forecast error vectors as well as confidence bounds for forecasts. We develop an online, non‐iterative Bayesian algorithm for estimation and forecasting. Empirically, for a range of simulated time series, the proposed covariance estimator performs well, converging to the true values of the unknown observation covariance matrix. Over a simulated time series, the new method closely approximates the estimates produced by a non‐sequential Monte Carlo simulation procedure, which is used here as the gold standard. The special but important vector autoregressive (VAR) and time‐varying VAR models are illustrated by considering London Metal Exchange data consisting of spot prices of aluminium, copper, lead and zinc. Copyright © 2007 John Wiley & Sons, Ltd.
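A sketch of a sequential, non‐iterative covariance update in the spirit of the algorithm described: a discounted, inverse‐Wishart‐style recursion over VAR innovations. The discount factor and the simulated innovations are assumptions of this illustration, not the paper's specification or the LME data.

```python
import numpy as np

rng = np.random.default_rng(4)
T, p, delta = 500, 4, 0.99           # e.g. four metal-price series
L = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.0],
              [0.3, 0.2, 1.0, 0.0],
              [0.1, 0.4, 0.2, 1.0]])
Sigma_true = L @ L.T
e = rng.standard_normal((T, p)) @ L.T   # VAR innovations with covariance LL'

n, S = 1.0, np.eye(p)                   # prior "degrees of freedom" and scale
for t in range(T):
    n = delta * n + 1.0                 # discounted effective sample size
    S = delta * S + np.outer(e[t], e[t])  # discounted scale matrix update
Sigma_hat = S / n                       # posterior-mean-style point estimate
print(np.round(Sigma_hat - Sigma_true, 2))  # near zero for delta close to 1
```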

7.
We propose a method for selecting promising subsets of autoregressive time series models for further consideration, following up on the idea of the stochastic search variable selection procedure of George and McCulloch (1993). It is based on a Bayesian approach which is unconditional on the initial terms. The autoregressive setup takes the form of a hierarchical normal mixture model, where latent variables are used to identify the subset choice. The procedure is implemented via the Gibbs sampler, a Markov chain Monte Carlo method. The advantage of the method presented is computational: it is an alternative way to search over a potentially large set of possible subsets. The proposed method is illustrated with simulated data as well as real data. Copyright © 1999 John Wiley & Sons, Ltd.
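A compact sketch of the stochastic search variable selection idea for AR lag subsets: each lag coefficient receives a two‐component normal mixture (spike‐and‐slab) prior, and a Gibbs sampler alternates between the coefficients and the latent inclusion indicators. The sampler is simplified (known noise variance, fixed prior scales) and illustrative, not the paper's exact hierarchical specification.

```python
import numpy as np

rng = np.random.default_rng(5)
T, pmax = 300, 6
y = np.zeros(T)
for t in range(2, T):                    # true model uses lags 1 and 2 only
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.standard_normal()

X = np.column_stack([y[pmax - 1 - k:T - 1 - k] for k in range(pmax)])
yy = y[pmax:]
tau0, tau1, sig2 = 0.01, 1.0, 1.0        # spike scale, slab scale, noise var
gamma = np.ones(pmax, dtype=bool)        # latent inclusion indicators
XtX, Xty = X.T @ X, X.T @ yy
incl, n_iter, burn = np.zeros(pmax), 2000, 500
for it in range(n_iter):
    D = np.where(gamma, tau1, tau0)      # prior sd per coefficient
    V = np.linalg.inv(XtX / sig2 + np.diag(1.0 / D ** 2))
    beta = rng.multivariate_normal(V @ Xty / sig2, V)
    for j in range(pmax):                # update each inclusion indicator
        d1 = np.exp(-0.5 * (beta[j] / tau1) ** 2) / tau1   # slab density
        d0 = np.exp(-0.5 * (beta[j] / tau0) ** 2) / tau0   # spike density
        gamma[j] = rng.random() < d1 / (d1 + d0)
    if it >= burn:
        incl += gamma
print(np.round(incl / (n_iter - burn), 2))   # posterior inclusion probabilities
```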

8.
The state space model is widely used to handle time series data driven by related latent processes in many fields. In this article, we suggest a framework to examine the relationship between state space models and autoregressive integrated moving average (ARIMA) models by examining the existence and positive‐definiteness conditions implied by auto‐covariance structures. This study covers broad types of state space models frequently used in previous studies. We also suggest a simple statistical test to check whether a certain state space model is appropriate for the specific data. For illustration, we apply the suggested procedure in the analysis of the United States real gross domestic product data. Copyright © 2011 John Wiley & Sons, Ltd.
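A worked example of the state‐space/ARIMA correspondence examined above, for the simplest case: differencing a local‐level model yields an MA(1), and the MA coefficient follows from the implied autocovariances, with positive‐definiteness requiring the first autocorrelation to lie in [-0.5, 0]. The noise variances below are illustrative, not the US GDP estimates.

```python
import numpy as np

q, r = 0.4, 1.0               # state and observation noise variances
# For y_t = mu_t + eps_t with mu_t a random walk:
# diff(y)_t = eta_t + eps_t - eps_{t-1}, an MA(1) process.
gamma0 = q + 2 * r            # variance of diff(y)
gamma1 = -r                   # first autocovariance of diff(y)
rho1 = gamma1 / gamma0        # must lie in [-0.5, 0] for a valid MA(1)

# Solve rho1 = theta / (1 + theta^2) for the invertible MA(1) coefficient.
theta = (1 - np.sqrt(1 - 4 * rho1 ** 2)) / (2 * rho1)
print(rho1, theta)            # reduced-form ARIMA(0,1,1) parameter
```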

9.
In this paper we forecast daily returns of crypto‐currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series, like rapid changes in the conditional variance, non‐normality of the measurement errors and sharply increasing trends, we develop a time‐varying parameter VAR with t‐distributed measurement errors and stochastic volatility. To control for overparametrization, we rely on the Bayesian literature on shrinkage priors, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real‐time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models, we moreover run a simple trading exercise.
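A minimal sketch of the real‐time evaluation against the naive random‐walk benchmark: rolling one‐day‐ahead forecasts from a simple AR(1) (a stand‐in for the paper's much richer TVP‐VAR) are compared with the no‐change forecast via an MSFE ratio. Returns are simulated placeholders, not crypto‐currency data.

```python
import numpy as np

rng = np.random.default_rng(6)
T, window = 365, 200                # about one year of daily returns
ret = 0.1 * rng.standard_normal(T)  # simulated daily log returns

err_ar, err_rw = [], []
for t in range(window, T):
    seg = ret[t - window:t]
    x, z = seg[:-1], seg[1:]
    b = (x @ z) / (x @ x)           # AR(1) re-estimated on each rolling window
    err_ar.append(ret[t] - b * ret[t - 1])
    err_rw.append(ret[t])           # random walk: zero expected return
# Ratio below 1 would mean the AR(1) beats the naive benchmark.
print(np.mean(np.square(err_ar)) / np.mean(np.square(err_rw)))
```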

10.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using (possibly misspecified) models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other ‘standard’ combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
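A hedged sketch of information‐criterion‐based combination: candidate AR(p) models are estimated, SIC is computed for each, and the forecasts are combined with exponential‐SIC weights, shown next to the simple average. The exponential weighting rule is one standard implementation and an assumption of this illustration, not necessarily the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 200
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.6 * y[t - 1] + rng.standard_normal()

forecasts, sic = [], []
for p in (1, 2, 3):                   # candidate (possibly misspecified) ARs
    X = np.column_stack([y[p - 1 - k:T - 1 - k] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    z = y[p:]
    b, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ b
    n = len(z)
    sic.append(n * np.log(resid @ resid / n) + (p + 1) * np.log(n))
    forecasts.append(b[0] + b[1:] @ y[-1:-p - 1:-1])   # one-step forecast

sic = np.array(sic)
w = np.exp(-0.5 * (sic - sic.min()))
w /= w.sum()                          # SIC combination weights
print(w @ np.array(forecasts), np.mean(forecasts))  # SIC combo vs simple average
```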

11.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data, but sceptical practitioners and statisticians are still raising the question of whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess a model's significance and the possibility that any short‐term successes that are reported might be due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance, and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating the sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
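A sketch of the 'parametric sampling' idea: draw parameter vectors from the estimated asymptotic distribution N(theta_hat, V_hat) and recompute a statistic of interest to approximate its sampling variability. A linear regression stands in for the neural model here so the example stays compact and verifiable; the principle carries over to arbitrarily complex statistics.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k = 200, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
beta_true = np.array([0.5, 1.0, 0.0, -0.7])   # variable 2 is irrelevant
y = X @ beta_true + rng.standard_normal(n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
resid = y - X @ beta_hat
V_hat = XtX_inv * (resid @ resid) / (n - k - 1)   # estimated asymptotic covariance

# Parametric sampling: resample parameters, not data, and recompute the statistic.
draws = rng.multivariate_normal(beta_hat, V_hat, size=5000)
stat = draws[:, 2]                                 # relevance of variable 2
print(beta_hat[2], np.percentile(stat, [2.5, 97.5]))  # interval covering zero
```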

12.
This paper examines the forecasting ability of the nonlinear specifications of the market model. We propose a conditional two‐moment market model with a time‐varying systematic covariance (beta) risk in the form of a mean‐reverting process within a state‐space model estimated via the Kalman filter algorithm. In addition, we account for the systematic component of co‐skewness and co‐kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time‐varying market model approaches, which outperform linear model specifications both in terms of model fit and predictability. More precisely, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most of the emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
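A minimal sketch of the conditional market model with a mean‐reverting time‐varying beta filtered by the Kalman algorithm; the co‐skewness and co‐kurtosis extensions are omitted. Returns and parameters are simulated placeholders, not estimates for any of the markets studied.

```python
import numpy as np

rng = np.random.default_rng(9)
T, bbar, phi, q, r = 300, 1.0, 0.95, 0.01, 0.04
rm = 0.05 * rng.standard_normal(T)          # market returns
beta_true = np.empty(T)
b = bbar
for t in range(T):                          # mean-reverting true beta
    b = bbar + phi * (b - bbar) + np.sqrt(q) * rng.standard_normal()
    beta_true[t] = b
ri = beta_true * rm + np.sqrt(r) * rng.standard_normal(T)   # asset returns

b_hat, P = bbar, 1.0
beta_filt = np.empty(T)
for t in range(T):
    b_hat = bbar + phi * (b_hat - bbar)     # predict beta
    P = phi ** 2 * P + q
    v = ri[t] - b_hat * rm[t]               # return innovation
    S = rm[t] ** 2 * P + r
    K = P * rm[t] / S
    b_hat, P = b_hat + K * v, (1 - K * rm[t]) * P   # update
    beta_filt[t] = b_hat
print(np.corrcoef(beta_true, beta_filt)[0, 1])      # tracking quality
```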

13.
Focusing on the interdependence of product categories, we analyze multicategory buying decisions of households by a finite mixture of multivariate Tobit‐2 models with two response variables: purchase incidence and expenditure. Mixture components can be interpreted as household segments. Correlations for purchases of different categories turn out to be much more important than correlations among expenditures, as well as correlations among purchases and expenditures of different categories. About 18% of all pairwise purchase correlations are significant. We compare the best‐performing large‐scale model with 28 categories to four small‐scale models, each with seven categories. In our empirical study the large‐scale model clearly attains a better forecasting performance. The small‐scale models provide several biased correlations and miss about 50% of the significant correlations which the large‐scale model detects. Copyright © 2013 John Wiley & Sons, Ltd.
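A simulation sketch of the Tobit‐2 building block underlying the model: a latent purchase‐incidence equation with errors correlated across categories, and an expenditure equation observed only when a purchase occurs. One household segment and two categories are shown; the paper mixes such components over segments and 28 categories.

```python
import numpy as np

rng = np.random.default_rng(15)
n = 5000
# Correlated errors across the two categories' incidence equations.
R = np.array([[1.0, 0.4], [0.4, 1.0]])
u = rng.multivariate_normal(np.zeros(2), R, size=n)
v = rng.standard_normal((n, 2))            # expenditure-equation errors

buy = (0.2 + u) > 0                        # purchase incidence per category
spend = np.where(buy, 1.0 + 0.5 * v, np.nan)  # expenditure, observed if bought
print(np.corrcoef(buy.T.astype(float)))   # cross-category purchase correlation
print(np.nanmean(spend, axis=0))          # mean expenditure among buyers
```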

14.
This paper is concerned with modelling time series by single hidden layer feedforward neural network models. A coherent modelling strategy based on statistical inference is presented. Variable selection is carried out using simple existing techniques. The problem of selecting the number of hidden units is solved by sequentially applying Lagrange multiplier type tests, with the aim of avoiding the estimation of unidentified models. Misspecification tests are derived for evaluating an estimated neural network model. All the tests are entirely based on auxiliary regressions and are easily implemented. A small‐sample simulation experiment is carried out to show how the proposed modelling strategy works and how the misspecification tests behave in small samples. Two applications to real time series, one univariate and the other multivariate, are considered as well. Sets of one‐step‐ahead forecasts are constructed and forecast accuracy is compared with that of other nonlinear models applied to the same series. Copyright © 2006 John Wiley & Sons, Ltd.
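A hedged sketch of an LM‐type test based on an auxiliary regression, using a polynomial approximation to the omitted nonlinearity (in the spirit of such tests; the paper's exact auxiliary regressors may differ): LM = T·R² from regressing the linear model's residuals on the regressors plus power terms, compared against a chi‐square critical value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)
T = 400
x = rng.standard_normal(T)
y = 0.8 * np.tanh(2 * x) + 0.3 * rng.standard_normal(T)   # nonlinear truth

X0 = np.column_stack([np.ones(T), x])
b0, *_ = np.linalg.lstsq(X0, y, rcond=None)
u = y - X0 @ b0                               # residuals under linearity

X1 = np.column_stack([X0, x ** 2, x ** 3])    # auxiliary regression
b1, *_ = np.linalg.lstsq(X1, u, rcond=None)
e = u - X1 @ b1
R2 = 1 - (e @ e) / (u @ u)
LM = T * R2
print(LM, stats.chi2.sf(LM, df=2))            # rejection suggests adding a hidden unit
```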

15.
A nonlinear geometric combination of statistical models is proposed as an alternative approach to the usual linear combination or mixture. Unlike the linear combination, the geometric model is closed under the regular exponential family of distributions, as we show. As a consequence, the distribution which results from the combination is unimodal and a single location parameter can be chosen for decision making. In the case of Student t‐distributions (of particular interest in forecasting), the geometric combination can be unimodal under a sufficient condition we have established. A comparative analysis between the geometric and linear combinations of predictive distributions from three Bayesian regression dynamic linear models, in a case of beer sales forecasting in Zimbabwe, shows the geometric model to consistently outperform its linear counterpart as well as its component models. Copyright © 2008 John Wiley & Sons, Ltd.
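A worked sketch of the geometric (log‐linear) combination for the tractable normal case: the geometric pool of two normal densities is again normal, with precision‐weighted parameters, hence unimodal with a single location parameter, whereas the linear mixture with the same weights can be bimodal. The component parameters below are illustrative.

```python
import numpy as np

m1, v1, m2, v2, w = 0.0, 1.0, 4.0, 2.0, 0.6   # component means/variances, weight

# Geometric pool of normals is normal with precision-weighted parameters.
prec = w / v1 + (1 - w) / v2
m_g = (w * m1 / v1 + (1 - w) * m2 / v2) / prec
v_g = 1 / prec
print(m_g, v_g)                                # single location for decision making

# The linear mixture with the same weights is bimodal when the means are far apart.
grid = np.linspace(-4, 8, 1000)
norm = lambda x, m, v: np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)
mix = w * norm(grid, m1, v1) + (1 - w) * norm(grid, m2, v2)
peaks = np.sum((mix[1:-1] > mix[:-2]) & (mix[1:-1] > mix[2:]))
print(peaks)                                   # typically 2 local modes here
```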

16.
Value‐at‐Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts. Hence, every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE‐50 index, at the 95% and 99% levels. We employ a two‐stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive rejection with the tests, we perform a second stage filtering of the surviving models using subjective loss functions. This two‐stage model selection procedure does prove to be useful in choosing a VaR model, while only incompletely addressing the problem. These case studies give us some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
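A sketch of a first‐stage statistical accuracy test of the kind used in such model selection exercises: Kupiec's unconditional coverage likelihood‐ratio test on the fraction of days losses exceeded the reported 99% VaR. The exceedance series is simulated; the paper's exact battery of tests may differ.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
T, p = 1000, 0.01                      # 1000 days, 99% VaR level
hits = rng.random(T) < 0.015           # placeholder exceedance indicator
x = hits.sum()
pi_hat = x / T                         # observed exceedance rate

# Kupiec unconditional coverage LR statistic, asymptotically chi-square(1).
LR = -2 * (x * np.log(p / pi_hat) + (T - x) * np.log((1 - p) / (1 - pi_hat)))
print(LR, stats.chi2.sf(LR, df=1))     # small p-value => model fails stage one
```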

17.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.
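A hedged sketch of the spanning question posed above: regress an estimated latent mortality factor on observable health‐related covariates and inspect the fit. All series below are simulated placeholders, and the paper's statistical techniques from macroeconomics and finance are more elaborate than this single regression.

```python
import numpy as np

rng = np.random.default_rng(16)
T = 50                                       # annual observations
gdp, hexp, smoke = (rng.standard_normal(T) for _ in range(3))
# Placeholder "latent factor" partly driven by the observable covariates.
factor = 0.6 * gdp + 0.3 * hexp + 0.2 * rng.standard_normal(T)

X = np.column_stack([np.ones(T), gdp, hexp, smoke])
b, *_ = np.linalg.lstsq(X, factor, rcond=None)
resid = factor - X @ b
R2 = 1 - resid @ resid / ((factor - factor.mean()) @ (factor - factor.mean()))
print(R2)                                    # high R^2 suggests the covariates span the factor
```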

18.
In this paper, we examine the use of non‐parametric Neural Network Regression (NNR) and Recurrent Neural Network (RNN) regression models for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. The results of both the NNR and RNN models are benchmarked against the simpler GARCH alternative and implied volatility. Two simple model combinations are also analysed. The intuitively appealing idea of developing a nonlinear nonparametric approach to forecast FX volatility, identify mispriced options and subsequently develop a trading strategy based upon this process is implemented for the first time on a comprehensive basis. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. These models are then tested out‐of‐sample over the period April 1999–May 2000, not only in terms of forecasting accuracy but also in terms of trading efficiency: to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most of the trading strategies retained produce positive returns. RNN models appear to be the best single modelling approach; yet, somewhat surprisingly, model combination, which has the best overall performance in terms of forecasting accuracy, fails to improve the RNN‐based volatility trading results. Another conclusion from our results is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate. Copyright © 2002 John Wiley & Sons, Ltd.
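A minimal sketch of the GARCH(1,1) benchmark against which the NNR/RNN forecasts are judged: a one‐step‐ahead conditional variance recursion. The parameters are illustrative rather than estimated, and the returns are simulated, not GBP/USD or USD/JPY data.

```python
import numpy as np

rng = np.random.default_rng(12)
T, omega, alpha, beta = 500, 0.02, 0.08, 0.90   # illustrative GARCH(1,1) parameters
ret = 0.6 * rng.standard_normal(T)              # placeholder daily FX returns (%)

h = np.empty(T + 1)
h[0] = ret.var()                                # initialize at sample variance
for t in range(T):
    h[t + 1] = omega + alpha * ret[t] ** 2 + beta * h[t]   # variance recursion
vol_forecast = np.sqrt(h[-1])                   # next-day volatility forecast
annualised = vol_forecast * np.sqrt(252)
print(vol_forecast, annualised)
```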

19.
We introduce a new strategy for the prediction of linear temporal aggregates; we call it ‘hybrid’ and study its performance using asymptotic theory. This scheme consists of carrying out model parameter estimation with data sampled at the highest available frequency and the subsequent prediction with data and models aggregated according to the forecasting horizon of interest. We develop explicit expressions that approximately quantify the mean square forecasting errors associated with the different prediction schemes and that take into account the estimation error component. These approximate estimates indicate that the hybrid forecasting scheme tends to outperform the so‐called ‘all‐aggregated’ approach and, in some instances, the ‘all‐disaggregated’ strategy that is known to be optimal when model selection and estimation errors are neglected. Unlike other related approximate formulas existing in the literature, those proposed in this paper are totally explicit and require neither assumptions on the second‐order stationarity of the sample nor Monte Carlo simulations for their evaluation. Copyright © 2014 John Wiley & Sons, Ltd.
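A hedged sketch of the 'hybrid' scheme for an AR(1): the parameter is estimated on high‐frequency (monthly) data, while the prediction uses the aggregated model, whose AR root is phi³ for a three‐month sum; the MA component of the exact aggregate ARMA(1,1) is ignored here for brevity. The 'all‐disaggregated' forecast, which sums iterated monthly predictions, is shown for comparison.

```python
import numpy as np

rng = np.random.default_rng(13)
T_m, phi_true = 600, 0.7
y = np.zeros(T_m)
for t in range(1, T_m):
    y[t] = phi_true * y[t - 1] + rng.standard_normal()

phi_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])   # high-frequency estimation
Y = y.reshape(-1, 3).sum(axis=1)                 # quarterly (3-month) aggregates

hybrid_forecast = phi_hat ** 3 * Y[-1]           # prediction with aggregated model
all_disagg = sum(phi_hat ** k * y[-1] for k in (1, 2, 3))  # sum of monthly forecasts
print(hybrid_forecast, all_disagg)
```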

20.
In this paper we investigate the impact of data revisions on forecasting and model selection procedures. A linear ARMA model and nonlinear SETAR model are considered in this study. Two Canadian macroeconomic time series have been analyzed: the real‐time monetary aggregate M3 (1977–2000) and residential mortgage credit (1975–1998). The forecasting method we use is multi‐step‐ahead non‐adaptive forecasting. Copyright © 2008 John Wiley & Sons, Ltd.
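A small sketch of the two model classes compared: a two‐regime SETAR(2;1,1) next to a linear AR(1), with multi‐step non‐adaptive forecasts (parameters estimated once and held fixed over the horizon). Coefficients and the threshold are illustrative, not the Canadian estimates, and the SETAR path uses the naive plug‐in rule rather than a simulation‐based predictor.

```python
import numpy as np

rng = np.random.default_rng(14)
T, thr = 400, 0.0
y = np.zeros(T)
for t in range(1, T):
    phi = 0.8 if y[t - 1] <= thr else 0.2            # regime-dependent AR coefficient
    y[t] = phi * y[t - 1] + 0.5 * rng.standard_normal()

# Non-adaptive multi-step forecasts from the last observation.
h, setar_path = 4, []
x = y[-1]
for step in range(h):
    phi = 0.8 if x <= thr else 0.2                   # SETAR: plug-in (naive) iteration
    x = phi * x
    setar_path.append(x)
phi_lin = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])       # linear AR(1), estimated once
linear_path = [phi_lin ** (k + 1) * y[-1] for k in range(h)]
print(setar_path, linear_path)
```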
