Similar Documents
20 similar documents found.
1.
This paper develops and estimates a dynamic factor model in which estimates for unobserved monthly US Gross Domestic Product (GDP) are consistent with observed quarterly data. In contrast to existing approaches, the quarterly averages of our monthly estimates are exactly equal to the Bureau of Economic Analysis (BEA) quarterly estimates. The relationship between our monthly estimates and the quarterly data is therefore the same as the relationship between quarterly and annual data. The study makes use of Bayesian Markov chain Monte Carlo and data augmentation techniques to simulate values for the logarithm of monthly US GDP. The imposition of the exact linear quarterly constraint produces a non-standard distribution, necessitating a Metropolis simulation step in the estimation. Our methodology can easily be generalized to cases where the variable of interest is monthly GDP, in such a way that the final results incorporate the statistical uncertainty associated with the monthly GDP estimates. We provide an example by incorporating our monthly estimates into a Markov switching model of the US business cycle. Copyright © 2016 John Wiley & Sons, Ltd.
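The exact quarterly constraint is simple to state: the three monthly level estimates within each quarter must average to the published BEA figure. A minimal numeric sketch of that aggregation check, using made-up GDP numbers rather than actual BEA data:

```python
# Hypothetical monthly GDP levels (annualized, trillions of dollars).
monthly = [20.10, 20.16, 20.22, 20.25, 20.31, 20.37]

def quarterly_averages(m):
    """Average each consecutive block of three monthly values."""
    return [sum(m[i:i + 3]) / 3.0 for i in range(0, len(m), 3)]

# Under the paper's constraint these must equal the BEA quarterly estimates.
bea_quarterly = [20.16, 20.31]
implied = quarterly_averages(monthly)
```

The simulated monthly paths are accepted only if they satisfy this linear restriction exactly, which is what makes the resulting conditional distribution non-standard.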

2.
This paper proposes a mixed-frequency error correction model for possibly cointegrated non-stationary time series sampled at different frequencies. We highlight the impact, in terms of model specification, of the choice of the particular high-frequency explanatory variable to be included in the cointegrating relationship, which we call a dynamic mixed-frequency cointegrating relationship. The forecasting performance of aggregated models and several mixed-frequency regressions is compared in a set of Monte Carlo experiments. In particular, we look at both the unrestricted mixed-frequency model and a more parsimonious MIDAS regression. Whereas the existing literature has only investigated the potential improvements of the MIDAS framework for stationary time series, our study emphasizes the need to include the relevant cointegrating vectors in the non-stationary case. Furthermore, it is illustrated that the choice of dynamic mixed-frequency cointegrating relationship does not matter as long as the short-run dynamics are adapted accordingly. Finally, the unrestricted model is shown to suffer from parameter proliferation in samples of relatively small size, whereas MIDAS forecasts are robust to over-parameterization. We illustrate our results for the US inflation rate. Copyright © 2014 John Wiley & Sons, Ltd.
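The parsimony of a MIDAS regression comes from compressing many high-frequency lags into a couple of hyperparameters via a weight function. A sketch of the commonly used exponential Almon weighting; the parameter values here are illustrative, not taken from the paper:

```python
import math

def exp_almon_weights(K, theta1, theta2):
    """Normalized exponential Almon lag weights for K high-frequency lags."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, K + 1)]
    total = sum(raw)
    return [w / total for w in raw]

# Twelve monthly lags entering a quarterly regression through two parameters,
# instead of twelve freely estimated coefficients.
weights = exp_almon_weights(12, 0.05, -0.02)
```

This two-parameter profile versus twelve free coefficients is exactly the over-parameterization trade-off the Monte Carlo comparison is about.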

3.
Use of monthly data for economic forecasting purposes is typically constrained by the absence of monthly estimates of GDP. Such data can be interpolated but are then prone to measurement error. However, the variance matrix of the measurement errors is typically known. We present a technique for estimating a VAR on monthly data, making use of interpolated estimates of GDP and correcting for the impact of measurement error. We then address the question of how to establish whether the model estimated from the interpolated monthly data contains information absent from the analogous quarterly VAR. The techniques are illustrated using a bivariate VAR modelling GDP growth and inflation. It is found that, using inflation data adjusted to remove seasonal effects and the impact of changes to indirect taxes, the monthly model has little to add to a quarterly model when projecting one quarter ahead. However, the monthly model has an important role to play in building up a picture of the current quarter once one or two months' hard data become available. Copyright © 1999 John Wiley & Sons, Ltd.

4.
A combination of VAR estimation and state space model reduction techniques is examined by Monte Carlo methods in order to find good, simple-to-use procedures for determining models with reasonable prediction properties. The presentation is largely graphical. This helps focus attention on the aspects of the model determination problem that are relatively important for forecasting. One surprising result is that, for prediction purposes, knowledge of the true structure of the model generating the data is not particularly useful unless the parameter values are also known. This is because the difficulty of estimating the parameters of the true model causes more prediction error than results from a more parsimonious approximate model.

5.
We introduce a long-memory dynamic Tobit model, defining it as a censored version of a fractionally integrated Gaussian ARMA model, which may include seasonal components and/or additional regression variables. Parameter estimation for such a model using standard techniques is typically infeasible, since the model is not Markovian, cannot be expressed in a finite-dimensional state-space form, and includes censored observations. Furthermore, the long-memory property renders a standard Gibbs sampling scheme impractical. Therefore we introduce a new Markov chain Monte Carlo sampling scheme, which is orders of magnitude more efficient than the standard Gibbs sampler. The method is inherently capable of handling missing observations. In case studies, the model is fit to two time series: one consisting of volumes of requests to a hard disk over time, and the other consisting of hourly rainfall measurements in Edinburgh over a 2-year period. The resulting posterior distributions for the fractional differencing parameter demonstrate, for these two time series, the importance of the long-memory structure in the models. Copyright © 2006 John Wiley & Sons, Ltd.
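The long-memory (fractionally integrated) component rests on the binomial expansion of (1-B)^d, whose weights decay hyperbolically rather than geometrically, which is why no finite-dimensional state-space form exists. A sketch of that expansion; the value d = 0.4 is an arbitrary illustration, not an estimate from the paper:

```python
def frac_diff_weights(d, k_max):
    """Weights w_k in (1-B)^d = sum_k w_k B^k, via the standard recursion
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, k_max + 1):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

w = frac_diff_weights(0.4, 100)
# Even the 100th weight is non-negligible: every past observation matters.
```

The slow decay of these weights is what defeats standard Gibbs sampling here, since each update must touch the entire history.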

6.
Half-life estimation has been widely used to evaluate the speed of mean reversion for various economic and financial variables. However, half-life estimates for the same variable often differ, owing to the length of the annual time series data used in alternative studies. To address this issue, this paper extends the ARMA model and derives a half-life estimation formula for high-frequency monthly data. Our results indicate that half-life estimation using short-period monthly data is an effective approximation to that using long-period annual data. Furthermore, by applying high-frequency data, the required effective sample size can be reduced by at least 40% at the 95% confidence level. Copyright © 2015 John Wiley & Sons, Ltd.
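In the simplest AR(1) case (the paper works with the more general ARMA model), the half-life with persistence ρ is ln(0.5)/ln(ρ), and annual persistence of a monthly AR(1) is ρ^12, so monthly and annual half-lives coincide once units are matched. A sketch of that identity; ρ = 0.97 is an illustrative value:

```python
import math

def half_life(rho):
    """Half-life of mean reversion for an AR(1) with persistence rho in (0, 1)."""
    return math.log(0.5) / math.log(rho)

rho_monthly = 0.97
hl_months = half_life(rho_monthly)        # half-life in months
hl_years = half_life(rho_monthly ** 12)   # same process sampled annually
# 12 * hl_years equals hl_months: monthly data recover the annual half-life.
```

This exact correspondence in the AR(1) case is the intuition behind approximating long-period annual estimates with short-period monthly data.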

7.
Forecasters commonly predict real gross domestic product growth from monthly indicators such as industrial production, retail sales and surveys, and therefore require an assessment of the reliability of such tools. While forecast errors related to model specification and unavailability of data in real time have been assessed, the impact of data revisions on forecast accuracy has seldom been evaluated, especially for the euro area. This paper proposes to evaluate the contributions of these three sources of forecast error using a set of data vintages for the euro area. The results show that gains in accuracy of forecasts achieved by using monthly data on actual activity rather than surveys or financial indicators are offset by the fact that the former set of monthly data is harder to forecast and less timely than the latter set. These results provide a benchmark which future research may build on as more vintage datasets become available. Copyright © 2008 John Wiley & Sons, Ltd.

8.
This paper proposes a robust multivariate threshold vector autoregressive model with generalized autoregressive conditional heteroskedasticities and dynamic conditional correlations to describe conditional mean, volatility and correlation asymmetries in financial markets. In addition, the threshold variable for regime switching is formulated as a weighted average of endogenous variables to eliminate excessively subjective belief in the threshold variable decision and to serve as the proxy in deciding which market should be the price leader. The estimation is performed using Markov chain Monte Carlo methods. Furthermore, several meaningful criteria are introduced to assess the forecasting performance in the conditional covariance matrix. The proposed methodology is illustrated using daily S&P500 futures and spot prices. Copyright © 2010 John Wiley & Sons, Ltd.

9.
This paper investigates the trade-off between timeliness and quality in nowcasting practices. This trade-off arises when the frequency of the variable to be nowcast, such as gross domestic product (GDP), is quarterly, while that of the underlying panel data is monthly; and the latter contains both survey and macroeconomic data. These two categories of data have different properties regarding timeliness and quality: the survey data are timely available (but might possess less predictive power), while the macroeconomic data possess more predictive power (but are not timely available because of their publication lags). In our empirical analysis, we use a modified dynamic factor model which takes three refinements for the standard dynamic factor model of Stock and Watson (Journal of Business and Economic Statistics, 2002, 20, 147–162) into account, namely mixed frequency, preselections and cointegration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on the economic circumstances; namely, that survey data are more useful in tranquil times, and less so in times of turmoil.

10.
This paper shows that out-of-sample forecast comparisons can help prevent data-mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real-data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate that most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application, modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
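Equal-MSE comparisons of this kind are typically built on the mean of the squared-error loss differential, scaled by its standard error, in the spirit of Diebold-Mariano-type statistics. A minimal sketch with made-up forecast errors (not the paper's test, which also handles the nested-model case):

```python
import math
import statistics

def equal_mse_stat(err_a, err_b):
    """t-type statistic on the loss differential d_t = e_a^2 - e_b^2.
    Negative values favour model A; near zero means equal MSE."""
    d = [a * a - b * b for a, b in zip(err_a, err_b)]
    n = len(d)
    return statistics.mean(d) / (statistics.stdev(d) / math.sqrt(n))

# Model A is systematically more accurate here, so the statistic is negative.
stat = equal_mse_stat([0.1, -0.2, 0.15, -0.1, 0.05, -0.12],
                      [0.5, -0.6, 0.55, -0.4, 0.45, -0.52])
```

For nested models the limiting distribution of such statistics is non-standard, which is one reason the paper studies their size and power by simulation.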

11.
A new clustered correlation multivariate generalized autoregressive conditional heteroskedasticity (CC-MGARCH) model that allows conditional correlations to form clusters is proposed. This model generalizes the time-varying correlation structure of Tse and Tsui (2002, Journal of Business and Economic Statistics, 20: 351–361) by classifying the correlations among the series into groups. To estimate the proposed model, Markov chain Monte Carlo methods are adopted. Two efficient sampling schemes for drawing discrete indicators are also developed. Simulations show that these efficient sampling schemes can lead to substantial savings in computation time in Monte Carlo procedures involving discrete indicators. Empirical examples using stock market and exchange rate data are presented in which two-cluster and three-cluster models are selected using posterior probabilities. This implies that the conditional correlation equation is likely to be governed by more than one set of decaying parameters. Copyright © 2011 John Wiley & Sons, Ltd.

12.
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
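A two-regime SETAR(1) switches its autoregressive coefficient according to whether the lagged value lies above or below a threshold. A sketch of such a data-generating process; the coefficients are illustrative and not the paper's Monte Carlo design:

```python
import random

def simulate_setar(n, phi_low=0.9, phi_high=0.3, threshold=0.0, seed=1):
    """Simulate y_t = phi * y_{t-1} + eps_t, where phi depends on whether
    y_{t-1} is below or above the threshold."""
    rng = random.Random(seed)
    y, path = 0.0, []
    for _ in range(n):
        phi = phi_low if y <= threshold else phi_high
        y = phi * y + rng.gauss(0.0, 1.0)
        path.append(y)
    return path

series = simulate_setar(500)
```

When the two regimes' coefficients are close, realizations from this process are hard to distinguish from a linear AR in macro-sized samples, which is the paper's central finding.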

13.
This paper assesses the informational content of alternative realized volatility estimators, the daily range and implied volatility in multi-period out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovations process and a Monte Carlo simulation approach in order to produce the multi-period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce VaR forecasts that are precise both statistically and in regulatory terms across forecasting horizons, with implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune to microstructure noise bias or price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.
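Multi-period VaR by Monte Carlo amounts to simulating many cumulative return paths from the fitted model and reading off a lower quantile. A stripped-down sketch with i.i.d. Gaussian returns standing in for the Realized GARCH dynamics and skewed-t innovations; all parameter values are illustrative:

```python
import random

def mc_var(mu, sigma, horizon, alpha=0.01, n_sims=20000, seed=7):
    """alpha-level multi-period VaR: simulate cumulative returns over the
    horizon and report the (negated) alpha-quantile as a positive loss."""
    rng = random.Random(seed)
    cum = sorted(sum(rng.gauss(mu, sigma) for _ in range(horizon))
                 for _ in range(n_sims))
    return -cum[int(alpha * n_sims)]

var_10d = mc_var(mu=0.0, sigma=0.01, horizon=10)  # 10-day 1% VaR
```

With a genuine volatility model, sigma would be updated path by path from the fitted dynamics instead of held fixed; the quantile step is unchanged.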

14.
The estimation of hurricane intensity evolution in some tropical and subtropical areas is a challenging problem. Indeed, the prevention and quantification of possible damage caused by destructive hurricanes are directly linked to this kind of forecast. For this purpose, hurricane derivatives have recently been issued by the Chicago Mercantile Exchange, based on the so-called Carvill hurricane index. In our paper, we adopt a parametric homogeneous semi-Markov approach. This model assumes that the lifespan of a hurricane can be described as a semi-Markov process, and it allows the more realistic assumption of time-event dependence to be taken into consideration. The elapsed time between two consecutive events (the waiting time distribution) is modeled through a best-fitting procedure on empirical data. We then determine the transition probabilities and the so-called crossing-state probabilities. We conclude with a Monte Carlo simulation, and the model is validated against a large database of real data from HURDAT. Copyright © 2012 John Wiley & Sons, Ltd.

15.
This paper studies in-sample and out-of-sample tests for Granger causality using Monte Carlo simulation. The results show that the out-of-sample tests may be more powerful than the in-sample tests when discrete structural breaks appear in time series data. Further, an empirical example investigating Taiwan's investment–saving relationship shows that Taiwan's domestic savings may be helpful in predicting domestic investments. It further illustrates that a possible Granger causal relationship is detected by out-of-sample tests while the in-sample test fails to reject the null of non-causality. Copyright © 2005 John Wiley & Sons, Ltd.

16.
The paper presents a comparative real-time analysis of alternative indirect estimates relative to monthly euro area employment. In the experiment, quarterly employment is temporally disaggregated using monthly unemployment as the related series. The strategies under comparison make use of the contribution of sectoral data for the euro area and its six largest member states. The comparison is carried out among univariate temporal disaggregations of the Chow and Lin type and multivariate structural time series models of small and medium size. Specifications in logarithms are also systematically assessed. All multivariate set-ups, up to 49 series modelled simultaneously, are estimated via the EM algorithm. The main conclusions are that mean revision errors of the disaggregated estimates are overall small, that a gain is obtained when the model strategy takes into account information by both sector and member state, and that larger multivariate set-ups perform very well, with several advantages over simpler models. Copyright © 2014 John Wiley & Sons, Ltd.

17.
This paper examines the problem of forecasting macro-variables which are observed monthly (or quarterly) and result from geographical and sectorial aggregation. The aim is to formulate a methodology whereby all relevant information gathered in this context could provide more accurate forecasts, be frequently updated, and include a disaggregated explanation as useful information for decision-making. The appropriate treatment of the resulting disaggregated data set requires vector modelling, which captures the long-run restrictions between the different time series and the short-term correlations existing between their stationary transformations. Frequently, due to a lack of degrees of freedom, the vector model must be restricted to a block-diagonal vector model. This methodology is applied in this paper to inflation in the euro area, and shows that disaggregated models with cointegration restrictions improve accuracy in forecasting aggregate macro-variables. Copyright © 2007 John Wiley & Sons, Ltd.

18.
We investigate the optimal structure of dynamic regression models used in multivariate time series prediction and propose a scheme to form the lagged variable structure, called Backward-in-Time Selection (BTS), which takes into account feedback and multicollinearity, often present in multivariate time series. We compare BTS to other known methods, also in conjunction with regularization techniques used for the estimation of model parameters, namely principal components, partial least squares and ridge regression estimation. The predictive efficiency of the different models is assessed by means of Monte Carlo simulations for different settings of feedback and multicollinearity. The results show that BTS has consistently good prediction performance, while other popular methods have varying and often inferior performance. The prediction performance of BTS was also found to be the best when tested on human electroencephalograms of an epileptic seizure, and for the prediction of returns of indices of world financial markets. Copyright © 2013 John Wiley & Sons, Ltd.

19.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log-likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy-tailed densities such as the t distribution or a mixture of normals. For this class of non-Gaussian models, Monte Carlo simulation techniques are used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.
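In lag-operator form the Airline model reads (1-B)(1-B^12) y_t = (1+θB)(1+ΘB^12) ε_t: a double difference on the left, a multiplicative MA on the right. A sketch that simulates from it by building the MA part and inverting the two differences; the θ and Θ values are illustrative:

```python
import random

def simulate_airline(n, theta=-0.4, Theta=-0.6, sigma=1.0, seed=42):
    """Simulate n observations of the Airline model, starting the double
    difference from zero pre-sample values."""
    rng = random.Random(seed)
    eps = [rng.gauss(0.0, sigma) for _ in range(n + 13)]
    # Expanded MA polynomial: (1 + theta B)(1 + Theta B^12).
    u = [eps[t] + theta * eps[t - 1] + Theta * eps[t - 12]
         + theta * Theta * eps[t - 13] for t in range(13, n + 13)]
    # Invert (1 - B)(1 - B^12): y_t = y_{t-1} + y_{t-12} - y_{t-13} + u_t.
    y = [0.0] * 13
    for t in range(n):
        y.append(y[-1] + y[-12] - y[-13] + u[t])
    return y[13:]

series = simulate_airline(120)  # ten years of monthly data
```

The canonical decomposition discussed in the abstract rewrites this same process as a sum of trend, seasonal and irregular components, which is what makes the outlier and break extensions tractable.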

20.
We propose a solution for selecting promising subsets of autoregressive time series models for further consideration, following up on the idea of the stochastic search variable selection procedure in George and McCulloch (1993). It is based on a Bayesian approach which is unconditional on the initial terms. The autoregressive setup takes the form of a hierarchical normal mixture model, where latent variables are used to identify the subset choice. Our procedure is implemented via the Gibbs sampler, a Markov chain Monte Carlo method. The advantage of the method presented is computational: it offers an alternative way to search over a potentially large set of possible subsets. The proposed method is illustrated with simulated data as well as real data. Copyright © 1999 John Wiley & Sons, Ltd.
