Similar Documents
20 similar documents found (search time: 31 ms)
1.
Is there a common model inherent in macroeconomic data? Macroeconomic theory suggests that market economies of various nations should share many similar dynamic patterns; as a result, individual country empirical models, for a wide variety of countries, often include the same variables. Yet, empirical studies often find important roles for idiosyncratic shocks in the differing macroeconomic performance of countries. We use forecasting criteria to examine the macrodynamic behaviour of 15 OECD countries in terms of a small set of familiar, widely used core economic variables, omitting country‐specific shocks. We find that this small set of variables and a simple VAR ‘common model’ strongly support the hypothesis that many industrialized nations have similar macroeconomic dynamics. Copyright © 2005 John Wiley & Sons, Ltd.
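A minimal sketch of the VAR "common model" idea, using simulated data rather than the paper's OECD variables (the two series, coefficient values, and sample size are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-variable system (stand-ins for, say, output growth and inflation).
T = 400
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])                    # true VAR(1) coefficient matrix
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Estimate the VAR(1) by equation-by-equation least squares.
X, Y = y[:-1], y[1:]
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
A_hat = B.T                                   # rows: equations, columns: lagged variables

# One-step-ahead forecast from the last observation.
forecast = A_hat @ y[-1]
```

The paper's exercise amounts to asking whether one such `A_hat` fits many countries nearly as well as country-specific estimates do.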

2.
This paper introduces a novel generalized autoregressive conditional heteroskedasticity–mixed data sampling–extreme shocks (GARCH-MIDAS-ES) model for stock volatility to examine whether the importance of extreme shocks changes in different time ranges. Based on different combinations of the short- and long-term effects caused by extreme events, we extend the standard GARCH-MIDAS model to characterize the different responses of the stock market for short- and long-term horizons, separately or in combination. The unique timespan of nearly 100 years of the Dow Jones Industrial Average (DJIA) daily returns allows us to understand the stock market volatility under extreme shocks from a historical perspective. The in-sample empirical results clearly show that the DJIA stock volatility is best fitted to the GARCH-MIDAS-SLES model by including the short- and long-term impacts of extreme shocks for all forecasting horizons. The out-of-sample results and robustness tests emphasize the significance of decomposing the effect of extreme shocks into short- and long-term effects to improve the accuracy of the DJIA volatility forecasts.
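For orientation, the short-run building block of the GARCH-MIDAS family is an ordinary GARCH(1,1) variance recursion. The sketch below implements only that recursion on simulated returns; the MIDAS long-run component and the extreme-shock terms of the paper's model are omitted, and all parameter values are illustrative:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional-variance recursion of a plain GARCH(1,1).

    The GARCH-MIDAS family multiplies this short-run component by a slowly
    moving long-run (MIDAS) component; this sketch keeps only the short-run
    recursion, so parameter names and values are purely illustrative."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                 # initialize at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

rng = np.random.default_rng(1)
r = rng.normal(scale=0.01, size=500)          # stand-in for daily DJIA returns
sig2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.90)
```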

3.
This paper focuses on the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models. We compare the forecasts obtained directly from the aggregated series based on its univariate model with the aggregation of the forecasts obtained for each component of the aggregate. Within this framework (first obtain the forecasts for the component series and then aggregate the forecasts), we try two different approaches: (i) generate forecasts from the multivariate dynamic factor model and (ii) generate the forecasts from univariate models for each component of the aggregate. In this regard, we provide analytical conditions for the equality of forecasts. The results are applied to quarterly gross domestic product (GDP) data of several European countries of the euro area and to their aggregated GDP. This will be compared to the prediction obtained directly from modeling and forecasting the aggregate GDP of these European countries. In particular, we would like to check whether long‐run relationships between the levels of the components are useful for improving the forecasting accuracy of the aggregate growth rate. We will make forecasts at the country level and then pool them to obtain the forecast of the aggregate. The empirical analysis suggests that forecasts built by aggregating the country‐specific models are more accurate than forecasts constructed using the aggregated data. Copyright © 2014 John Wiley & Sons, Ltd.
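The aggregate-versus-disaggregate comparison can be sketched with two simulated components and simple AR(1) forecasts (a much cruder model than the paper's dynamic factor framework; series and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_ar1(x):
    """Least-squares AR(1) slope (intercept omitted for brevity)."""
    return (x[:-1] @ x[1:]) / (x[:-1] @ x[:-1])

def ar1_forecast(x):
    return fit_ar1(x) * x[-1]

# Two component series with different persistence (stand-ins for country GDPs).
T = 300
c1, c2 = np.zeros(T), np.zeros(T)
for t in range(1, T):
    c1[t] = 0.8 * c1[t - 1] + rng.normal()
    c2[t] = 0.3 * c2[t - 1] + rng.normal()
agg = c1 + c2

# (i) Forecast the components, then aggregate the forecasts.
f_disagg = ar1_forecast(c1) + ar1_forecast(c2)
# (ii) Forecast the aggregate directly from its own univariate model.
f_direct = ar1_forecast(agg)
```

When the components have different dynamics, as here, the direct model on `agg` misspecifies the aggregate's ARMA structure, which is one intuition behind the paper's finding.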

4.
Data revisions and the selection of appropriate forward‐looking variables have a major impact on the identification of news shocks and on the quality of research findings derived from structural vector autoregression (SVAR) estimation. This paper revisits news shocks to identify the role of different vintages of total factor productivity (TFP) series and the term structure of interest rates as major prognosticators of future economic growth. There is a growing strand of literature regarding the use of utilization‐adjusted TFP series, provided by Fernald (Federal Reserve Bank of San Francisco, Working Paper Series, 2014), for identification of news shocks. We reestimate Barsky and Sims' (Journal of Monetary Economics, 2011, 58, 273–289) empirical analysis by employing the 2007 and 2015 vintages of TFP data. We find substantial quantitative as well as qualitative differences among impulse response functions when using the 2007 and 2015 vintages of TFP data. Output and hours initially decline, followed by a quick reversal of both variables. In sharp contrast to results obtained with the 2007 vintage of TFP data, results obtained with the 2015 vintage show that output and hours increase in response to a positive TFP shock. By including term structure data in our VAR specification, the total surprise technology shock and news shock account for 97% and 92% of the forecast error variance in total TFP and total output respectively. We find that revisions in the TFP series over time ultimately impact the conclusions regarding the effect of news shocks on business cycles. Our results support the notion that term structure data help in better identification of news shocks compared to other forward‐looking variables.

5.
Computable general equilibrium (CGE) models are widely used as an advanced tool to evaluate alternative economic strategies and policy measures. These models are well rooted in solid economic theory, yet a crucial question is hardly asked: how well do these models perform? We address this question by comparing the economic performance of the Spanish economy in 1988 with the simulation results drawn from a CGE model calibrated with a 1987 Social Accounting Matrix. The values of endogenous variables used in the comparison are the equilibrium values provided by the model after updating the values of exogenous variables such as labour and capital endowments, real exports and effective nominal exchange rates with the European Community and the rest of the world, real government expenditures, and various tax rates, government subsidies, and transfers. The comparison shows that the model captures adequately the major developments that occurred in the Spanish economy in 1988. This result increases our confidence in the quantitative estimates derived from the model in the usual simulation exercises.

6.
It is well understood that the standard formulation for the variance of a regression‐model forecast produces interval estimates that are too narrow, principally because it ignores regressor forecast error. While the theoretical problem has been addressed, there has not been an adequate explanation of the effect of regressor forecast error, and the empirical literature has supplied a disparate variety of bits and pieces of evidence. Most business‐forecasting software programs continue to supply only the standard formulation. This paper extends existing analysis to derive and evaluate large‐sample approximations for the forecast error variance in a single‐equation regression model. We show how these approximations substantially clarify the expected effects of regressor forecast error. We then present a case study, which (a) demonstrates how rolling out‐of‐sample evaluations can be applied to obtain empirical estimates of the forecast error variance, (b) shows that these estimates are consistent with our large‐sample approximations and (c) illustrates, for ‘typical’ data, how seriously the standard formulation can understate the forecast error variance. Copyright © 2000 John Wiley & Sons, Ltd.
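The rolling out-of-sample idea can be sketched as follows: refit the regression on an expanding window, forecast the next observation using a (necessarily imperfect) forecast of the regressor, and compare the empirical error variance with the disturbance variance that the standard formula would use. All data and parameters are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# y depends on a regressor x that must itself be forecast out of sample.
T = 300
x = np.cumsum(rng.normal(scale=0.5, size=T)) + 5.0   # random-walk regressor
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=T)

errors = []
for t in range(200, T):
    # Fit on the first t observations only.
    X = np.column_stack([np.ones(t), x[:t]])
    b, *_ = np.linalg.lstsq(X, y[:t], rcond=None)
    x_hat = x[t - 1]                # naive random-walk forecast of the regressor
    errors.append(y[t] - (b[0] + b[1] * x_hat))

empirical_var = np.var(errors)      # includes the regressor-forecast-error contribution
nominal_var = 0.5 ** 2              # disturbance variance alone ("standard" formulation)
```

Here `empirical_var` exceeds `nominal_var` by roughly the slope squared times the regressor forecast error variance, which is the effect the paper's approximations quantify.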

7.
We propose a new class of limited information estimators built upon an explicit trade‐off between data fitting and a priori model specification. The estimators offer the researcher a continuum of estimators that range from an extreme emphasis on data fitting and robust reduced‐form estimation to the other extreme of exact model specification and efficient estimation. The approach used to generate the estimators illustrates why ULS often outperforms 2SLS‐PRRF even in the context of a correctly specified model, provides a new interpretation of 2SLS, and integrates Wonnacott and Wonnacott's (1970) least weighted variance estimators with other techniques. We apply the new class of estimators to Klein's Model I and generate forecasts. We find for this example that an emphasis on specification (as opposed to data fitting) produces better out‐of‐sample predictions. Copyright © 1999 John Wiley & Sons, Ltd.

8.
In this paper we develop a latent structure extension of a commonly used structural time series model and use the model as a basis for forecasting. Each unobserved regime has its own unique slope and variances to describe the process generating the data, and at any given time period the model predicts a priori which regime best characterizes the data. This is accomplished by using a multinomial logit model in which the primary explanatory variable is a measure of how consistent each regime has been with recent observations. The model is especially well suited to forecasting series which are subject to frequent and/or major shocks. An application to nominal interest rates shows that the behaviour of the three‐month US Treasury bill rate is adequately explained by three regimes. The forecasting accuracy is superior to that produced by a traditional single‐regime model and a standard ARIMA model with a conditionally heteroscedastic error. Copyright © 1999 John Wiley & Sons, Ltd.
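The "which regime is most consistent with recent data" mechanism can be sketched with a softmax over each regime's recent squared errors. This is a simplified stand-in for the paper's multinomial logit, with made-up regime slopes:

```python
import numpy as np

rng = np.random.default_rng(4)

# Three hypothetical regimes, each characterized by its own slope.
slopes = np.array([-0.5, 0.0, 0.5])

def regime_weights(recent_diffs, temperature=1.0):
    """Softmax over (negative) recent squared errors of each regime's slope --
    a stand-in for the paper's multinomial logit on regime consistency."""
    sse = ((recent_diffs[:, None] - slopes) ** 2).sum(axis=0)
    z = -sse / temperature
    z -= z.max()                    # numerical stability
    w = np.exp(z)
    return w / w.sum()

# A stretch of data whose first differences hover near +0.5:
diffs = rng.normal(loc=0.5, scale=0.1, size=10)
w = regime_weights(diffs)           # weight concentrates on the third regime
```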

9.
This paper provides an account of mid-level models which calibrate highly theoretical agent-based models of scientific communities by incorporating empirical information from real-world systems. As a result, these models more closely correspond with real-world communities, and are better suited for informing policy decisions than extant how-possibly models. I provide an exemplar of a mid-level model of science funding allocation that incorporates bibliometric data from scientific publications and data generated from empirical studies of peer review into an epistemic landscape model. The results of my model show that on a dynamic epistemic landscape, allocating funding by modified and pure lottery strategies performs comparably to a perfect selection funding allocation strategy. These results support the idea that introducing randomness into a funding allocation process may be a tractable policy worth exploring further through pilot studies. My exemplar shows that agent-based models need not be restricted to the abstract and the a priori; they can also be informed by empirical data.

10.
This paper examines the forecasting ability of the nonlinear specifications of the market model. We propose a conditional two‐moment market model with a time‐varying systematic covariance (beta) risk in the form of a mean reverting process of the state‐space model via the Kalman filter algorithm. In addition, we account for the systematic component of co‐skewness and co‐kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time‐varying market model approaches, which outperform linear model specifications in terms of both model fit and predictability. Specifically, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most of the emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
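The core state-space idea, a Kalman filter tracking a mean-reverting beta, can be sketched on simulated data. The noise variances and mean-reversion coefficient below are illustrative, and the filter is given the true parameters rather than estimating them as the paper would:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulate a market model r_t = beta_t * m_t + e_t with a mean-reverting beta.
T = 500
phi, beta_bar = 0.95, 1.0
beta = np.empty(T)
beta[0] = beta_bar
for t in range(1, T):
    beta[t] = beta_bar + phi * (beta[t - 1] - beta_bar) + rng.normal(scale=0.1)
m = rng.normal(size=T)                        # market return
r = beta * m + rng.normal(scale=0.3, size=T)  # asset return

# Kalman filter for the latent beta_t; the measurement "matrix" is m_t.
q, h = 0.1 ** 2, 0.3 ** 2                     # state / measurement noise variances
b_hat, P = beta_bar, 1.0
estimates = np.empty(T)
for t in range(T):
    # Predict step (mean-reverting state equation).
    b_pred = beta_bar + phi * (b_hat - beta_bar)
    P_pred = phi ** 2 * P + q
    # Update step.
    S = m[t] ** 2 * P_pred + h                # innovation variance
    K = P_pred * m[t] / S                     # Kalman gain
    b_hat = b_pred + K * (r[t] - m[t] * b_pred)
    P = (1.0 - K * m[t]) * P_pred
    estimates[t] = b_hat
```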

11.
Recently developed structural models of the global crude oil market imply that the surge in the real price of oil between mid-2003 and mid-2008 was driven by repeated positive shocks to the demand for all industrial commodities, reflecting unexpectedly high growth mainly in emerging Asia. We evaluate this proposition using an alternative data source and a different econometric methodology. Rather than inferring demand shocks from an econometric model, we utilize a direct measure of global demand shocks based on revisions of professional real gross domestic product (GDP) growth forecasts. We show that forecast surprises during 2003–2008 were associated primarily with unexpected growth in emerging economies (in conjunction with much smaller positive GDP‐weighted forecast surprises in the major industrialized economies), that markets were repeatedly surprised by the strength of this growth, that these surprises were associated with a hump‐shaped response of the real price of oil that reaches its peak after 12–16 months, and that news about global growth predicts much of the surge in the real price of oil from mid-2003 until mid-2008 and much of its subsequent decline. Copyright © 2012 John Wiley & Sons, Ltd.

12.
The estimation of hurricane intensity evolution in some tropical and subtropical areas is a challenging problem. Indeed, preventing and quantifying the damage caused by destructive hurricanes depend directly on this kind of prediction. For this purpose, hurricane derivatives have recently been issued by the Chicago Mercantile Exchange, based on the so‐called Carvill hurricane index. In our paper, we adopt a parametric homogeneous semi‐Markov approach. This model assumes that the lifespan of a hurricane can be described as a semi‐Markov process, and it allows the more realistic assumption of event-time dependence to be taken into consideration. The elapsed time between two consecutive events (waiting time distributions) is modeled through a best‐fitting procedure on empirical data. We then determine the transition probabilities and so‐called crossing-state probabilities. We conclude with a Monte Carlo simulation, and the model is validated against a large database of real data from HURDAT. Copyright © 2012 John Wiley & Sons, Ltd.
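A semi-Markov simulation alternates state-dependent waiting times with jumps of an embedded Markov chain. The sketch below uses hypothetical intensity states, a made-up transition matrix, and a Weibull waiting-time family chosen for illustration (the paper instead fits the waiting-time distributions to HURDAT data):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical intensity states (category-like levels 0..2) and an embedded-chain
# transition matrix with a zero diagonal (a jump always changes the state).
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.6, 0.4, 0.0]])
WEIBULL_SHAPE, SCALE_HOURS = 1.5, 12.0   # illustrative waiting-time parameters

def simulate_semi_markov(horizon_hours, state=0):
    """Simulate (state, holding-time) pairs until total time exceeds the horizon."""
    t, path = 0.0, []
    while t < horizon_hours:
        hold = rng.weibull(WEIBULL_SHAPE) * SCALE_HOURS   # time spent in this state
        path.append((state, hold))
        t += hold
        state = rng.choice(3, p=P[state])                 # jump of the embedded chain
    return path

path = simulate_semi_markov(240.0)
```

Repeating `simulate_semi_markov` many times is the Monte Carlo step; crossing-state probabilities can then be estimated from the simulated paths.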

13.
This paper first shows that survey‐based expectations (SBE) outperform standard time series models in US quarterly inflation out‐of‐sample prediction and that the term structure of survey‐based inflation forecasts has predictive power over the path of future inflation changes. It then proposes some empirical explanations for the forecasting success of survey‐based inflation expectations. We show that SBE pool a large amount of heterogeneous information on inflation expectations and react more flexibly and accurately to macro conditions both contemporaneously and dynamically. We illustrate the flexibility of SBE forecasts in the context of the 2008 financial crisis. Copyright © 2011 John Wiley & Sons, Ltd.

14.
Standard measures of prices are often contaminated by transitory shocks. This has prompted economists to suggest the use of measures of underlying inflation to formulate monetary policy and assist in forecasting observed inflation. Recent work has concentrated on modelling large data sets using factor models. In this paper we estimate factors from data sets of disaggregated price indices for European countries. We then assess the forecasting ability of these factor estimates against other measures of underlying inflation built from more traditional methods. The power to forecast headline inflation over horizons of 12 to 18 months is adopted as a valid criterion to assess forecasting. Empirical results for the five largest euro area countries, as well as for the euro area itself, are presented. Copyright © 2005 John Wiley & Sons, Ltd.
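Extracting a common factor from a panel of disaggregated price series is typically done by principal components. A minimal sketch on simulated data (one common factor, forty noisy series; all dimensions and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated panel: 40 disaggregated price series driven by one common factor.
T, N = 200, 40
factor = 0.1 * np.cumsum(rng.normal(size=T))       # persistent "underlying inflation"
loadings = rng.uniform(0.5, 1.5, size=N)
panel = factor[:, None] * loadings + rng.normal(scale=0.5, size=(T, N))

# Principal-components estimate of the common factor: first left singular vector
# of the demeaned panel (identified only up to sign and scale).
X = panel - panel.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
factor_hat = U[:, 0] * s[0] / np.sqrt(N)
```

The estimated `factor_hat` is the kind of underlying-inflation measure whose 12-to-18-month forecasting power the paper evaluates.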

15.
‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large scale comparison of time series forecasting methods including the Bayesian. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts the Bayesian approach with other time series methods.
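The "weighted sums of conditional forecasts" step reduces to Bayes' theorem over the model class. A minimal sketch with a hypothetical two-model class (the forecasts, likelihoods, and priors are made up for illustration):

```python
import numpy as np

def bayes_combine(forecasts, likelihoods, priors):
    """Posterior-weighted combination of model-conditional forecasts.

    forecasts[i]   : model i's conditional point forecast
    likelihoods[i] : p(latest observation | model i)
    priors[i]      : prior probability of model i
    """
    post = np.asarray(priors, float) * np.asarray(likelihoods, float)
    post /= post.sum()                       # Bayes' theorem, normalized
    return float(np.dot(post, forecasts)), post

# Illustrative two-model class: "no change" vs. "level shift".
fc, post = bayes_combine(forecasts=[10.0, 14.0],
                         likelihoods=[0.05, 0.60],
                         priors=[0.9, 0.1])
```

An observation that is much more likely under the level-shift model pulls the posterior, and hence the combined forecast, toward that model despite its small prior.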

16.
Adaptive exponential smoothing methods allow a smoothing parameter to change over time, in order to adapt to changes in the characteristics of the time series. However, these methods have tended to produce unstable forecasts and have performed poorly in empirical studies. This paper presents a new adaptive method, which enables a smoothing parameter to be modelled as a logistic function of a user‐specified variable. The approach is analogous to that used to model the time‐varying parameter in smooth transition models. Using simulated data, we show that the new approach has the potential to outperform existing adaptive methods and constant parameter methods when the estimation and evaluation samples both contain a level shift or both contain an outlier. An empirical study, using the monthly time series from the M3‐Competition, gave encouraging results for the new approach. Copyright © 2004 John Wiley & Sons, Ltd.
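The logistic-smoothing-parameter idea can be sketched directly: alpha_t = logistic(a + b·v_t) for a user-specified transition variable v_t. The transition variable and coefficients below are illustrative choices, not the paper's:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_ses(y, v, a, b):
    """Simple exponential smoothing whose smoothing parameter is a logistic
    function of a user-specified transition variable v_t:
    alpha_t = logistic(a + b * v_t). The functional form mirrors the paper's
    idea; the coefficients here are illustrative."""
    level = y[0]
    fitted = np.empty(len(y))
    for t in range(len(y)):
        fitted[t] = level                     # one-step-ahead forecast of y[t]
        alpha = logistic(a + b * v[t])
        level += alpha * (y[t] - level)       # update with the time-varying alpha
    return fitted

rng = np.random.default_rng(8)
y = np.concatenate([rng.normal(10, 1, 100), rng.normal(20, 1, 100)])  # level shift
v = np.abs(np.diff(y, prepend=y[0]))          # big jumps push alpha toward 1
f = adaptive_ses(y, v, a=-3.0, b=1.0)
```

In calm stretches alpha stays small (stable forecasts); at the level shift the large jump drives alpha toward 1, so the level adapts almost immediately.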

17.
China is a populous country that is facing serious aging problems due to the single‐child birth policy. Debate is ongoing whether the liberalization of the single‐child policy to a two‐child policy can mitigate China's aging problems without unacceptably increasing the population. The purpose of this paper is to apply machine learning theory to the demographic field and project China's population structure under different fertility policies. The population data employed derive from the fifth and sixth national census records obtained in 2000 and 2010 in addition to the annals published by the China National Bureau of Statistics. Firstly, the sex ratio at birth is estimated according to the total fertility rate based on least squares regression of time series data. Secondly, the age‐specific fertility rates and age‐specific male/female mortality rates are projected by a least squares support vector machine (LS‐SVM) model, which then serve as the input to a Leslie matrix model. Finally, the male/female age‐specific population data projected by the Leslie matrix in a given year serve as the input parameters of the Leslie matrix for the following year, and the process is iterated in this manner until reaching the target year. The experimental results reveal that the proposed LS‐SVM‐Leslie model improves the projection accuracy relative to the conventional Leslie matrix model in terms of the percentage error and mean algebraic percentage error. The results indicate that the total fertility ratio should be controlled to around 2.0 to balance concerns about excessive population size against concerns about population aging. Therefore, the two‐child birth policy should be fully instituted in China. However, the fertility desire of women tends to be low due to the high cost of living and the pressure associated with employment, particularly in the metropolitan areas. Thus, additional policies should be implemented to encourage fertility.
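The Leslie matrix iteration at the core of the pipeline can be sketched with a toy three-age-class example (the vital rates below are hypothetical, and the rates are held constant rather than re-projected each year by an LS-SVM as in the paper):

```python
import numpy as np

# Toy three-age-class Leslie matrix (hypothetical vital rates, not the paper's):
# top row = age-specific fertility, sub-diagonal = survival probabilities.
L = np.array([[0.0, 1.2, 0.8],
              [0.9, 0.0, 0.0],
              [0.0, 0.8, 0.0]])

pop = np.array([100.0, 80.0, 60.0])     # initial population by age class

# Iterate year by year; the paper re-estimates the vital rates annually
# before applying the matrix, then feeds the result into the next year.
history = [pop]
for _ in range(20):
    pop = L @ pop
    history.append(pop)
growth = history[-1].sum() / history[-2].sum()   # approaches the dominant eigenvalue
```

The long-run growth factor of the projected population converges to the dominant eigenvalue of `L`, which is how fertility assumptions translate into aggregate population paths.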

18.
This article compares the forecast accuracy of different methods, namely prediction markets, tipsters and betting odds, and assesses the ability of prediction markets and tipsters to generate profits systematically in a betting market. We present the results of an empirical study that uses data from 678–837 games of three seasons of the German premier soccer league. Prediction markets and betting odds perform equally well in terms of forecasting accuracy, but both methods strongly outperform tipsters. A weighting‐based combination of the forecasts of these methods leads to a slightly higher forecast accuracy, whereas a rule‐based combination improves forecast accuracy substantially. However, none of the forecasts leads to systematic monetary gains in betting markets because of the high fees (25%) charged by the state‐owned bookmaker in Germany. Lower fees (e.g., approximately 12% or 0%) would provide systematic profits if punters exploited the information from prediction markets and bet only on a selected number of games. Copyright © 2008 John Wiley & Sons, Ltd.

19.
This paper introduces discrete Euler processes and shows their application in detecting and forecasting cycles in non‐stationary data where periodic behavior changes approximately linearly in time. A discrete Euler process becomes a classical stationary process if ‘time’ is transformed properly. By moving from one time domain to another, one may deform certain time‐varying data to non‐time‐varying data. With these non‐time‐varying data on the deformed timescale, one may use traditional tools to do parameter estimation and forecasts. The obtained results then can be transformed back to the original timescale. For datasets with an underlying discrete Euler process, the sample M‐spectrum and the spectral estimator of a Euler model (i.e., the EAR spectrum) are used to detect cycles of a Euler process. Beam response and whale data are used to demonstrate the usefulness of a Euler model. Copyright © 2008 John Wiley & Sons, Ltd.

20.
In this paper, we apply Bayesian inference to model and forecast intraday trading volume, using autoregressive conditional volume (ACV) models, and we evaluate the quality of volume point forecasts. In the empirical application, we focus on the analysis of both in‐ and out‐of‐sample performance of Bayesian ACV models estimated for 2‐minute trading volume data for stocks quoted on the Warsaw Stock Exchange in Poland. We calculate two types of point forecasts, using either expected values or medians of predictive distributions. We conclude that, in general, all considered models generate significantly biased forecasts. We also observe that the considered models significantly outperform such benchmarks as the naïve or rolling means forecasts. Moreover, in terms of root mean squared forecast errors, point predictions obtained within the ACV model with exponential distribution emerge superior compared to those calculated in structures with more general innovation distributions, although in many cases this characteristic turns out to be statistically insignificant. On the other hand, when comparing mean absolute forecast errors, the median forecasts obtained within the ACV models with Burr and generalized gamma distribution are found to be statistically better than other forecasts.
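The ACV recursion is GARCH-like but for a positive-valued volume series: x_t = lambda_t · eps_t with lambda_t updated from past volume. A simulation sketch using the exponential innovation case (the simplest distribution the paper considers); parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

def simulate_acv(T, omega, alpha, beta):
    """Autoregressive conditional volume: x_t = lambda_t * eps_t, with
    lambda_t = omega + alpha * x_{t-1} + beta * lambda_{t-1} and eps_t a
    unit-mean exponential innovation. Parameter values are illustrative."""
    lam = omega / (1.0 - alpha - beta)     # start at the unconditional mean
    x = np.empty(T)
    for t in range(T):
        x[t] = lam * rng.exponential()     # volume is nonnegative by construction
        lam = omega + alpha * x[t] + beta * lam
    return x

vol = simulate_acv(2000, omega=0.2, alpha=0.10, beta=0.85)
```

The unconditional mean is omega / (1 - alpha - beta); with these values, simulated volume fluctuates persistently around 4.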
