Similar documents
20 similar documents found (search time: 591 ms).
1.
This article develops a new method for detrending time series. It is shown how, in a Bayesian framework, a generalized version of the Hodrick-Prescott filter is obtained by specifying prior densities on the signal-to-noise ratio (q) in the underlying unobserved components model. This helps ensure an appropriate degree of smoothness in the estimated trend while allowing for uncertainty in q. The article discusses the important issue of prior elicitation for time series recorded at different frequencies. By combining prior expectations with the likelihood, the Bayesian approach permits detrending in a way that is more consistent with the properties of the series. The method is illustrated with some quarterly and annual US macroeconomic series. Copyright © 2006 John Wiley & Sons, Ltd.
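The classical filter underlying this Bayesian generalization can be written as a penalized least-squares problem. Below is a minimal sketch with a fixed smoothing parameter λ (the article's Bayesian version instead places a prior on the signal-to-noise ratio q, with λ = 1/q); the function name `hp_trend` is illustrative, not from the article.

```python
import numpy as np

def hp_trend(y, lam=1600.0):
    """Hodrick-Prescott trend: argmin ||y - tau||^2 + lam * ||D2 tau||^2,
    where D2 is the second-difference operator. Closed form:
    tau = (I + lam * D2'D2)^{-1} y."""
    n = len(y)
    D2 = np.zeros((n - 2, n))
    for t in range(n - 2):
        D2[t, t:t + 3] = [1.0, -2.0, 1.0]  # second difference at time t
    return np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)
```

Since a linear series has zero second differences, it is its own HP trend; with noise added, the filtered trend is visibly smoother than the data.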

2.
Socioeconomic status is commonly conceptualized as the social standing or well-being of an individual or society. Higher socioeconomic status has long been identified as a contributing factor to mortality improvement. This paper studies the impact of macroeconomic fluctuations (with gross domestic product (GDP) as a proxy) on mortality for the nine most populous eurozone countries. Based on a statistical analysis of the relationship between GDP and the time-dependent indicator of the Lee and Carter (Journal of the American Statistical Association, 1992, 87(419), 659–671) model, and adapting the desirable features of the O'Hare and Li (Insurance: Mathematics and Economics, 2012, 50, 12–25) model, a new mortality model including this additional economic factor is proposed. Results are provided for males and females aged 0–89, and similarly for unisex data. The new model shows a better fit and more plausible forecasts for a significant number of eurozone countries. An in-depth analysis of our findings is provided to give a better understanding of the relationship between mortality and GDP fluctuations.
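The Lee-Carter decomposition that this model extends can be fitted with a plain SVD. The sketch below is the standard rank-one fit of log mortality rates, not the authors' GDP-augmented model; the function name is illustrative.

```python
import numpy as np

def fit_lee_carter(log_m):
    """Fit log m[x, t] ~ a[x] + b[x] * k[t] via SVD (Lee-Carter, 1992).
    Identification constraints: sum(b) = 1, sum(k) = 0."""
    a = log_m.mean(axis=1)                       # age pattern
    Z = log_m - a[:, None]                       # centered log rates
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    b = U[:, 0] * s[0]                           # age sensitivity (unscaled)
    k = Vt[0]                                    # period index (unscaled)
    scale = b.sum()
    return a, b / scale, k * scale               # rescale to sum(b) = 1
```

On data generated exactly from the model, the rank-one fit recovers the surface; the GDP-related factor in the abstract would enter as an additional term in the decomposition.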

3.
This paper derives the best linear unbiased prediction (BLUP) for an unbalanced panel data model. Starting with a simple error component regression model with unbalanced panel data and random effects, it generalizes the BLUP derived by Taub (Journal of Econometrics, 1979, 10, 103–108) to unbalanced panels. Next it derives the BLUP for an unequally spaced panel data model with serial correlation of the AR(1) type in the remainder disturbances considered by Baltagi and Wu (Econometric Theory, 1999, 15, 814–823). This in turn extends the BLUP for a panel data model with AR(1) type remainder disturbances derived by Baltagi and Li (Journal of Forecasting, 1992, 11, 561–567) from the balanced to the unequally spaced panel data case. The derivations are easily implemented and reduce to tractable expressions using an extension of the Fuller and Battese (Journal of Econometrics, 1974, 2, 67–78) transformation from the balanced to the unbalanced panel data case.
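The Fuller-Battese transformation mentioned here reduces GLS on the one-way error component model to OLS on quasi-demeaned data, with a group-specific weight in the unbalanced case. A minimal sketch assuming known variance components (the function name is illustrative):

```python
import numpy as np

def fuller_battese_transform(y, groups, sigma_mu2, sigma_nu2):
    """Fuller-Battese GLS transform for an unbalanced one-way error
    component model: y*_it = y_it - theta_i * ybar_i, where
    theta_i = 1 - sqrt(sigma_nu2 / (T_i * sigma_mu2 + sigma_nu2)).
    After the transform the composite error is spherical with
    variance sigma_nu2."""
    y = np.asarray(y, dtype=float)
    out = np.empty_like(y)
    for g in np.unique(groups):
        idx = (groups == g)
        Ti = idx.sum()                       # group size T_i (may vary)
        theta = 1.0 - np.sqrt(sigma_nu2 / (Ti * sigma_mu2 + sigma_nu2))
        out[idx] = y[idx] - theta * y[idx].mean()
    return out
```

On simulated one-way error component data, the transformed observations have variance σ²_ν and no within-group correlation, which is exactly what makes OLS on the transformed data equivalent to GLS.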

4.
This paper develops a New-Keynesian Dynamic Stochastic General Equilibrium (NKDSGE) model for forecasting the growth rate of output, inflation, and the nominal short-term interest rate (the 91-day Treasury bill rate) for the South African economy. The model is estimated via the maximum likelihood technique using quarterly data over the period 1970:1–2000:4. Based on recursive estimation using the Kalman filter algorithm, out-of-sample forecasts from the NKDSGE model are compared with forecasts generated from classical and Bayesian variants of vector autoregression (VAR) models for the period 2001:1–2006:4. The results indicate that, in terms of out-of-sample forecasting, the NKDSGE model outperforms both the classical and Bayesian VARs for inflation, but not for output growth and the nominal short-term interest rate. However, differences in RMSEs are not significant across the models. Copyright © 2008 John Wiley & Sons, Ltd.

5.
This article studies Man and Tiao's (2006) low-order autoregressive fractionally integrated moving-average (ARFIMA) approximation to Tsai and Chan's (2005b) limiting aggregate structure of the long-memory process. In matching the autocorrelations, we demonstrate that the approximation works well, especially for larger values of d. When computing autocorrelations over long lags for larger d, the exact formula can run into numerical problems; the ARFIMA(0, d, d − 1) model provides a useful alternative that computes the autocorrelations as a very close approximation. In forecasting future aggregates, we demonstrate the comparable performance of the ARFIMA(0, d, d − 1) model and the exact aggregate structure. In practice, this justifies the use of a low-order ARFIMA model in predicting future aggregates of a long-memory process. Copyright © 2008 John Wiley & Sons, Ltd.
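The numerical issue alluded to, evaluating exact long-memory autocorrelations at long lags, comes from ratios of Gamma functions that overflow. For the baseline ARFIMA(0, d, 0) case the standard remedy is a ratio recursion; this sketch illustrates that device (it is not the authors' aggregate structure):

```python
def arfima0d0_acf(d, max_lag):
    """ACF of fractional noise ARFIMA(0, d, 0).
    Exact formula: rho(k) = Gamma(k + d) Gamma(1 - d) / (Gamma(k - d + 1) Gamma(d)),
    which overflows for long lags. The equivalent stable recursion is
    rho(k) = rho(k - 1) * (k - 1 + d) / (k - d)."""
    rho = [1.0]
    for k in range(1, max_lag + 1):
        rho.append(rho[-1] * (k - 1 + d) / (k - d))
    return rho
```

For 0 < d < 0.5 the recursion gives ρ(1) = d/(1 − d) and a slow hyperbolic decay, with no Gamma evaluations at all.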

6.
This paper is concerned with the determination of simultaneous confidence regions for some types of time series models. We derive recursive formulas that give the probability that a stationary AR(1) process with exponential inputs lies under any sequence of constants during N steps. Probabilities of the same form are also derived for an MA(1) process based on exponentially distributed white noise. Numerical results are obtained, and prediction regions for different values of ϕ or θ are compared. The results show how using the correlation structure of the models can reduce the area of the confidence regions. © 1997 by John Wiley & Sons, Ltd.
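The containment probabilities that the paper computes recursively can be checked by simulation. A Monte Carlo sketch under the same assumptions (stationary AR(1), Exp(1) innovations); the truncated stationary start-up and the function name are my own choices, not the paper's:

```python
import numpy as np

def prob_under_bounds(phi, bounds, n_sims=50_000, seed=0):
    """Monte Carlo estimate of P(X_t <= c_t for t = 1..N) for the
    stationary AR(1) process X_t = phi * X_{t-1} + e_t with Exp(1)
    innovations and bounds c_1..c_N."""
    rng = np.random.default_rng(seed)
    # Stationary start: X_0 = sum_j phi^j e_{-j}, truncated at 100 terms
    x = sum(phi ** j * rng.exponential(1.0, n_sims) for j in range(100))
    inside = np.ones(n_sims, dtype=bool)
    for c in bounds:
        x = phi * x + rng.exponential(1.0, n_sims)
        inside &= (x <= c)
    return inside.mean()
```

Tightening the bound sequence shrinks the probability, mirroring the paper's comparison of prediction regions across parameter values.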

7.
In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (Journal of Econometrics 2008; 146: 304–317) find an improvement in selecting indicators according to the forecast variable prior to factor estimation (targeted predictors). In particular, they propose using the LARS-EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the pre-selection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. On the reduced dataset we estimate the dynamic factor model of Giannone et al. (Journal of Monetary Economics 2008; 55: 665–676) and Doz et al. (Journal of Econometrics 2011; 164: 188–205), which is particularly suitable for very short-term forecasts of GDP. A pseudo real-time evaluation on French data shows the potential of our approach. Copyright © 2013 John Wiley & Sons, Ltd.

8.
This paper examines the forecasting ability of nonlinear specifications of the market model. We propose a conditional two-moment market model with time-varying systematic covariance (beta) risk in the form of a mean-reverting process, cast as a state-space model estimated via the Kalman filter algorithm. In addition, we account for the systematic components of co-skewness and co-kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time-varying market model approaches, which outperform linear model specifications both in terms of model fit and predictability. Specifically, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
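The state-space idea behind a time-varying beta can be illustrated with a one-dimensional Kalman filter. This sketch assumes known noise variances and a known mean-reversion level; it is a simplified two-moment version, not the authors' higher-moment specification, and all names are illustrative.

```python
import numpy as np

def kalman_tv_beta(r, rm, phi=0.95, bbar=1.0, q=1e-4, h=1e-3):
    """Filtered time-varying beta in r_t = beta_t * rm_t + eps_t,
    with mean-reverting state beta_t = bbar + phi * (beta_{t-1} - bbar) + eta_t.
    q = Var(eta) and h = Var(eps) are treated as known here."""
    beta, p = bbar, 1.0
    path = []
    for rt, rmt in zip(r, rm):
        # Predict step (mean-reverting state equation)
        beta = bbar + phi * (beta - bbar)
        p = phi * phi * p + q
        # Update step with observation r_t = rmt * beta_t + eps_t
        s = rmt * rmt * p + h          # innovation variance
        k = p * rmt / s                # Kalman gain
        beta = beta + k * (rt - rmt * beta)
        p = (1.0 - k * rmt) * p
        path.append(beta)
    return np.array(path)
```

On data simulated with a constant true beta, the filtered path settles near that value, with the mean-reversion prior pulling it slightly toward `bbar`.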

9.
We propose an innovative approach to model and predict the outcome of football matches based on the Poisson autoregression with exogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen, and Rahbek (Journal of Empirical Finance, 2016, 38(B), 640–663). We show that this methodology is particularly suited to modelling the goal distribution of a football team and provides good forecast performance that can be exploited to develop a profitable betting strategy. This paper improves the strand of literature on Poisson-based models by proposing a specification able to capture the main characteristics of the goal distribution. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probability of the match, because they may also incorporate betting volumes or strategic price settings designed to exploit bettors' biases. The out-of-sample performance of the PARX model is better than the reference approach of Dixon and Coles (Applied Statistics, 1997, 46(2), 265–280). We also evaluate our approach in a simple betting strategy, which is applied to English Premier League data for the 2013–2014, 2014–2015, and 2015–2016 seasons. The results show that the return from the betting strategy is larger than 30% in most of the cases considered, and may even exceed 100% if we consider an alternative strategy based on a predetermined threshold, which makes it possible to exploit the inefficiency of the betting market.
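The PARX intensity recursion is easy to state as a simulator: the conditional goal intensity feeds back on past counts and past intensity, with covariates entering linearly. The sketch below follows the generic PARX form with illustrative parameter names; it is not the authors' fitted specification.

```python
import numpy as np

def simulate_parx(n, omega, alpha, beta, x=None, gamma=0.0, seed=0):
    """Simulate a Poisson autoregression with an optional exogenous covariate:
    y_t | past ~ Poisson(lam_t),
    lam_t = omega + alpha * y_{t-1} + beta * lam_{t-1} + gamma * x_t.
    Without covariates, the stationary mean is omega / (1 - alpha - beta),
    requiring alpha + beta < 1."""
    rng = np.random.default_rng(seed)
    lam = omega / (1.0 - alpha - beta)   # start at the stationary mean
    y_prev = lam
    ys = np.empty(n)
    for t in range(n):
        lam = omega + alpha * y_prev + beta * lam
        if x is not None:
            lam += gamma * x[t]          # e.g., an attack-strength covariate
        y_prev = rng.poisson(lam)
        ys[t] = y_prev
    return ys
```

The feedback through past counts generates overdispersion and intensity clustering, the features the abstract highlights for goal distributions.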

10.
This paper extends the ‘remarkable property’ of Breusch (Journal of Econometrics 1987; 36: 383–389) and Baltagi and Li (Journal of Econometrics 1992; 53: 45–51) to the three-way random components framework. Indeed, like its one-way and two-way counterparts, maximum likelihood estimation of the three-way random effects model can be obtained as an iterated generalized least squares procedure through an appropriate algorithm built on monotonic sequences of certain variance component ratios θi (i = 2, 3, 4). More specifically, a search over θi, while iterating on the regression coefficient estimates β and the other θj, guards against the possibility of multiple local maxima of the likelihood function. In addition, the related prediction functions are derived for complete as well as incomplete panels. Finally, an application to international trade modelling is presented. Copyright © 2014 John Wiley & Sons, Ltd.

11.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed ‘marginal’ likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized by the median probability model. Third, model averaging excels in predicting the direction of changes. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262), with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We finally apply these recommendations to forecasting the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.

12.
This paper proposes an adjustment of linear autoregressive conditional mean forecasts that exploits the predictive content of uncorrelated model residuals. The adjustment is motivated by non-Gaussian characteristics of model residuals, and implemented in a semiparametric fashion by means of conditional moments of simulated bivariate distributions. A pseudo ex ante forecasting comparison is conducted for a set of 494 macroeconomic time series recently collected by Dees et al. (Journal of Applied Econometrics 2007; 22: 1–38). In total, 10,374 time series realizations are contrasted against competing short-, medium- and longer-term purely autoregressive and adjusted predictors. With regard to all forecast horizons, the adjusted predictions consistently outperform conditionally Gaussian forecasts according to cross-sectional mean group evaluation of absolute forecast errors and directional accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

13.
The Poisson integer-valued autoregressive process of order 1 (PINAR(1)), due to Al-Osh and Alzaid (Journal of Time Series Analysis 1987; 8(3): 261–275) and McKenzie (Advances in Applied Probability 1988; 20(4): 822–835), has received significant attention in the modelling of low-count time series during the last two decades because of its simplicity. In many practical scenarios, however, the process appears to be inadequate, especially when the data are overdispersed. This overdispersion occurs mainly for three reasons: the presence of some extreme values, a large number of zeros, or both extreme values and a large number of zeros. In this article, we develop a zero-inflated Poisson INAR(1) process as an alternative to the PINAR(1) process for when the number of zeros in the data is larger than the number expected under the Poisson process. We investigate some important properties such as stationarity, ergodicity, autocorrelation structure, and conditional distribution, with a detailed study of h-step-ahead coherent forecasting. A comparative study among different methods of parameter estimation is carried out using simulated data, and one real dataset is analysed for practical illustration. Copyright © 2015 John Wiley & Sons, Ltd.
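The zero-inflated INAR(1) mechanism is simple to simulate: binomial thinning of the previous count plus an innovation that is zero with extra probability. A minimal sketch with illustrative parameter names (not the authors' estimation code):

```python
import numpy as np

def simulate_zip_inar1(n, alpha, lam, pi_zero, seed=0):
    """Simulate a zero-inflated Poisson INAR(1):
    y_t = alpha ∘ y_{t-1} + eps_t, where '∘' is binomial thinning and
    eps_t = 0 with probability pi_zero, else Poisson(lam).
    The stationary mean is (1 - pi_zero) * lam / (1 - alpha)."""
    rng = np.random.default_rng(seed)
    y = 0
    out = np.empty(n, dtype=int)
    for t in range(n):
        survivors = rng.binomial(y, alpha)              # thinning of y_{t-1}
        eps = 0 if rng.random() < pi_zero else rng.poisson(lam)
        y = survivors + eps
        out[t] = y
    return out
```

Relative to a plain PINAR(1) with the same mean, the extra zeros in the innovation inflate the variance, producing the overdispersion the abstract targets.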

14.
Changes in mortality rates have an impact on the life insurance industry, the financial sector (a significant proportion of the financial markets is driven by pension funds), governmental agencies, and decision makers and policymakers. The accurate pricing of financial, pension, and insurance products contingent upon survival or death, which depends on accurate central mortality rates, is therefore of key importance. Recently, a temperature-related mortality (TRM) model was proposed by Seklecka et al. (Journal of Forecasting, 2017, 36(7), 824–841); it has shown evidence of outperforming the Lee and Carter (Journal of the American Statistical Association, 1992, 87, 659–671) model and several of its extensions when mortality-experience data from the UK are used. When fitting the TRM model, there is a need for awareness of model risk in assessing longevity-related liabilities, especially when pricing long-term annuities and pensions. In this paper, the impact of uncertainty in the various parameters involved in the model is examined. We demonstrate a number of ways to quantify model risk arising from the estimation of the temperature-related parameters, the choice of forecasting methodology, the structure of the actuarial products chosen (e.g., annuity, endowment, and life insurance), and the actuarial reserve. Finally, several tables and figures illustrate the main findings of this paper.

15.
In this paper we extend the work of Baillie and Baltagi (1999, in Analysis of Panels and Limited Dependent Variables Models, Hsiao C et al. (eds). Cambridge University Press: Cambridge, UK; 255–267) and generalize certain results from Baltagi and Li (1992, Journal of Forecasting 11: 561–567) by accounting for AR(1) errors in the disturbance term. In particular, we derive six predictors for the one-way error components model, as well as their associated asymptotic mean squared errors of multi-step prediction, in the presence of AR(1) errors in the disturbance term. In addition, we provide both theoretical and simulation evidence on the relative efficiency of our alternative predictors. The adequacy of the prediction AMSE formula is also investigated using Monte Carlo methods; the results indicate that the ordinary optimal predictor performs well under various accuracy criteria. Copyright © 2011 John Wiley & Sons, Ltd.

16.
We propose a new approach to value-at-risk estimation. We use six international stock price indices and three hypothetical portfolios formed from these indices, observed daily from 1 January 1996 to 31 December 2006. As confirmed by the failure rates and backtesting procedures developed by Kupiec (Techniques for verifying the accuracy of risk measurement models. Journal of Derivatives 1995; 3: 73–84) and Christoffersen (Evaluating interval forecasts. International Economic Review 1998; 39: 841–862), the empirical results show that our method can considerably improve the estimation accuracy of value-at-risk. The study thus establishes an effective alternative model for risk prediction and provides a reliable tool for portfolio management. Copyright © 2011 John Wiley & Sons, Ltd.
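The Kupiec failure-rate backtest cited here is a likelihood-ratio test of correct unconditional VaR coverage: under a correct model, the number of exceedances in n days is Binomial(n, p). A self-contained sketch (the function name is illustrative):

```python
from math import log

def kupiec_pof(n, failures, p):
    """Kupiec (1995) proportion-of-failures LR statistic for a VaR model
    with nominal coverage p, observed over n days with `failures`
    exceedances. Asymptotically chi-square(1) under correct coverage."""
    phat = failures / n
    if phat in (0.0, 1.0):
        ll_alt = 0.0                     # degenerate observed rate
    else:
        ll_alt = (n - failures) * log(1 - phat) + failures * log(phat)
    ll_null = (n - failures) * log(1 - p) + failures * log(p)
    return -2.0 * (ll_null - ll_alt)
```

When the observed failure rate equals the nominal level the statistic is zero; at the 5% significance level the model is rejected when the statistic exceeds the chi-square(1) critical value 3.84.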

17.
A recent study by Rapach, Strauss, and Zhou (Journal of Finance, 2013, 68(4), 1633–1662) shows that US stock returns can provide predictive content for international stock returns. We extend their work from a volatility perspective. We propose a model, namely a heterogeneous volatility spillover–generalized autoregressive conditional heteroskedasticity model, to investigate volatility spillover. The model specification is parsimonious and can be used to analyze the time variation property of the spillover effect. Our in-sample evidence shows the existence of strong volatility spillover from the US to five major stock markets and indicates that the spillover was stronger during business cycle recessions in the USA. Out-of-sample results show that accounting for spillover information from the USA can significantly improve the forecasting accuracy of international stock price volatility.

18.
Financial data series are often described as exhibiting two non-standard time series features. First, variance often changes over time, with alternating phases of high and low volatility; such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models; fractionally integrated models have been offered as explanations. Recently, the ARFIMA–ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1-month Euromarket interest rate during the period 1986–1989, the ARFIMA–ARCH(5, d, 2/4) model with non-integer d is selected by AIC. Model-based out-of-sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA–ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.
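The volatility-clustering feature that ARCH captures can be seen in a few lines. A minimal ARCH(1) simulator is sketched below for illustration only; it is not the ARFIMA–ARCH(5, d, 2/4) model selected in the study.

```python
import numpy as np

def simulate_arch1(n, omega=0.1, alpha1=0.5, seed=0):
    """Simulate an ARCH(1) process: e_t = sqrt(h_t) * z_t with
    h_t = omega + alpha1 * e_{t-1}^2 and z_t iid N(0, 1).
    The unconditional variance is omega / (1 - alpha1) for alpha1 < 1."""
    rng = np.random.default_rng(seed)
    e = np.empty(n)
    e_prev2 = omega / (1.0 - alpha1)     # start at the unconditional variance
    for t in range(n):
        h = omega + alpha1 * e_prev2     # conditional variance
        e[t] = np.sqrt(h) * rng.normal()
        e_prev2 = e[t] ** 2
    return e
```

The simulated series is serially uncorrelated in levels, yet its squares are strongly autocorrelated, which is exactly the alternating high/low volatility phases described in the abstract.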

19.
Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths resulting from extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual UK data for specified (warm and cold) periods of the year. A detailed statistical analysis is implemented and a new stochastic central mortality rate model is proposed. The new model encompasses the desirable features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and for the first time includes an exogenous, temperature-related factor. The new model is shown to provide a significantly better fit and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.

20.
A new clustered correlation multivariate generalized autoregressive conditional heteroskedasticity (CC-MGARCH) model that allows conditional correlations to form clusters is proposed. This model generalizes the time-varying correlation structure of Tse and Tsui (2002, Journal of Business and Economic Statistics 20: 351–361) by classifying the correlations among the series into groups. To estimate the proposed model, Markov chain Monte Carlo methods are adopted. Two efficient sampling schemes for drawing discrete indicators are also developed. Simulations show that these efficient sampling schemes can lead to substantial savings in computation time in Monte Carlo procedures involving discrete indicators. Empirical examples using stock market and exchange rate data are presented in which two-cluster and three-cluster models are selected using posterior probabilities. This implies that the conditional correlation equation is likely to be governed by more than one set of decaying parameters. Copyright © 2011 John Wiley & Sons, Ltd.
