Similar Documents (20 results)
1.
Socioeconomic status is commonly conceptualized as the social standing or well-being of an individual or society. Higher socioeconomic status has long been identified as a contributing factor to mortality improvement. This paper studies the impact of macroeconomic fluctuations (with gross domestic product (GDP) as a proxy) on mortality for the nine most populous eurozone countries. Based on a statistical analysis of the relationship between GDP and the time-dependent indicator of the Lee and Carter (Journal of the American Statistical Association, 1992, 87(419), 659–671) model, and adapting the desirable features of the O'Hare and Li (Insurance: Mathematics and Economics, 2012, 50, 12–25) model, a new mortality model incorporating this additional economic factor is proposed. Results are provided for males and females aged 0–89, and similarly for unisex data. The new model shows better fit and more plausible forecasts for a significant number of eurozone countries. An in-depth analysis of our findings is provided to give a better understanding of the relationship between mortality and GDP fluctuations.
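The time-dependent indicator referred to above is the k_t factor of the Lee–Carter model, log m(x,t) = a_x + b_x·k_t. The sketch below fits the classical decomposition only (not the paper's GDP-augmented model), using the simple sum-based identification (Σ b_x = 1, Σ k_t = 0) in place of the SVD of the original paper; the mortality data are synthetic and rank-1 by construction.

```python
def lee_carter(log_rates):
    """Fit log m(x,t) = a_x + b_x * k_t from log_rates[x][t]."""
    n_ages, n_years = len(log_rates), len(log_rates[0])
    # a_x: average log rate over time for each age group
    a = [sum(row) / n_years for row in log_rates]
    # k_t: sum over ages of the age-centred log rates (so that sum_x b_x = 1)
    k = [sum(log_rates[x][t] - a[x] for x in range(n_ages))
         for t in range(n_years)]
    # b_x: no-intercept regression of the centred log rates on k_t
    kk = sum(kt * kt for kt in k)
    b = [sum((log_rates[x][t] - a[x]) * k[t] for t in range(n_years)) / kk
         for x in range(n_ages)]
    return a, b, k

# Synthetic example built to satisfy the model exactly (all numbers made up).
true_a = [-6.0, -5.0, -4.0]
true_b = [0.5, 0.3, 0.2]              # sums to 1
true_k = [2.0, 1.0, 0.0, -1.0, -2.0]  # sums to 0
log_m = [[true_a[x] + true_b[x] * kt for kt in true_k] for x in range(3)]

a, b, k = lee_carter(log_m)
```

On exactly rank-1 data the simple estimator recovers a_x, b_x, and k_t; on real mortality data it is a first-order approximation to the SVD fit, and forecasting then proceeds by modelling k_t as a time series (in the paper, jointly with GDP).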

2.
Changes in mortality rates have an impact on the life insurance industry, the financial sector (a significant proportion of financial markets is driven by pension funds), governmental agencies, and policymakers. The accurate pricing of financial, pension, and insurance products contingent upon survival or death, which depends on the accuracy of central mortality rates, is therefore of key importance. Recently, a temperature-related mortality (TRM) model was proposed by Seklecka et al. (Journal of Forecasting, 2017, 36(7), 824–841); it has shown evidence of outperforming the Lee and Carter (Journal of the American Statistical Association, 1992, 87, 659–671) model and several of its extensions when fitted to mortality-experience data from the UK. When the TRM model is used to assess longevity-related liabilities, model risk needs to be taken into account, especially when pricing long-term annuities and pensions. In this paper, the impact of uncertainty in the various parameters of the model is examined. We demonstrate a number of ways to quantify model risk in the estimation of the temperature-related parameters, the choice of forecasting methodology, the structure of the actuarial products chosen (e.g., annuity, endowment, and life insurance), and the actuarial reserve. Several tables and figures illustrate the main findings of the paper.

3.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels at predicting the direction of change. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262), with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.

4.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality-forecasting models be associated with real-world trends in health-related variables? Does the inclusion of health-related factors in models improve forecasts? Do the resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure, and lifestyle-related risk factors, using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable to or better than those of benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

5.
In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (Journal of Econometrics 2008; 146: 304–317) find that selecting indicators according to the forecast variable prior to factor estimation (targeted predictors) improves forecasts; in particular, they propose using the LARS-EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the pre-selection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. On the reduced dataset we estimate the dynamic factor model of Giannone et al. (Journal of Monetary Economics 2008; 55: 665–676) and Doz et al. (Journal of Econometrics 2011; 164: 188–205), which is particularly suitable for very short-term forecasts of GDP. A pseudo real-time evaluation on French data shows the potential of our approach. Copyright © 2013 John Wiley & Sons, Ltd.

6.
Given the evidence that an infinite-order vector autoregression is a more realistic setting for time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h-step-ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order n_T, under the assumption that an infinite-order vector autoregression can be approximated, under suitable conditions, by a sequence of truncated models, where n_T increases with the sample size. Using finite-order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small-sample distribution of the lag-order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, yielding the lowest h-step-ahead forecast mean squared errors for the individual components of the held-out pseudo-data to be forecast. Thus estimating the actual autoregressive order and selecting the best forecasting model can be achieved with the same procedure. These results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the claimed accuracy advantage of strongly consistent criteria such as the Schwarz information criterion is overstated. Our criteria extend those previously available in the literature and can be used in a variety of practical situations. Copyright © 2015 John Wiley & Sons, Ltd.

7.
We propose a new class of limited information estimators built upon an explicit trade-off between data fitting and a priori model specification. The estimators offer the researcher a continuum ranging from an extreme emphasis on data fitting and robust reduced-form estimation to the other extreme of exact model specification and efficient estimation. The approach used to generate the estimators illustrates why ULS often outperforms 2SLS-PRRF even in the context of a correctly specified model, provides a new interpretation of 2SLS, and integrates Wonnacott and Wonnacott's (1970) least weighted variance estimators with other techniques. We apply the new class of estimators to Klein's Model I and generate forecasts. We find for this example that an emphasis on specification (as opposed to data fitting) produces better out-of-sample predictions. Copyright © 1999 John Wiley & Sons, Ltd.

8.
This article studies Man and Tiao's (2006) low-order autoregressive fractionally integrated moving-average (ARFIMA) approximation to Tsai and Chan's (2005b) limiting aggregate structure of the long-memory process. In matching the autocorrelations, we demonstrate that the approximation works well, especially for larger values of d. When computing autocorrelations over long lags for larger d, the exact formula may run into numerical problems, and the ARFIMA(0, d, d−1) model provides a useful alternative for computing the autocorrelations as a very close approximation. In forecasting future aggregates, we demonstrate the close agreement between the ARFIMA(0, d, d−1) model and the exact aggregate structure. In practice, this justifies the use of a low-order ARFIMA model in predicting future aggregates of a long-memory process. Copyright © 2008 John Wiley & Sons, Ltd.
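The numerical issue mentioned above can be made concrete with the simplest long-memory case, fractional noise ARFIMA(0, d, 0) (a simplification: the paper concerns aggregates and the ARFIMA(0, d, d−1) approximation). Its exact autocorrelation ρ(k) = Γ(k+d)Γ(1−d) / (Γ(k−d+1)Γ(d)) overflows at long lags, while the equivalent ratio recursion ρ(k) = ρ(k−1)·(k−1+d)/(k−d) stays stable:

```python
import math

def acf_gamma(d, k):
    # Exact ACF of ARFIMA(0, d, 0) via gamma functions;
    # overflows for large k because Gamma grows super-exponentially.
    return (math.gamma(k + d) * math.gamma(1 - d)
            / (math.gamma(k - d + 1) * math.gamma(d)))

def acf_recursive(d, max_lag):
    # Numerically stable equivalent: rho(k) = rho(k-1) * (k-1+d) / (k-d).
    rho = [1.0]
    for k in range(1, max_lag + 1):
        rho.append(rho[-1] * (k - 1 + d) / (k - d))
    return rho

d = 0.3
rho = acf_recursive(d, 500)   # still finite at lag 500
```

At lag 500 the gamma formula would already require Γ(500.3), far beyond double-precision range, whereas the recursion only multiplies ratios close to 1; the two agree exactly at the small lags where both are computable.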

9.
The paper develops an oil price forecasting technique based on the present value model of rational commodity pricing. The approach shifts the forecasting problem to the marginal convenience yield, which can be derived from the cost-of-carry relationship. In a recursive out-of-sample analysis, forecast accuracy at horizons within one year is assessed by the root mean squared error, the mean error, and the frequency of correct direction-of-change predictions. For all criteria employed, the proposed forecasting tool outperforms the approach of using futures prices as direct predictors of future spot prices. Vis-à-vis the random-walk model, it does not significantly improve forecast accuracy but provides valuable statements on the direction of change. Copyright © 2007 John Wiley & Sons, Ltd.
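The cost-of-carry relationship referred to above, F = S·e^((r + c − δ)T), can be inverted for the marginal convenience yield δ. A minimal sketch with hypothetical numbers (spot 70, 6-month future 72, 5% interest, 2% storage cost); the paper's recursive forecasting machinery is not reproduced:

```python
import math

def convenience_yield(spot, future, r, storage_cost, maturity):
    # Cost-of-carry: F = S * exp((r + c - delta) * T).
    # Solve for the annualized marginal convenience yield delta.
    return r + storage_cost - math.log(future / spot) / maturity

delta = convenience_yield(70.0, 72.0, r=0.05, storage_cost=0.02, maturity=0.5)

# Round trip: rebuilding the futures price from delta recovers 72.
f_check = 70.0 * math.exp((0.05 + 0.02 - delta) * 0.5)
```

The forecasting idea is then to model this δ series instead of the spot price itself, and map its forecasts back through the same relationship.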

10.
Conventional growth rate measures (such as month-on-month and year-on-year growth rates, and the 6-month smoothed annualized rate adopted by the US Bureau of Labor Statistics and the Economic Cycle Research Institute) are popular and easily computed for monthly data against a fixed comparison benchmark, but they do not make good use of the information underlying the economic series. Focusing on monthly data, this paper proposes the k-month kernel-weighted annualized rate (k-MKAR), which includes most existing growth rate measures as special cases. The k-MKAR involves smoothing parameters that govern the trade-off between accuracy and timeliness in detecting changes in business turning points; that is, the comparison base is flexible and is likely to vary across the series under consideration. A data-driven procedure for choosing the smoothing parameters, based on the stepwise multiple reality check test, is also suggested. A simple numerical evaluation and a Monte Carlo experiment confirm that our measures (in particular the two-parameter k-MKAR) improve timeliness subject to a given degree of accuracy. The business cycle signals issued by Taiwan's Council for Economic Planning and Development over the period 1998–2009 are taken as an example to illustrate the empirical application of our method. The empirical results show that the k-MKAR-based signal lights reflect turning points earlier than the conventional year-on-year measure without sacrificing accuracy. Copyright © 2016 John Wiley & Sons, Ltd.
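The k-MKAR formula itself is not reproduced here (its kernel weighting is the paper's contribution); as a baseline, the conventional fixed-benchmark measures it generalizes look like this on a hypothetical series growing 1% per month:

```python
def mom_annualized(y, t):
    # Month-on-month growth, annualized by compounding over 12 months.
    return (y[t] / y[t - 1]) ** 12 - 1.0

def yoy(y, t):
    # Year-on-year growth against the same month one year earlier.
    return y[t] / y[t - 12] - 1.0

# Hypothetical monthly series with constant 1% monthly growth.
y = [100.0 * 1.01 ** t for t in range(24)]

g_mom = mom_annualized(y, 23)
g_yoy = yoy(y, 23)
```

On a constant-growth series both benchmarks agree; they diverge on noisy data, where month-on-month is timely but volatile and year-on-year is smooth but lags turning points by up to a year, which is exactly the trade-off the kernel-weighted measure is designed to tune.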

11.
This paper investigates the trade-off between timeliness and quality in nowcasting practices. This trade-off arises when the variable to be nowcast, such as gross domestic product (GDP), is quarterly, while the underlying panel data are monthly and contain both survey and macroeconomic data. These two categories of data have different properties with regard to timeliness and quality: survey data are timely available but may possess less predictive power, while macroeconomic data possess more predictive power but are not timely available because of their publication lags. In our empirical analysis, we use a modified dynamic factor model which adds three refinements to the standard dynamic factor model of Stock and Watson (Journal of Business and Economic Statistics, 2002, 20, 147–162), namely mixed frequency, preselection, and cointegration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on the economic circumstances: survey data are more useful in tranquil times, and less so in times of turmoil.

12.
We consider the linear time-series model y_t = d_t + u_t (t = 1, ..., n), where d_t is the deterministic trend and u_t the stochastic term, which follows an AR(1) process u_t = θu_{t−1} + ε_t with normal innovations ε_t. Various assumptions about the start-up will be made. Our main interest lies in the behaviour of the l-period-ahead forecast y_{n+l} near θ = 1. Unlike other studies of the AR(1) unit root process, we do not ask whether θ = 1, but are concerned with the behaviour of the forecast estimate near and at θ = 1. For this purpose we define the sth-order (s = 1, 2) sensitivity measure λ_l(s) of the forecast y_{n+l} near θ = 1, which measures the sensitivity of the forecast at the unit root. We consider two deterministic trends: a constant, d_t = μ, and a linear trend, d_t = μ + βt. The forecast is the best linear unbiased forecast. We show that when d_t = μ the number of observations has no effect on forecast sensitivity, and that when the deterministic trend is linear the sensitivity is zero. We also develop a large-sample procedure to measure the forecast sensitivity when we are uncertain whether to include the linear trend. Our analysis suggests that, depending on the initial conditions, it is better to include a linear trend for reduced sensitivity of the medium-term forecast. Copyright © 2001 John Wiley & Sons, Ltd.
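The sensitivity idea can be illustrated numerically. Assume a known constant trend d_t = μ and the plug-in forecast ŷ_{n+l} = μ + θ^l (y_n − μ) — a deliberate simplification of the paper's best linear unbiased forecast and of its analytically defined λ_l(s). A finite-difference derivative at θ = 1 then plays the role of a first-order sensitivity measure:

```python
def forecast(theta, y_n, mu, l):
    # l-step-ahead plug-in forecast for y_t = mu + u_t, u_t = theta*u_{t-1} + eps_t:
    # yhat_{n+l} = mu + theta**l * (y_n - mu)
    return mu + theta ** l * (y_n - mu)

def sensitivity(y_n, mu, l, h=1e-6):
    # Central finite-difference derivative of the forecast w.r.t. theta at theta = 1.
    return (forecast(1 + h, y_n, mu, l) - forecast(1 - h, y_n, mu, l)) / (2 * h)

# Analytically, d/dtheta [theta**l * (y_n - mu)] at theta = 1 equals l * (y_n - mu).
s = sensitivity(y_n=5.0, mu=2.0, l=4)
```

The derivative l(y_n − μ) grows linearly in the horizon l and does not involve the sample size n, consistent with the abstract's claim that, for the constant-trend case, the number of observations has no effect on forecast sensitivity.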

13.
This paper examines the forecasting ability of nonlinear specifications of the market model. We propose a conditional two-moment market model with a time-varying systematic covariance (beta) risk, specified as a mean-reverting process in state-space form and estimated via the Kalman filter algorithm. In addition, we account for the systematic components of co-skewness and co-kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time-varying market model approaches, which outperform linear model specifications in terms of both model fit and predictability. In particular, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
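A minimal scalar Kalman filter in the spirit of the state-space market model described above: observation r_t = β_t x_t + v_t with a mean-reverting state β_t = β̄ + φ(β_{t−1} − β̄) + w_t. All parameter values and data below are hypothetical, and a full version of the paper's model would also include an intercept, higher moments, and estimated hyperparameters.

```python
def kalman_beta(r, x, beta_bar, phi, q, rvar, b0=0.0, p0=1.0):
    """Filtered path of a time-varying market beta.

    Observation: r_t = beta_t * x_t + v_t,                    v_t ~ N(0, rvar)
    State:       beta_t = beta_bar + phi*(beta_{t-1} - beta_bar) + w_t,
                                                              w_t ~ N(0, q)
    """
    b, p, path = b0, p0, []
    for rt, xt in zip(r, x):
        # Predict step: propagate the mean-reverting state
        b = beta_bar + phi * (b - beta_bar)
        p = phi * phi * p + q
        # Update step: correct with the return innovation
        s = xt * xt * p + rvar          # innovation variance
        k = p * xt / s                  # Kalman gain
        b = b + k * (rt - b * xt)
        p = (1.0 - k * xt) * p
        path.append(b)
    return path

# Toy check: noiseless returns generated by a constant true beta of 1.5.
x = [0.5, -1.2, 0.8, 1.0, -0.7, 0.3, 1.4, -0.9]
r = [1.5 * xi for xi in x]
betas = kalman_beta(r, x, beta_bar=1.0, phi=0.95, q=0.01, rvar=1e-6)
```

With near-noiseless observations the filter locks onto the true beta almost immediately; with realistic observation noise it produces the smoothed, slowly evolving beta path that the paper exploits for forecasting.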

14.
P. C. B. Phillips (1998) demonstrated that deterministic trends are a valid representation of an otherwise stochastic trending mechanism; he remained skeptical, however, about the predictive power of such representations. In this paper we prove that forecasts built upon a spurious regression may perform (asymptotically) as well as those issued from a correctly specified regression. We derive the order in probability of several in-sample and out-of-sample predictability criteria (significance tests, root mean square error, Theil's U statistic, and R²) using forecasts based on a least-squares regression between independent variables generated by a variety of empirically relevant data-generating processes. We demonstrate that, when the variables are mean stationary or trend stationary, the order in probability of these criteria is the same whether or not the regression is spurious. Simulation experiments confirm our asymptotic results. Copyright © 2011 John Wiley & Sons, Ltd.

15.
This paper considers the problem of forecasting in a panel data model with random individual effects and MA(q) remainder disturbances. It utilizes a recursive transformation for the MA(q) process derived by Baltagi and Li (Econometric Theory 1994; 10: 396–408), which yields a simple generalized least squares estimator for this model. This recursive transformation is used in conjunction with Goldberger's result (Journal of the American Statistical Association 1962; 57: 369–375) to derive an analytic expression for the best linear unbiased predictor for the ith cross-sectional unit, s periods ahead. Copyright © 2011 John Wiley & Sons, Ltd.

16.
Singular spectrum analysis (SSA) is a powerful nonparametric method for time series analysis that has proven its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is forecasting new observations. There are several alternative SSA forecasting algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, this forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal number m (< L−1) of coefficients in the linear combination of the recurrent SSA. Our results support using this new parsimonious recurrent forecasting model instead of the standard recurrent SSA forecasting model.

17.
In this paper, we examine the use of nonparametric neural network regression (NNR) and recurrent neural network (RNN) models for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. The results of both the NNR and RNN models are benchmarked against the simpler GARCH alternative and against implied volatility. Two simple model combinations are also analysed. The intuitively appealing idea of developing a nonlinear nonparametric approach to forecasting FX volatility, identifying mispriced options, and building a trading strategy on that basis is implemented for the first time on a comprehensive basis. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. These models are then tested out-of-sample over the period April 1999 to May 2000, not only in terms of forecasting accuracy but also in terms of trading efficiency: to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most trading strategies retained produce positive returns. RNN models appear to be the best single modelling approach; yet, somewhat surprisingly, the model combination with the best overall forecasting accuracy fails to improve on the RNN-based volatility trading results. Another conclusion is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate. Copyright © 2002 John Wiley & Sons, Ltd.

18.
This paper proposes a new mixture GARCH model with a dynamic mixture proportion, so that the mixture Gaussian distribution of the error can vary over time. The Bayesian information criterion and the EM algorithm are used to estimate the number of components as well as the model parameters and their standard errors. The new model is applied to the S&P 500 Index and the Hang Seng Index, and compared with GARCH models with Gaussian and Student's t errors. The results show that the IGARCH effect in these index returns could be the result of mixing a stationary volatility component with a non-stationary one. The VaR based on the new model performs better than traditional GARCH-based VaRs, especially in unstable stock markets. Copyright © 2008 John Wiley & Sons, Ltd.
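To see how a mixture error distribution changes a VaR, the sketch below computes a tail quantile of a static two-component Gaussian mixture by bisection on its CDF. This is a deliberate simplification: in the paper both the mixture proportion and the component variances are dynamic (GARCH), whereas here p, σ1, σ2 are fixed hypothetical values.

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mixture_var(alpha, p, sigma1, sigma2, lo=-50.0, hi=0.0):
    """alpha-quantile (VaR) of the mixture p*N(0, sigma1^2) + (1-p)*N(0, sigma2^2),
    found by bisection on the monotone mixture CDF."""
    cdf = lambda x: p * norm_cdf(x / sigma1) + (1 - p) * norm_cdf(x / sigma2)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical mixture: 70% calm component (sigma=1), 30% turbulent (sigma=2.5).
var95 = mixture_var(0.05, p=0.7, sigma1=1.0, sigma2=2.5)
```

The mixture quantile lands between the quantiles of the two components; making p and the sigmas time-varying, as in the paper, lets the implied VaR widen automatically in unstable markets.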

19.
This article develops a new method for detrending time series. It is shown how, in a Bayesian framework, a generalized version of the Hodrick–Prescott filter is obtained by specifying prior densities on the signal-to-noise ratio q in the underlying unobserved components model. This helps ensure an appropriate degree of smoothness in the estimated trend while allowing for uncertainty in q. The article discusses the important issue of prior elicitation for time series recorded at different frequencies. By combining prior expectations with the likelihood, the Bayesian approach permits detrending in a way that is more consistent with the properties of the series. The method is illustrated with quarterly and annual US macroeconomic series. Copyright © 2006 John Wiley & Sons, Ltd.
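The classical filter underlying the Bayesian generalization can be sketched as the penalized least-squares problem min_τ Σ(y_t − τ_t)² + λ Σ(Δ²τ_t)², whose solution solves (I + λD'D)τ = y with D the second-difference operator; the signal-to-noise ratio q in the abstract corresponds to 1/λ. A plain-Python solver on hypothetical data (not the paper's Bayesian machinery):

```python
def hp_filter(y, lam):
    """Hodrick-Prescott trend: solve (I + lam * D'D) tau = y, where D is the
    second-difference operator. Plain Gaussian elimination for clarity."""
    n = len(y)
    # Build A = I + lam * D'D
    A = [[float(i == j) for j in range(n)] for i in range(n)]
    for r in range(n - 2):  # row r of D: tau_r - 2*tau_{r+1} + tau_{r+2}
        d = [0.0] * n
        d[r], d[r + 1], d[r + 2] = 1.0, -2.0, 1.0
        for i in range(n):
            for j in range(n):
                A[i][j] += lam * d[i] * d[j]
    # Solve A tau = y by Gaussian elimination with partial pivoting.
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    tau = [0.0] * n
    for i in range(n - 1, -1, -1):
        tau[i] = (M[i][n] - sum(M[i][j] * tau[j]
                                for j in range(i + 1, n))) / M[i][i]
    return tau

# Quarterly convention lambda = 1600; short synthetic series.
y = [2.0, 2.5, 1.8, 3.1, 2.9, 3.6, 3.2, 4.0, 3.7, 4.5]
trend = hp_filter(y, 1600.0)
```

Two properties make good sanity checks: the trend preserves the series total (the columns of D'D sum to zero), and an exactly linear series passes through the filter unchanged. The Bayesian version of the paper effectively replaces the fixed λ with a prior over q = 1/λ.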

20.
This paper extends the 'remarkable property' of Breusch (Journal of Econometrics 1987; 36: 383–389) and Baltagi and Li (Journal of Econometrics 1992; 53: 45–51) to the three-way random components framework. As in the one-way and two-way cases, maximum likelihood estimation of the three-way random effects model can be obtained as an iterated generalized least squares procedure through an appropriate algorithm of monotonic sequences of certain variance component ratios θ_i (i = 2, 3, 4). More specifically, a search over θ_i, while iterating on the regression coefficient estimates β and the other θ_j, guards against the possibility of multiple local maxima of the likelihood function. In addition, related prediction functions are derived for complete as well as incomplete panels. Finally, an application to modeling international trade is presented. Copyright © 2014 John Wiley & Sons, Ltd.

