Similar Literature
1.
Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths resulting from extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual data, as well as data for specified (warm and cold) periods of the year, in the UK. A careful statistical analysis is implemented and a new stochastic, central mortality rate model is proposed. The new model encompasses the good features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and for the first time includes an exogenous, temperature-related factor. The new model is shown to provide a significantly better fit and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.
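For orientation, the following is a hedged sketch of how an exogenous, temperature-related index might enter a Lee–Carter-type model for the log central mortality rate; the second equation and its symbols (the index T_t, the link g and the loading b_x^(2)) are illustrative assumptions, not the paper's exact specification.

```latex
% Classical Lee--Carter structure for the log central mortality rate
\log m_{x,t} \;=\; a_x + b_x\, k_t + \varepsilon_{x,t}

% Hypothetical extension with an exogenous, temperature-related index T_t
\log m_{x,t} \;=\; a_x + b^{(1)}_x\, k_t + b^{(2)}_x\, g(T_t) + \varepsilon_{x,t}
```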

2.
Socioeconomic status is commonly conceptualized as the social standing or well-being of an individual or society. Higher socioeconomic status has long been identified as a contributing factor to mortality improvement. This paper studies the impact of macroeconomic fluctuations (with gross domestic product (GDP) as a proxy) on mortality for the nine most populous eurozone countries. Based on a statistical analysis of the relationship between the time-dependent indicator of the Lee and Carter (Journal of the American Statistical Association, 1992, 87(419), 659–671) model and GDP, and adapting the good features of the O'Hare and Li (Insurance: Mathematics and Economics, 2012, 50, 12–25) model, a new mortality model including this additional economic-related factor is proposed. Results for males and females aged 0–89, as well as for unisex data, are provided. The new model shows a better fit and more plausible forecasts for a significant number of eurozone countries. An in-depth analysis of our findings is provided to give a better understanding of the relationship between mortality and GDP fluctuations.
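A minimal sketch (not the authors' code) of how one might extract the Lee–Carter period index by singular value decomposition and relate its changes to GDP growth through a simple regression; the function names, the plain OLS link and the identification choices are assumptions made here for illustration.

```python
import numpy as np

def lee_carter_kt(log_m):
    """Estimate a_x, b_x and the period index k_t from a matrix of
    log central death rates (rows = ages, columns = years) via SVD."""
    a_x = log_m.mean(axis=1)                     # average age profile
    centred = log_m - a_x[:, None]
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    b_x = U[:, 0] / U[:, 0].sum()                # identification: sum(b_x) = 1
    k_t = s[0] * Vt[0, :] * U[:, 0].sum()        # k_t sums approximately to zero
    return a_x, b_x, k_t

def regress_kt_on_gdp(k_t, log_gdp):
    """Illustrative OLS of the differenced period index on GDP growth,
    mimicking the idea of an economic-related factor in the mortality trend."""
    dk, dg = np.diff(k_t), np.diff(log_gdp)
    X = np.column_stack([np.ones_like(dg), dg])
    beta, *_ = np.linalg.lstsq(X, dk, rcond=None)
    return beta                                  # [drift, sensitivity to GDP growth]
```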

3.
Let {Xt} be a stationary process with spectral density g(λ). It is often the case that the true structure g(λ) is not completely specified. This paper discusses the problem of misspecified prediction when a conjectured spectral density fθ(λ), θ∈Θ, is fitted to g(λ). Constructing the best linear predictor based on fθ(λ), we can evaluate the prediction error M(θ). Since θ is unknown, we estimate it by a quasi-MLE, and the second-order asymptotic approximation of the prediction error evaluated at this estimator is given. This result is extended to the case where Xt contains a trend, i.e. a time series regression model. These results are very general. Furthermore, we evaluate the second-order asymptotic approximation of the prediction error for a time series regression model having a long-memory residual process with true spectral density g(λ). Since the general formulae of the approximated prediction error are complicated, we provide some numerical examples, which illuminate unexpected effects of the misspecification of spectra. Copyright © 2001 John Wiley & Sons, Ltd.
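As a hedged reminder of the quantity involved (generic notation, not necessarily the paper's): if the conjectured spectrum is factorized through its AR(∞) prediction-error filter, the one-step-ahead mean square prediction error under the true spectral density g can be written as follows.

```latex
% AR(infinity) factorization of the conjectured spectral density
f_\theta(\lambda) \;=\; \frac{\sigma_\theta^2}{2\pi}\,
  \bigl|A_\theta(e^{-i\lambda})\bigr|^{-2},
\qquad
A_\theta(z) \;=\; 1 - \sum_{j\ge 1} a_j(\theta)\, z^{j}

% Mean square error of the best linear predictor built from f_theta
% when the true spectral density is g
M(\theta) \;=\; \int_{-\pi}^{\pi}
  \bigl|A_\theta(e^{-i\lambda})\bigr|^{2}\, g(\lambda)\, d\lambda
```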

4.
This paper extends the ‘remarkable property’ of Breusch (Journal of Econometrics 1987; 36: 383–389) and Baltagi and Li (Journal of Econometrics 1992; 53: 45–51) to the three-way random components framework. Indeed, like its one-way and two-way counterparts, maximum likelihood estimation of the three-way random effects model can be obtained as an iterated generalized least squares procedure, through an appropriate algorithm based on monotonic sequences of certain variance component ratios θi (i = 2, 3, 4). More specifically, a search over θi while iterating on the regression coefficient estimates β and the other θj will guard against the possibility of multiple local maxima of the likelihood function. In addition, the derivations of related prediction functions are obtained for complete as well as incomplete panels. Finally, an application to the modeling of international trade is presented. Copyright © 2014 John Wiley & Sons, Ltd.
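A hedged sketch of the standard three-way error components structure such a model typically refers to (generic notation; the exact definition of the ratios θi follows the paper):

```latex
% Three-way random error components panel regression
y_{ijt} \;=\; x_{ijt}'\beta + u_{ijt},
\qquad
u_{ijt} \;=\; \lambda_i + \mu_j + \nu_t + \varepsilon_{ijt}

% Independent random components and the variance ratios used in the iterated GLS
\lambda_i \sim (0,\sigma_\lambda^2),\quad
\mu_j \sim (0,\sigma_\mu^2),\quad
\nu_t \sim (0,\sigma_\nu^2),\quad
\varepsilon_{ijt} \sim (0,\sigma_\varepsilon^2),
\qquad
\theta_k \;=\; \text{ratios of the component variances to } \sigma_\varepsilon^2
```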

5.
Credibility models in actuarial science deal with multiple short time series where each series represents claim amounts of different insurance groups. Commonly used credibility models imply shrinkage of group-specific estimates towards their average. In this paper we model the claim size yit in group i and at time t as the sum of three independent components: yit = μt + δi + εit. The first component, μt = μt−1 + mt, represents time-varying levels that are common to all groups. The second component, δi, represents random group offsets that are the same in all periods, and the third component represents independent measurement errors. In this paper we show how to obtain forecasts from this model and we discuss the nature of the forecasts, with particular emphasis on shrinkage. We also assess the forecast improvements that can be expected from such a model. Finally, we discuss an extension of the above model which also allows the group offsets to change over time. We assume that the offsets for different groups follow independent random walks.
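A minimal simulation sketch of this three-component structure (common random-walk level, fixed group offsets, noise); the forecast rule shown, latest estimated level plus a credibility-shrunk group offset, is an illustrative stand-in rather than the paper's estimator, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_periods = 5, 12
sigma_m, sigma_delta, sigma_eps = 0.3, 1.0, 0.8

# Simulate y_it = mu_t + delta_i + eps_it
mu = np.cumsum(rng.normal(0.0, sigma_m, n_periods)) + 10.0   # common random-walk level
delta = rng.normal(0.0, sigma_delta, n_groups)                # time-invariant group offsets
y = mu[None, :] + delta[:, None] + rng.normal(0.0, sigma_eps, (n_groups, n_periods))

# Illustrative one-step-ahead forecast:
# estimate the common level by the cross-sectional mean path, then shrink each
# group's average deviation towards zero with a Buhlmann-style credibility weight z
level_hat = y.mean(axis=0)                        # estimate of mu_t (up to the mean offset)
resid = y - level_hat[None, :]                    # group-specific deviations
z = (n_periods * sigma_delta**2) / (n_periods * sigma_delta**2 + sigma_eps**2)
offset_hat = z * resid.mean(axis=1)               # shrunk group offsets
forecast_next = level_hat[-1] + offset_hat        # forecast for period T+1, per group
print(np.round(forecast_next, 2))
```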

6.
This paper investigates the trade-off between timeliness and quality in nowcasting practices. This trade-off arises when the frequency of the variable to be nowcast, such as gross domestic product (GDP), is quarterly, while that of the underlying panel data is monthly, and the latter contains both survey and macroeconomic data. These two categories of data have different properties regarding timeliness and quality: the survey data are available in a timely manner (but might possess less predictive power), while the macroeconomic data possess more predictive power (but are not available in a timely manner because of their publication lags). In our empirical analysis, we use a modified dynamic factor model which incorporates three refinements of the standard dynamic factor model of Stock and Watson (Journal of Business and Economic Statistics, 2002, 20, 147–162), namely mixed frequency, preselection and cointegration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on the economic circumstances: survey data are more useful in tranquil times, and less so in times of turmoil.

7.
This paper proposes an adjustment of linear autoregressive conditional mean forecasts that exploits the predictive content of uncorrelated model residuals. The adjustment is motivated by non-Gaussian characteristics of model residuals, and implemented in a semiparametric fashion by means of conditional moments of simulated bivariate distributions. A pseudo ex ante forecasting comparison is conducted for a set of 494 macroeconomic time series recently collected by Dees et al. (Journal of Applied Econometrics 2007; 22: 1–38). In total, 10,374 time series realizations are contrasted against competing short-, medium- and longer-term purely autoregressive and adjusted predictors. With regard to all forecast horizons, the adjusted predictions consistently outperform conditionally Gaussian forecasts according to cross-sectional mean group evaluation of absolute forecast errors and directional accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

8.
We introduce a versatile and robust model that may help policymakers, bond portfolio managers and financial institutions to gain insight into the future shape of the yield curve. The Burg model forecasts the yield curve 20 days ahead: it fits a pth-order autoregressive (AR) model to the input signal by minimizing (in the least squares sense) the forward and backward prediction errors while constraining the autoregressive parameters to satisfy the Levinson–Durbin recursion, and then uses an infinite impulse response prediction error filter. Results are striking when the Burg model is compared to the Diebold and Li model: the Burg model not only significantly improves accuracy, but its forecast yield curves also stick to the shape of observed yield curves, whether normal, humped, flat or inverted. Copyright © 2016 John Wiley & Sons, Ltd.
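A compact sketch of Burg's recursion and an iterated multi-step AR forecast, assuming a univariate, demeaned series of yields for a single maturity; this is a generic illustration under those assumptions, not the authors' implementation, and the synthetic series below is invented.

```python
import numpy as np

def burg_ar(x, order):
    """Fit an AR(p) model by Burg's method.

    Returns coefficients a in the prediction-error filter convention
    e_t = x_t + a_1 x_{t-1} + ... + a_p x_{t-p}, plus the residual variance."""
    x = np.asarray(x, dtype=float)
    f, b = x.copy(), x.copy()            # forward and backward prediction errors
    a = np.zeros(order)
    e = np.dot(x, x) / len(x)
    for m in range(order):
        ff, bb = f[m + 1:], b[m:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))  # reflection coefficient
        a_prev = a[:m].copy()
        a[:m] = a_prev + k * a_prev[::-1]  # Levinson-Durbin coefficient update
        a[m] = k
        f[m + 1:], b[m + 1:] = ff + k * bb, bb + k * ff
        e *= (1.0 - k * k)
    return a, e

def forecast(x, a, horizon=20):
    """Iterate the fitted AR recursion to produce an h-step-ahead path."""
    hist = list(x)
    preds = []
    for _ in range(horizon):
        nxt = -sum(a[j] * hist[-1 - j] for j in range(len(a)))
        preds.append(nxt)
        hist.append(nxt)
    return np.array(preds)

# Illustrative use on a synthetic, demeaned single-maturity yield series
rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.95 * y[t - 1] + rng.normal(0, 0.02)
y -= y.mean()
coeffs, sigma2 = burg_ar(y, order=5)
path_20d = forecast(y, coeffs, horizon=20)   # 20-day-ahead yield path
```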

9.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis with respect to Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed ‘marginal’ likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of changes. Lastly, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262), with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction-of-change forecasts (model averaging). We apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.
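A hedged sketch of predictive-likelihood model weights computed with a hold-out sample (generic notation; the exact weighting scheme and the g-prior details are as in the paper, not reproduced here):

```latex
% Split the sample into a training part y_T and a hold-out part y_H.
% Model weights based on the predictive likelihood of the hold-out data:
w(M_j \mid y) \;\propto\;
  p(M_j)\, p(y_H \mid y_T, M_j),
\qquad
p(y_H \mid y_T, M_j) \;=\;
  \int p(y_H \mid \theta_j, M_j)\,
       p(\theta_j \mid y_T, M_j)\, d\theta_j

% Model-averaged point forecast
\hat{y}_{t+h} \;=\; \sum_j w(M_j \mid y)\,
  \mathbb{E}\!\left[y_{t+h} \mid y, M_j\right]
```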

10.
This paper compares the estimation and forecasting ability of quasi-maximum likelihood (QML) and support vector machines (SVM) for financial data. The financial series are fitted to a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew-t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so the current paper utilizes the semiparametric SVM-based method and shows that it is more efficient than QML under skewed Student's-t distributed errors. As the SVM is a kernel-based technique, we further investigate its performance by applying separately a Gaussian kernel and a wavelet kernel. The results suggest that the SVM-based method generally performs better than QML for both in-sample and out-of-sample data. The outcomes also highlight the fact that the wavelet kernel outperforms the Gaussian kernel, with lower forecasting error, better generalization capability and greater computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.
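For reference, the standard asymmetric power ARCH recursion (textbook form; the skew-t innovation density is parameterized as in the paper and is not spelled out here):

```latex
% APARCH(p, q): the power delta of the conditional scale is itself a parameter,
% and gamma_i captures the leverage (asymmetry) effect
r_t \;=\; \mu_t + \epsilon_t, \qquad \epsilon_t \;=\; \sigma_t z_t,
\qquad z_t \sim \text{skew-}t(\nu, \xi)

\sigma_t^{\delta} \;=\; \omega
  + \sum_{i=1}^{q} \alpha_i \bigl(|\epsilon_{t-i}| - \gamma_i\,\epsilon_{t-i}\bigr)^{\delta}
  + \sum_{j=1}^{p} \beta_j\, \sigma_{t-j}^{\delta}
```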

11.
Accurate mortality forecasts are of primary interest to insurance companies, pension providers and government welfare systems, owing to the rapid increase in life expectancy during the past few decades. Existing mortality models in the literature tend to project future mortality rates by extracting the observed patterns in the mortality surface. Patterns found in the cohort dimension have received a considerable amount of attention and are included in many models of mortality. However, to our knowledge very few studies have evaluated and compared cohort patterns across different countries. Moreover, it remains unclear how the incorporation of the cohort effect affects the forecasting performance of mortality models. In this paper we introduce a new way of incorporating the cohort effect at the beginning of the estimation stage via kernel smoothing techniques. A bivariate standard normal kernel density is used, and we capture the cohort effect by assigning greater weights along the diagonals of the mortality surface. Based on the results of our empirical study, we compare and discuss the differences in cohort strength across a range of developed countries. Further, the fitting and forecasting results demonstrate the superior performance of our model compared to some well-known mortality models in the literature under a majority of circumstances. Copyright © 2016 John Wiley & Sons, Ltd.
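A hedged sketch of a bivariate kernel smoother over the age-period mortality surface in which observations sharing a cohort (constant t − x) receive extra weight; the particular weight function, the constant c and the bandwidths are illustrative assumptions, not the paper's exact choice.

```latex
% Kernel-smoothed log mortality at age x and year t, with a standard bivariate
% normal kernel phi_2 and an extra cohort weight along the diagonal t - x
\widehat{\log m}(x,t) \;=\;
  \frac{\sum_{i,j} w_{ij}(x,t)\, \log m_{x_i, t_j}}
       {\sum_{i,j} w_{ij}(x,t)}

w_{ij}(x,t) \;=\;
  \phi_2\!\left(\frac{x_i - x}{h_x},\, \frac{t_j - t}{h_t}\right)
  \left[1 + c\,\exp\!\left(-\frac{\bigl((t_j - x_i) - (t - x)\bigr)^2}{2 h_c^2}\right)\right],
\qquad c > 0
```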

12.
In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (Journal of Econometrics 2008; 146: 304–317) find an improvement from selecting indicators according to the forecast variable prior to factor estimation (targeted predictors). In particular, they propose using the LARS-EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the preselection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. On the reduced dataset, we estimate the dynamic factor model of Giannone et al. (Journal of Monetary Economics 2008; 55: 665–676) and Doz et al. (Journal of Econometrics 2011; 164: 188–205), which is particularly suitable for the very short-term forecast of GDP. A pseudo real-time evaluation on French data shows the potential of our approach. Copyright © 2013 John Wiley & Sons, Ltd.
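A rough sketch of the targeted-predictors idea: preselect indicators with an elastic net, extract factors from the retained set, and use them to forecast the target. scikit-learn's ElasticNet and plain PCA are used here as stand-ins for the LARS-EN algorithm and the Giannone et al. / Doz et al. dynamic factor model, and all data and variable names are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, LinearRegression
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
T, N = 120, 80                               # monthly sample, many candidate indicators
X = rng.normal(size=(T, N))                  # stand-in for the monthly panel
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.5, size=T)  # stand-in target (GDP growth)

Xs = StandardScaler().fit_transform(X)

# Step 1: targeted predictors -- keep indicators with non-zero elastic-net coefficients
enet = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(Xs, y)
keep = np.flatnonzero(np.abs(enet.coef_) > 1e-8)

# Step 2: extract common factors from the reduced dataset
factors = PCA(n_components=3).fit_transform(Xs[:, keep])

# Step 3: bridge regression of the target on the factors, then forecast the last period
ols = LinearRegression().fit(factors[:-1], y[1:])   # one-step-ahead alignment
nowcast = ols.predict(factors[-1:])
print(f"kept {keep.size} indicators, nowcast = {nowcast[0]:.3f}")
```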

13.
In this paper, we examine the use of non-parametric Neural Network Regression (NNR) and Recurrent Neural Network (RNN) regression models for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. The results of both the NNR and RNN models are benchmarked against the simpler GARCH alternative and against implied volatility. Two simple model combinations are also analysed. The intuitively appealing idea of developing a nonlinear nonparametric approach to forecast FX volatility, identify mispriced options and subsequently develop a trading strategy based upon this process is implemented for the first time on a comprehensive basis. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. These models are then tested out-of-sample over the period April 1999–May 2000, not only in terms of forecasting accuracy but also in terms of trading efficiency: to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most of the trading strategies retained produce positive returns. RNN models appear to be the best single modelling approach; yet, somewhat surprisingly, model combination, despite having the best overall performance in terms of forecasting accuracy, fails to improve the RNN-based volatility trading results. Another conclusion from our results is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate. Copyright © 2002 John Wiley & Sons, Ltd.

14.
Online auctions have become increasingly popular in recent years. There is a growing body of research on this topic, in which modeling online auction price curves constitutes one of the most interesting problems. Most research treats price curves as deterministic functions, which ignores the random effects of external and internal factors. To account for this randomness, a more realistic model using stochastic differential equations is proposed in this paper. The online auction price is modeled by a stochastic differential equation in which the deterministic part is equivalent to the second-order differential equation model proposed in Wang et al. (Journal of the American Statistical Association, 2008, 103, 1100–1118). The model also includes a component representing measurement errors. Explicit expressions for the likelihood function are obtained, from which statistical inference can be conducted. The forecast accuracy of the proposed model is compared with the ODE (ordinary differential equation) approach, and simulation results show that the proposed model performs better.
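One way to write such a model down, as a hedged sketch with generic notation rather than the paper's: the second-order ODE for the price curve is cast as a first-order system in price and velocity, the velocity equation is perturbed by Brownian noise, and the observed bids carry additional measurement error.

```latex
% Price dynamics: deterministic part given by a second-order ODE,
% perturbed by Brownian noise (illustrative form)
dP_t \;=\; V_t\, dt,
\qquad
dV_t \;=\; f(t, P_t, V_t)\, dt + \sigma\, dW_t

% Observed bid prices with measurement error
y_{t_k} \;=\; P_{t_k} + \eta_k, \qquad \eta_k \sim \mathcal{N}(0, \tau^2)
```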

15.
This research proposes a prediction model of multistage financial distress (MSFD) after considering contextual and methodological issues regarding sampling, feature and model selection criteria. Financial distress is defined as a three-stage process reflecting the differing nature and intensity of financial problems. It is argued that the applied definition of distress is independent of the legal framework and that its predictability would provide more practical solutions. The final sample is selected after industry adjustments and oversampling of the data. A wrapper subset data mining approach is applied to extract the most relevant features from financial statement and stock market indicators. An ensemble approach using a combination of DTNB (decision table and naïve Bayes hybrid model), LMT (logistic model tree) and A2DE (averaged 2-dependence estimators) Bayesian models is used to develop the final prediction model. The performance of all the models is evaluated using 10-fold cross-validation. Results showed that the proposed model predicted MSFD with 84.06% accuracy. This accuracy increased to 89.57% when a 33.33% cut-off value was considered. Hence the proposed model is accurate and reliable for identifying the true nature and intensity of financial problems regardless of the contextual legal framework.

16.
A recent study by Rapach, Strauss, and Zhou (Journal of Finance, 2013, 68(4), 1633–1662) shows that US stock returns can provide predictive content for international stock returns. We extend their work from a volatility perspective. We propose a model, namely a heterogeneous volatility spillover–generalized autoregressive conditional heteroskedasticity model, to investigate volatility spillover. The model specification is parsimonious and can be used to analyze the time variation property of the spillover effect. Our in-sample evidence shows the existence of strong volatility spillover from the US to five major stock markets and indicates that the spillover was stronger during business cycle recessions in the USA. Out-of-sample results show that accounting for spillover information from the USA can significantly improve the forecasting accuracy of international stock price volatility.
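A hedged sketch of what a heterogeneous volatility spillover term added to a GARCH(1,1) variance equation might look like; the decomposition of US volatility into daily/weekly/monthly components is illustrative, in the spirit of HAR-type aggregation, and is not taken verbatim from the paper.

```latex
% GARCH(1,1) for market i, augmented with spillover terms built from
% US volatility aggregated over heterogeneous horizons (day, week, month)
h_{i,t} \;=\; \omega_i
  + \alpha_i\, \epsilon_{i,t-1}^{2}
  + \beta_i\,  h_{i,t-1}
  + \gamma_i^{(d)}\, h^{US}_{t-1}
  + \gamma_i^{(w)}\, \bar{h}^{US}_{t-5:t-1}
  + \gamma_i^{(m)}\, \bar{h}^{US}_{t-22:t-1}
```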

17.
The primary goal of this study is to propose an algorithm using mathematical programming to detect earnings management practices. In order to evaluate the ability of the proposed algorithm, traditional statistical models are used as a benchmark vis-à-vis their time series counterparts. As emerging techniques in the area of mathematical programming yield better results, the application of suitable models is expected to produce high-performing forecasts. The motivation behind this paper is to develop an algorithm which succeeds in detecting companies that resort to financial manipulation. The methodology is based on a cutting plane formulation using mathematical programming. A sample of 126 Turkish manufacturing firms, described by 10 financial ratios and indexes, is used for detecting factors associated with false financial statements. The results indicate that the proposed three-phase cutting plane algorithm outperforms the traditional statistical techniques which are widely used for false financial statement detection. Furthermore, the results indicate that the investigation of financial information can be helpful in identifying false financial statements and highlight the importance of financial ratios/indexes such as Days' Sales in Receivables Index (DSRI), Gross Margin Index (GMI), Working Capital Accruals to Total Assets (TATA) and Days to Inventory Index (DINV). Copyright © 2009 John Wiley & Sons, Ltd.

18.
Stochastic covariance models have been explored in recent research to model the interdependence of assets in financial time series. The approach uses a single stochastic model to capture such interdependence. However, it may be inappropriate to assume a single coherence structure at all times t. In this paper, we propose the use of a mixture of stochastic covariance models to generalize the approach and offer greater flexibility in real data applications. Parameter estimation is performed by Bayesian analysis with Markov chain Monte Carlo sampling schemes. We conduct a simulation study on three different model setups and evaluate the performance of estimation and model selection. We also apply our modeling methods to high-frequency stock data from Hong Kong. Model selection favors a mixture rather than a non-mixture model. In a real data study, we demonstrate that the mixture model is able to identify structural changes in market risk, as evidenced by a drastic change in mixture proportions over time. Copyright © 2016 John Wiley & Sons, Ltd.

19.
This paper uses Markov switching models to capture volatility dynamics in exchange rates and to evaluate their forecasting ability. We identify that increased volatility in four euro-based exchange rates is due to underlying structural changes. We also find that the currencies are closely related to each other, especially in high-volatility periods, where cross-correlations increase significantly. Using a Markov switching Monte Carlo approach, we provide evidence in favour of Markov switching models, rejecting the random walk hypothesis. Testing in-sample and out-of-sample Markov trading rules based on Dueker and Neely (Journal of Banking and Finance, 2007), we find that the econometric methodology is able to forecast exchange rate movements accurately. When applied to euro/US dollar and euro/British pound daily returns data, the model provides exceptional out-of-sample returns. However, when applied to the euro/Brazilian real and the euro/Mexican peso, the model loses power. The higher volatility exhibited by the Latin American currencies seems to be a critical factor in this failure. Copyright © 2009 John Wiley & Sons, Ltd.
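For reference, a generic two-state Markov switching specification for daily exchange rate returns (standard textbook form; the paper's exact parameterization may differ):

```latex
% Returns switch between regimes s_t in {1, 2} with regime-specific
% mean and variance; s_t follows a first-order Markov chain
r_t \mid s_t = j \;\sim\; \mathcal{N}\!\left(\mu_j, \sigma_j^2\right),
\qquad
\Pr(s_t = j \mid s_{t-1} = i) \;=\; p_{ij},
\qquad
\sum_{j} p_{ij} = 1
```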

20.
[An Erratum has been published for this article in Journal of Forecasting 22(6–7), 2003, 551.] The Black–Scholes formula is a well-known model for pricing and hedging derivative securities. It relies, however, on several highly questionable assumptions. This paper examines whether a neural network (MLP) can be used to find a call option pricing formula that corresponds better to market prices and the properties of the underlying asset than the Black–Scholes formula. The neural network method is applied to the out-of-sample pricing and delta-hedging of daily Swedish stock index call options from 1997 to 1999. The relevance of a hedging analysis is stressed further in this paper. As benchmarks, Black–Scholes models with historical and implied volatility estimates are used. Comparisons reveal that the neural network models outperform the benchmarks in both pricing and hedging performance. A moving block bootstrap is used to test the statistical significance of the results. Although the neural networks are superior, the results are sometimes insignificant at the 5% level. Copyright © 2003 John Wiley & Sons, Ltd.
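A toy sketch of this kind of comparison: a Black–Scholes call price function and a small MLP trained on (moneyness, time to maturity) to reproduce option prices. scikit-learn's MLPRegressor is a generic stand-in for the paper's neural network, the training data are synthetic, and none of the parameter choices come from the paper.

```python
import numpy as np
from scipy.stats import norm
from sklearn.neural_network import MLPRegressor

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(3)
n = 5000
moneyness = rng.uniform(0.8, 1.2, n)          # S / K
ttm = rng.uniform(0.05, 1.0, n)               # time to maturity in years
r, sigma = 0.03, 0.2

# Illustrative "market" prices: Black-Scholes plus small noise, quoted per unit strike
price_over_k = bs_call(moneyness, 1.0, ttm, r, sigma) + rng.normal(0, 0.002, n)

# Train an MLP on (moneyness, time to maturity) -> call price / strike
X = np.column_stack([moneyness, ttm])
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X, price_over_k)

# Out-of-sample comparison at a few test points
X_test = np.array([[0.95, 0.25], [1.05, 0.5]])
print("MLP :", mlp.predict(X_test))
print("BS  :", bs_call(X_test[:, 0], 1.0, X_test[:, 1], r, sigma))
```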

