Similar documents
Found 20 similar documents (search time: 718 ms)
1.
The autoregressive conditional heteroscedastic (ARCH) model and its extensions have been widely used in modelling changing variances in financial time series. Since asset return distributions frequently display tails heavier than normal distributions, it is worthwhile studying robust ARCH modelling without a specific distribution assumption. In this paper, rather than modelling the conditional variance, we study ARCH modelling for the conditional scale. We examine the L1-estimation of ARCH models and derive the limiting distributions of the estimators. A robust standardized absolute residual autocorrelation based on least absolute deviation estimation is proposed. A robust portmanteau statistic is then constructed to test the adequacy of the model, especially the specification of the conditional scale. We obtain their asymptotic distributions under mild conditions. Examples show that the suggested L1-norm estimators and the goodness-of-fit test are robust against error distributions and are accurate for moderate sample sizes. This paper provides a useful tool in modelling conditionally heteroscedastic time series data. Copyright © 2001 John Wiley & Sons, Ltd.
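To illustrate the conditional-scale idea, the toy sketch below fits a scale recursion s_t = ω + α|r_{t−1}| by least absolute deviations. This is an assumption-laden illustration (the recursion form, heavy-tailed t innovations, and the crude grid search are all choices made here), not the paper's estimator; note that LAD recovers the scale parameters only up to the factor median(|e_t|), roughly 0.76 for t with 3 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a conditional-scale ("absolute-value ARCH") series:
# s_t = omega + alpha*|r_{t-1}|, r_t = s_t * e_t with heavy-tailed e_t.
omega_true, alpha_true, n = 0.5, 0.4, 2000
r = np.zeros(n)
e = rng.standard_t(df=3, size=n)  # heavy-tailed innovations
for t in range(1, n):
    s = omega_true + alpha_true * abs(r[t - 1])
    r[t] = s * e[t]

def lad_loss(omega, alpha, r):
    """Sum of absolute deviations of |r_t| from the fitted scale."""
    s = omega + alpha * np.abs(r[:-1])
    return np.sum(np.abs(np.abs(r[1:]) - s))

# Crude, dependency-free grid search for the LAD estimates.
grid_o = np.linspace(0.1, 1.0, 46)
grid_a = np.linspace(0.0, 0.9, 46)
_, omega_hat, alpha_hat = min(
    (lad_loss(o, a, r), o, a) for o in grid_o for a in grid_a
)
# Estimates are scaled by median(|e_t|) ~ 0.76, so roughly 0.38 and 0.31.
print(omega_hat, alpha_hat)
```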

2.
We consider one parametric and five semiparametric approaches to estimate D in SARFIMA(0, D, 0)s processes, that is, when the process is a fractionally integrated ARMA model with seasonal period s. We also consider h-step-ahead forecasting for these processes. We present proofs of some features of this model, together with a Monte Carlo simulation study for different sample sizes and different seasonal periods. We compare the estimation procedures by analyzing the bias, the mean squared error, and the confidence intervals for the estimators. We also consider three different methods for choosing the total number of regressors in the regression analysis for the semiparametric class of estimation procedures. We apply the methodology to the Nile River monthly flow data, and also to a simulated seasonal fractionally integrated time series. Copyright © 2007 John Wiley & Sons, Ltd.
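A widely used semiparametric estimator of a memory parameter (shown here for the non-seasonal parameter d, and not necessarily one of the five variants the paper considers) is the Geweke–Porter-Hudak log-periodogram regression; the bandwidth exponent below is an assumed choice:

```python
import numpy as np

def gph_estimate(x, power=0.65):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory
    parameter d, using the first m = n**power Fourier frequencies."""
    n = len(x)
    m = int(n ** power)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    # Periodogram at the first m Fourier frequencies.
    fft = np.fft.fft(x - np.mean(x))
    periodogram = np.abs(fft[1 : m + 1]) ** 2 / (2 * np.pi * n)
    # Regress log I(lambda_j) on log(4 sin^2(lambda_j / 2)); slope is -d.
    regressor = np.log(4 * np.sin(freqs / 2) ** 2)
    slope = np.polyfit(regressor, np.log(periodogram), 1)[0]
    return -slope

rng = np.random.default_rng(1)
white_noise = rng.standard_normal(4096)
d_hat = gph_estimate(white_noise)  # near 0 for white noise
print(d_hat)
```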

3.
Volatility models such as GARCH, although misspecified with respect to the data-generating process, may well generate volatility forecasts that are unconditionally unbiased. In other words, they generate variance forecasts that, on average, are equal to the integrated variance. However, many applications in finance require a measure of return volatility that is a non-linear function of the variance of returns, rather than of the variance itself. Even if a volatility model generates forecasts of the integrated variance that are unbiased, non-linear transformations of these forecasts will be biased estimators of the same non-linear transformations of the integrated variance because of Jensen's inequality. In this paper, we derive an analytical approximation for the unconditional bias of estimators of non-linear transformations of the integrated variance. This bias is a function of the volatility of the forecast variance and the volatility of the integrated variance, and depends on the concavity of the non-linear transformation. In order to estimate the volatility of the unobserved integrated variance, we employ recent results from the realized volatility literature. As an illustration, we estimate the unconditional bias for both in-sample and out-of-sample forecasts of three non-linear transformations of the integrated standard deviation of returns for three exchange rate return series, where a GARCH(1, 1) model is used to forecast the integrated variance. Our estimation results suggest that, in practice, the bias can be substantial. Copyright © 2006 John Wiley & Sons, Ltd.
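The Jensen's-inequality effect can be checked numerically. In this toy sketch (lognormal variance draws are an assumption made here, not the paper's setup, and the generic second-order Taylor term stands in for the paper's own approximation), the Monte Carlo bias of the square-root transform is compared with −Var(V)/(8·E[V]^{3/2}):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "integrated variance" draws: lognormal, log-mean 0, log-sd 0.5.
v = rng.lognormal(mean=0.0, sigma=0.5, size=500_000)

# Bias of sqrt as an estimator of the sqrt of the mean variance:
# E[sqrt(V)] - sqrt(E[V]) < 0 by Jensen's inequality (sqrt is concave).
bias_mc = np.mean(np.sqrt(v)) - np.sqrt(np.mean(v))

# Second-order Taylor approximation of the same bias,
# f''(mu)/2 * Var(V) with f = sqrt: -Var(V) / (8 * mu**1.5).
mu = np.mean(v)
bias_taylor = -np.var(v) / (8 * mu ** 1.5)
print(bias_mc, bias_taylor)  # both negative and close in magnitude
```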

4.
Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths as a result of extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual data and for specified (warm and cold) periods during the year in the UK. A careful statistical analysis is implemented and a new stochastic central mortality rate model is proposed. The new model encompasses the good features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and for the first time includes an exogenous, temperature-related factor. The new model is shown to provide significantly better fitting performance and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.
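The Lee–Carter baseline that such models extend can be fitted with a rank-one SVD. The sketch below uses synthetic data with an exact Lee–Carter structure (all parameter values are illustrative) and the standard identification constraints (b_x summing to 1, k_t centered) to recover log m_{x,t} = a_x + b_x k_t:

```python
import numpy as np

# Synthetic log central mortality rates with exact Lee-Carter structure.
ages, years = 10, 30
a_x = np.linspace(-6.0, -2.0, ages)       # age profile
b_x = np.full(ages, 1.0 / ages)           # age sensitivities, sum to 1
k_t = np.linspace(15.0, -15.0, years)     # declining period index, mean 0
log_m = a_x[:, None] + np.outer(b_x, k_t)

# Lee-Carter fit: a_x = row means; rank-one SVD of the centered matrix.
a_hat = log_m.mean(axis=1)
centered = log_m - a_hat[:, None]
u, s, vt = np.linalg.svd(centered, full_matrices=False)
b_hat = u[:, 0] * s[0]
k_hat = vt[0]
# Standard identification: b sums to 1, k rescaled to compensate.
scale = b_hat.sum()
b_hat, k_hat = b_hat / scale, k_hat * scale

fitted = a_hat[:, None] + np.outer(b_hat, k_hat)
print(np.max(np.abs(fitted - log_m)))  # ~0 for exact rank-one data
```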

5.
A unified method to detect and handle innovational and additive outliers, and permanent and transient level changes, has been presented by R. S. Tsay. N. S. Balke has found that the presence of level changes may lead to misidentification and loss of test power, and suggests augmenting Tsay's procedure by conducting an additional disturbance search based on a white-noise model. While Tsay allows level changes to be either permanent or transient, Balke considers only the former type. Based on simulated series with transient level changes, this paper investigates how Balke's white-noise model performs both when transient change is omitted from the model specification and when it is included. Our findings indicate that the alleged misidentification of permanent level changes may be influenced by the restrictions imposed by Balke. But when these restrictions are removed, Balke's procedure outperforms Tsay's in detecting changes in the data-generating process. Copyright © 2000 John Wiley & Sons, Ltd.
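As a greatly simplified illustration of disturbance searching (a toy additive-outlier scan, not Tsay's or Balke's full procedure; the AR(1) model, threshold, and planted outlier are all assumptions made here), one can flag large standardized residuals of an AR(1) fit:

```python
import numpy as np

rng = np.random.default_rng(4)

# AR(1) series with one planted additive outlier at t = 250.
n, phi = 500, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
x[250] += 8.0  # additive outlier

# Fit AR(1) by OLS and standardize the one-step residuals.
phi_hat = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
resid = x[1:] - phi_hat * x[:-1]
sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust scale
z = np.abs(resid) / sigma

# Flag times whose standardized residual exceeds a critical value.
flagged = np.where(z > 4.0)[0] + 1  # +1 because resid[i] belongs to time i+1
print(flagged)  # includes 250; an AO also leaves an echo at t = 251
```

An additive outlier contaminates two consecutive residuals (the spike itself and a −φ·spike echo), which is one reason full procedures distinguish outlier types rather than thresholding a single residual.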

6.
This paper considers univariate and multivariate models for forecasting monthly conflict events in the Sudan over the out-of-sample period 2009–2012. The models used to generate these forecasts were based on a specification from a machine learning algorithm fitted to 2000–2008 monthly data. The model that includes the previous month's wheat price performs better than a similar model that does not include past wheat prices (the univariate model). Neither model performed well in forecasting conflict in the neighbourhood of the 2012 'Heglig crisis'. Copyright © 2015 John Wiley & Sons, Ltd.

7.
This paper assesses the informational content of alternative realized volatility estimators, the daily range and implied volatility in multi-period out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovation process, together with a Monte Carlo simulation approach, to produce the multi-period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce VaR forecasts that are precise in both statistical and regulatory terms across forecasting horizons, with implied volatility being especially accurate for monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune to microstructure noise bias and price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.
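A bare-bones version of the Monte Carlo multi-period VaR idea (a plain GARCH(1,1) with normal innovations here, far simpler than the Realized GARCH with skewed Student's t used in the paper; all parameter values are assumed) simulates volatility paths forward and takes a quantile of the cumulative returns:

```python
import numpy as np

rng = np.random.default_rng(5)

# GARCH(1,1) parameters (illustrative) and current variance state.
omega, alpha, beta = 0.05, 0.08, 0.90
sigma2_now, horizon, n_paths = 1.2, 10, 50_000

# Simulate cumulative h-day returns path by path.
sigma2 = np.full(n_paths, sigma2_now)
cum_ret = np.zeros(n_paths)
for _ in range(horizon):
    eps = rng.standard_normal(n_paths)
    ret = np.sqrt(sigma2) * eps
    cum_ret += ret
    sigma2 = omega + alpha * ret**2 + beta * sigma2  # GARCH recursion

# 99% multi-period VaR: the loss exceeded on only 1% of simulated paths.
var_99 = -np.quantile(cum_ret, 0.01)
print(var_99)
```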

8.
In this paper, we investigate the performance of a class of M-estimators for both symmetric and asymmetric conditional heteroscedastic models in the prediction of value-at-risk. The class includes the least absolute deviation (LAD), Huber, Cauchy and B-estimators, as well as the well-known quasi-maximum likelihood estimator (QMLE). We use a wide range of summary statistics to compare both the in-sample and out-of-sample VaR estimates for three well-known stock indices. Our empirical study suggests that, in general, the Cauchy, Huber and B-estimators perform better in predicting one-step-ahead VaR than the commonly used QMLE. Copyright © 2011 John Wiley & Sons, Ltd.
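The robustness that motivates these M-estimators is easiest to see in the simplest possible setting. The sketch below (a Huber location estimate via iteratively reweighted least squares, standing in for the papers' GARCH-model estimators; the contamination scheme and tuning constant are assumptions) shows the Huber estimate resisting gross outliers that drag the sample mean:

```python
import numpy as np

rng = np.random.default_rng(6)

# Clean data centred at 0, contaminated by a few gross outliers.
data = np.concatenate([rng.standard_normal(200), np.full(10, 25.0)])

def huber_location(x, c=1.345, n_iter=50):
    """Huber M-estimate of location via iteratively reweighted
    least squares, with a robust (MAD-based) residual scale."""
    mu = np.median(x)
    for _ in range(n_iter):
        r = x - mu
        scale = 1.4826 * np.median(np.abs(r)) or 1.0
        u = np.maximum(np.abs(r) / scale, 1e-12)
        w = np.minimum(1.0, c / u)        # Huber weights: downweight big |r|
        mu = np.sum(w * x) / np.sum(w)
    return mu

mu_huber = huber_location(data)
mu_mean = data.mean()
print(mu_huber, mu_mean)  # Huber stays near 0; the mean is dragged upward
```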

9.
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven-variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non-stationary, stationary and error-correction forms and are estimated using OLS. Imposing Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non-stationary specification outperformed those of the stationary and error-correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error-correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is nevertheless very weak. Copyright © 2001 John Wiley & Sons, Ltd.
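A stripped-down version of the OLS-versus-shrinkage comparison can be sketched with a VAR(1), using ridge-style shrinkage as a crude stand-in for a proper Bayesian prior (the true coefficient matrix and shrinkage strength below are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate a stable 3-variable VAR(1): y_t = A y_{t-1} + e_t.
A_true = np.array([[0.5, 0.1, 0.0],
                   [0.0, 0.4, 0.1],
                   [0.1, 0.0, 0.3]])
n = 400
y = np.zeros((n, 3))
for t in range(1, n):
    y[t] = A_true @ y[t - 1] + rng.standard_normal(3)

X, Y = y[:-1].T, y[1:].T  # regressors and targets, shape (3, n-1)

# OLS: A = Y X' (X X')^{-1}; ridge adds lambda*I to the Gram matrix,
# shrinking coefficients toward zero much as a tight prior would.
lam = 50.0
A_ols = Y @ X.T @ np.linalg.inv(X @ X.T)
A_ridge = Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(3))

print(np.linalg.norm(A_ols), np.linalg.norm(A_ridge))  # ridge norm smaller

# One-step-ahead forecasts from the last observation.
print(A_ols @ y[-1], A_ridge @ y[-1])
```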

10.
We propose a new portfolio optimization method combining the merits of shrinkage estimation, the vine copula structure, and the Black–Litterman model. It helps investors satisfy three investment objectives simultaneously: estimation sensitivity, appreciation of asymmetric risks, and portfolio stability. A typical investor with such objectives is a sovereign wealth fund (SWF). We use China's SWF as an example to test the method empirically on a 15-asset strategic asset allocation problem. Robustness tests using subsamples not only show the method's overall effectiveness but also show that each component functions as expected.
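Of the three ingredients, the shrinkage step is the simplest to sketch. Below, a sample covariance matrix that is singular (assets outnumber observations, as can happen in a 15-asset allocation with short subsamples) is shrunk toward a scaled identity target in the spirit of Ledoit–Wolf shrinkage; the fixed shrinkage intensity is an assumption, whereas Ledoit–Wolf estimate it from the data:

```python
import numpy as np

rng = np.random.default_rng(8)

# Fewer observations than assets: the sample covariance is singular.
n_obs, n_assets = 12, 15
returns = rng.standard_normal((n_obs, n_assets)) * 0.02

sample_cov = np.cov(returns, rowvar=False)

# Shrink toward a scaled identity target (average variance on the diagonal).
delta = 0.3  # shrinkage intensity, fixed here for illustration
target = np.trace(sample_cov) / n_assets * np.eye(n_assets)
shrunk_cov = delta * target + (1 - delta) * sample_cov

print(np.linalg.eigvalsh(sample_cov).min())  # ~0: singular, not invertible
print(np.linalg.eigvalsh(shrunk_cov).min())  # strictly positive: usable
```

Because the shrunk matrix is positive definite, it can be inverted safely inside a mean-variance or Black–Litterman step, which is exactly why shrinkage stabilizes portfolio weights.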

11.
To forecast realized volatility, this paper introduces a multiplicative error model that incorporates heterogeneous components: weekly and monthly realized volatility measures. While the model captures the long-memory property, estimation proceeds simply by quasi-maximum likelihood. This paper investigates its forecasting ability using the realized kernels of 34 different assets provided by the Oxford-Man Institute's Realized Library. The model outperforms benchmark models such as ARFIMA, HAR, Log-HAR and HEAVY-RM in within-sample fitting and out-of-sample (1-, 10- and 22-step) forecasts. It performs best in both pointwise and cumulative comparisons of multi-step-ahead forecasts, regardless of the loss function (QLIKE or MSE). Copyright © 2015 John Wiley & Sons, Ltd.
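The HAR benchmark mentioned above is a plain OLS regression of realized volatility on its daily, weekly (5-day) and monthly (22-day) averages. A minimal sketch on synthetic data (the persistent AR process generating the fake RV series is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic daily realized-volatility series (persistent, positive).
n = 1500
rv = np.empty(n)
rv[0] = 1.0
for t in range(1, n):
    rv[t] = 0.1 + 0.9 * rv[t - 1] + 0.05 * rng.standard_normal()
rv = np.abs(rv)

def lagged_mean(x, window, t):
    return x[t - window:t].mean()

# HAR regressors: yesterday's RV, last week's and last month's averages.
t_idx = np.arange(22, n)
X = np.column_stack([
    np.ones(len(t_idx)),
    rv[t_idx - 1],                            # daily component
    [lagged_mean(rv, 5, t) for t in t_idx],   # weekly component
    [lagged_mean(rv, 22, t) for t in t_idx],  # monthly component
])
y = rv[t_idx]

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r_squared = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
print(beta, r_squared)
```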

12.
Socioeconomic status is commonly conceptualized as the social standing or well-being of an individual or society. Higher socioeconomic status has long been identified as a contributing factor to mortality improvement. This paper studies the impact of macroeconomic fluctuations (with gross domestic product (GDP) as a proxy) on mortality for the nine most populous eurozone countries. Based on the statistical analysis of the relationship between the time-dependent indicator of the Lee and Carter (Journal of the American Statistical Association, 1992, 87(419), 659–671) model and GDP, and an adaptation of the good features of the O'Hare and Li (Insurance: Mathematics and Economics, 2012, 50, 12–25) model, a new mortality model including this additional economic factor is proposed. Results are provided for males and females aged 0–89, and similarly for unisex data. The new model shows better fit and more plausible forecasts for a significant number of eurozone countries. An in-depth analysis of our findings is provided to give a better understanding of the relationship between mortality and GDP fluctuations.

13.
Prior studies use a linear adaptive expectations model to describe how analysts revise their forecasts of future earnings in response to current forecast errors. However, research shows that extreme forecast errors are less likely than small forecast errors to persist in future years. If analysts recognize this property, their marginal forecast revisions should decrease with the forecast error's magnitude. Therefore, a linear model is likely to be unsatisfactory at describing analysts' forecast revisions. We find that a non-linear model better describes the relation between analysts' forecast revisions and their forecast errors, and provides a richer theoretical framework for explaining analysts' forecasting behaviour. Our results are consistent with analysts' recognizing the permanent and temporary nature of forecast errors of differing magnitudes. Copyright © 2000 John Wiley & Sons, Ltd.

14.
This paper presents a new forecasting approach, straddling the conventional methods, applied to the Italian industrial production index. Specifically, the proposed method treats factor models and bridge models as complementary ingredients feeding a single model specification. We document that the proposed approach improves upon traditional bridge models by making efficient use of the information conveyed by a large amount of survey data on manufacturing activity. Different factor algorithms are compared and, provided that a large estimation window is used, partial least squares outperforms principal component-based alternatives. Copyright © 2016 John Wiley & Sons, Ltd.

15.
This paper discusses the asymptotic efficiency of estimators for optimal portfolios when returns are vector-valued non-Gaussian stationary processes. We give the asymptotic distribution of portfolio estimators ĝ for non-Gaussian dependent return processes. Next we address the problem of asymptotic efficiency for the class of estimators ĝ. First, it is shown that there are cases where the asymptotic variance of ĝ under non-Gaussianity can be smaller than that under Gaussianity. This result shows that non-Gaussianity of the returns does not always harm efficiency. Second, we give a necessary and sufficient condition for ĝ to be asymptotically efficient when the return process is Gaussian, which shows that ĝ is not asymptotically efficient in general. From this point of view we propose using maximum likelihood type estimators for g, which are asymptotically efficient. Furthermore, we investigate the problem of predicting the one-step-ahead optimal portfolio return by the estimated portfolio based on ĝ and examine the mean squared prediction error. Copyright © 2008 John Wiley & Sons, Ltd.

16.
The Poisson integer-valued autoregressive process of order 1 (PINAR(1)), due to Al-Osh and Alzaid (Journal of Time Series Analysis 1987; 8(3): 261–275) and McKenzie (Advances in Applied Probability 1988; 20(4): 822–835), has received significant attention in modelling low-count time series during the last two decades because of its simplicity. But in many practical scenarios the process appears inadequate, especially when the data are overdispersed. This overdispersion occurs mainly for three reasons: the presence of some extreme values, a large number of zeros, or extreme values together with a large number of zeros. In this article, we develop a zero-inflated Poisson INAR(1) process as an alternative to the PINAR(1) process when the number of zeros in the data is larger than expected under the Poisson process. We investigate important properties such as stationarity, ergodicity, the autocorrelation structure, and the conditional distribution, with a detailed study of h-step-ahead coherent forecasting. A comparative study of different parameter estimation methods is carried out on simulated data. One real dataset is analysed for practical illustration. Copyright © 2015 John Wiley & Sons, Ltd.
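A ZIP-INAR(1) path is easy to simulate: binomial thinning of the previous count plus a zero-inflated Poisson innovation. The sketch below (the parameter values α, π and λ are illustrative) shows the resulting overdispersion relative to a Poisson marginal, whose variance-to-mean ratio would be 1:

```python
import numpy as np

rng = np.random.default_rng(10)

# ZIP-INAR(1): X_t = alpha o X_{t-1} + eps_t, where "o" is binomial
# thinning and eps_t is zero-inflated Poisson:
# eps_t = 0 with probability pi, else Poisson(lam).
alpha, pi, lam, n = 0.5, 0.3, 3.0, 20_000
x = np.zeros(n, dtype=int)
for t in range(1, n):
    survivors = rng.binomial(x[t - 1], alpha)           # binomial thinning
    innovation = 0 if rng.random() < pi else rng.poisson(lam)
    x[t] = survivors + innovation

# Zero inflation pushes the variance above the mean (overdispersion).
dispersion = x.var() / x.mean()
print(x.mean(), dispersion)  # dispersion index well above 1
```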

17.
This paper concerns long-term forecasts for cointegrated processes. First, it considers the case where the parameters of the model are known. The paper shows analytically that neither the cointegration nor the integration constraint matters in long-term forecasts, an alternative implication of long-term forecasts for cointegrated processes that extends the results of previous influential studies. An appropriate Monte Carlo experiment supports this analytical result. Second, and more importantly, it considers the case where the parameters of the model are estimated. The paper shows that the accuracy of the estimation of the drift term is crucial in long-term forecasts: the relative accuracy of various long-term forecasts depends upon the relative magnitude of the variances of the estimators of the drift term. It further shows experimentally that in finite samples the univariate ARIMA forecast, whose drift term is estimated by the simple time average of the differenced data, is better than the cointegrated system forecast, whose parameters are estimated by Johansen's well-known ML method. Based on finite-sample experiments, it recommends the univariate ARIMA forecast over the conventional cointegrated system forecast for its practical usefulness and robustness against model misspecification. Copyright © 2011 John Wiley & Sons, Ltd.
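The drift-based forecast the paper recommends reduces to simple arithmetic: estimate the drift as the time average of the first differences (which telescopes to (x_T − x_0)/T), then extrapolate linearly. A deterministic toy sketch (the path values are made up for clarity):

```python
import numpy as np

# A short random-walk-with-drift path, deterministic here for clarity.
x = np.array([10.0, 10.6, 11.1, 11.9, 12.4, 13.0])

# Drift estimate: the simple time average of the first differences,
# which telescopes to (x_T - x_0) / T.
drift = np.diff(x).mean()
assert np.isclose(drift, (x[-1] - x[0]) / (len(x) - 1))

# h-step-ahead long-term forecast: last level plus h times the drift.
h = 10
forecast = x[-1] + h * drift
print(drift, forecast)  # drift ~ 0.6, forecast ~ 19.0
```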

18.
Changes in mortality rates have an impact on the life insurance industry, the financial sector (as a significant proportion of the financial markets is driven by pension funds), governmental agencies, and decision makers and policymakers. Thus the pricing of financial, pension and insurance products that are contingent upon survival or death, which depends on the accuracy of central mortality rates, is of key importance. Recently, a temperature-related mortality (TRM) model was proposed by Seklecka et al. (Journal of Forecasting, 2017, 36(7), 824–841), and it has shown evidence of outperforming the Lee and Carter (Journal of the American Statistical Association, 1992, 87, 659–671) model and several of its extensions when mortality-experience data from the UK are used. When fitting the TRM model, there is a need for awareness of model risk in assessing longevity-related liabilities, especially when pricing long-term annuities and pensions. In this paper, the impact of uncertainty on the various parameters involved in the model is examined. We demonstrate a number of ways to quantify model risk in the estimation of the temperature-related parameters, the choice of forecasting methodology, the structure of the actuarial products chosen (e.g., annuity, endowment and life insurance), and the actuarial reserve. Finally, several tables and figures illustrate the main findings of this paper.

19.
We propose an innovative approach to model and predict the outcome of football matches, based on the Poisson autoregression with exogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen, and Rahbek (Journal of Empirical Finance, 2016, 38(B), 640–663). We show that this methodology is particularly suited to modelling the goal distribution of a football team and provides good forecast performance that can be exploited to develop a profitable betting strategy. This paper improves the strand of literature on Poisson-based models by proposing a specification able to capture the main characteristics of the goal distribution. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probability of the match, because they may also incorporate betting volumes or strategic price setting designed to exploit bettors' biases. The out-of-sample performance of the PARX model is better than the reference approach by Dixon and Coles (Applied Statistics, 1997, 46(2), 265–280). We also evaluate our approach in a simple betting strategy applied to English Premier League data for the 2013–2014, 2014–2015, and 2015–2016 seasons. The results show that the return from the betting strategy is larger than 30% in most of the cases considered, and may even exceed 100% under an alternative strategy based on a predetermined threshold, which makes it possible to exploit the inefficiency of the betting market.
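Under the simplest independent-Poisson assumption (a baseline far cruder than PARX; the goal intensities and market odds below are hypothetical), match-outcome probabilities follow from a grid of score probabilities, and a value bet exists when model probability times decimal odds exceeds 1:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

# Hypothetical expected goals for the home and away teams.
lam_home, lam_away, max_goals = 1.6, 1.1, 15

p_home = p_draw = p_away = 0.0
for i in range(max_goals + 1):        # home goals
    for j in range(max_goals + 1):    # away goals
        p = poisson_pmf(i, lam_home) * poisson_pmf(j, lam_away)
        if i > j:
            p_home += p
        elif i == j:
            p_draw += p
        else:
            p_away += p

print(p_home, p_draw, p_away)  # sums to ~1 up to truncation error

# Value-bet rule: back the home side when p_home * odds > 1.
odds_home = 2.6  # hypothetical market odds
print("bet home" if p_home * odds_home > 1 else "no bet")
```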
