Similar literature
20 similar records found.
1.
This paper proposes a new evaluation framework for interval forecasts. Our model-free test can be used to evaluate interval forecasts and high-density regions, potentially discontinuous and/or asymmetric. Using a simple J-statistic, based on the moments defined by the orthonormal polynomials associated with the binomial distribution, this new approach presents many advantages. First, its implementation is extremely easy. Second, it allows separate tests of the unconditional coverage, independence and conditional coverage hypotheses. Third, Monte Carlo simulations show that for realistic sample sizes our GMM test has good small-sample properties. These results are corroborated by an empirical application to the S&P 500 and Nikkei stock market indexes, which confirms that using this GMM test has major consequences for the ex post evaluation of interval forecasts produced by linear versus nonlinear models. Copyright © 2011 John Wiley & Sons, Ltd.
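The test above builds a GMM J-statistic from moment conditions on the sequence of interval "hits". The sketch below is a deliberately simplified, hypothetical version of that idea: it uses only two elementary moment conditions (unconditional coverage and first-order independence) rather than the paper's orthonormal polynomials of the binomial distribution, so it illustrates the mechanics and does not reproduce the authors' statistic.

```python
# Simplified moment-based (GMM-style) check on an interval-forecast hit sequence.
# NOT the paper's test: only two elementary moment conditions are used here.
import numpy as np
from scipy import stats

def simple_j_statistic(hits, coverage):
    """hits: 0/1 array, 1 if the realization fell outside the forecast interval."""
    h = np.asarray(hits, dtype=float)
    p = 1.0 - coverage                       # expected violation rate
    g1 = h - p                               # E[h_t - p] = 0 (unconditional coverage)
    g2 = (h[1:] - p) * (h[:-1] - p)          # E[(h_t - p)(h_{t-1} - p)] = 0 (independence)
    g = np.array([g1.mean(), g2.mean()])
    # Moment variances under the null of i.i.d. Bernoulli(p) hits.
    v = np.array([p * (1 - p), (p * (1 - p)) ** 2])
    n = len(h)
    J = n * np.sum(g ** 2 / v)               # approximately chi-square with 2 df under the null
    return J, stats.chi2.sf(J, df=2)

# Example: 95% interval forecasts evaluated on 500 simulated hits.
rng = np.random.default_rng(0)
hits = rng.binomial(1, 0.05, size=500)
print(simple_j_statistic(hits, coverage=0.95))
```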

2.
Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider one- to 10-day-ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate the strong potential for the use of ensemble prediction in temperature density forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
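A minimal sketch of the debiasing step described above: regress the realized variable on a raw ensemble quantile using quantile regression (here statsmodels' QuantReg). The simulated data and the choice of the 90% quantile are purely illustrative assumptions.

```python
# Debias an ensemble-based quantile forecast with quantile regression.
import numpy as np
import statsmodels.api as sm
from statsmodels.regression.quantile_regression import QuantReg

rng = np.random.default_rng(1)
n = 400
raw_q90 = 20 + 5 * rng.standard_normal(n)                  # 90% quantile taken from the ensemble
realized = raw_q90 - 1.5 + 2.0 * rng.standard_normal(n)    # realized temperature (biased ensemble)

X = sm.add_constant(raw_q90)                               # intercept + raw ensemble quantile
fit = QuantReg(realized, X).fit(q=0.9)                     # estimate the 90% conditional quantile
debiased_q90 = fit.predict(X)                              # corrected quantile forecasts
print(fit.params)
```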

3.
Using quantile regression, this paper explores the predictability of the stock and bond return distributions as a function of economic state variables. The use of quantile regression allows us to examine specific parts of the return distribution such as the tails and the center, and for a sufficiently fine grid of quantiles we can trace out the entire distribution. A univariate quantile regression model is used to examine the marginal stock and bond return distributions, while a multivariate model is used to capture their joint distribution. An empirical analysis on US data shows that economic state variables predict the stock and bond return distributions in quite different ways in terms of, for example, location shifts, volatility and skewness. Comparing the different economic state variables in terms of their out-of-sample forecasting performance, the empirical analysis also shows that the relative accuracy of the state variables varies across the return distribution. Density forecasts based on an assumed normal distribution with forecasted mean and variance are compared to forecasts based on quantile estimates; in general, the latter yield the better performance. Copyright © 2015 John Wiley & Sons, Ltd.

4.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson-multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over-dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computations in terms of the generated augmented variables, particularly for generating forecasts and the predictive density. We present an empirical application using weekly product sales time series in a store to compare the proposed models accommodating over-dispersion with alternative models that ignore over-dispersion, using several model selection criteria, including in-sample fit, out-of-sample forecasting errors and an information criterion. The empirical results show that the proposed over-dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
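As a rough illustration of the compound structure described above, the sketch below draws over-dispersed total sales as a gamma-Poisson (negative binomial) mixture and splits them across products with a Dirichlet-multinomial. It ignores the dynamic state-space and tree-structured parts of the model, and all parameter values are invented for the example.

```python
# Gamma-compound-Poisson totals plus Dirichlet-compound-multinomial shares (toy example).
import numpy as np

rng = np.random.default_rng(2)
n_weeks, n_products = 104, 4
shape, rate = 5.0, 0.05                     # gamma prior on the Poisson rate -> over-dispersion
alpha = np.array([4.0, 3.0, 2.0, 1.0])      # Dirichlet concentration for product shares

totals = np.empty(n_weeks, dtype=int)
sales = np.empty((n_weeks, n_products), dtype=int)
for t in range(n_weeks):
    lam = rng.gamma(shape, 1.0 / rate)               # latent Poisson rate for week t
    totals[t] = rng.poisson(lam)                     # over-dispersed total sales count
    shares = rng.dirichlet(alpha)                    # latent product shares
    sales[t] = rng.multinomial(totals[t], shares)    # Dirichlet-multinomial split

print("variance/mean of totals:", totals.var() / totals.mean())  # > 1, i.e. over-dispersed
```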

5.
Motivated by the importance of coffee to Americans and the significance of the coffee subsector to the US economy, we pursue three notable innovations. First, we augment the traditional Phillips curve model with the coffee price as a predictor, and show that the resulting model outperforms the traditional variant in both in-sample and out-of-sample predictability of US inflation. Second, we demonstrate the need to account for the inherent statistical features of predictors such as persistence, endogeneity, and conditional heteroskedasticity effects when dealing with US inflation. Consequently, we offer robust illustrations to show that the choice of estimator matters for improved US inflation forecasts. Third, the proposed augmented Phillips curve also outperforms time series models such as autoregressive integrated moving average and the fractionally integrated version for both in-sample and out-of-sample forecasts. Our results show that augmenting the traditional Phillips curve with the urban coffee price will produce better forecast results for US inflation only when the statistical effects are captured in the estimation process. Our results are robust to alternative measures of inflation, different data frequencies, higher order moments, multiple data samples and multiple forecast horizons.

6.
The vector multiplicative error model (vector MEM) is capable of analyzing and forecasting multidimensional non-negative valued processes. Usually its parameters are estimated by generalized method of moments (GMM) and maximum likelihood (ML) methods. However, these estimates can be heavily affected by outliers. To overcome this problem, in this paper an alternative approach, the weighted empirical likelihood (WEL) method, is proposed. This method uses moment conditions as constraints, and outliers are detected automatically by performing k-means clustering on the Oja depth values of the innovations. The performance of WEL is evaluated against that of the GMM and ML methods through extensive simulations, in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10-minute returns of the S&P 500 index. The results from both the simulations and the S&P 500 volatility forecasts favour the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.

7.
This paper introduces the idea of adjusting forecasts from a linear time series model, where the adjustment relies on the assumption that this linear model is an approximation of a nonlinear time series model. This way of creating forecasts could be convenient when inference for a nonlinear model is impossible, complicated or unreliable in small samples. The size of the forecast adjustment can be based on the estimation results for the linear model and on other data properties such as the first few moments or autocorrelations. An illustration is given for a first-order diagonal bilinear time series model, which in certain respects can be approximated by a linear ARMA(1, 1) model. For this case, the forecast adjustment is easy to derive, which is convenient as the particular bilinear model is indeed cumbersome to analyze in practice. An application to a range of inflation series for low-income countries shows that such an adjustment can lead to some improved forecasts, although the gain is small for this particular bilinear time series model.

8.
Value-at-risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram-Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset the GCE outperforms estimates generated by both constant and time-varying higher-moment models. Copyright © 2009 John Wiley & Sons, Ltd.
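The sketch below shows one standard textbook form of the Gram-Charlier expansion: the normal density multiplied by Hermite-polynomial terms in skewness and excess kurtosis, with a VaR quantile obtained by numerical root finding on the implied CDF. It is a generic illustration under assumed parameter values, not the paper's time-varying estimator.

```python
# Gram-Charlier expansion of the standard normal density and an implied VaR quantile.
import numpy as np
from scipy import stats, integrate, optimize

def gce_pdf(z, skew, exkurt):
    h3 = z**3 - 3 * z                        # Hermite polynomial He3
    h4 = z**4 - 6 * z**2 + 3                 # Hermite polynomial He4
    return stats.norm.pdf(z) * (1 + skew / 6 * h3 + exkurt / 24 * h4)

def gce_var(alpha, skew, exkurt, mu=0.0, sigma=1.0):
    cdf = lambda z: integrate.quad(gce_pdf, -12, z, args=(skew, exkurt))[0]
    z_alpha = optimize.brentq(lambda z: cdf(z) - alpha, -12, 12)
    return -(mu + sigma * z_alpha)           # VaR reported as a positive loss number

# Example: 1% VaR for a negatively skewed, fat-tailed return distribution (assumed moments).
print(gce_var(alpha=0.01, skew=-0.5, exkurt=1.5, mu=0.0005, sigma=0.012))
```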

9.
This paper investigates the forecasting performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard and Poor's 500 Index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of volatility from intra-day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly and monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
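A minimal sketch of the exercise above using the `arch` package: fit GARCH(1,1) under several error distributions and compare one-step variance forecasts. The paper estimates nine distributions; `arch` ships only a handful (normal, Student's t, skewed t, GED), and the simulated return series is a stand-in for the S&P 500 futures data.

```python
# GARCH(1,1) under alternative error distributions with the `arch` package.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(3)
returns = rng.standard_t(df=5, size=2000)    # toy fat-tailed "daily returns" in percent

forecasts = {}
for dist in ("normal", "t", "skewt", "ged"):
    res = arch_model(returns, vol="GARCH", p=1, q=1, dist=dist).fit(disp="off")
    forecasts[dist] = float(res.forecast(horizon=1).variance.iloc[-1, 0])

print(forecasts)   # compare one-day-ahead variance forecasts across error distributions
```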

10.
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
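A minimal sketch of the kind of two-regime SETAR data-generating process used in such Monte Carlo experiments; the threshold, delay of one period and regime coefficients below are illustrative choices, not the paper's actual design.

```python
# Simulate a two-regime SETAR(2; 1, 1) data-generating process.
import numpy as np

def simulate_setar(n, phi_low, phi_high, threshold=0.0, sigma=1.0, burn=200, seed=0):
    rng = np.random.default_rng(seed)
    y = np.zeros(n + burn)
    eps = sigma * rng.standard_normal(n + burn)
    for t in range(1, n + burn):
        phi = phi_low if y[t - 1] <= threshold else phi_high   # regime depends on y_{t-1}
        y[t] = phi * y[t - 1] + eps[t]
    return y[burn:]

y = simulate_setar(n=150, phi_low=0.9, phi_high=0.3)   # macro-sized sample
print(y[:5])
```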

11.
This paper examines the forecasting ability of the nonlinear specifications of the market model. We propose a conditional two-moment market model with time-varying systematic covariance (beta) risk, modelled as a mean-reverting process in a state-space framework via the Kalman filter algorithm. In addition, we account for the systematic component of co-skewness and co-kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time-varying market model approaches, which outperform linear model specifications both in terms of model fit and predictability. More precisely, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.
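A minimal sketch of the time-varying beta idea: a scalar Kalman filter for a market model in which beta follows a mean-reverting AR(1) state. The parameters are fixed at assumed values for illustration (in practice they would be estimated, e.g. by maximum likelihood), and the higher-moment extensions discussed in the abstract are omitted.

```python
# Kalman filter for a market model with mean-reverting time-varying beta:
#   r_t = beta_t * rm_t + eps_t,   beta_t = mu + phi * (beta_{t-1} - mu) + eta_t.
import numpy as np

def kalman_tv_beta(r, rm, mu=1.0, phi=0.97, q=1e-4, h=1e-4, beta0=1.0, p0=1.0):
    n = len(r)
    beta_filt = np.empty(n)
    beta, p = beta0, p0
    for t in range(n):
        # Prediction step for the mean-reverting state.
        beta_pred = mu + phi * (beta - mu)
        p_pred = phi**2 * p + q
        # Update step using r_t = beta_t * rm_t + eps_t.
        f = rm[t]**2 * p_pred + h                 # forecast-error variance
        k = p_pred * rm[t] / f                    # Kalman gain
        beta = beta_pred + k * (r[t] - beta_pred * rm[t])
        p = (1 - k * rm[t]) * p_pred
        beta_filt[t] = beta
    return beta_filt

rng = np.random.default_rng(4)
rm = 0.01 * rng.standard_normal(500)              # toy market returns
true_beta = 1.0 + 0.1 * np.cumsum(0.02 * rng.standard_normal(500))
r = true_beta * rm + 0.01 * rng.standard_normal(500)
print(kalman_tv_beta(r, rm)[-5:])                 # filtered beta at the end of the sample
```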

12.
13.
In this paper, we investigate the time series properties of S&P 100 volatility and the forecasting performance of different volatility models. We consider several nonparametric and parametric volatility measures, such as implied, realized and model-based volatility, and show that these volatility processes exhibit extremely slow mean-reverting behavior and possible long memory. For this reason, we explicitly model the near-unit root behavior of volatility and construct median-unbiased forecasts by approximating the finite-sample forecast distribution using bootstrap methods. Furthermore, we produce prediction intervals for the next-period implied volatility that provide important information about the uncertainty surrounding the point forecasts. Finally, we apply intercept corrections to forecasts from misspecified models, which dramatically improve the accuracy of the volatility forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
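A minimal sketch of the bootstrap ingredient described above: fit a persistent AR(1) to a (log-)volatility series, resample residuals to approximate the one-step-ahead forecast distribution, and read off its median and an interval. This simplifies the paper's median-unbiased, near-unit-root treatment to its most basic building block.

```python
# Bootstrap a one-step-ahead forecast distribution for a persistent AR(1) volatility series.
import numpy as np

def ar1_bootstrap_forecast(x, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    y, ylag = x[1:], x[:-1]
    # OLS fit of x_t = c + phi * x_{t-1} + e_t
    X = np.column_stack([np.ones_like(ylag), ylag])
    c, phi = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - (c + phi * ylag)
    # Bootstrap the one-step-ahead forecast distribution by resampling residuals.
    draws = c + phi * x[-1] + rng.choice(resid, size=n_boot, replace=True)
    return np.percentile(draws, [10, 50, 90])     # 80% prediction interval and median forecast

rng = np.random.default_rng(5)
logvol = np.zeros(1000)
for t in range(1, 1000):                          # highly persistent toy log-volatility
    logvol[t] = 0.02 + 0.98 * logvol[t - 1] + 0.1 * rng.standard_normal()
print(ar1_bootstrap_forecast(logvol))
```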

14.
This paper examines the predictive relationship of consumption-related and news-related Google Trends data to changes in private consumption in the USA. The results suggest that (1) Google Trends-augmented models provide additional information about consumption over and above survey-based consumer sentiment indicators, (2) consumption-related Google Trends data provide information about pre-consumption research trends, (3) news-related Google Trends data provide information about changes in durable goods consumption, and (4) the combination of news- and consumption-related data significantly improves forecasting models. We demonstrate that applying these insights improves forecasts of private consumption growth over forecasts that do not utilize Google Trends data and over forecasts that use Google Trends data but do not take into account the specific ways in which such data inform forecasts.

15.
In order to provide short-run forecasts of headline and core HICP inflation for France, we assess the forecasting performance of a large set of economic indicators, individually and jointly, as well as using dynamic factor models. We run out-of-sample forecasts implementing the Stock and Watson (1999) methodology. We find that, according to usual statistical criteria, the combination of several indicators, in particular those derived from surveys, provides better results than factor models, even after pre-selection of the variables included in the panel. However, factors included in VAR models exhibit more stable forecasting performance over time. Results for the HICP excluding unprocessed food and energy are very encouraging. Moreover, we show that the aggregation of forecasts of subcomponents exhibits the best performance for projecting total inflation and that it is robust to data snooping. Copyright © 2007 John Wiley & Sons, Ltd.

16.
This paper assesses the informational content of alternative realized volatility estimators, the daily range and implied volatility in multi-period out-of-sample Value-at-Risk (VaR) predictions. We use the recently proposed Realized GARCH model combined with the skewed Student's t distribution for the innovations process and a Monte Carlo simulation approach in order to produce the multi-period VaR estimates. Our empirical findings, based on the S&P 500 stock index, indicate that almost all realized and implied volatility measures can produce VaR forecasts that are precise in both statistical and regulatory terms across forecasting horizons, with implied volatility being especially accurate in monthly VaR forecasts. The daily range produces inferior forecasting results in terms of regulatory accuracy and Basel II compliance. However, robust realized volatility measures, which are immune to microstructure noise bias or price jumps, generate superior VaR estimates in terms of capital efficiency, as they minimize the opportunity cost of capital and the Basel II regulatory capital. Copyright © 2013 John Wiley & Sons, Ltd.

17.
Multifractal models have recently been introduced as a new type of data-generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson-Durbin algorithm. Its out-of-sample performance is compared against other popular time series specifications. Using an intra-day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV-ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and evaluation criteria), the new model often performs significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
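A minimal sketch of forecasting with the Levinson-Durbin recursion, using statsmodels. Here the recursion is applied to the sample autocovariances of a toy persistent series standing in for realized volatility; in the paper the implied autocovariances would come from the fitted multifractal model, so this only illustrates the forecasting step.

```python
# Best linear (AR-type) one-step forecast from the Levinson-Durbin recursion.
import numpy as np
from statsmodels.tsa.stattools import levinson_durbin

rng = np.random.default_rng(6)
n = 2000
rv = np.zeros(n)
for t in range(1, n):                         # toy persistent "realized volatility" proxy
    rv[t] = 0.05 + 0.95 * rv[t - 1] + 0.1 * rng.standard_normal()

nlags = 20
sigma_v, arcoefs, pacf, sigma, phi = levinson_durbin(rv, nlags=nlags, isacov=False)

# One-step-ahead best linear forecast from the last `nlags` demeaned observations.
mu = rv.mean()
history = rv[-nlags:][::-1] - mu              # most recent observation first
forecast = mu + arcoefs @ history
print(forecast)
```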

18.
This study compares the performance of two forecasting models of the 10-year Treasury rate: a random walk (RW) model and an augmented-autoregressive (A-A) model which utilizes the information in the expected inflation rate. For 1993-2008, the RW and A-A forecasts (with different lead times and forecast horizons) are generally unbiased and accurately predict directional change under symmetric loss. However, the A-A forecasts outperform the RW, suggesting that the expected inflation rate (as a leading indicator) helps improve forecast accuracy. This finding is important since bond market efficiency implies that the RW forecasts are optimal and cannot be improved. Copyright © 2009 John Wiley & Sons, Ltd.

19.
We use real-time macroeconomic variables and combination forecasts with both time-varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time-varying-coefficient autoregressive models: with Gaussian errors, with stochastic volatility errors, and with moving average stochastic volatility errors. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time-varying weights. We also find that variable selection, allowing for time-varying lag length, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
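A minimal sketch of combining a panel of individual forecasts with equal weights versus simple performance-based (inverse past-MSE) weights re-estimated each period. The paper's time-varying-weight schemes are more elaborate; the data and window length here are invented for illustration.

```python
# Equal-weight versus inverse-MSE forecast combination (toy example).
import numpy as np

def combine(forecasts, actuals, window=20):
    """forecasts: (T, M) array of M individual forecasts; actuals: (T,) outcomes."""
    T, M = forecasts.shape
    equal = forecasts.mean(axis=1)                      # equal-weight combination
    weighted = np.full(T, np.nan)
    for t in range(window, T):
        mse = ((forecasts[t - window:t] - actuals[t - window:t, None]) ** 2).mean(axis=0)
        w = (1.0 / mse) / (1.0 / mse).sum()             # inverse-MSE weights, re-estimated each period
        weighted[t] = forecasts[t] @ w
    return equal, weighted

rng = np.random.default_rng(7)
T, M = 200, 5
actuals = 0.1 * rng.standard_normal(T).cumsum()
forecasts = actuals[:, None] + 0.5 * rng.standard_normal((T, M))   # noisy individual forecasts
equal, weighted = combine(forecasts, actuals)
mask = ~np.isnan(weighted)
print("equal-weight RMSE :", np.sqrt(((equal[mask] - actuals[mask]) ** 2).mean()))
print("inverse-MSE RMSE  :", np.sqrt(((weighted[mask] - actuals[mask]) ** 2).mean()))
```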

20.
We show that contrasting results on trading volume's predictive role for short-horizon reversals in stock returns can be reconciled by conditioning on different investor types' trading. Using unique trading data by investor type from Korea, we provide explicit evidence of three distinct mechanisms leading to contrasting outcomes: (i) informed buying: price increases accompanied by high institutional buying volume are less likely to reverse; (ii) liquidity selling: price declines accompanied by high institutional selling volume in institutional investor habitat are more likely to reverse; (iii) attention-driven speculative buying: price increases accompanied by high individual buying volume in individual investor habitat are more likely to reverse. Our approach to predicting which mechanism will prevail improves reversal forecasts following return shocks: an augmented contrarian strategy utilizing our ex ante formulation increases short-horizon reversal strategy profitability by 40-70% in the US and Korean stock markets.
