Similar Literature
20 similar documents retrieved.
1.
This paper addresses several questions surrounding volatility forecasting and its use in the estimation of optimal hedging ratios. Specifically: Are there economic gains by nesting time‐series econometric models (GARCH) and dynamic programming models (therefore forecasting volatility several periods out) in the estimation of hedging ratios whilst accounting for volatility in the futures bid–ask spread? Are the forecasted hedging ratios (and wealth generated) from the nested bid–ask model statistically and economically different than standard approaches? Are there times when a trader following a basic model that does not forecast outperforms a trader using the nested bid–ask model? On all counts the results are encouraging: a trader that accounts for the bid–ask spread and forecasts volatility several periods in the nested model will incur lower transactions costs and gain significantly when the market suddenly and abruptly turns. Copyright © 2005 John Wiley & Sons, Ltd.
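The nested bid–ask/GARCH model cannot be reconstructed from the abstract alone, but the quantity at its core is the minimum-variance hedge ratio, h_t = Cov_t(spot, futures) / Var_t(futures). The following Python sketch tracks that ratio with an EWMA covariance recursion as a simple stand-in for the paper's volatility forecasts; the function name, decay parameter and simulated data are illustrative assumptions.

```python
import numpy as np

def ewma_hedge_ratios(spot_ret, fut_ret, lam=0.94):
    """Minimum-variance hedge ratio h_t = Cov_t(spot, fut) / Var_t(fut),
    with the conditional moments tracked by an EWMA recursion -- a simple
    stand-in for the bivariate GARCH forecasts used in the paper."""
    spot = np.asarray(spot_ret, float)
    fut = np.asarray(fut_ret, float)
    cov = np.cov(spot, fut)[0, 1]      # initialise with sample moments
    var = np.var(fut)
    ratios = np.empty(len(spot))
    for t in range(len(spot)):
        ratios[t] = cov / var
        cov = lam * cov + (1 - lam) * spot[t] * fut[t]
        var = lam * var + (1 - lam) * fut[t] ** 2
    return ratios

# Illustrative usage with simulated returns
rng = np.random.default_rng(0)
fut = rng.normal(0, 0.01, 500)
spot = 0.9 * fut + rng.normal(0, 0.003, 500)
print(ewma_hedge_ratios(spot, fut)[-1])   # hedge ratio implied for the next period
```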

2.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
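Pitman's test for comparing paired one-step-ahead forecast errors reduces to testing whether the correlation between the sum and the difference of the two error series is zero. A minimal sketch, with simulated errors standing in for the lynx-data forecasts:

```python
import numpy as np
from scipy import stats

def pitman_test(err_a, err_b):
    """Pitman's test for equal variances of paired one-step-ahead forecast
    errors: under H0 the correlation between the sum and the difference of
    the two error series is zero."""
    e1, e2 = np.asarray(err_a, float), np.asarray(err_b, float)
    d, s = e1 - e2, e1 + e2
    r = np.corrcoef(d, s)[0, 1]
    n = len(e1)
    t_stat = r * np.sqrt(n - 2) / np.sqrt(1 - r ** 2)
    p_value = 2 * stats.t.sf(abs(t_stat), df=n - 2)
    return t_stat, p_value

# Illustrative usage: model A has genuinely smaller errors than model B
rng = np.random.default_rng(1)
errors_a = rng.normal(0, 1.0, 100)
errors_b = rng.normal(0, 1.5, 100)
print(pitman_test(errors_a, errors_b))
```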

3.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on the sum of those variables, and it is extended to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over‐dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series in a store to compare the proposed models accommodating over‐dispersion with alternative models that do not allow for over‐dispersion, using several model selection criteria, including in‐sample fit, out‐of‐sample forecasting errors and information criteria. The empirical results show that the proposed over‐dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
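A small simulation of the two building blocks the abstract names, purely for illustration (parameter values are arbitrary assumptions): gamma compound Poisson draws exhibit the over-dispersion the model targets, and independent Poisson counts conditioned on their total are multinomial, which is the relationship used to stack the hierarchy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gamma-compound-Poisson (negative binomial) draws are over-dispersed:
# Var(y) = mu + mu**2 / shape > mu, unlike the plain Poisson.
shape, mu = 2.0, 10.0
lam = rng.gamma(shape, mu / shape, size=100_000)
y = rng.poisson(lam)
print(y.mean(), y.var())             # variance well above the mean

# Poisson-multinomial relationship used for the hierarchical tree:
# independent Poisson counts, conditioned on their sum, are multinomial
# with probabilities proportional to the individual rates.
rates = np.array([4.0, 3.0, 1.0])
counts = rng.poisson(rates, size=(100_000, 3))
total = counts.sum(axis=1)
shares = counts[total == 8].mean(axis=0) / 8
print(shares, rates / rates.sum())   # empirical shares track lambda_i / sum(lambda)
```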

4.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
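The equal-weight combination the paper finds hard to beat is trivial to implement; the inverse-MSE scheme below is included only as one simple example of time-varying weights and is not necessarily the weighting rule used by the authors.

```python
import numpy as np

def equal_weight_combination(forecasts):
    """Combine point forecasts from several models with equal weights.
    `forecasts` is an (n_models, horizon) array; the combined forecast is
    the cross-model average at each horizon."""
    return np.mean(np.asarray(forecasts, float), axis=0)

def inverse_mse_weights(past_errors):
    """A simple time-varying alternative: weight each model by the inverse
    of its historical mean squared error."""
    mse = np.mean(np.asarray(past_errors, float) ** 2, axis=1)
    w = 1.0 / mse
    return w / w.sum()

# Illustrative usage with three hypothetical inflation forecasts
fc = np.array([[2.1, 2.2], [1.8, 1.9], [2.4, 2.3]])
errs = np.array([[0.1, -0.2, 0.3], [0.4, 0.5, -0.4], [0.2, -0.1, 0.1]])
print(equal_weight_combination(fc))
print(inverse_mse_weights(errs) @ fc)
```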

5.
A short‐term mixed‐frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one‐factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out‐of‐sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high‐frequency indicators significantly improve the real‐time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

6.
Modeling online auction prices is a popular research topic among statisticians and marketing analysts. Recent research mainly focuses on two directions: one is the functional data analysis (FDA) approach, in which the price–time relationship is modeled by a smooth curve, and the other is the point process approach, which directly models the arrival process of bidders and bids. In this paper, a novel model for the bid arrival process using a self‐exciting point process (SEPP) is proposed and applied to forecast auction prices. The FDA and point process approaches are linked together by using functional data analysis techniques to describe the intensity of the bid arrival point process. Using the SEPP to model the bid arrival process, many stylized facts in online auction data can be captured. We also develop a simulation‐based forecasting procedure using the estimated SEPP intensity and historical bidding increments. In particular, prediction intervals for the terminal price of merchandise can be constructed. Applications to eBay auction data of Harry Potter books and Microsoft Xbox show that the SEPP model provides more accurate and more informative forecasting results than traditional methods. Copyright © 2014 John Wiley & Sons, Ltd.
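The conditional intensity of a self-exciting (Hawkes) point process with an exponential kernel is lambda(t) = mu + alpha * sum over past events of exp(-beta * (t - t_i)). A brute-force numpy sketch of that intensity, with made-up bid times and parameter values; the paper additionally describes the intensity with functional data analysis, which is not reproduced here.

```python
import numpy as np

def hawkes_intensity(event_times, t_grid, mu=0.2, alpha=0.8, beta=1.5):
    """Conditional intensity of a self-exciting (Hawkes) point process with
    an exponential kernel: each past bid raises the arrival rate of further
    bids, and the excitation decays at rate beta."""
    events = np.asarray(event_times, float)
    lam = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        past = events[events < t]
        lam[k] = mu + alpha * np.sum(np.exp(-beta * (t - past)))
    return lam

# Illustrative usage: bid times (in hours) from a hypothetical auction
bids = [0.5, 0.7, 0.75, 3.0, 6.2, 6.3, 6.35]
grid = np.linspace(0, 7, 8)
print(hawkes_intensity(bids, grid))
```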

7.
Based on a vector error correction model we produce conditional euro area inflation forecasts. We use real‐time data on M3 and HICP, and include real GDP, the 3‐month EURIBOR and the 10‐year government bond yield as control variables. Real money growth and the term spread enter the system as stationary linear combinations. Missing and outlying values are substituted by model‐based estimates using all available data information. In general, the conditional inflation forecasts are consistent with the European Central Bank's assessment of liquidity conditions for future inflation prospects. The evaluation of inflation forecasts under different monetary scenarios reveals the importance of keeping track of the money growth rate, in particular at the end of 2005. Copyright © 2009 John Wiley & Sons, Ltd.

8.
Financial data series are often described as exhibiting two non‐standard time series features. First, variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA–ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1‐month Euromarket interest rate during the period 1986–1989, the ARFIMA–ARCH (5,d,2/4) model with non‐integer d is selected by AIC. Model‐based out‐of‐sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA–ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.
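Fractional integration with non-integer d rests on the binomial expansion of (1 − L)^d. A self-contained sketch of the fractional difference filter; the truncation length and the simulated series are assumptions of the example, and the full ARFIMA–ARCH estimation is not reproduced.

```python
import numpy as np

def frac_diff(x, d, truncation=1000):
    """Apply the fractional difference operator (1 - L)**d via its binomial
    expansion: weights w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    x = np.asarray(x, float)
    n_w = min(len(x), truncation)
    w = np.empty(n_w)
    w[0] = 1.0
    for k in range(1, n_w):
        w[k] = w[k - 1] * (k - 1 - d) / k
    out = np.empty(len(x))
    for t in range(len(x)):
        rev = x[t::-1][:len(w)]          # x_t, x_{t-1}, ... back to the truncation
        out[t] = w[:len(rev)] @ rev
    return out

# Illustrative usage: differencing with a non-integer d between 0 and 1
rng = np.random.default_rng(3)
series = np.cumsum(rng.normal(size=500))
print(frac_diff(series, d=0.4)[:5])
```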

9.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the use of the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using (possibly misspecified) models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other 'standard' combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
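Computing the SIC/AIC from model residuals is straightforward; the exponential weighting of SIC differences below is one common way to turn an information criterion into combination weights and is an assumption of this sketch, not necessarily the authors' exact rule.

```python
import numpy as np

def aic_sic(residuals, n_params):
    """Gaussian AIC and Schwarz (SIC/BIC) criteria from a model's residuals;
    lower values indicate the preferred model."""
    resid = np.asarray(residuals, float)
    n = len(resid)
    loglik = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)
    aic = -2 * loglik + 2 * n_params
    sic = -2 * loglik + n_params * np.log(n)
    return aic, sic

def sic_weights(sic_values):
    """Turn SIC values into combination weights: exp(-0.5 * delta_i),
    normalised to sum to one, so better-ranked models get more weight."""
    sic_values = np.asarray(sic_values, float)
    delta = sic_values - sic_values.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Illustrative usage for two competing models
rng = np.random.default_rng(4)
r1, r2 = rng.normal(0, 1.0, 200), rng.normal(0, 1.2, 200)
crit = [aic_sic(r1, 3)[1], aic_sic(r2, 5)[1]]
print(sic_weights(crit))
```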

10.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they represent nicely the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.
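The data-driven half of the comparison, a principal-components ("diffusion index") forecast, can be sketched in a few lines; the DSGE half cannot. Variable names and the simulated panel are illustrative assumptions.

```python
import numpy as np

def pc_factors(X, n_factors=2):
    """Extract principal-component factors from a standardised panel of
    predictors (T x N) via the singular value decomposition."""
    X = np.asarray(X, float)
    Z = (X - X.mean(0)) / X.std(0)
    U, s, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_factors] * s[:n_factors]

def factor_forecast(y, factors):
    """One-step-ahead forecast of y from its own lag and lagged factors via
    OLS (a diffusion-index style regression)."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1], factors[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    x_last = np.concatenate(([1.0], [y[-1]], factors[-1]))
    return x_last @ beta

# Illustrative usage with a simulated predictor panel
rng = np.random.default_rng(5)
panel = rng.normal(size=(200, 40))
target = 0.5 * panel[:, 0] + rng.normal(0, 0.5, 200)
print(factor_forecast(target, pc_factors(panel)))
```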

11.
Conventional wisdom holds that restrictions on low‐frequency dynamics among cointegrated variables should provide more accurate short‐ to medium‐term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long‐term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short‐ and medium‐term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling‐window technique to make multiple out‐of‐sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving‐average terms of order >1. ECMs perform better over medium‐term time horizons for series with no moving average terms. The results suggest a need to distinguish between 'sequential' and 'synchronous' forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
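The rolling-window, multi-step out-of-sample design described here is model-agnostic. A generic sketch with a random-walk benchmark plugged in (an ARIMA or ECM fit-and-forecast function could be substituted for it); window and horizon lengths are illustrative.

```python
import numpy as np

def rolling_forecast_errors(series, fit_and_forecast, window=120, steps=10):
    """Generic rolling-window evaluation: re-estimate the model on each
    window and record h-step-ahead forecast errors, h = 1..steps."""
    y = np.asarray(series, float)
    errors = []
    for start in range(0, len(y) - window - steps + 1):
        train = y[start:start + window]
        actual = y[start + window:start + window + steps]
        errors.append(actual - fit_and_forecast(train, steps))
    return np.array(errors)               # shape: (n_windows, steps)

def random_walk_forecast(train, steps):
    """Naive benchmark: repeat the last observed value at every horizon."""
    return np.full(steps, train[-1])

# Illustrative usage on a simulated exchange-rate-like series
rng = np.random.default_rng(6)
fx = np.cumsum(rng.normal(0, 0.005, 400))
err = rolling_forecast_errors(fx, random_walk_forecast, window=200, steps=40)
print(np.sqrt((err ** 2).mean(axis=0))[:5])   # RMSE by forecast horizon
```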

12.
Volatility forecasting remains an active area of research with no current consensus as to the model that provides the most accurate forecasts, though Hansen and Lunde (2005) have argued that in the context of daily exchange rate returns nothing can beat a GARCH(1,1) model. This paper extends that line of research by utilizing intra‐day data and obtaining daily volatility forecasts from a range of models based upon the higher‐frequency data. The volatility forecasts are appraised using four different measures of 'true' volatility and further evaluated using regression tests of predictive power, forecast encompassing and forecast combination. Our results show that the daily GARCH(1,1) model is largely inferior to all other models, whereas the intra‐day unadjusted‐data GARCH(1,1) model generally provides superior forecasts compared to all other models. Hence, while it appears that a daily GARCH(1,1) model can be beaten in obtaining accurate daily volatility forecasts, an intra‐day GARCH(1,1) model cannot be. Copyright © 2011 John Wiley & Sons, Ltd.
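A GARCH(1,1) variance forecast follows the recursion sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t. A minimal sketch with fixed (not estimated) parameters, shown only to make the object being compared concrete; in practice the parameters would be estimated by maximum likelihood.

```python
import numpy as np

def garch11_forecast(returns, omega=1e-6, alpha=0.05, beta=0.92):
    """One-step-ahead GARCH(1,1) variance forecast,
    sigma2_{t+1} = omega + alpha * r_t**2 + beta * sigma2_t,
    iterated through the sample from the sample variance."""
    r = np.asarray(returns, float)
    sigma2 = r.var()                      # start at the sample variance
    for ret in r:
        sigma2 = omega + alpha * ret ** 2 + beta * sigma2
    return sigma2                         # variance forecast for the next period

# Illustrative usage on simulated daily returns
rng = np.random.default_rng(10)
rets = rng.normal(0, 0.01, 1000)
print(np.sqrt(garch11_forecast(rets)))    # next-day volatility forecast
```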

13.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rates on the Japanese government bond market. The Nelson–Siegel type models in state‐space framework considerably outperform the benchmark simple time series forecast models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former remains comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common phenomenon of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
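The dynamic Nelson–Siegel model represents the whole curve with three factors whose loadings are known functions of maturity. A sketch of the loadings and the cross-sectional fit for a single date; the decay parameter 0.0609 is the value popularised by Diebold and Li for maturities in months, and the yield numbers are made up.

```python
import numpy as np

def nelson_siegel_loadings(maturities, lam=0.0609):
    """Nelson-Siegel factor loadings for level, slope and curvature at the
    given maturities (in months); lam sets where the curvature loading peaks."""
    tau = np.asarray(maturities, float)
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curvature = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curvature])

def fit_ns_factors(yields, maturities, lam=0.0609):
    """Cross-sectional OLS of observed yields on the loadings gives the
    level/slope/curvature factors for one date."""
    X = nelson_siegel_loadings(maturities, lam)
    beta, *_ = np.linalg.lstsq(X, np.asarray(yields, float), rcond=None)
    return beta

# Illustrative usage with a hypothetical yield curve (maturities in months, yields in percent)
mats = np.array([3, 6, 12, 24, 60, 120])
curve = np.array([0.10, 0.12, 0.15, 0.25, 0.60, 1.10])
print(fit_ns_factors(curve, mats))
```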

14.
This paper applies a triple‐choice ordered probit model, corrected for nonstationarity, to forecast monetary decisions of the Reserve Bank of Australia. The forecast models incorporate a mix of monthly and quarterly macroeconomic time series. Forecast combination is used as an alternative to one multivariate model to improve the accuracy of out‐of‐sample forecasts. This accuracy is evaluated with scoring functions, which are also used to construct adaptive weights for combining probability forecasts. This paper finds that combined forecasts outperform multivariate models. These results are robust to different sample sizes and estimation windows. Copyright © 2011 John Wiley & Sons, Ltd.
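Scoring three-category probability forecasts (ease, hold, tighten) and turning the scores into adaptive combination weights can be sketched with the quadratic (Brier) score; the inverse-score weighting below is an illustrative choice, not necessarily the authors' scoring function.

```python
import numpy as np

def brier_score(prob_forecasts, outcomes, n_classes=3):
    """Mean quadratic probability (Brier) score for categorical forecasts
    over the classes {0: ease, 1: hold, 2: tighten}; lower is better."""
    p = np.asarray(prob_forecasts, float)            # shape (T, n_classes)
    y = np.eye(n_classes)[np.asarray(outcomes)]      # one-hot actual outcomes
    return np.mean(np.sum((p - y) ** 2, axis=1))

def adaptive_weights(scores):
    """Inverse-score weights: models with smaller (better) scores get
    proportionally more weight in the combined probability forecast."""
    s = np.asarray(scores, float)
    w = 1.0 / s
    return w / w.sum()

# Illustrative usage with two hypothetical forecasting models
outcomes = np.array([1, 1, 2, 1, 0, 1])
model_a = np.tile([0.2, 0.6, 0.2], (6, 1))
model_b = np.tile([0.34, 0.33, 0.33], (6, 1))
scores = [brier_score(model_a, outcomes), brier_score(model_b, outcomes)]
w = adaptive_weights(scores)
print(w, w[0] * model_a[0] + w[1] * model_b[0])      # combined probabilities
```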

15.
In this paper, we examine a relatively novel form of gambling, spread (or index) betting, that overlaps with practices in conventional financial markets. In this form of betting, a number of bookmakers quote bid–offer spreads about the result of some future event. Bettors may buy (sell) at the top (bottom) end of a spread. We hypothesize that the existence of an outlying spread may provide uninformed traders with forecasting information that can be used to develop improved trading strategies. Using data from a popular spread betting market in the United Kingdom, we find that the price obtaining at the market mid‐point does indeed provide a better forecast of asset values than that implied in the outlying spread. We further show that this information can be used to develop trading strategies leading to returns that are consistently positive and superior to those from noise trading. Copyright © 2005 John Wiley & Sons, Ltd.

16.
Several studies have tested for long‐range dependence in macroeconomic and financial time series but very few have assessed the usefulness of long‐memory models as forecast‐generating mechanisms. This study tests for fractional differencing in the US monetary indices (simple sum and divisia) and compares the out‐of‐sample fractional forecasts to benchmark forecasts. The long‐memory parameter is estimated using Robinson's Gaussian semi‐parametric and multivariate log‐periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out‐of‐sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the divisia series but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd.
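The log-periodogram idea behind the estimators mentioned here regresses the log periodogram on a function of frequency near the origin, and the slope estimates the memory parameter d. A minimal Geweke–Porter–Hudak-style sketch (bandwidth choice and data are assumptions; Robinson's Gaussian semi-parametric and multivariate estimators refine this basic regression):

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Log-periodogram (GPH-style) estimate of the long-memory parameter d:
    regress log I(w_j) on -2*log(2*sin(w_j/2)) over the first
    m = n**power Fourier frequencies."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** power)
    freqs = 2 * np.pi * np.arange(1, m + 1) / n
    periodogram = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    regressor = -2 * np.log(2 * np.sin(freqs / 2))
    X = np.column_stack([np.ones(m), regressor])
    beta, *_ = np.linalg.lstsq(X, np.log(periodogram), rcond=None)
    return beta[1]          # the slope is the estimate of d

# Illustrative usage: a short-memory series should give an estimate near 0
rng = np.random.default_rng(7)
noise = rng.normal(size=2000)
print(gph_estimate(noise))
```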

17.
We propose a wavelet neural network (neuro‐wavelet) model for the short‐term forecast of stock returns from high‐frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non‐stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non‐decimated wavelet‐based multi‐resolution decomposition of an exogenous signal showed a consistently superior forecasting performance. Reasonable forecasting accuracy for the one‐, three‐ and five‐step‐ahead horizons was achieved by the proposed model. The procedure used to build the neuro‐wavelet model is reusable and can be applied to any high‐frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.

18.
Volatility plays a key role in asset and portfolio management and derivatives pricing. As such, accurate measures and good forecasts of volatility are crucial for the implementation and evaluation of asset and derivative pricing models in addition to trading and hedging strategies. However, whilst GARCH models are able to capture the observed clustering effect in asset price volatility in‐sample, they appear to provide relatively poor out‐of‐sample forecasts. Recent research has suggested that this relative failure of GARCH models arises not from a failure of the model but a failure to specify correctly the 'true volatility' measure against which forecasting performance is measured. It is argued that the standard approach of using ex post daily squared returns as the measure of 'true volatility' includes a large noisy component. An alternative measure for 'true volatility' has therefore been suggested, based upon the cumulative squared returns from intra‐day data. This paper implements that technique and reports that, in a dataset of 17 daily exchange rate series, the GARCH model outperforms smoothing and moving average techniques which have been previously identified as providing superior volatility forecasts. Copyright © 2004 John Wiley & Sons, Ltd.
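The 'true volatility' measure described here, realized volatility from cumulative squared intra-day returns, and the noisy daily squared return it replaces are both easy to construct. The simulation below (i.i.d. returns, made-up dimensions) simply illustrates why the intra-day proxy is preferable: both are unbiased for the daily variance, but the squared daily return is far more dispersed.

```python
import numpy as np

def realized_volatility(intraday_returns):
    """'True volatility' proxy built from intra-day data: the cumulative sum
    of squared intra-day returns over each day, far less noisy than the
    single squared daily return."""
    r = np.asarray(intraday_returns, float)   # shape: (days, intervals)
    return (r ** 2).sum(axis=1)

# Illustrative comparison on simulated data
rng = np.random.default_rng(8)
days, intervals, sigma = 500, 48, 0.01
intraday = rng.normal(0, sigma / np.sqrt(intervals), size=(days, intervals))
rv = realized_volatility(intraday)
daily_sq = intraday.sum(axis=1) ** 2
print(rv.mean(), daily_sq.mean())    # both close to sigma**2 = 1e-4
print(rv.std(), daily_sq.std())      # the squared daily return is far noisier
```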

19.
Following recent non‐linear extensions of the present‐value model, this paper examines the out‐of‐sample forecast performance of two parametric and two non‐parametric nonlinear models of stock returns. The parametric models include the standard regime‐switching and the Markov regime‐switching models, whereas the non‐parametric ones are the nearest‐neighbour and the artificial neural network models. We focused on the US stock market using annual observations spanning the period 1872–1999. Evaluation of forecasts was based on two criteria, namely forecast accuracy and forecast encompassing. In terms of accuracy, the Markov and the artificial neural network models produce at least as accurate forecasts as the other models. In terms of encompassing, the Markov model outperforms all the others. Overall, both criteria suggest that the Markov regime‐switching model is the most preferable non‐linear empirical extension of the present‐value model for out‐of‐sample stock return forecasting. Copyright © 2003 John Wiley & Sons, Ltd.
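The filtering step behind a Markov regime-switching forecast can be sketched compactly. The Hamilton filter below uses fixed, made-up parameters rather than maximum-likelihood estimates and a two-state mean/volatility specification chosen only for illustration; the one-step forecast mixes the state means with the predicted state probabilities.

```python
import numpy as np
from scipy import stats

def hamilton_filter(y, mu, sigma, P):
    """Hamilton filter for a two-state Markov switching model with
    state-dependent mean and volatility. Returns filtered state
    probabilities for each observation."""
    y = np.asarray(y, float)
    n, k = len(y), len(mu)
    xi = np.full(k, 1.0 / k)                 # initial state probabilities
    filtered = np.empty((n, k))
    for t in range(n):
        pred = P.T @ xi                      # predicted state probabilities
        lik = stats.norm.pdf(y[t], mu, sigma)
        xi = pred * lik
        xi /= xi.sum()
        filtered[t] = xi
    return filtered

# Illustrative usage with fixed (not estimated) parameters
rng = np.random.default_rng(9)
returns = np.concatenate([rng.normal(0.08, 0.1, 60), rng.normal(-0.05, 0.3, 40)])
mu, sigma = np.array([0.08, -0.05]), np.array([0.1, 0.3])
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # rows: transition probs from each state
probs = hamilton_filter(returns, mu, sigma, P)
next_state = P.T @ probs[-1]
print(next_state @ mu)                       # one-step-ahead return forecast
```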

20.
This paper first shows that survey‐based expectations (SBE) outperform standard time series models in US quarterly inflation out‐of‐sample prediction and that the term structure of survey‐based inflation forecasts has predictive power over the path of future inflation changes. It then proposes some empirical explanations for the forecasting success of survey‐based inflation expectations. We show that SBE pool a large amount of heterogeneous information on inflation expectations and react more flexibly and accurately to macro conditions both contemporaneously and dynamically. We illustrate the flexibility of SBE forecasts in the context of the 2008 financial crisis. Copyright © 2011 John Wiley & Sons, Ltd.
