Similar articles
20 similar articles retrieved (search time: 101 ms)
1.
This paper investigates the forecasting performance of the GARCH(1,1) model when estimated with nine different error distributions on Standard & Poor's 500 index futures returns. By utilizing the theory of realized variance to construct an appropriate ex post measure of volatility from intra‐day data, it is shown that allowing for a leptokurtic error distribution leads to significant improvements in variance forecasts compared to using the normal distribution. This result holds for daily, weekly and monthly forecast horizons. It is also found that allowing for skewness and time variation in the higher moments of the distribution does not further improve forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
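The two ingredients this abstract relies on, a realized-variance proxy built from intraday returns and the GARCH(1,1) conditional-variance recursion, can be sketched as follows. This is a minimal illustrative sketch (function names, parameter values, and the choice of starting variance are mine), not the paper's implementation:

```python
import math

def realized_variance(intraday_prices):
    """Ex post daily variance proxy: the sum of squared intraday log returns."""
    r = [math.log(intraday_prices[i + 1] / intraday_prices[i])
         for i in range(len(intraday_prices) - 1)]
    return sum(x * x for x in r)

def garch11_forecast(returns, omega, alpha, beta):
    """One-step-ahead conditional variance from the GARCH(1,1) recursion
    h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    m = sum(returns) / len(returns)
    h = sum((x - m) ** 2 for x in returns) / len(returns)  # start at sample variance
    for r in returns:
        h = omega + alpha * r * r + beta * h
    return h
```

The realized-variance series serves as the forecast target against which the GARCH forecasts (under normal or leptokurtic errors) are scored.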

2.
This study examines the forecasting accuracy of alternative vector autoregressive models, each in a seven‐variable system comprising, in turn, daily, weekly and monthly foreign exchange (FX) spot rates. The vector autoregressions (VARs) are in non‐stationary, stationary and error‐correction forms and are estimated using OLS. The imposition of Bayesian priors in the OLS estimations also allowed us to obtain another set of results. We find some tendency for the Bayesian estimation method to generate superior forecast measures relative to the OLS method. This result holds whether or not the data sets contain outliers. Also, the best forecasts under the non‐stationary specification outperformed those of the stationary and error‐correction specifications, particularly at long forecast horizons, while the best forecasts under the stationary and error‐correction specifications are generally similar. The findings for the OLS forecasts are consistent with recent simulation results. The predictive ability of the VARs is very weak. Copyright © 2001 John Wiley & Sons, Ltd.
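The OLS-versus-Bayesian contrast in this abstract can be illustrated on the simplest building block of a VAR, a single AR(1) equation. The sketch below is my own simplification (an actual VAR equation regresses on lags of all seven variables, and a full Minnesota prior sets many hyperparameters); the shrinkage version pulls the slope toward a random-walk prior mean in ridge form, assuming unit error variance:

```python
def ols_ar1(y):
    """OLS for one VAR-style equation in its simplest form:
    y_t = c + phi * y_{t-1} + e_t."""
    x, z = y[:-1], y[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxz = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    phi = sxz / sxx
    return mz - phi * mx, phi

def shrunk_ar1(y, lam=1.0, prior_mean=1.0):
    """Minnesota-style shrinkage sketch: the posterior mean of phi is pulled
    toward the random-walk prior mean (ridge form, unit error variance)."""
    x, z = y[:-1], y[1:]
    mx, mz = sum(x) / len(x), sum(z) / len(z)
    sxx = sum((a - mx) ** 2 for a in x)
    sxz = sum((a - mx) * (b - mz) for a, b in zip(x, z))
    phi = (sxz + lam * prior_mean) / (sxx + lam)
    return mz - phi * mx, phi
```

With `lam=0` the shrinkage estimate collapses back to OLS; larger `lam` moves the slope toward the prior mean.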

3.
The track record of a 20‐year history of density forecasts of state tax revenue in Iowa is studied, and potential improvements sought through a search for better‐performing ‘priors’ similar to that conducted three decades ago for point forecasts by Doan, Litterman and Sims (Econometric Reviews, 1984). Comparisons of the point and density forecasts produced under the flat prior are made to those produced by the traditional (mixed estimation) ‘Bayesian VAR’ methods of Doan, Litterman and Sims, as well as to fully Bayesian ‘Minnesota Prior’ forecasts. The actual record and, to a somewhat lesser extent, the record of the alternative procedures studied in pseudo‐real‐time forecasting experiments, share a characteristic: subsequently realized revenues are in the lower tails of the predicted distributions ‘too often’. An alternative empirically based prior is found by working directly on the probability distribution for the vector autoregression parameters—the goal being to discover a better‐performing entropically tilted prior that minimizes out‐of‐sample mean squared error subject to a Kullback–Leibler divergence constraint that the new prior not differ ‘too much’ from the original. We also study the closely related topic of robust prediction appropriate for situations of ambiguity. Robust ‘priors’ are competitive in out‐of‐sample forecasting; despite the freedom afforded the entropically tilted prior, it does not perform better than the simple alternatives. Copyright © 2014 John Wiley & Sons, Ltd.

4.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy‐to‐compute daily return intervals during the modeling, estimation and forecasting stage. They have to stand up to comparable point‐data models of the well‐known capital asset pricing model type—which employ single daily returns based on successive closing prices and might allow for GARCH effects—in a comprehensive out‐of‐sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point‐data competitors in the great majority of cases. This result does not only hold when one‐day‐ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point‐data volatility forecasting tools. Copyright © 2015 John Wiley & Sons, Ltd.

5.
The use of linear error correction models based on stationarity and cointegration analysis, typically estimated with least squares regression, is a common technique for financial time series prediction. In this paper, the same formulation is extended to a nonlinear error correction model using the idea of a kernel‐based implicit nonlinear mapping to a high‐dimensional feature space in which linear model formulations are specified. Practical expressions for the nonlinear regression are obtained in terms of the positive definite kernel function by solving a linear system. The nonlinear least squares support vector machine model is designed within the Bayesian evidence framework that allows us to find appropriate trade‐offs between model complexity and in‐sample model accuracy. From straightforward primal–dual reasoning, the Bayesian framework allows us to derive error bars on the prediction in a similar way as for linear models and to perform hyperparameter and input selection. Starting from the results of the linear modelling analysis, the Bayesian kernel‐based prediction is successfully applied to out‐of‐sample prediction of an aggregated equity price index for the European chemical sector. Copyright © 2006 John Wiley & Sons, Ltd.
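The dual formulation mentioned in this abstract — a nonlinear regression obtained from a positive definite kernel by solving a linear system — can be sketched for the LS-SVM regression case. This is a bare-bones scalar-input sketch with my own function names and hyperparameter defaults, without the Bayesian evidence framework or error bars described in the paper:

```python
import math

def rbf(u, v, sigma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-((u - v) ** 2) / (2.0 * sigma ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ls_svm_fit(xs, ys, gamma=10.0, sigma=1.0):
    """LS-SVM regression in the dual: solve the linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j], sigma) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(xi, x, sigma) for a, xi in zip(alpha, xs))
```

The regularization constant `gamma` plays the complexity/accuracy trade-off role that the paper tunes within the Bayesian evidence framework.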

6.
We present a system for combining the different types of predictions given by a wide category of mechanical trading rules through statistical learning methods (boosting, and several model averaging methods such as Bayesian or simple averaging). Statistical learning methods supply better out‐of‐sample results than most of the single moving average rules in the NYSE Composite Index from January 1993 to December 2002. Moreover, using a filter to reduce trading frequency, the filtered boosting model produces a technical strategy which, although unable to match the returns of the buy‐and‐hold (B&H) strategy during rising periods, does outperform B&H during falling periods and absorbs a considerable part of falls in the market. Copyright © 2008 John Wiley & Sons, Ltd.
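The simplest of the combination schemes this abstract mentions, averaging the signals of several moving-average rules, can be sketched as a majority vote. This is an illustrative sketch with hypothetical window lengths, not the paper's boosting model or its trading filter:

```python
def ma_signal(prices, window):
    """Single mechanical rule: +1 (long) when the price is above its
    moving average, -1 otherwise, 0 during the warm-up period."""
    sig = []
    for t in range(len(prices)):
        if t + 1 < window:
            sig.append(0)
        else:
            ma = sum(prices[t + 1 - window:t + 1]) / window
            sig.append(1 if prices[t] > ma else -1)
    return sig

def combined_signal(prices, windows=(5, 10, 20)):
    """Simple model averaging of several moving-average rules: majority vote
    of the individual signals at each date."""
    sigs = [ma_signal(prices, w) for w in windows]
    out = []
    for t in range(len(prices)):
        s = sum(sig[t] for sig in sigs)
        out.append(1 if s > 0 else (-1 if s < 0 else 0))
    return out
```

Boosting would instead weight the rules by their past performance rather than voting them equally.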

7.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat‐tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first‐order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value‐at‐risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.

8.
Time series of categorical data are not a widely studied research topic. In particular, there is no available work on the Bayesian analysis of categorical time series processes. With the objective of filling that gap, in the present paper we consider the problem of Bayesian analysis, including Bayesian forecasting, for time series of categorical data, which is modelled by Pegram's mixing operator, applicable to both ordinal and nominal data structures. In particular, we consider Pegram's operator‐based autoregressive process for the analysis. Real datasets on infant sleep status are analysed for illustration. We also illustrate that Bayesian forecasting is more accurate than the corresponding frequentist approach when we intend to forecast a large time gap ahead. Copyright © 2016 John Wiley & Sons, Ltd.
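The Pegram-operator AR(1) process named in this abstract mixes the previous observation with an independent draw from the marginal distribution. A minimal simulation sketch (the category names and probabilities below are hypothetical, loosely echoing the infant-sleep example):

```python
import random

def pegram_ar1(marginal, phi, n, seed=0):
    """Pegram's mixing AR(1) for categorical data: with probability phi the
    series repeats its previous value; otherwise it draws afresh from the
    marginal distribution (a dict of category -> probability)."""
    rng = random.Random(seed)
    cats = list(marginal)

    def draw():
        u, cum = rng.random(), 0.0
        for c in cats:
            cum += marginal[c]
            if u < cum:
                return c
        return cats[-1]  # guard against floating-point round-off

    x = [draw()]
    for _ in range(n - 1):
        x.append(x[-1] if rng.random() < phi else draw())
    return x
```

The same mixing construction works for nominal and ordinal categories alike, which is the flexibility the abstract highlights.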

9.
Bayesian methods for assessing the accuracy of dynamic financial value‐at‐risk (VaR) forecasts have not been considered in the literature. Such methods are proposed in this paper. Specifically, Bayes factor analogues of popular frequentist tests for independence of violations from, and for correct coverage of a time series of, dynamic quantile forecasts are developed. To evaluate the relevant marginal likelihoods, analytic integration methods are utilized when possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for VaR in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
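The "usual Bayesian interval estimate for a proportion" that this abstract examines can be sketched for the VaR-violation rate. Under a uniform Beta(1,1) prior the posterior for the violation probability is Beta(1 + x, 1 + n − x); the sketch below uses a normal approximation to that posterior to avoid special functions (my own simplification, not the paper's exact interval):

```python
import math
from statistics import NormalDist

def var_violation_interval(n_obs, n_viol, level=0.95):
    """Approximate credible interval for the VaR violation rate: Beta(1+x,
    1+n-x) posterior under a uniform prior, then a normal approximation
    around the posterior mean."""
    a, b = 1 + n_viol, 1 + n_obs - n_viol
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    return mean - z * sd, mean + z * sd
```

A 1% VaR model is judged adequate on coverage grounds when 0.01 falls inside this interval.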

10.
Foreign exchange market prediction is attractive and challenging. According to the efficient market and random walk hypotheses, market prices should follow a random walk pattern and thus should not be predictable with more than about 50% accuracy. In this article, we investigate the predictability of foreign exchange spot rates of the US dollar against the British pound to show that not all periods are equally random. We used the Hurst exponent to select a period with great predictability. Parameters for generating training patterns were determined heuristically by auto‐mutual information and false nearest‐neighbor methods. Some inductive machine‐learning classifiers—artificial neural network, decision tree, k‐nearest neighbor, and naïve Bayesian classifier—were then trained with these generated patterns. Through appropriate collaboration of these models, we achieved a prediction accuracy of up to 67%. Copyright © 2009 John Wiley & Sons, Ltd.  相似文献

11.
This study presents a method of assessing financial statement fraud risk. The proposed approach comprises a system of financial and non‐financial risk factors, and a hybrid assessment method that combines machine learning methods with a rule‐based system. Experiments are performed using data from Chinese companies by four classifiers (logistic regression, back‐propagation neural network, C5.0 decision tree and support vector machine) and an ensemble of those classifiers. The proposed ensemble of classifiers outperforms each of the four classifiers individually in accuracy and composite error rate. The experimental results indicate that non‐financial risk factors and a rule‐based system help decrease the error rates. The proposed approach outperforms machine learning methods in assessing the risk of financial statement fraud. Copyright © 2014 John Wiley & Sons, Ltd.

12.
We observe that daily highs and lows of stock prices do not diverge over time and, hence, adopt the cointegration concept and the related vector error correction model (VECM) to model the daily high, the daily low, and the associated daily range data. The in‐sample results attest to the importance of incorporating high–low interactions in modeling the range variable. In evaluating the out‐of‐sample forecast performance using both mean‐squared forecast error and direction of change criteria, it is found that the VECM‐based low and high forecasts offer some advantages over alternative forecasts. The VECM‐based range forecasts, on the other hand, do not always dominate—the forecast rankings depend on the choice of evaluation criterion and the variables being forecast. Copyright © 2008 John Wiley & Sons, Ltd.
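The error-correction idea behind the VECM above can be shown on a single equation: the change in the daily low responds to the lagged high–low range, which plays the role of the error-correction term. This one-equation OLS sketch (function name mine) is a simplification of the full two-equation VECM:

```python
def ecm_low_equation(low, high):
    """One VECM equation sketch: regress the change in the daily low on the
    lagged high-low range (the error-correction term), by OLS."""
    dy = [low[t] - low[t - 1] for t in range(1, len(low))]
    ec = [high[t - 1] - low[t - 1] for t in range(1, len(low))]
    n = len(dy)
    mdy, mec = sum(dy) / n, sum(ec) / n
    g = (sum((a - mec) * (b - mdy) for a, b in zip(ec, dy))
         / sum((a - mec) ** 2 for a in ec))
    return mdy - g * mec, g  # (intercept, error-correction coefficient)
```

A positive error-correction coefficient means the low is pulled up toward the high when yesterday's range was wide, which is the non-divergence the abstract observes.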

13.
We present results comparing the efficiency of approximate Bayesian methods for the analysis and forecasting of non‐Gaussian dynamic processes. A numerical algorithm based on MCMC methods has been developed to carry out the Bayesian analysis of non‐linear time series. Although the MCMC‐based approach is not fast, it allows us to study the efficiency, in predicting future observations, of approximate propagation procedures that, being algebraic, have the practical advantage of being very quick. Copyright © 2000 John Wiley & Sons, Ltd.

14.
This paper reviews the relations between the methods of seasonal adjustment used by official statistical agencies and the ‘model-based’ methods that postulate explicit stochastic models for the unobserved components of a time series and apply optimal signal extraction theory to obtain a seasonally adjusted series. The Kalman filter implementation of the model-based methods is described and some recent results on its properties are reviewed. The model-based methods employ homogeneous or time-invariant models that assume in particular that the autocovariance structure does not vary with the season. Relaxing this leads to the class of models known as periodic models, and an example of a seasonally heteroscedastic unobserved-components ARIMA (SHUCARIMA) model is presented. The calculation of the standard error of a seasonally adjusted series via the Kalman filter is extended to this periodic model and illustrated for a monthly rainfall series.
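The Kalman filter recursion that underlies these unobserved-components methods can be sketched for the simplest such model, the local level. This is a generic textbook recursion, not the paper's SHUCARIMA implementation, and the variance arguments are illustrative:

```python
def kalman_local_level(y, q, r, a0=0.0, p0=1e6):
    """Kalman filter for the local-level unobserved-components model:
    y_t = mu_t + eps_t (obs. variance r), mu_t = mu_{t-1} + eta_t (state
    variance q). Returns the filtered estimates of mu_t."""
    a, p, filtered = a0, p0, []
    for obs in y:
        p_pred = p + q                # predict: state variance grows by q
        k = p_pred / (p_pred + r)     # Kalman gain
        a = a + k * (obs - a)         # update state mean with the innovation
        p = (1.0 - k) * p_pred       # update state variance
        filtered.append(a)
    return filtered
```

Seasonal and periodic components extend the state vector; the same predict/update recursion then also delivers the standard errors of the adjusted series discussed in the paper.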

15.
This paper proposes a new mixture GARCH model with a dynamic mixture proportion. The mixture Gaussian distribution of the error can vary from time to time. The Bayesian Information Criterion and the EM algorithm are used to estimate the number of parameters as well as the model parameters and their standard errors. The new model is applied to the S&P500 Index and Hang Seng Index and compared with GARCH models with Gaussian error and Student's t error. The result shows that the IGARCH effect in these index returns could be the result of the mixture of one stationary volatility component with another non‐stationary volatility component. The VaR based on the new model performs better than traditional GARCH‐based VaRs, especially in unstable stock markets. Copyright © 2008 John Wiley & Sons, Ltd.

16.
This research proposes a prediction model of multistage financial distress (MSFD) after considering contextual and methodological issues regarding sampling, feature and model selection criteria. Financial distress is defined as a three‐stage process showing different nature and intensity of financial problems. It is argued that the applied definition of distress is independent of legal framework and its predictability would provide more practical solutions. The final sample is selected after industry adjustments and oversampling the data. A wrapper subset data mining approach is applied to extract the most relevant features from financial statement and stock market indicators. An ensemble approach using a combination of DTNB (decision table and naïve Bayes hybrid model), LMT (logistic model tree) and A2DE (averaged 2‐dependence estimators) Bayesian models is used to develop the final prediction model. The performance of all the models is evaluated using a 10‐fold cross‐validation method. Results showed that the proposed model predicted MSFD with 84.06% accuracy. This accuracy increased to 89.57% when a 33.33% cut‐off value was considered. Hence the proposed model is accurate and reliable for identifying the true nature and intensity of financial problems regardless of the contextual legal framework.
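The 10-fold cross-validation protocol used to evaluate the models above can be sketched as an index-splitting routine: every observation is held out exactly once. This is a generic sketch (shuffling and seeding choices are mine), not the study's exact experimental setup:

```python
import random

def kfold_indices(n, k=10, seed=0):
    """Shuffled k-fold cross-validation splits: returns a list of
    (train_indices, test_indices) pairs covering each observation once."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    splits = []
    for f in folds:
        held = set(f)
        splits.append(([j for j in range(n) if j not in held], sorted(held)))
    return splits
```

Each classifier (and the ensemble) is fitted on the train indices and scored on the held-out fold; averaging the k scores gives the reported accuracy.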

17.
Ashley (Journal of Forecasting 1983; 2 (3): 211–223) proposes a criterion (known as Ashley's index) to judge whether the external macroeconomic variables are well forecast to serve as explanatory variables in forecasting models, which is crucial for policy makers. In this article, we try to extend Ashley's work by providing three testing procedures, including a ratio‐based test, a difference‐based test, and the Bayesian approach. The Bayesian approach has the advantage of allowing the flexibility of adapting all possible information content within a decision‐making environment such as the change of variable's definition due to the evolving system of national accounts. We demonstrate the proposed methods by applying six macroeconomic forecasts in the Survey of Professional Forecasters. Researchers or practitioners can thus formally test whether the external information is helpful. Copyright © 2010 John Wiley & Sons, Ltd.

18.
We consider the problem of forecasting a stationary time series when there is an unknown mean break close to the forecast origin. Based on the intercept‐correction methods suggested by Clements and Hendry (1998) and Bewley (2003), a hybrid approach is introduced, where the break and break point are treated in a Bayesian fashion. The hyperparameters of the priors are determined by maximizing the marginal density of the data. The distributions of the proposed forecasts are derived. Different intercept‐correction methods are compared using simulation experiments. Our hybrid approach compares favorably with both the uncorrected and the intercept‐corrected forecasts. Copyright © 2006 John Wiley & Sons, Ltd.
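The basic intercept-correction device that this abstract builds on shifts the model forecast by the most recent observed forecast error, so that a recent mean break is not carried forward. A one-line sketch (the `weight` knob is my own illustrative addition; a Bayesian treatment of the break is what the paper actually develops):

```python
def intercept_corrected_forecast(model_forecast, last_actual, last_fitted,
                                 weight=1.0):
    """Intercept-correction sketch: add (a fraction of) the latest in-sample
    forecast error to the model forecast. weight=1.0 is full correction,
    weight=0.0 recovers the uncorrected forecast."""
    return model_forecast + weight * (last_actual - last_fitted)
```

The hybrid approach in the paper effectively chooses something between these two extremes, with the correction weight informed by the posterior over the break and break point.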

19.
We compare the predictive ability of Bayesian methods which deal simultaneously with model uncertainty and correlated regressors in the framework of cross‐country growth regressions. In particular, we assess methods with spike and slab priors combined with different prior specifications for the slope parameters in the slab. Our results indicate that moving away from Gaussian g‐priors towards Bayesian ridge, LASSO or elastic net specifications has clear advantages for prediction when dealing with datasets of (potentially highly) correlated regressors, a pervasive characteristic of the data used hitherto in the econometric literature. Copyright © 2015 John Wiley & Sons, Ltd.
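The shrinkage effect of a ridge-type slab prior can be seen in the scalar case, where the posterior-mean slope is the OLS slope shrunk toward zero by the penalty. This single-regressor sketch (my own reduction; the paper works with many correlated regressors and full spike-and-slab mixtures) shows the mechanism:

```python
def ridge_1d(x, y, lam):
    """Ridge estimate for y = beta * x (no intercept): the OLS slope shrunk
    toward zero by the penalty lam; lam = 0 recovers plain OLS."""
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    return sxy / (sxx + lam)
```

With correlated regressors the multivariate analogue stabilizes the whole coefficient vector, which is the predictive advantage the abstract reports over Gaussian g-priors.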

20.
We compare the accuracy of vector autoregressive (VAR), restricted vector autoregressive (RVAR), Bayesian vector autoregressive (BVAR), vector error correction (VEC) and Bayesian error correction (BVEC) models in forecasting the exchange rates of five Central and Eastern European currencies (Czech Koruna, Hungarian Forint, Slovak Koruna, Slovenian Tolar and Polish Zloty) against the US Dollar and the Euro. Although these models tend to outperform the random walk model for long‐term predictions (6 months ahead and beyond), even the best models in terms of average prediction error fail to reject the test of equality of forecasting accuracy against the random walk model in short‐term predictions. Copyright © 2005 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号