Similar Literature
20 similar documents retrieved (search time: 125 ms)
1.
We propose a solution for selecting promising subsets of autoregressive time series models for further consideration, following up on the idea of the stochastic search variable selection procedure in George and McCulloch (1993). It is based on a Bayesian approach which is unconditional on the initial terms. The autoregression setup is in the form of a hierarchical normal mixture model, where latent variables are used to identify the subset choice. The procedure is implemented via the Gibbs sampler, a Markov chain Monte Carlo method. The advantage of the presented method is computational: it offers an alternative way to search over a potentially large set of possible subsets. The proposed method is illustrated with simulated data as well as real data. Copyright © 1999 John Wiley & Sons, Ltd.
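As a minimal sketch of how such a stochastic search can work, the following Gibbs sampler alternates between drawing AR coefficients and the latent inclusion indicators under a George–McCulloch-style spike-and-slab prior. The hyperparameters tau and c, the fixed error variance, and the equal prior inclusion odds are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

def gibbs_ssvs_ar(y, p=4, n_iter=2000, tau=0.01, c=10.0, sigma2=1.0):
    """Stochastic-search Gibbs sampler over inclusion indicators for an AR(p) model (illustrative)."""
    T = len(y)
    X = np.column_stack([y[p - j - 1:T - j - 1] for j in range(p)])  # lagged regressors y_{t-1..t-p}
    yy = y[p:]
    gamma = np.ones(p, dtype=int)       # 1 = lag in the "slab" (included), 0 = "spike" (effectively excluded)
    draws = np.zeros((n_iter, p), dtype=int)
    for it in range(n_iter):
        # beta | gamma: conjugate normal, prior sd tau (spike) or c*tau (slab) per lag
        prior_sd = np.where(gamma == 1, c * tau, tau)
        V = np.linalg.inv(X.T @ X / sigma2 + np.diag(1.0 / prior_sd**2))
        V = (V + V.T) / 2
        m = V @ (X.T @ yy / sigma2)
        beta = np.random.multivariate_normal(m, V)
        # gamma_j | beta_j: Bernoulli with probability proportional to the slab density
        for j in range(p):
            d_slab = np.exp(-0.5 * (beta[j] / (c * tau))**2) / (c * tau)
            d_spike = np.exp(-0.5 * (beta[j] / tau)**2) / tau
            gamma[j] = np.random.rand() < d_slab / (d_slab + d_spike)  # equal prior odds assumed
        draws[it] = gamma
    return draws  # frequencies of visited indicator patterns point to promising subsets

# usage sketch: inclusion_freq = gibbs_ssvs_ar(np.asarray(series, dtype=float)).mean(axis=0)
```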

2.
We analyze multicategory purchases of households by means of heterogeneous multivariate probit models that relate to partitions formed from a total of 25 product categories. We investigate both prior and post hoc partitions. We search model structures by a stochastic algorithm and estimate models by Markov chain Monte Carlo simulation. The best model in terms of cross-validated log-likelihood refers to a post hoc partition with two groups; the second-best model considers all categories as one group. Among prior partitions with at least two category groups, a five-group model performs best. For features and displays, effects on average basket value under the model with five prior category groups differ from those under the best-performing model in 40% and 24% of the investigated categories, respectively. In addition, the model with five prior category groups also underestimates total sales revenue across all categories by about 28%. Copyright © 2016 John Wiley & Sons, Ltd.

3.
We propose in this paper a threshold nonlinearity test for financial time series. Our approach adopts reversible-jump Markov chain Monte Carlo methods to calculate the posterior probabilities of two competing models, namely GARCH and threshold GARCH models. Posterior evidence favouring the threshold GARCH model indicates threshold nonlinearity or volatility asymmetry. Simulation experiments demonstrate that our method works very well in distinguishing GARCH and threshold GARCH models. Sensitivity analysis shows that our method is robust to misspecification in the error distribution. In the application to 10 market indexes, clear evidence of threshold nonlinearity is discovered, thus supporting volatility asymmetry. Copyright © 2005 John Wiley & Sons, Ltd.
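For concreteness, the two competing conditional-variance recursions can be written as below. The asymmetric term follows the common GJR-style parameterization and all names are generic assumptions; the paper's reversible-jump sampler would then report the posterior probability of the threshold model, in effect the share of iterations the chain spends in that model.

```python
import numpy as np

def garch_var(r, omega, alpha, beta):
    """GARCH(1,1): h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}."""
    h = np.empty_like(r)
    h[0] = np.var(r)                          # simple initialization
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1]**2 + beta * h[t - 1]
    return h

def threshold_garch_var(r, omega, alpha, alpha_neg, beta):
    """Threshold GARCH: extra ARCH weight alpha_neg when the previous return is negative."""
    h = np.empty_like(r)
    h[0] = np.var(r)
    for t in range(1, len(r)):
        extra = alpha_neg if r[t - 1] < 0 else 0.0
        h[t] = omega + (alpha + extra) * r[t - 1]**2 + beta * h[t - 1]
    return h
```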

4.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
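A hedged sketch of the two combination schemes being compared is given below: a plain equal-weight average versus weights updated from discounted past squared forecast errors. The inverse-discounted-error rule and the discount factor are generic assumptions for illustration, not necessarily the paper's weighting scheme.

```python
import numpy as np

def combine_forecasts(forecasts, actual, delta=0.95):
    """forecasts: (T, K) point forecasts from K models; actual: (T,) realized inflation."""
    T, K = forecasts.shape
    equal_weight = forecasts.mean(axis=1)          # equal-weight combination
    disc_sse = np.ones(K)                          # discounted squared errors, one per model
    time_varying = np.empty(T)
    for t in range(T):
        w = 1.0 / disc_sse
        w /= w.sum()                               # time-varying weights from past accuracy
        time_varying[t] = forecasts[t] @ w
        disc_sse = delta * disc_sse + (forecasts[t] - actual[t])**2
    return equal_weight, time_varying
```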

5.
We consider the problem of online prediction when it is uncertain which prediction model is best to use. We develop a method called dynamic latent class model averaging, which combines a state-space model for the parameters of each of the candidate models of the system with a Markov chain model for the best model. We propose a polychotomous regression model for the transition weights, allowing the probability of a change over time to depend on the past through the values of the most recent time periods and on spatial correlation among the regions. The evolution of the parameters in each submodel is defined by exponential forgetting. This structure allows the 'correct' model to vary over both time and regions. In contrast to existing methods, the proposed model naturally incorporates clustering and prediction analysis in a single unified framework. We develop an efficient Gibbs algorithm for computation, and we demonstrate the value of our framework on simulated experiments and on a real-world problem: forecasting IBM's corporate revenue. Copyright © 2014 John Wiley & Sons, Ltd.
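The exponential-forgetting ingredient can be illustrated with the generic dynamic-model-averaging update below, in which each candidate model's probability is flattened by a forgetting factor and then reweighted by its one-step predictive likelihood. The paper's polychotomous regression for transition weights and its spatial structure are not reproduced here; this is only a sketch of the forgetting step.

```python
import numpy as np

def forgetting_model_weights(pred_lik, forget=0.99):
    """pred_lik: (T, K) one-step predictive likelihoods of K candidate models."""
    T, K = pred_lik.shape
    w = np.full(K, 1.0 / K)                 # start from equal model probabilities
    path = np.empty((T, K))
    for t in range(T):
        w = w**forget
        w /= w.sum()                        # forgetting: probabilities drift toward uniform
        w = w * pred_lik[t]
        w /= w.sum()                        # Bayesian update with the new observation
        path[t] = w
    return path                             # model probabilities over time
```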

6.
Value-at-risk (VaR) forecasting via a computational Bayesian framework is considered. A range of parametric models is compared, including standard, threshold nonlinear and Markov switching generalized autoregressive conditional heteroskedasticity (GARCH) specifications, plus standard and nonlinear stochastic volatility models, most considering four error probability distributions: Gaussian, Student-t, skewed-t and generalized error distribution. Adaptive Markov chain Monte Carlo methods are employed in estimation and forecasting. A portfolio of four Asia–Pacific stock markets is considered. Two forecasting periods are evaluated in light of the recent global financial crisis. Results reveal that: (i) GARCH models outperformed stochastic volatility models in almost all cases; (ii) asymmetric volatility models were clearly favoured pre-crisis, while during and post-crisis, at the 1% level and a 1-day horizon, models with skewed-t errors ranked best, whereas integrated GARCH models were favoured at the 5% level; (iii) all models forecast VaR less accurately and anti-conservatively post-crisis. Copyright © 2011 John Wiley & Sons, Ltd.
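Whatever the volatility model, the Bayesian VaR forecast reduces to a quantile of the simulated one-step-ahead predictive return distribution. The sketch below assumes posterior draws of next-day volatility and Student-t degrees of freedom are already available from the MCMC output; the names are illustrative.

```python
import numpy as np

def one_day_var(sigma_draws, nu_draws, level=0.01, seed=0):
    """1-day-ahead VaR from posterior draws of volatility and Student-t degrees of freedom."""
    rng = np.random.default_rng(seed)
    simulated_returns = sigma_draws * rng.standard_t(nu_draws)  # one return per posterior draw
    return -np.quantile(simulated_returns, level)               # VaR reported as a positive loss
```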

7.
Stochastic covariance models have been explored in recent research to model the interdependence of assets in financial time series. The approach uses a single stochastic model to capture such interdependence. However, it may be inappropriate to assume a single coherence structure at all times t. In this paper, we propose the use of a mixture of stochastic covariance models to generalize the approach and offer greater flexibility in real data applications. Parameter estimation is performed by Bayesian analysis with Markov chain Monte Carlo sampling schemes. We conduct a simulation study on three different model setups and evaluate the performance of estimation and model selection. We also apply our modeling methods to high-frequency stock data from Hong Kong. Model selection favors a mixture over a non-mixture model. In the real data study, we demonstrate that the mixture model is able to identify structural changes in market risk, as evidenced by a drastic change in mixture proportions over time. Copyright © 2016 John Wiley & Sons, Ltd.

8.
Based on the standard genetic programming (GP) paradigm, we introduce a new probability measure of time series predictability. It is computed as a ratio of two fitness values (SSE) from GP runs. One value belongs to the subject series, while the other belongs to the same series after it is randomly shuffled. Theoretically, the measure is bounded between zero and 100, where zero characterizes stochastic processes while 100 typifies predictable ones. To evaluate its performance, we first apply it to experimental data. It is then applied to eight Dow Jones stock returns. This measure may reduce model search space and produce more reliable forecast models. Copyright © 1999 John Wiley & Sons, Ltd.
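The abstract does not spell out the exact formula, but one plausible reading of "a ratio of two SSE fitness values" bounded between zero and 100 is sketched below, with the shuffled-series SSE acting as the structureless benchmark. Treat the functional form as an assumption rather than the authors' definition.

```python
def predictability_measure(sse_series, sse_shuffled):
    """Near 0: the series fits no better than its shuffled copy; near 100: far more predictable."""
    return 100.0 * max(0.0, 1.0 - sse_series / sse_shuffled)

# usage sketch: score = predictability_measure(sse_from_gp_run, sse_from_gp_run_on_shuffled_copy)
```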

9.
This paper presents gamma stochastic volatility models and investigates their distributional and time series properties. The parameter estimators obtained by the method of moments are shown analytically to be consistent and asymptotically normal. The simulation results indicate that the estimators behave well. The in-sample analysis shows that return models with gamma autoregressive stochastic volatility processes capture the leptokurtic nature of return distributions and the slowly decaying autocorrelation functions of squared stock index returns for the USA and UK. In comparison with GARCH and EGARCH models, the gamma autoregressive model picks up the persistence in volatility for the US and UK index returns but not the volatility persistence for the Canadian and Japanese index returns. The out-of-sample analysis indicates that the gamma autoregressive model has superior volatility forecasting performance compared to GARCH and EGARCH models. Copyright © 2006 John Wiley & Sons, Ltd.

10.
This paper proposes a parsimonious threshold stochastic volatility (SV) model for financial asset returns. Instead of imposing a threshold value on the dynamics of the latent volatility process of the SV model, we assume that the innovation of the mean equation follows a threshold distribution in which the mean innovation switches between two regimes. In our model, the threshold is treated as an unknown parameter. We show that the proposed threshold SV model can not only capture the time-varying volatility of returns, but can also accommodate the asymmetric shape of the conditional distribution of the returns. Parameter estimation is carried out by using Markov chain Monte Carlo methods. For model selection and volatility forecasting, an auxiliary particle filter technique is employed to approximate the filtering and prediction distributions of the returns. Several experiments are conducted to assess the robustness of the proposed model and estimation methods. In the empirical study, we apply our threshold SV model to three return time series. The empirical analysis shows that the threshold parameter has a non-zero value and the mean innovations belong to two distinct regimes. We also find that the model with an unknown threshold parameter value consistently outperforms the model with a known threshold parameter value. Copyright © 2016 John Wiley & Sons, Ltd.
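A hedged simulation sketch of the idea follows: log-volatility evolves as in a standard SV model, while the mean innovation switches between two regimes depending on whether the lagged return falls below an unknown threshold r*. The choice of the lagged return as the switching variable and the parameterization are assumptions for illustration only.

```python
import numpy as np

def simulate_threshold_sv(T=1000, mu=(0.05, -0.05), r_star=0.0,
                          phi=0.95, sigma_eta=0.2, seed=1):
    """Simulate returns with SV log-volatility and a two-regime threshold mean innovation."""
    rng = np.random.default_rng(seed)
    h = np.zeros(T)                                   # latent log-volatility
    r = np.zeros(T)                                   # returns
    for t in range(1, T):
        h[t] = phi * h[t - 1] + sigma_eta * rng.standard_normal()
        regime = 0 if r[t - 1] <= r_star else 1       # threshold on the lagged return
        r[t] = mu[regime] + np.exp(h[t] / 2) * rng.standard_normal()
    return r, h
```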

11.
In this paper we investigate the applicability of several continuous-time stochastic models to forecasting inflation rates with horizons out to 20 years. While the models are well known, new methods of parameter estimation and forecasts are supplied, leading to rigorous testing of out-of-sample inflation forecasting at short and long time horizons. Using US consumer price index data we find that over longer forecasting horizons—that is, those beyond 5 years—the log-normal index model having Ornstein–Uhlenbeck drift rate provides the best forecasts.
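The Ornstein–Uhlenbeck drift-rate component can be illustrated with a simple Euler discretization: the drift mean-reverts to a long-run level theta at speed kappa. Parameter names, the monthly step, and the discretization are illustrative assumptions rather than the paper's estimation setup.

```python
import numpy as np

def simulate_ou_drift(x0, theta, kappa, sigma, horizon_years, dt=1.0 / 12, seed=0):
    """Euler-discretized Ornstein-Uhlenbeck path for the drift rate."""
    rng = np.random.default_rng(seed)
    n = int(horizon_years / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for t in range(n):
        x[t + 1] = x[t] + kappa * (theta - x[t]) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x
```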

12.
The asymptotic Lyapunov stability of a quasi-integrable Hamiltonian system with time-delayed feedback control is studied by using Lyapunov functions and the stochastic averaging method. First, a quasi-integrable Hamiltonian system with time-delayed feedback control subjected to Gaussian white noise excitation is approximated by a quasi-integrable Hamiltonian system without time delay. Then, the stochastic averaging method for quasi-integrable Hamiltonian systems is used to reduce the dimension of the original system…

13.
14.
The stochastic averaging method for quasi-integrable Hamiltonian systems with time-delayed feedback bang-bang control is first introduced. Then, two time delay compensation methods, namely the method of changing control force amplitude (CFA) and the method of changing control delay time (CDT), are proposed. The conditions applicable to each compensation method are discussed. Finally, an example is worked out in detail to illustrate the application and effectiveness of the proposed methods and the two compensation methods…

15.
Financial market time series exhibit high degrees of non‐linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non‐integer, this is associated with two features: (1) inhomogeneity—extreme fluctuations at irregular intervals, and (2) scaling symmetries—proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition‐fitted residual scale ratio (ST‐FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15‐minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST‐FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd.
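The scale-ratio ingredient can be illustrated as below: absolute fluctuations measured over two nearby separation distances and their pointwise ratio, which parameterizes the scaling symmetry. The full ST-FRSR construction (state transition model plus fitted residuals) is more involved; the separation distances and the small stabilizing constant here are assumptions.

```python
import numpy as np

def scale_ratios(x, d1=1, d2=2, eps=1e-12):
    """x: price series (e.g. sampled at 15-minute intervals); returns local ratios of fluctuations."""
    f1 = np.abs(x[d1:] - x[:-d1])      # fluctuations over separation d1
    f2 = np.abs(x[d2:] - x[:-d2])      # fluctuations over separation d2
    n = min(len(f1), len(f2))
    return f2[:n] / (f1[:n] + eps)     # proportionality between the two scales
```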

16.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

17.
The purpose of this paper is to present the results of retrospective tests of various extrapolative methods for forecasting adult mortality and very elderly populations for Australia. Direct extrapolation methods tested include the geometric method, the Ediev variant, the Lee-Carter method, the BMS variant and a relational model. Indirect methods include the extrapolation of parameters of models fitted to the age profile of death rates and a new method involving the extrapolation of features of death frequency distributions, namely the modal age and concentration. The geometric, Ediev and Lee-Carter BMS methods were very successful in projecting death rates and very elderly populations. Differences between these methods were small. The extrapolation of parametric functions proved successful for males but less so for females. Very elderly populations can be viably projected by directly extrapolating death rates by age when rates of decline in death rates show consistent relationships between ages and are stable over time. Copyright © 2016 John Wiley & Sons, Ltd.
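A minimal reading of the geometric direct-extrapolation method is sketched below: an average annual rate of decline is fitted to historical age-specific death rates on the log scale and projected forward. The calibration window and the constant-decline assumption are illustrative, not the paper's exact implementation.

```python
import numpy as np

def geometric_projection(rates, horizon):
    """rates: (years, ages) historical death rates; returns (horizon, ages) projected rates."""
    log_rates = np.log(rates)
    annual_change = (log_rates[-1] - log_rates[0]) / (len(rates) - 1)  # average log change per year
    steps = np.arange(1, horizon + 1)[:, None]
    return np.exp(log_rates[-1] + steps * annual_change)
```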

18.
The purpose of this paper is to simultaneously investigate several important issues that characterize the dynamic and stochastic behavior of beta coefficients for individual stocks and affect the forecasting of stock returns. The issues include randomness, nonstationarity, and shifts in the mean and variance parameters of the beta coefficient, and are addressed within the framework of variable-mean-response (VMR) random coefficients models in which the problem of heteroscedasticity is present. Estimation is done using a four-step generalized least squares method. The hypotheses concerning randomness and nonstationarity of betas are tested. The time paths, sizes, and marginal rates of mean shifts are determined. The issue of variance shift is examined on the basis of five special tests, called T*, B, S', G and W. Then the impacts of the dynamic and stochastic instability on the estimation of betas are tested by a nonparametric procedure. Finally, the VMR models' ability to forecast stock returns is evaluated against the standard capital asset pricing model. The empirical findings shed new light on the continuing debate as to whether the beta coefficient is random and nonstationary, and have important implications for modeling and forecasting the measurement of performance and the determination of stock returns.

19.
The aim of this paper is to compare the forecasting performance of competing threshold models, in order to capture the asymmetric effect in volatility. We focus on examining the relative out-of-sample forecasting ability of the SETAR-Threshold GARCH (SETAR-TGARCH) and the SETAR-Threshold Stochastic Volatility (SETAR-THSV) models compared to the GARCH and Stochastic Volatility (SV) models. However, the main problem in evaluating the predictive ability of volatility models is that the 'true' underlying volatility process is not observable, and thus a proxy must be defined for the unobservable volatility. For the class of nonlinear state space models (SETAR-THSV and SV), a modified version of the SIR algorithm has been used to estimate the unknown parameters. The forecasting performance of the competing models has been compared for two return time series: IBEX 35 and S&P 500. We explore whether an increase in the complexity of the model implies an improvement in its forecasting ability. Copyright © 2007 John Wiley & Sons, Ltd.
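As background, the generic SIR (sequential importance resampling, or bootstrap) particle filter for a basic SV model looks like the sketch below: propagate log-volatility particles, weight them by the measurement density, and resample. The paper uses a modified SIR for the SETAR-THSV model; the parameter names and the plain bootstrap proposal here are illustrative.

```python
import numpy as np

def sir_loglik_sv(returns, phi, sigma_eta, n_particles=1000, seed=0):
    """Bootstrap SIR filter log-likelihood for r_t ~ N(0, exp(h_t)), h_t = phi*h_{t-1} + eta_t."""
    rng = np.random.default_rng(seed)
    h = rng.normal(0.0, sigma_eta / np.sqrt(1.0 - phi**2), n_particles)  # stationary start
    loglik = 0.0
    for rt in returns:
        h = phi * h + sigma_eta * rng.standard_normal(n_particles)       # propagate particles
        w = np.exp(-0.5 * (np.log(2 * np.pi) + h + rt**2 * np.exp(-h)))  # N(0, exp(h)) density
        loglik += np.log(w.mean())
        w /= w.sum()
        h = rng.choice(h, size=n_particles, p=w)                          # resample
    return loglik
```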

20.
This paper studies the performance of the GARCH model and its modifications, using the rates of return from the daily stock market indices of the Kuala Lumpur Stock Exchange (KLSE), including the Composite Index, Tins Index, Plantations Index, Properties Index, and Finance Index. The models are stationary GARCH, unconstrained GARCH, non-negative GARCH, GARCH-M, exponential GARCH and integrated GARCH. The parameters of these models and the variance processes are estimated jointly using the maximum likelihood method. The performance of the within-sample estimation is diagnosed using several goodness-of-fit statistics. We observe that, among the models, even though exponential GARCH is not the best model in terms of the goodness-of-fit statistics, it performs best in describing the often-observed skewness in stock market indices and in out-of-sample (one-step-ahead) forecasting. The integrated GARCH, on the other hand, is the poorest model in both respects. Copyright © 1999 John Wiley & Sons, Ltd.
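Joint maximum-likelihood estimation of the simplest member of this family, a Gaussian GARCH(1,1), can be sketched as below; the starting values and bounds are illustrative, and the other variants studied in the paper (GARCH-M, EGARCH, IGARCH, etc.) modify the variance recursion accordingly.

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, r):
    """Negative Gaussian log-likelihood of GARCH(1,1) for a return series r."""
    omega, alpha, beta = params
    h = np.empty_like(r)
    h[0] = np.var(r)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1]**2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * h) + r**2 / h)

def fit_garch11(r):
    """Estimate (omega, alpha, beta) jointly by maximum likelihood."""
    res = minimize(garch11_negloglik, x0=[0.05 * np.var(r), 0.05, 0.90], args=(r,),
                   bounds=[(1e-8, None), (0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B")
    return res.x
```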
