20 similar documents found; search time: 15 ms
1.
Many applications in science involve finding estimates of unobserved variables from observed data, by combining model predictions with observations. Sequential Monte Carlo (SMC) is a well-established technique for estimating the distribution of unobserved variables conditional on current observations. While SMC is very successful at estimating the first central moments, estimating the extreme quantiles of a distribution via current SMC methods is computationally very expensive. The purpose of this paper is to develop a new framework using probability distortion. We use an SMC with distorted weights in order to make computationally efficient inferences about tail probabilities of future interest rates using the Cox–Ingersoll–Ross (CIR) model, as well as an observed yield curve. We show that the proposed method yields acceptable estimates of tail quantiles at a fraction of the computational cost of full Monte Carlo.
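As a toy illustration of the SMC machinery this abstract refers to, the sketch below runs a plain bootstrap particle filter on a simple linear-Gaussian state-space model and reads a tail quantile of the filtering distribution off the particle cloud. It is not the paper's distorted-weight CIR filter; the model, parameters, and particle count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model (NOT the paper's CIR model):
#   x_t = 0.9 * x_{t-1} + w_t,  w_t ~ N(0, 0.5^2)   (latent state)
#   y_t = x_t + v_t,            v_t ~ N(0, 1.0^2)   (observation)
T, N = 50, 10_000
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.5 * rng.standard_normal()
    y[t] = x[t] + rng.standard_normal()

# Bootstrap particle filter: propagate, weight by the likelihood, resample.
particles = rng.standard_normal(N)
for t in range(1, T):
    particles = 0.9 * particles + 0.5 * rng.standard_normal(N)
    logw = -0.5 * (y[t] - particles) ** 2           # Gaussian log-likelihood (up to a constant)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    particles = rng.choice(particles, size=N, p=w)  # multinomial resampling

# Estimate a tail quantile of the filtering distribution at the final time.
# The paper's point is that, without distorted weights, few particles land
# in the tail, so such estimates are expensive to make accurate.
q99 = np.quantile(particles, 0.99)
print(q99)
```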
2.
This article introduces a new model to capture simultaneously the mean and variance asymmetries in time series. Threshold non-linearity is incorporated into the mean and variance specifications of a stochastic volatility model. Bayesian methods are adopted for parameter estimation. Forecasts of volatility and Value-at-Risk can also be obtained by sampling from suitable predictive distributions. Simulations demonstrate that the apparent variance asymmetry documented in the literature can be due to the neglect of mean asymmetry. Strong evidence of the mean and variance asymmetries was detected in US and Hong Kong data. Asymmetry in the variance persistence was also discovered in the Hong Kong stock market. Copyright © 2002 John Wiley & Sons, Ltd.
3.
This paper investigates the effects of imposing invalid cointegration restrictions, or ignoring valid ones, on the estimation, testing and forecasting properties of the bivariate, first-order, vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λmax, lies in the neighborhood of unity, while the other root, λmin, is safely smaller than unity. In this context, we define the 'forecast cost of type I' to be the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reason, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases can arise when λmax equals unity and λmin is less than but near to unity. The effects of this type of misspecification on forecasting will be referred to as the 'forecast cost of type II'. By means of Monte Carlo simulations, we measure both types of forecast cost in realistic situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples. On the other hand, the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd.
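A univariate analogue of the 'forecast cost of type I' can be sketched by Monte Carlo: the true process is a stable AR(1) with a large root, and imposing an invalid unit root amounts to forecasting with a random walk. The root, sample size, and replication count below are arbitrary choices for illustration, not the paper's bivariate VAR design.

```python
import numpy as np

rng = np.random.default_rng(1)

# True process: stable AR(1) with root 0.8. Imposing a (false) unit root
# means forecasting y_T with y_{T-1}. Compare one-step-ahead RMSEs.
phi, T, reps = 0.8, 50, 20_000
e = rng.standard_normal((reps, T + 1))
y = np.zeros((reps, T + 1))
for t in range(1, T + 1):
    y[:, t] = phi * y[:, t - 1] + e[:, t]

# OLS slope from the first T observations of each replication
num = np.sum(y[:, :T - 1] * y[:, 1:T], axis=1)
den = np.sum(y[:, :T - 1] ** 2, axis=1)
phi_hat = num / den

ar_err = y[:, T] - phi_hat * y[:, T - 1]  # forecast from the estimated stable model
rw_err = y[:, T] - y[:, T - 1]            # forecast with the unit root imposed
rmse_ar = np.sqrt(np.mean(ar_err ** 2))
rmse_rw = np.sqrt(np.mean(rw_err ** 2))
print(rmse_ar, rmse_rw)
```

In this small-sample setup the random-walk forecast is visibly worse, echoing the paper's finding that the cost of imposing invalid restrictions is substantial for small samples.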
4.
The paper summarizes results of a mail survey of the use of formal forecasting techniques in British manufacturing companies. It appraises the state of awareness of particular techniques and the extent to which they are used in various functional applications. The extent to which the forecasts generated by the techniques influence company action is assessed; and the reasons for the non-use of particular techniques examined. The paper concludes that although an increasing number of companies appreciate the importance of forecasting, the methods used are predominantly naïve and few companies are taking steps to improve the situation through using alternative techniques or through computerizing established techniques.
5.
Todd E. Clark, Journal of Forecasting, 2000, 19(1): 1-21
This study examines the problem of forecasting an aggregate of cointegrated disaggregates. It first establishes conditions under which forecasts of an aggregate variable obtained from a disaggregate VECM will be equal to those from an aggregate, univariate time series model, and develops a simple procedure for testing those conditions. The paper then uses Monte Carlo simulations to show, for a finite sample, that the proposed test has good size and power properties and that whether a model satisfies the aggregation conditions is closely related to out-of-sample forecast performance. The paper then shows that ignoring cointegration and specifying the disaggregate model as a VAR in differences can significantly affect analyses of aggregation, with the VAR-based test for aggregation possibly leading to faulty inference and the differenced VAR forecasts potentially understating the benefits of disaggregate information. Finally, analysis of an empirical problem confirms the basic results. Copyright © 2000 John Wiley & Sons, Ltd.
6.
A new clustered correlation multivariate generalized autoregressive conditional heteroskedasticity (CC-MGARCH) model that allows conditional correlations to form clusters is proposed. This model generalizes the time-varying correlation structure of Tse and Tsui (2002, Journal of Business and Economic Statistics 20: 351–361) by classifying the correlations among the series into groups. To estimate the proposed model, Markov chain Monte Carlo methods are adopted. Two efficient sampling schemes for drawing discrete indicators are also developed. Simulations show that these efficient sampling schemes can lead to substantial savings in computation time in Monte Carlo procedures involving discrete indicators. Empirical examples using stock market and exchange rate data are presented in which two-cluster and three-cluster models are selected using posterior probabilities. This implies that the conditional correlation equation is likely to be governed by more than one set of decaying parameters. Copyright © 2011 John Wiley & Sons, Ltd.
7.
In this study, we verify the existence of predictability in the Brazilian equity market. Unlike other studies in the same vein, which evaluate the original series of each stock, we evaluate synthetic series created on the basis of linear models of stocks. Following the approach of Burgess (Computational Finance, 1999; 99, 297–312), we use the 'stepwise regression' model for the formation of models of each stock. We then use the variance ratio profile together with a Monte Carlo simulation for the selection of models with potential predictability, using data from 1 April 1999 to 30 December 2003. Unlike the approach of Burgess, we carry out White's Reality Check (Econometrica, 2000; 68, 1097–1126) in order to verify the existence of positive returns over the out-of-sample period from 2 January 2004 to 28 August 2007. We use the strategies proposed by Sullivan, Timmermann and White (Journal of Finance, 1999; 54, 1647–1691) and Hsu and Kuan (Journal of Financial Econometrics, 2005; 3, 606–628), amounting to 26,410 simulated strategies. Finally, using the bootstrap methodology with 1000 simulations, we find strong evidence of predictability in the models, even after accounting for transaction costs. Copyright © 2015 John Wiley & Sons, Ltd.
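The variance ratio statistic behind the selection step can be sketched in a few lines: for a random walk it is close to 1 at every horizon, while mean reversion pulls it below 1. The series, horizon, and parameters below are illustrative assumptions, not the paper's data.

```python
import numpy as np

def variance_ratio(prices, q):
    """Var of q-period log returns divided by q times the var of 1-period
    log returns; values near 1 are consistent with a random walk."""
    r1 = np.diff(np.log(prices))
    rq = np.log(prices[q:]) - np.log(prices[:-q])
    return rq.var(ddof=1) / (q * r1.var(ddof=1))

rng = np.random.default_rng(2)
n, q = 5000, 5

# Random-walk log price: VR(q) should be close to 1.
walk = np.exp(np.cumsum(0.01 * rng.standard_normal(n)))

# Mean-reverting AR(1) log price: VR(q) should fall below 1.
p = np.zeros(n)
for t in range(1, n):
    p[t] = 0.8 * p[t - 1] + 0.01 * rng.standard_normal()
revert = np.exp(p)

vr_walk = variance_ratio(walk, q)
vr_revert = variance_ratio(revert, q)
print(vr_walk, vr_revert)
```

Computing the statistic over a grid of horizons q gives the 'variance ratio profile' the abstract mentions.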
8.
C. H. Sim, Journal of Forecasting, 1994, 13(4): 369-381
We shall first review some non-normal stationary first-order autoregressive models. The models are constructed with a given marginal distribution (logistic, hyperbolic secant, exponential, Laplace, or gamma) and the requirement that the bivariate joint distribution of the generated process must be sufficiently simple so that the parameter estimation and forecasting problems of the models can be addressed. A model-building approach that consists of model identification, estimation, diagnostic checking, and forecasting is then discussed for this class of models.
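One classical construction of this type, for the exponential marginal, is the Gaver–Lewis EAR(1) process; whether it is among the exact models reviewed here is an assumption on our part. The simulation below checks that the construction preserves the Exp(1) marginal while inducing lag-1 autocorrelation ρ.

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaver-Lewis EAR(1): X_t = rho * X_{t-1} + I_t * E_t,
# with I_t ~ Bernoulli(1 - rho) and E_t ~ Exp(1), independent.
# This keeps the marginal distribution exactly Exp(1) while the
# process has autocorrelation rho at lag 1.
rho, n = 0.6, 100_000
innov = rng.exponential(size=n) * (rng.random(n) < 1 - rho)
x = np.empty(n)
x[0] = rng.exponential()
for t in range(1, n):
    x[t] = rho * x[t - 1] + innov[t]

acorr1 = np.corrcoef(x[:-1], x[1:])[0, 1]
print(x.mean(), acorr1)  # should be close to 1 and to rho
```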
9.
In this paper we investigate the forecast performance of nonlinear error-correction models with regime switching. In particular, we focus on threshold and Markov switching error-correction models, where adjustment towards long-run equilibrium is nonlinear and discontinuous. Our simulation study reveals that the gains from using a correctly specified nonlinear model can be considerable, especially if disequilibrium adjustment is strong and/or the magnitude of parameter changes is relatively large. Copyright © 2005 John Wiley & Sons, Ltd.
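The discontinuous adjustment idea can be sketched with a univariate threshold mechanism: the disequilibrium behaves like a random walk inside a band and mean-reverts only outside it. This is an illustrative toy, not the paper's error-correction specification; the band width and adjustment speed are assumed values.

```python
import numpy as np

rng = np.random.default_rng(8)

# Disequilibrium z_t: no adjustment inside the band |z| <= c,
# strong mean reversion outside it (discontinuous adjustment).
c, rho, n = 1.0, 0.7, 10_000
eps = rng.standard_normal(n)
z = np.zeros(n)
for t in range(1, n):
    if abs(z[t - 1]) > c:
        z[t] = rho * z[t - 1] + eps[t]  # adjustment toward equilibrium
    else:
        z[t] = z[t - 1] + eps[t]        # random-walk behaviour inside the band

print(z.mean(), z.std())
```

Despite the unit root inside the band, the outer-regime adjustment keeps the process stable, which is what makes a correctly specified nonlinear model forecast better when adjustment is strong.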
10.
B. D. McCullough, Journal of Forecasting, 1996, 15(4): 293-304
Derivation of prediction intervals in the k-variable regression model is problematic when future-period values of exogenous variables are not known with certainty. Even in the most favourable case when the forecasts of the exogenous variables are jointly normal, the distribution of the forecast error is non-normal, and thus traditional asymptotic normal theory does not apply. This paper presents an alternative bootstrap method. In contrast to the traditional predictor of the future value of the endogenous variable, which is known to be inconsistent, the bootstrap predictor converges weakly to the true value. Monte Carlo results show that the bootstrap prediction intervals can achieve approximately nominal coverage.
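A residual-bootstrap prediction interval for a simple regression can be sketched as below. This toy treats the future exogenous value as known; the paper's contribution is precisely to handle the case where it is uncertain, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated regression y = 1 + 2x + u, u ~ N(0,1); the true future mean
# at x_future = 0.5 is 2.
n, B = 200, 2000
x = rng.standard_normal(n)
y = 1 + 2 * x + rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

x_future = 0.5
preds = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)
    yb = X @ beta + resid[idx]                      # resample residuals
    beta_b = np.linalg.lstsq(X, yb, rcond=None)[0]  # re-estimate coefficients
    u_star = resid[rng.integers(0, n)]              # future disturbance draw
    preds[b] = beta_b[0] + beta_b[1] * x_future + u_star

lo, hi = np.quantile(preds, [0.025, 0.975])  # 95% bootstrap prediction interval
print(lo, hi)
```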
11.
We analyse the forecasting attributes of trend- and difference-stationary representations of the U.S. macroeconomic time series studied by Nelson and Plosser (1982). Predictive densities based on models estimated for these series (which terminate in 1970) are compared with subsequent realizations compiled by Schotman and van Dijk (1991), which terminate in 1988. Predictive densities obtained using the extended series are also derived to assess the impact of the subsequent realizations on long-range forecasts. Of particular interest are comparisons of the coverage intervals of predictive densities corresponding to the competing specifications. In general, we find that coverage intervals based on difference-stationary specifications are far wider than those based on trend-stationary specifications for the real series, and slightly wider for the nominal series. This additional width is often a virtue in forecasting nominal series over the 1971-1988 period, as the inflation experienced during this time was unprecedented in the 1900s. However, the evolution of the real series has been relatively stable in the 1900s; hence the uncertainty associated with difference-stationary specifications generally seems excessive for these data.
12.
This paper uses Markov switching models to capture volatility dynamics in exchange rates and to evaluate their forecasting ability. We identify that increased volatilities in four euro-based exchange rates are due to underlying structural changes. Also, we find that the currencies are closely related to each other, especially in high-volatility periods, where cross-correlations increase significantly. Using a Markov switching Monte Carlo approach we provide evidence in favour of Markov switching models, rejecting the random walk hypothesis. Testing in-sample and out-of-sample Markov trading rules based on Dueker and Neely (Journal of Banking and Finance, 2007), we find that the econometric methodology is able to forecast exchange rate movements accurately. When applied to euro/US dollar and euro/British pound daily returns data, the model provides exceptional out-of-sample returns. However, when applied to the euro/Brazilian real and the euro/Mexican peso, the model loses power. The higher volatility exhibited by the Latin American currencies seems to be a critical factor in this failure. Copyright © 2009 John Wiley & Sons, Ltd.
13.
We consider a Bayesian model averaging approach for the purpose of forecasting Swedish consumer price index inflation using a large set of potential indicators, comprising some 80 quarterly time series covering a wide spectrum of Swedish economic activity. The paper demonstrates how to efficiently and systematically evaluate (almost) all possible models that these indicators in combination can give rise to. The results, in terms of out-of-sample performance, suggest that Bayesian model averaging is a useful alternative to other forecasting procedures, in particular recognizing the flexibility by which new information can be incorporated. Copyright © 2004 John Wiley & Sons, Ltd.
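The mechanics of averaging over all indicator subsets can be illustrated with the common BIC approximation to posterior model probabilities. This is a simplification with made-up data and only four candidate indicators, not the paper's 80-series implementation.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(5)

# Only indicators 0 and 1 actually drive y; 2 and 3 are noise.
n, k = 200, 4
X = rng.standard_normal((n, k))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + rng.standard_normal(n)

models, bics = [], []
for size in range(k + 1):
    for subset in combinations(range(k), size):  # all 2^k indicator subsets
        Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        rss = np.sum((y - Z @ beta) ** 2)
        bic = n * np.log(rss / n) + Z.shape[1] * np.log(n)
        models.append(subset)
        bics.append(bic)

# Exponentiated, rescaled BICs approximate posterior model probabilities;
# forecasts would then be weighted by these across all models.
bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()
best = models[int(np.argmax(weights))]
print(best, round(float(weights.max()), 3))
```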
14.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log-likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy-tailed densities such as the t or the mixture of normals. For this class of non-Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.
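For monthly data the Airline model is (1 - B)(1 - B^12) y_t = (1 - θB)(1 - ΘB^12) ε_t, so the doubly differenced series is a multiplicative MA whose autocorrelations vanish except at lags 1, 11, 12 and 13. A quick simulation check, with assumed coefficient values:

```python
import numpy as np

rng = np.random.default_rng(9)

# Doubly differenced Airline series:
#   w_t = eps_t - theta*eps_{t-1} - Theta*eps_{t-12} + theta*Theta*eps_{t-13}
theta, Theta, n = 0.4, 0.6, 50_000
eps = rng.standard_normal(n + 13)
w = eps[13:] - theta * eps[12:-1] - Theta * eps[1:-12] + theta * Theta * eps[:-13]

def acf(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

# Theory: rho_1 = -theta/(1+theta^2) ~ -0.345,
#         rho_12 = -Theta/(1+Theta^2) ~ -0.441,
#         rho_11 = rho_13 = theta*Theta/((1+theta^2)(1+Theta^2)) ~ 0.152,
#         all other lags ~ 0.
a1, a2, a11, a12, a13 = (acf(w, L) for L in (1, 2, 11, 12, 13))
print(round(a1, 3), round(a2, 3), round(a11, 3), round(a12, 3), round(a13, 3))
```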
15.
This paper investigates inference and volatility forecasting using a Markov switching heteroscedastic model with a fat-tailed error distribution to analyze asymmetric effects on both the conditional mean and conditional volatility of financial time series. The motivation for extending the Markov switching GARCH model, previously developed to capture mean asymmetry, is that the switching variable, assumed to be a first-order Markov process, is unobserved. The proposed model extends this work to incorporate Markov switching in the mean and variance simultaneously. Parameter estimation and inference are performed in a Bayesian framework via a Markov chain Monte Carlo scheme. We compare competing models using Bayesian forecasting in a comparative value-at-risk study. The proposed methods are illustrated using both simulations and eight international stock market return series. The results generally favor the proposed double Markov switching GARCH model with an exogenous variable. Copyright © 2008 John Wiley & Sons, Ltd.
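The unobserved first-order Markov switching variable is easy to simulate directly. The sketch below draws returns whose mean and volatility both switch with a two-state chain, mirroring the simultaneous mean/variance switching the paper models; all parameter values are illustrative assumptions, and the GARCH layer is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two-state first-order Markov chain for the unobserved regime.
P = np.array([[0.98, 0.02],    # state 0 (calm): stays calm with prob 0.98
              [0.05, 0.95]])   # state 1 (turbulent)
mu = np.array([0.05, -0.10])   # regime-dependent mean (mean asymmetry)
sigma = np.array([0.5, 2.0])   # regime-dependent volatility (variance asymmetry)

n = 50_000
u = rng.random(n)
states = np.empty(n, dtype=int)
states[0] = 0
for t in range(1, n):
    states[t] = int(u[t] < P[states[t - 1], 1])  # move to state 1 with prob P[s, 1]

returns = mu[states] + sigma[states] * rng.standard_normal(n)

# Long-run state frequencies should match the chain's stationary
# distribution, pi = (5/7, 2/7) for this P.
freq1 = np.mean(states == 1)
vol1 = returns[states == 1].std()
print(freq1, vol1)
```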
16.
Hui Feng, Journal of Forecasting, 2009, 28(3): 183-193
In this paper we investigate the impact of data revisions on forecasting and model selection procedures. A linear ARMA model and nonlinear SETAR model are considered in this study. Two Canadian macroeconomic time series have been analyzed: the real-time monetary aggregate M3 (1977–2000) and residential mortgage credit (1975–1998). The forecasting method we use is multi-step-ahead non-adaptive forecasting. Copyright © 2008 John Wiley & Sons, Ltd.
17.
Yushu Li, Journal of Forecasting, 2014, 33(4): 259-269
This paper concentrates on comparing the estimation and forecasting ability of quasi-maximum likelihood (QML) and support vector machines (SVM) for financial data. The financial series are fitted into a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew-t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution shows departure from normality, so the current paper utilizes the semi-parametric-based SVM method and shows that it is more efficient than QML under the skewed Student's-t distributed error. As the SVM is a kernel-based technique, we further investigate its performance by applying separately a Gaussian kernel and a wavelet kernel. The results suggest that the SVM-based method generally performs better than QML for both in-sample and out-of-sample data. The outcomes also highlight the fact that the wavelet kernel outperforms the Gaussian kernel with lower forecasting error, better generalization capability and more computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.
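The APARCH recursion at the heart of the comparison can be simulated directly. For brevity this sketch uses Gaussian innovations rather than the skew-t the paper assumes, and the coefficients are illustrative values chosen to keep the process stationary.

```python
import numpy as np

rng = np.random.default_rng(7)

# APARCH(1,1) volatility recursion (Ding, Granger and Engle):
#   sigma_t^delta = omega + alpha*(|e_{t-1}| - gamma*e_{t-1})**delta
#                   + beta*sigma_{t-1}^delta
# gamma > 0 makes negative shocks raise volatility more than positive
# ones (the leverage effect); delta = 2, gamma = 0 recovers GARCH(1,1).
omega, alpha, gamma, beta, delta = 0.05, 0.08, 0.5, 0.9, 1.5
n = 20_000
e = np.zeros(n)
sig_d = np.empty(n)                 # sigma_t^delta
sig_d[0] = omega / (1 - alpha - beta)  # rough initialization
for t in range(1, n):
    sig_d[t] = (omega
                + alpha * (abs(e[t - 1]) - gamma * e[t - 1]) ** delta
                + beta * sig_d[t - 1])
    e[t] = sig_d[t] ** (1 / delta) * rng.standard_normal()

# Leverage check: yesterday's return should correlate negatively with
# today's sigma^delta.
lev = np.corrcoef(e[:-1], sig_d[1:])[0, 1]
print(lev)
```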
18.
Pilar González, Journal of Forecasting, 1992, 11(4): 271-281
Given a structural time-series model specified at a basic time interval, this paper deals with the problems of forecasting efficiency and estimation accuracy generated when the data are collected at a timing interval which is a multiple of the time unit chosen to build the basic model. Results are presented for the simplest structural models, the trend plus error models, under the assumption that the parameters of the model are known. It is shown that the gains in forecasting efficiency and estimation accuracy for having data at finer intervals are considerable for both stock and flow variables with only one exception. No gain in forecasting efficiency is achieved in the case of a stock series that follows a random walk.
19.
The forecasting capabilities of feed-forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non-Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one-step-ahead forecasts. Moreover, the residual-fit spread plot is utilized in a novel fashion in this paper to compare visually out-of-sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
20.
Harald Hruschka, Journal of Forecasting, 2013, 32(5): 423-434
Focusing on the interdependence of product categories, we analyze multicategory buying decisions of households by a finite mixture of multivariate Tobit-2 models with two response variables: purchase incidence and expenditure. Mixture components can be interpreted as household segments. Correlations for purchases of different categories turn out to be much more important than correlations among expenditures, as well as correlations among purchases and expenditures of different categories. About 18% of all pairwise purchase correlations are significant. We compare the best-performing large-scale model with 28 categories to four small-scale models, each with seven categories. In our empirical study the large-scale model clearly attains a better forecasting performance. The small-scale models provide several biased correlations and miss about 50% of the significant correlations which the large-scale model detects. Copyright © 2013 John Wiley & Sons, Ltd.