Similar Documents
 20 similar documents found; search time: 15 ms
1.
Derivation of prediction intervals in the k-variable regression model is problematic when future-period values of exogenous variables are not known with certainty. Even in the most favourable case when the forecasts of the exogenous variables are jointly normal, the distribution of the forecast error is non-normal, and thus traditional asymptotic normal theory does not apply. This paper presents an alternative bootstrap method. In contrast to the traditional predictor of the future value of the endogenous variable, which is known to be inconsistent, the bootstrap predictor converges weakly to the true value. Monte Carlo results show that the bootstrap prediction intervals can achieve approximately nominal coverage.
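The flavour of such a procedure can be sketched as follows: a residual-bootstrap prediction interval for a simple regression in which the future regressor is drawn from an assumed normal forecast distribution. The model, sample sizes and distributions below are invented for illustration and are not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: y = 1 + 2x + e (a stand-in for the k-variable model).
n = 200
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# The future exogenous value is itself uncertain: assume a normal forecast.
x_future_mean, x_future_sd = 0.5, 0.2

B = 2000
boot_preds = np.empty(B)
for b in range(B):
    # Resample residuals, refit the model, and draw an uncertain future regressor.
    idx = rng.integers(0, n, size=n)
    y_b = X @ beta + resid[idx]
    beta_b = np.linalg.lstsq(X, y_b, rcond=None)[0]
    x_f = rng.normal(x_future_mean, x_future_sd)
    # Add a resampled residual for the future disturbance.
    boot_preds[b] = beta_b[0] + beta_b[1] * x_f + rng.choice(resid)

lo, hi = np.percentile(boot_preds, [2.5, 97.5])
print(f"95% bootstrap prediction interval: [{lo:.2f}, {hi:.2f}]")
```

The percentile interval reflects parameter uncertainty, disturbance uncertainty and regressor-forecast uncertainty jointly, which is what makes the forecast error non-normal even in this Gaussian setup.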

2.
The aim of the paper is to examine the performance of bootstrap and asymptotic parametric inference methods in structural VAR analysis. The results obtained through a Monte Carlo experiment suggest that the two approaches are largely equivalent in most, but not all, cases. While the asymptotic method turns out to be surprisingly robust with respect to the distribution of the errors, the bootstrap does deliver results superior in terms of both length of the confidence interval and coverage when highly non-linear statistics (such as the components of the variance of the forecast error) are considered.

3.
This paper uses Markov switching models to capture volatility dynamics in exchange rates and to evaluate their forecasting ability. We identify that increased volatilities in four euro-based exchange rates are due to underlying structural changes. We also find that the currencies are closely related to each other, especially in high-volatility periods, where cross-correlations increase significantly. Using a Markov switching Monte Carlo approach, we provide evidence in favour of Markov switching models, rejecting the random walk hypothesis. Testing in-sample and out-of-sample Markov trading rules based on Dueker and Neely (Journal of Banking and Finance, 2007), we find that the econometric methodology is able to forecast exchange rate movements accurately. When applied to euro/US dollar and euro/British pound daily returns data, the model provides exceptional out-of-sample returns. However, when applied to the euro/Brazilian real and the euro/Mexican peso, the model loses power. The higher volatility exhibited by the Latin American currencies appears to be a critical factor in this failure. Copyright © 2009 John Wiley & Sons, Ltd.

4.
Many applications in science involve estimating unobserved variables from observed data by combining model predictions with observations. Sequential Monte Carlo (SMC) is a well-established technique for estimating the distribution of unobserved variables conditional on current observations. While SMC is very successful at estimating the first central moments, estimating the extreme quantiles of a distribution with current SMC methods is computationally very expensive. The purpose of this paper is to develop a new framework using probability distortion. We use an SMC with distorted weights to make computationally efficient inferences about tail probabilities of future interest rates under the Cox–Ingersoll–Ross (CIR) model, as well as with an observed yield curve. We show that the proposed method yields acceptable estimates of tail quantiles at a fraction of the computational cost of the full Monte Carlo.

5.
At what forecast horizon is one time series more predictable than another? This paper applies the Diebold–Kilian conditional predictability measure to assess the out-of-sample performance of three alternative models of daily GBP/USD and DEM/USD exchange rate returns. Predictability is defined as a non-linear statistic of a model's relative expected losses at short and long forecast horizons, allowing flexible choice of both the estimation procedure and the loss function. The long horizon is set to two weeks and one month ahead, and forecasts are evaluated under MSE loss. Bootstrap methodology is used to estimate the data's conditional predictability using GARCH models. This is then compared to predictability under a random walk and under a model using the prediction bias in uncovered interest parity (UIP). We find that both exchange rates are less predictable using GARCH than using a random walk, but more predictable using UIP than a random walk. Predictability using GARCH is relatively higher for the two-week than for the one-month forecast horizon. Comparing the results for the random walk with those for UIP reveals 'pockets' of predictability, that is, particular short horizons for which predictability using the random walk exceeds that using UIP, or vice versa. Overall, GBP/USD returns appear more predictable than DEM/USD returns at short horizons. Copyright © 2002 John Wiley & Sons, Ltd.

6.
Building on recent and growing evidence that geographic location influences information diffusion, this paper examines the relation between a firm's location and the predictability of its stock returns. We hypothesize that returns on a portfolio of firms located in central areas are more likely to follow a random walk than returns on a portfolio of firms located in remote areas. Using a battery of variance ratio tests, we find strong and robust support for this prediction. In particular, we show that the returns on a portfolio of the 500 largest urban firms follow a random walk, whereas all variance ratio tests reject the random walk hypothesis for a portfolio of the 500 largest rural firms. Our results are robust to alternative definitions of firm location and portfolio formation.
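A minimal, bias-uncorrected version of the variance ratio statistic underlying such tests can be sketched as follows; the simulated series are stand-ins, not the paper's portfolios. Under a random walk the statistic is close to 1, while positive return autocorrelation pushes it above 1.

```python
import numpy as np

def variance_ratio(returns, q):
    """Lo-MacKinlay-style variance ratio VR(q): the variance of overlapping
    q-period returns divided by q times the one-period variance; ~1 under
    a random walk (simple version, without the finite-sample corrections)."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / (n - 1)
    # Overlapping q-period sums (length n - q + 1)
    rq = np.convolve(r, np.ones(q), mode="valid")
    varq = np.sum((rq - q * mu) ** 2) / (q * (n - q + 1))
    return varq / var1

rng = np.random.default_rng(1)
rw_returns = rng.normal(size=5000)       # i.i.d. increments: random walk
ar_returns = np.empty(5000)              # positively autocorrelated returns
ar_returns[0] = rng.normal()
for t in range(1, 5000):
    ar_returns[t] = 0.4 * ar_returns[t - 1] + rng.normal()

print(variance_ratio(rw_returns, 5))  # close to 1
print(variance_ratio(ar_returns, 5))  # well above 1
```

Published applications typically use the heteroskedasticity-robust standardized statistic; the raw ratio above is only the core ingredient.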

7.
This study examines the problem of forecasting an aggregate of cointegrated disaggregates. It first establishes conditions under which forecasts of an aggregate variable obtained from a disaggregate VECM will be equal to those from an aggregate, univariate time series model, and develops a simple procedure for testing those conditions. The paper then uses Monte Carlo simulations to show, for a finite sample, that the proposed test has good size and power properties and that whether a model satisfies the aggregation conditions is closely related to out-of-sample forecast performance. The paper then shows that ignoring cointegration and specifying the disaggregate model as a VAR in differences can significantly affect analyses of aggregation, with the VAR-based test for aggregation possibly leading to faulty inference and the differenced VAR forecasts potentially understating the benefits of disaggregate information. Finally, analysis of an empirical problem confirms the basic results. Copyright © 2000 John Wiley & Sons, Ltd.

8.
The aim of this paper is to compare the forecasting performance of competing threshold models designed to capture asymmetric effects in volatility. We focus on the relative out-of-sample forecasting ability of the SETAR-Threshold GARCH (SETAR-TGARCH) and SETAR-Threshold Stochastic Volatility (SETAR-THSV) models compared to the GARCH and stochastic volatility (SV) models. The main problem in evaluating the predictive ability of volatility models is that the 'true' underlying volatility process is not observable, so a proxy must be defined for it. For the class of nonlinear state space models (SETAR-THSV and SV), a modified version of the SIR algorithm is used to estimate the unknown parameters. The forecasting performance of the competing models is compared on two return series: IBEX 35 and S&P 500. We explore whether increasing the complexity of a model improves its forecasting ability. Copyright © 2007 John Wiley & Sons, Ltd.

9.
We shall first review some non-normal stationary first-order autoregressive models. The models are constructed with a given marginal distribution (logistic, hyperbolic secant, exponential, Laplace, or gamma) and the requirement that the bivariate joint distribution of the generated process must be sufficiently simple so that the parameter estimation and forecasting problems of the models can be addressed. A model-building approach that consists of model identification, estimation, diagnostic checking, and forecasting is then discussed for this class of models.
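One well-known member of this class is the exponential AR(1) (EAR(1)) model of Gaver and Lewis, in which an innovation that is zero with probability rho and exponential otherwise preserves an exactly exponential marginal distribution. A simulation sketch, with parameters chosen purely for illustration:

```python
import numpy as np

def simulate_ear1(n, rho, rate=1.0, seed=0):
    """Simulate the EAR(1) process: X_t = rho * X_{t-1} + eps_t, where
    eps_t = 0 with probability rho and Exp(rate) otherwise. This keeps
    the Exp(rate) marginal invariant, and the lag-1 autocorrelation is rho."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.exponential(1.0 / rate)
    for t in range(1, n):
        eps = 0.0 if rng.random() < rho else rng.exponential(1.0 / rate)
        x[t] = rho * x[t - 1] + eps
    return x

x = simulate_ear1(100_000, rho=0.6)
print(x.mean())                           # ~1: the exponential marginal mean
print(np.corrcoef(x[:-1], x[1:])[0, 1])   # ~0.6: lag-1 autocorrelation = rho
```

The invariance is easy to verify with Laplace transforms: the transform of rho*X plus the mixed innovation reproduces 1/(1 + s/rate).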

10.
We consider a Bayesian model averaging approach for forecasting Swedish consumer price index inflation using a large set of potential indicators, comprising some 80 quarterly time series covering a wide spectrum of Swedish economic activity. The paper demonstrates how to efficiently and systematically evaluate (almost) all possible models that these indicators can give rise to in combination. The results, in terms of out-of-sample performance, suggest that Bayesian model averaging is a useful alternative to other forecasting procedures, in particular given the flexibility with which new information can be incorporated. Copyright © 2004 John Wiley & Sons, Ltd.

11.
We analyse the forecasting attributes of trend- and difference-stationary representations of the U.S. macroeconomic time series studied by Nelson and Plosser (1982). Predictive densities based on models estimated for these series (which terminate in 1970) are compared with subsequent realizations compiled by Schotman and van Dijk (1991), which terminate in 1988. Predictive densities obtained using the extended series are also derived to assess the impact of the subsequent realizations on long-range forecasts. Of particular interest are comparisons of the average intervals of predictive densities corresponding to the competing specifications. In general, we find that coverage intervals based on difference-stationary specifications are far wider than those based on trend-stationary specifications for the real series, and slightly wider for the nominal series. This additional width is often a virtue in forecasting nominal series over the 1971–1988 period, as the inflation experienced during this time was unprecedented in the 1900s. However, the evolution of the real series has been relatively stable in the 1900s, hence the uncertainty associated with difference-stationary specifications generally seems excessive for these data.

12.
Total citations: 1 (self-citations: 0; citations by others: 1)

13.
This paper compares the estimation and forecasting ability of quasi-maximum likelihood (QML) and support vector machines (SVM) for financial data. The financial series are fitted by a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew-t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so the current paper utilizes the semi-parametric SVM method and shows that it is more efficient than QML under skewed Student's-t distributed errors. As the SVM is a kernel-based technique, we further investigate its performance by applying a Gaussian kernel and a wavelet kernel separately. The results suggest that the SVM-based method generally performs better than QML for both in-sample and out-of-sample data. The outcomes also highlight that the wavelet kernel outperforms the Gaussian kernel, with lower forecasting error, better generalization capability and greater computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.

14.
Singular spectrum analysis (SSA) is a powerful nonparametric method for time series analysis that has shown its capability in different application areas. SSA depends on two main choices: the window length L and the number of eigentriples used for grouping, r. One of the most important issues when analyzing time series is the forecasting of new observations. When using SSA for time series forecasting there are several alternative algorithms, the most widely used being the recurrent forecasting model, which assumes that a given observation can be written as a linear combination of the L−1 previous observations. However, when the window length L is large, the forecasting model is unlikely to be parsimonious. In this paper we propose a new parsimonious recurrent forecasting model that uses an optimal m (< L−1) coefficients in the linear combination of the recurrent SSA. Our results support the use of this new parsimonious recurrent forecasting model in place of the standard recurrent SSA forecasting model.
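The standard recurrent SSA forecast that the paper seeks to make more parsimonious can be sketched as follows: embed the series in a trajectory matrix, keep r eigentriples, reconstruct by diagonal averaging, and extrapolate with the L−1 linear recurrence coefficients derived from the leading left singular vectors. The noisy sine and the function name are ours, for illustration only.

```python
import numpy as np

def ssa_recurrent_forecast(series, L, r, steps=1):
    """Basic recurrent SSA forecasting (illustrative sketch)."""
    y = np.asarray(series, dtype=float)
    N = len(y)
    K = N - L + 1
    # Trajectory (Hankel) matrix, L x K
    X = np.column_stack([y[i:i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Ur = U[:, :r]
    # Rank-r reconstruction, then diagonal averaging back to a series
    Xr = Ur @ np.diag(s[:r]) @ Vt[:r]
    rec = np.array([np.mean(Xr[::-1].diagonal(i)) for i in range(-L + 1, K)])
    # Linear recurrence coefficients from the last components of Ur
    pi = Ur[-1, :]
    nu2 = np.sum(pi ** 2)
    R = (Ur[:-1, :] @ pi) / (1.0 - nu2)   # L-1 coefficients, chronological order
    out = list(rec)
    for _ in range(steps):
        out.append(np.dot(R, out[-(L - 1):]))
    return np.array(out[-steps:])

# Noisy sine: a rank-2 signal that r = 2 should extrapolate well.
t = np.arange(200)
y = np.sin(2 * np.pi * t / 20) + 0.05 * np.random.default_rng(2).normal(size=200)
fc = ssa_recurrent_forecast(y, L=40, r=2, steps=5)
print(fc)
```

The paper's point is visible here: all L−1 = 39 coefficients enter the recurrence even though the signal needs only a handful, which is exactly the lack of parsimony the proposed model addresses.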

15.
To improve the combat effectiveness of anti-ship missiles, their flight trajectories must be planned rationally. Based on a careful analysis of the trajectory characteristics and guidance methods of anti-ship missiles, six-degree-of-freedom closed-loop mathematical models of the programmed trajectory, the guided trajectory and the missile were established. A three-dimensional visual simulation system for trajectory planning was then developed in the VC environment using the OpenGL graphics API. Validation against the trajectory of a particular anti-ship missile shows that the system meets the requirements well.

16.
We investigate the predictive performance of various classes of value-at-risk (VaR) models in several dimensions: unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. Using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of empirical coverage probability and predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997–1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)-based models do better in the crisis period. Filtering often appears to be useful for some models, particularly the EVT models, though it can be harmful for others. The CaViaR quantile regression models of Engle and Manganelli (2004) show some success in predicting the VaR risk measure for various periods, and are generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies across the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
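The empirical coverage probability criterion can be illustrated with a minimal check: count the fraction of days on which the realized loss exceeds the VaR forecast and compare it to the nominal level. The simulated returns and the constant-volatility normal VaR below are stand-ins, not the paper's models or data.

```python
import numpy as np

def empirical_coverage(returns, var_forecasts):
    """Fraction of days on which the loss exceeded the VaR forecast;
    for a well-calibrated model at level alpha it should be close to alpha."""
    violations = np.asarray(returns) < -np.asarray(var_forecasts)
    return violations.mean()

rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.01, size=2000)       # stand-in daily returns
sigma = 0.01
var_95 = np.full(2000, 1.645 * sigma)      # 95% normal VaR, constant volatility
cov = empirical_coverage(r, var_95)
print(cov)                                  # should be near 0.05
```

A formal evaluation would add an unconditional coverage test (e.g. a likelihood-ratio test on the violation count) and the quantile loss, as in the paper's two criteria.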

17.
In the presence of fallible data, standard estimation and forecasting techniques are biased and inconsistent. Surprisingly, the magnitude of this bias tends to increase, not diminish, in time series applications as more observations become available. A solution to this ever-present problem, Stein-rule least squares (SRLS), is offered. It corrects for the bias and inconsistency of traditional estimators and provides a means of significantly improving the predictive accuracy of regression-based forecasting techniques. A Monte Carlo study of the forecasting accuracy of SRLS, compared to its alternatives, reveals its practical significance and small-sample behaviour.

18.
We aim to assess the ability of two alternative forecasting procedures to predict quarterly national account (QNA) aggregates. The application of Box–Jenkins techniques to observed data constitutes the basis of traditional ARIMA and transfer function methods (BJ methods). The alternative procedure exploits the information in unobserved high- and low-frequency components of time series (UC methods). An informal examination of empirical evidence suggests that the relationships between QNA aggregates and coincident indicators often differ clearly across frequencies. Under these circumstances, a Monte Carlo experiment shows that UC methods significantly improve the forecasting accuracy of BJ procedures if coincident indicators play an important role in the predictions. Otherwise (i.e., under univariate procedures), BJ methods tend to be more accurate than the UC alternative, although the differences are small. We illustrate these findings with several applications from the Spanish economy concerning industrial production, private consumption, business investment and exports. Copyright © 2007 John Wiley & Sons, Ltd.

19.
Bootstrapping time series models is not straightforward, as the observations are not independent. One alternative is to bootstrap the residuals in order to obtain bootstrap series and use these for inference. This work deals with assessing the accuracy of hyperparameter estimates in structural models. We study the simplest case, the local level model, where the hyperparameters are the variances of the disturbance terms. As their distribution is not known, we employ the bootstrap to approximate the true distribution, using parametric and non-parametric approaches. Bootstrap standard deviations are computed and their performance compared to asymptotic and empirical standard errors calculated using a Monte Carlo simulation. We also build confidence intervals for the hyperparameters using four bootstrap methods, and compare the results by means of the length, shape and coverage probabilities of the intervals. Copyright © 2002 John Wiley & Sons, Ltd.
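A parametric-bootstrap sketch for this setting, under assumed Gaussian disturbances: fit the two variances by Kalman-filter maximum likelihood, simulate series from the fitted local level model, re-estimate on each, and take the spread of the estimates as the bootstrap standard deviation. Function names, the optimizer choice and all settings below are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import minimize

def llm_negloglik(params, y):
    """Negative Gaussian log-likelihood of the local level model via the
    Kalman filter; params are log-variances of the observation (eps) and
    level (eta) disturbances, conditioning on the first observation."""
    var_eps, var_eta = np.exp(params)
    a, p = y[0], var_eps + var_eta
    ll = 0.0
    for t in range(1, len(y)):
        f = p + var_eps                  # prediction-error variance
        v = y[t] - a                     # one-step prediction error
        ll += -0.5 * (np.log(2 * np.pi * f) + v * v / f)
        k = p / f                        # Kalman gain
        a = a + k * v
        p = p * (1.0 - k) + var_eta
    return -ll

def fit_llm(y):
    res = minimize(llm_negloglik, x0=np.log([np.var(y) / 2, np.var(y) / 2]),
                   args=(y,), method="Nelder-Mead")
    return np.exp(res.x)

def parametric_bootstrap_sd(y, B=100, seed=0):
    """Simulate B series from the fitted model, re-estimate each,
    and return the standard deviation of the hyperparameter estimates."""
    rng = np.random.default_rng(seed)
    var_eps, var_eta = fit_llm(y)
    est = np.empty((B, 2))
    for b in range(B):
        mu = np.cumsum(rng.normal(0, np.sqrt(var_eta), size=len(y)))
        yb = mu + rng.normal(0, np.sqrt(var_eps), size=len(y))
        est[b] = fit_llm(yb)
    return est.std(axis=0)

rng = np.random.default_rng(4)
mu = np.cumsum(rng.normal(0, 0.5, size=300))   # true level: random walk
y = mu + rng.normal(0, 1.0, size=300)
bsd = parametric_bootstrap_sd(y, B=50)
print(bsd)
```

A non-parametric variant would instead resample the standardized one-step prediction errors when generating the bootstrap series.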

20.
This paper investigates the effects of imposing invalid cointegration restrictions, or ignoring valid ones, on the estimation, testing and forecasting properties of the bivariate, first-order, vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λmax, lies in the neighborhood of unity, while the other root, λmin, is safely smaller than unity. In this context, we define the 'forecast cost of type I' to be the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reason, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases can arise when λmax equals unity and λmin is less than but near to unity. The effects of this type of misspecification on forecasting will be referred to as the 'forecast cost of type II'. By means of Monte Carlo simulations, we measure both types of forecast cost in actual situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples. On the other hand, the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd.
