Similar documents
20 similar documents found (search time: 15 ms)
1.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.
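The long‐memory forecasts being compared can be illustrated with a minimal sketch: an ARFIMA(0, d, 0) process has the AR(∞) form x_t = −Σ_{k≥1} π_k x_{t−k} + ε_t, with binomial weights π_k from the expansion of (1 − B)^d, and multi‐step forecasts follow by recursive substitution. The function names and the truncation at the observed sample are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients pi_k of (1 - B)^d via the recursion pi_k = pi_{k-1} * (k - 1 - d) / k."""
    pi = np.empty(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[k - 1] * (k - 1 - d) / k
    return pi

def arfima0d0_forecast(x, d, steps):
    """L-step forecasts from a pure fractional-noise ARFIMA(0, d, 0) model,
    using the truncated AR(inf) form x_t = -sum_{k>=1} pi_k x_{t-k} + eps_t."""
    hist = list(x)
    pi = frac_diff_weights(d, len(hist) + steps)
    out = []
    for _ in range(steps):
        # forecast = minus the weighted sum of past (and previously forecast) values
        f = -sum(pi[k] * hist[-k] for k in range(1, len(hist) + 1))
        out.append(f)
        hist.append(f)
    return np.array(out)
```

The hyperbolically decaying weights are what distinguish this forecast from the geometrically decaying weights implied by an ARMA(1,1) approximation.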

2.
We consider one parametric and five semiparametric approaches to estimate D in SARFIMA(0, D, 0)s processes, that is, when the process is a fractionally integrated ARMA model with seasonality s. We also consider h‐step‐ahead forecasting for these processes. We prove some properties of this model and present a study based on Monte Carlo simulation for different sample sizes and different seasonal periods. We compare the different estimation procedures by analysing the bias, the mean squared error values, and the confidence intervals for the estimators. We also consider three different methods to choose the total number of regressors in the regression analysis for the semiparametric class of estimation procedures. We apply the methodology to the monthly Nile River flow data, and also to a simulated seasonal fractionally integrated time series. Copyright © 2007 John Wiley & Sons, Ltd.

3.
We develop an ordinary least squares estimator of the long‐memory parameter from a fractionally integrated process that is an alternative to the Geweke and Porter‐Hudak (1983) estimator. Using the wavelet transform of a fractionally integrated process, we establish a log‐linear relationship between the wavelet coefficients' variance and the scaling parameter, with slope determined by the long‐memory parameter. This log‐linear relationship yields a consistent ordinary least squares estimator of the long‐memory parameter when the wavelet coefficients' population variance is replaced by their sample variance. We derive the small‐sample bias and variance of the ordinary least squares estimator and test it against the GPH estimator and the McCoy–Walden maximum likelihood wavelet estimator in a number of Monte Carlo experiments. Under the criterion of minimizing the mean squared error, the wavelet OLS approach was superior to the GPH estimator, but inferior to the McCoy–Walden wavelet estimator for the processes simulated. However, given the simplicity of programming and running the wavelet OLS estimator, and its straightforward statistical inference for the long‐memory parameter, we feel the general practitioner will be attracted to the wavelet OLS estimator. Copyright © 1999 John Wiley & Sons, Ltd.
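A minimal sketch of the wavelet OLS idea, assuming a Haar transform and the convention that the log2 variance of detail coefficients grows roughly linearly in the scale index, with slope about 2d for a stationary I(d) process; the wavelet family, bias corrections and regression details in the paper may differ.

```python
import numpy as np

def haar_details(x):
    """Haar DWT detail coefficients per level (later levels = coarser scales)."""
    x = np.asarray(x, float)
    n = 2 ** int(np.log2(len(x)))   # truncate to a power-of-two length
    a = x[:n]
    levels = []
    while len(a) >= 2:
        even, odd = a[0::2], a[1::2]
        levels.append((even - odd) / np.sqrt(2.0))  # detail coefficients
        a = (even + odd) / np.sqrt(2.0)             # approximation carried forward
    return levels

def wavelet_ols_d(x, min_len=8):
    """OLS regression of log2 wavelet-coefficient variance on the scale index;
    under the assumed convention the slope is roughly 2d, so d is slope / 2."""
    levels = [w for w in haar_details(x) if len(w) >= min_len]
    j = np.arange(1, len(levels) + 1, dtype=float)
    logvar = np.array([np.log2(np.mean(w ** 2)) for w in levels])
    slope, _ = np.polyfit(j, logvar, 1)
    return slope / 2.0
```

For white noise (d = 0) the per-level variances are flat, so the fitted slope — and hence the estimate of d — should be near zero.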

4.
Exploring the Granger‐causation relationship is an important and interesting topic in econometrics. Traditional models usually assume a short‐memory pattern for the relationship, but in practice other influence patterns are possible. Besides the short‐memory relationship, Chen (2006) demonstrates a long‐memory relationship, providing a useful estimation approach in which the time series are not necessarily fractionally co‐integrated. In that paper the two relationships (short memory and long memory) are treated as cases in which the influence flow decays according to geometric, cut‐off, or harmonic sequences. However, that model is limited to stationary relationships. This paper extends the influence flow to the non‐stationary case, where the restriction is −0.5 ≤ d ≤ 1.0, and the model can be used to detect whether the influence decays away (−0.5 ≤ d < 0.5) or is permanent (0.5 ≤ d ≤ 1.0). Copyright © 2008 John Wiley & Sons, Ltd.

5.
Several studies have tested for long‐range dependence in macroeconomic and financial time series but very few have assessed the usefulness of long‐memory models as forecast‐generating mechanisms. This study tests for fractional differencing in the US monetary indices (simple sum and divisia) and compares the out‐of‐sample fractional forecasts to benchmark forecasts. The long‐memory parameter is estimated using Robinson's Gaussian semi‐parametric and multivariate log‐periodogram methods. The evidence amply suggests that the monetary series possess a fractional order between one and two. Fractional out‐of‐sample forecasts are consistently more accurate (with the exception of the M3 series) than benchmark autoregressive forecasts but the forecasting gains are not generally statistically significant. In terms of forecast encompassing, the fractional model encompasses the autoregressive model for the divisia series but neither model encompasses the other for the simple sum series. Copyright © 2006 John Wiley & Sons, Ltd.
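Robinson's semiparametric estimators are not reproduced here, but a sketch of the classic GPH log‐periodogram regression — a simpler member of the same frequency‐domain family, mentioned elsewhere in this list — illustrates the idea; the bandwidth choice m = n^0.5 is an assumption, not the paper's setting.

```python
import numpy as np

def gph_estimate(x, power=0.5):
    """Geweke-Porter-Hudak log-periodogram estimate of the memory parameter d:
    regress log I(lambda_j) on -log(4 sin^2(lambda_j / 2)) over the first
    m = n**power Fourier frequencies; the fitted slope estimates d."""
    x = np.asarray(x, float)
    n = len(x)
    m = int(n ** power)
    lam = 2.0 * np.pi * np.arange(1, m + 1) / n
    # periodogram at the first m nonzero Fourier frequencies
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2.0 * np.pi * n)
    reg = -np.log(4.0 * np.sin(lam / 2.0) ** 2)
    slope, _ = np.polyfit(reg, np.log(I), 1)
    return slope
```

Applied to the differenced monetary series, an estimate of d between 0 and 1 would correspond to the fractional order between one and two reported for the levels.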

6.
The issues of non‐stationarity and long memory of real interest rates are examined here. Autoregressive models allowing short‐term mean reversion are compared with fractional integration models in terms of their ability to explain the behaviour of the data and to forecast out‐of‐sample. The data used are weekly observations of 3‐month Eurodeposit rates for 10 countries, adjusted for inflation, over 14 years. Following Brenner, Harjes and Kroner, the volatility of these rates is shown both to exhibit GARCH effects and to depend on the level of interest rates. Although relatively little support is found for the hypothesis of mean reversion, evidence of long memory in interest rate changes is found for seven countries. The out‐of‐sample forecasting performance of the fractionally integrated models at horizons up to a year ahead was significantly better than that of a no‐change forecast. Copyright © 2003 John Wiley & Sons, Ltd.

7.
In 1979 Efron proposed a new general statistical procedure known as 'Bootstrap', a computer-intensive method used when finite-sample theory is impossible or difficult to derive, or when only asymptotic theory is available. It is recommended in the estimation of measures of both location and scale for any statistical model without making any distributional assumptions about the data. This technique has been successfully used in various applied statistical problems, although not many applications have been reported in the area of time series. In this paper we present a new application of Bootstrap to time series. We consider a simulation study where artificial time series corresponding to AR(1), AR(2), MA(1), MA(2) and ARMA(1, 1) structures were generated, covering important regions of the parameter space of each one of them. The conventional Box-Jenkins parametric estimators of the parameters are compared with the corresponding non-parametric Bootstrap estimators, obtained by 500 Bootstrap repetitions for each series.

8.
This article studies Man and Tiao's (2006) low‐order autoregressive fractionally integrated moving‐average (ARFIMA) approximation to Tsai and Chan's (2005b) limiting aggregate structure of the long‐memory process. In matching the autocorrelations, we demonstrate that the approximation works well, especially for larger values of d. When computing autocorrelations over long lags for larger d, the exact formula may encounter numerical problems. The ARFIMA(0, d, d − 1) model provides a useful alternative for computing the autocorrelations as a very close approximation. In forecasting future aggregates, we demonstrate the close performance of the ARFIMA(0, d, d − 1) model and the exact aggregate structure. In practice, this justifies the use of a low‐order ARFIMA model in predicting future aggregates of a long‐memory process. Copyright © 2008 John Wiley & Sons, Ltd.
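One common way to sidestep numerical problems with exact autocorrelation formulas at long lags is to work with log‐gamma functions. A sketch for the stationary ARFIMA(0, d, 0) case (0 < d < 0.5), whose exact autocorrelations are ρ(k) = Γ(k + d)Γ(1 − d) / (Γ(k − d + 1)Γ(d)); this illustrates the numerical point, not the aggregate structure studied in the article.

```python
from math import lgamma, exp

def arfima_acf(d, max_lag):
    """Exact autocorrelations of ARFIMA(0, d, 0), 0 < d < 0.5:
    rho(k) = Gamma(k + d) Gamma(1 - d) / (Gamma(k - d + 1) Gamma(d)),
    evaluated via log-gamma so that long lags remain numerically stable."""
    c = lgamma(1.0 - d) - lgamma(d)
    return [1.0] + [exp(lgamma(k + d) - lgamma(k - d + 1.0) + c)
                    for k in range(1, max_lag + 1)]
```

At lag 1 the formula reduces to the familiar ρ(1) = d / (1 − d), which makes a convenient sanity check.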

9.
Long‐range persistence in volatility is widely modelled and forecast in terms of the so‐called fractional integrated models. These models are mostly applied in the univariate framework, since the extension to the multivariate context of assets portfolios, while relevant, is not straightforward. We discuss and apply a procedure which is able to forecast the multivariate volatility of a portfolio including assets with long memory. The main advantage of this model is that it is feasible enough to be applied on large‐scale portfolios, solving the problem of dealing with extremely complex likelihood functions which typically arises in this context. An application of this procedure to a portfolio of five daily exchange rate series shows that the out‐of‐sample forecasts for the multivariate volatility are improved under several loss functions when the long‐range dependence property of the portfolio assets is explicitly accounted for. Copyright © 2006 John Wiley & Sons, Ltd.

10.
We introduce a long‐memory dynamic Tobit model, defining it as a censored version of a fractionally integrated Gaussian ARMA model, which may include seasonal components and/or additional regression variables. Parameter estimation for such a model using standard techniques is typically infeasible, since the model is not Markovian, cannot be expressed in a finite‐dimensional state‐space form, and includes censored observations. Furthermore, the long‐memory property renders a standard Gibbs sampling scheme impractical. Therefore we introduce a new Markov chain Monte Carlo sampling scheme, which is orders of magnitude more efficient than the standard Gibbs sampler. The method is inherently capable of handling missing observations. In case studies, the model is fit to two time series: one consisting of volumes of requests to a hard disk over time, and the other consisting of hourly rainfall measurements in Edinburgh over a 2‐year period. The resulting posterior distributions for the fractional differencing parameter demonstrate, for these two time series, the importance of the long‐memory structure in the models. Copyright © 2006 John Wiley & Sons, Ltd.

11.
In this paper we extend the Baillie and Baltagi (1999) paper (Prediction from the regression model with one‐way error components. In Analysis of Panels and Limited Dependent Variables Models, Hsiao C, Lahiri K, Lee LF, Pesaran H (eds). Cambridge University Press, Cambridge, UK). In particular, we derive six predictors for the two‐way error components model, as well as their associated asymptotic mean squared error (AMSE) of multi‐step prediction. In addition, we provide both theoretical and simulation evidence on the relative efficiency of our six alternative predictors. The adequacy of the prediction AMSE formula is also investigated using Monte Carlo methods, which indicate that the ordinary optimal predictors perform well for various accuracy criteria. Copyright © 2010 John Wiley & Sons, Ltd.

12.
Financial data series are often described as exhibiting two non‐standard time series features. First, variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA–ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1‐month Euromarket interest rate during the period 1986–1989, the ARFIMA–ARCH(5, d, 2/4) model with non‐integer d is selected by AIC. Model‐based out‐of‐sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA–ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.

13.
In this paper we deal with the prediction theory of long-memory time series. The purpose is to derive a general theory of the convergence of moments of the nonlinear least squares estimator so as to evaluate the asymptotic prediction mean squared error (PMSE). The asymptotic PMSE of two predictors is evaluated. The first is defined by the estimator of the differencing parameter, while the second is defined by a fixed differencing parameter: in other words, a parametric predictor of the seasonal autoregressive integrated moving average model. The effects of misspecifying the differencing parameter in a long-memory model are clarified by the asymptotic results relating to the PMSE. The finite-sample behaviour of the predictors and model selection in terms of the PMSE of the two predictors are examined using simulation, and the source of any differences in behaviour is made clear in terms of asymptotic theory. Copyright © 2008 John Wiley & Sons, Ltd.

14.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent two opposing forecasting philosophies: the DSGE model has a strong theoretical economic background, whereas the factor model is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially at longer forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models in long‐horizon forecasting. Our paper compares state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.
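A diffusion‐index style sketch of the factor side of the comparison: extract principal‐component factors from a standardized panel and regress the one‐step‐ahead target on them. The SVD extraction and single‐lag regression are simplifying assumptions; the paper's factor model may be richer.

```python
import numpy as np

def factor_forecast(X, y, n_factors=2):
    """Diffusion-index forecast sketch: principal-component factors are
    extracted from the standardized panel X (T x N), and y_{t+1} is
    regressed on the period-t factors; returns the forecast of y_{T+1}."""
    Z = (X - X.mean(0)) / X.std(0)
    # leading principal components via SVD of the standardized panel
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]
    # regress next-period y on current factors (with an intercept)
    A = np.column_stack([np.ones(len(F) - 1), F[:-1]])
    beta, *_ = np.linalg.lstsq(A, y[1:], rcond=None)
    return np.concatenate([[1.0], F[-1]]) @ beta
```

On a panel driven by a single common factor, the forecast recovers the factor's own dynamics regardless of the sign and scale ambiguity of the extracted component, since the regression absorbs both.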

15.
This paper compares the forecast performance of vector‐autoregression‐type (VAR) demand systems with and without imposing the homogeneity restriction in the cointegration space. US meat consumption (beef, poultry and pork) data are studied. One‐ to four‐step‐ahead forecasts are generated from both the theoretically restricted and unrestricted models. A modified Diebold–Mariano test of the equality of mean squared forecast errors (MSFE) and a forecast encompassing test are applied in forecast evaluation. Our findings suggest that imposing the homogeneity restriction tends to improve forecast accuracy when the restriction is not rejected. The evidence is mixed when the restriction is rejected. Copyright © 2002 John Wiley & Sons, Ltd.
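The "modified" Diebold–Mariano test is commonly the Harvey–Leybourne–Newbold small‐sample correction; below is a sketch under that assumption, for squared‐error loss, using a rectangular HAC variance for h‐step forecast errors.

```python
import numpy as np
from math import erf, sqrt

def dm_test(e1, e2, h=1):
    """Diebold-Mariano statistic for equal squared-error forecast accuracy,
    with the Harvey-Leybourne-Newbold small-sample correction for h-step
    forecasts; returns (statistic, two-sided normal p-value)."""
    d = np.asarray(e1, float) ** 2 - np.asarray(e2, float) ** 2
    n = len(d)
    dbar = d.mean()
    dc = d - dbar
    # long-run variance of dbar: gamma_0 + 2 * (first h-1 autocovariances)
    lrv = np.mean(dc * dc)
    for k in range(1, h):
        lrv += 2.0 * np.mean(dc[k:] * dc[:-k])
    stat = dbar / sqrt(lrv / n)
    stat *= sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n)  # HLN correction
    pval = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(stat) / sqrt(2.0))))
    return stat, pval
```

A positive statistic indicates the first error series has the larger mean squared error; swapping the two series flips the sign exactly.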

16.
This study examines the small‐sample properties of some commonly used tests of equal forecast accuracy. The paper considers the size and power of different tests and the performance of different heteroscedasticity and autocorrelation‐consistent (HAC) variance estimators. Monte Carlo experiments show that the tests all suffer some size distortions in small samples, with the distortions varying across tests. The experiments also show that, adjusted for size distortions, the tests have broadly similar power, although some small differences exist. Finally, the experiments indicate that the size and power performances of HAC estimators vary with the features of the data. Copyright © 1999 John Wiley & Sons, Ltd.

17.
18.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity 'drift' to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these 'forecasting thresholds', while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
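The OLS‐versus‐random‐walk comparison behind the forecasting thresholds can be explored by Monte Carlo. A sketch, with the drift c, sample size and horizon as illustrative choices rather than the paper's design:

```python
import numpy as np

def longrun_msfe(c=-5.0, n=200, h=40, reps=2000, seed=0):
    """Monte Carlo comparison of h-step forecasts from an OLS-estimated AR(1)
    versus a random walk, for the local-to-unity model rho = 1 + c/n.
    Returns (MSFE of OLS forecast, MSFE of RW forecast)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / n
    se_ols = se_rw = 0.0
    for _ in range(reps):
        e = rng.standard_normal(n + h)
        x = np.empty(n + h)
        x[0] = e[0]
        for t in range(1, n + h):
            x[t] = rho * x[t - 1] + e[t]
        # OLS estimate of rho on the first n observations
        rho_hat = np.dot(x[1:n], x[:n - 1]) / np.dot(x[:n - 1], x[:n - 1])
        target = x[n - 1 + h]
        se_ols += (rho_hat ** h * x[n - 1] - target) ** 2   # AR(1) h-step forecast
        se_rw += (x[n - 1] - target) ** 2                   # no-change forecast
    return se_ols / reps, se_rw / reps
```

Sweeping c and h in this simulation traces out the thresholds at which the two forecasts have equal expected square error.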

19.
This paper introduces the idea of adjusting forecasts from a linear time series model, where the adjustment relies on the assumption that this linear model is an approximation of a nonlinear time series model. This way of creating forecasts can be convenient when inference for a nonlinear model is impossible, complicated or unreliable in small samples. The size of the forecast adjustment can be based on the estimation results for the linear model and on other data properties such as the first few moments or autocorrelations. An illustration is given for a first‐order diagonal bilinear time series model, which in certain respects can be approximated by a linear ARMA(1, 1) model. For this case, the forecast adjustment is easy to derive, which is convenient as the particular bilinear model is indeed cumbersome to analyze in practice. An application to a range of inflation series for low‐income countries shows that such adjustment can lead to some improved forecasts, although the gain is small for this particular bilinear time series model.

20.
After a large‐scale earthquake, with limited time and scarce information, traditional forecasting models built on large samples can hardly produce fast and accurate predictions of the demand for emergency relief supplies. Against this background, and based on grey modelling theory, this paper proposes a metabolic GM(1,1) grey dynamic forecasting model. Exploiting its modelling principle of 'discarding old information and using new information', the model analyses and predicts the dynamic change in the death toll in the earthquake‐stricken area and, in turn, forecasts the corresponding demand for emergency relief supplies. Applied to forecasting the quantity of supplies required in the area struck by the Yushu earthquake in Qinghai, China, the model achieved good predictive results. The findings are of theoretical and practical value for enriching the theory of grey forecasting models and for predicting the demand for emergency relief supplies in large‐scale earthquake disaster areas.
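A minimal sketch of the GM(1,1) grey model with the metabolic update ('discard old information, use new information'); here the window is rolled forward on the model's own forecasts, whereas in operational use each newly observed value would replace the oldest point.

```python
import numpy as np

def gm11_next(x0):
    """Fit a GM(1,1) grey model to the positive series x0 and return the
    one-step-ahead prediction of the original (non-accumulated) series."""
    x0 = np.asarray(x0, float)
    n = len(x0)
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background values
    B = np.column_stack([-z1, np.ones(n - 1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters
    x1_hat = lambda k: (x0[0] - b / a) * np.exp(-a * k) + b / a
    return x1_hat(n) - x1_hat(n - 1)        # restored prediction of x0[n]

def metabolic_gm11(x0, steps):
    """Metabolic GM(1,1): after each forecast, drop the oldest point and
    append the newest one, so the model always uses the freshest window."""
    window = list(x0)
    preds = []
    for _ in range(steps):
        p = gm11_next(window)
        preds.append(p)
        window = window[1:] + [p]
    return preds
```

On a near-exponential input, such as a geometrically growing casualty count, the one-step GM(1,1) prediction tracks the true continuation closely, which is why the model suits short, information-poor samples.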
