Similar documents (20 results found)
1.
    
This paper discusses the Granger causality test based on a spectrum estimator that allows the transfer function to have long memory properties. In traditional methodology the relationship among variables is usually assumed to be short memory or contemporaneous, so we must ensure that the variables are of the same integrated order; otherwise a spurious regression problem may arise. In practice, not all the variables in an economic model are fractionally co-integrated: they may share the same random sources but have different integrated orders. This paper focuses on how to capture the long memory Granger causality effect in the transfer function, without necessarily assuming that the variables are of the same fractional integrated order. Moreover, from the transfer function we construct an estimator to test the long memory effect in the Granger causality sense. Copyright © 2006 John Wiley & Sons, Ltd.
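As a purely illustrative aside (standard definitions, not taken from the paper), the distinction between a short-memory and a long-memory transfer function can be summarized by the decay rate of the impulse-response weights:

```latex
y_t = \sum_{j=0}^{\infty} \psi_j \, x_{t-j} + \varepsilon_t,
\qquad
\text{short memory: } \psi_j = O(\rho^{\,j}),\ 0<\rho<1,
\qquad
\text{long memory: } \psi_j = O\!\left(j^{\,d-1}\right),\ 0<d<\tfrac{1}{2}.
```

Geometric decay is the conventional short-memory case, while hyperbolic decay lets shocks to x influence y over very long horizons.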

2.
    
The hedging of weather risks has become extremely relevant in recent years, promoting the diffusion of weather-derivative contracts. The pricing of such contracts requires the development of appropriate models for predicting the underlying weather variables. Within this framework, a commonly used specification is the ARFIMA-GARCH. We provide a generalization of such a model, introducing time-varying memory coefficients. Our model is consistent with the empirical evidence of changing memory levels observed in average temperature series, and it provides useful improvements in the forecasting, simulation, and pricing issues related to weather derivatives. We present an application involving the forecasting and simulation of a temperature index density, which is then used for the pricing of weather options. Copyright © 2011 John Wiley & Sons, Ltd.
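A fixed-memory sketch in this spirit fractionally differences a series with a constant d and then fits a GARCH(1,1) with the `arch` package. The paper's model additionally lets the memory coefficient vary over time; the placeholder temperature series, the value of d and the package choice below are assumptions, not the authors' implementation.

```python
import numpy as np
from arch import arch_model  # pip install arch

def frac_diff(x, d, threshold=1e-5):
    """Fractionally difference x with a fixed memory parameter d.

    Weights follow the standard recursion w_0 = 1,
    w_k = w_{k-1} * (k - 1 - d) / k, truncated when |w_k| < threshold.
    """
    weights = [1.0]
    k = 1
    while abs(weights[-1]) > threshold and k < len(x):
        weights.append(weights[-1] * (k - 1 - d) / k)
        k += 1
    w = np.array(weights)
    out = np.full(len(x), np.nan)
    for t in range(len(w) - 1, len(x)):
        # Dot product of the weights with the most recent observations.
        out[t] = np.dot(w, x[t - len(w) + 1:t + 1][::-1])
    return out

# Placeholder for a daily average-temperature series (replace with real data).
rng = np.random.default_rng(0)
temp = np.cumsum(rng.normal(size=2000))

d = 0.3                       # fixed memory parameter (assumed, not estimated here)
u = frac_diff(temp, d)
u = u[~np.isnan(u)]

# GARCH(1,1) on the fractionally differenced series.
res = arch_model(u, mean="Constant", vol="GARCH", p=1, q=1).fit(disp="off")
print(res.summary())
```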

3.
Various methods based on smoothing or statistical criteria have been used for constructing disaggregated values compatible with observed annual totals. The present method is based on a time-series model in a state space form and allows for a prescribed multiplicative trend. It is applied to US GNP data which have been used for comparing methods suggested for this purpose. The model can be extended to include quarterly series, related to the unknown disaggregated values. But as the estimation criteria are based on prediction errors of the aggregated values, the estimated form may not be optimal for reproducing high-frequency variations of the disaggregated values. Copyright © 1999 John Wiley & Sons, Ltd.

4.
Exploring the Granger-causation relationship is an important and interesting topic in econometrics. Traditional models usually assume a short-memory relationship, but in practice other influence patterns are possible. Besides the short-memory case, Chen (2006) demonstrates a long-memory relationship and provides a useful estimation approach in which the time series need not be fractionally co-integrated. In that paper the two relationships (short memory and long memory) are characterized by influence flows that decay geometrically, are cut off, or decay harmonically; however, the model is restricted to the stationary case. This paper extends the influence flow to a non-stationary relationship with −0.5 ≤ d ≤ 1.0, which can be used to detect whether the influence dies out (−0.5 ≤ d < 0.5) or is permanent (0.5 ≤ d ≤ 1.0). Copyright © 2008 John Wiley & Sons, Ltd.
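For reference, the fractional difference operator underlying both the stationary and the non-stationary ranges above has the standard binomial expansion (textbook material, not specific to this paper):

```latex
(1-L)^{d} = \sum_{k=0}^{\infty} \pi_k L^{k},
\qquad
\pi_0 = 1,
\quad
\pi_k = \frac{\Gamma(k-d)}{\Gamma(k+1)\,\Gamma(-d)} = \pi_{k-1}\,\frac{k-1-d}{k}.
```

The two ranges of d correspond to the two regimes distinguished in the abstract: influence that eventually dies out (−0.5 ≤ d < 0.5) versus permanent influence (0.5 ≤ d ≤ 1.0).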

5.
    
We develop an ordinary least squares estimator of the long-memory parameter of a fractionally integrated process as an alternative to the Geweke and Porter-Hudak (1983) estimator. Using the wavelet transform of a fractionally integrated process, we establish a log-linear relationship between the wavelet coefficients' variance and the scaling parameter, which is equal to the long-memory parameter. This log-linear relationship yields a consistent ordinary least squares estimator of the long-memory parameter when the wavelet coefficients' population variance is replaced by their sample variance. We derive the small-sample bias and variance of the ordinary least squares estimator and test it against the GPH estimator and the McCoy–Walden maximum likelihood wavelet estimator in a number of Monte Carlo experiments. Based upon the criterion of choosing the estimator that minimizes the mean squared error, the wavelet OLS approach was superior to the GPH estimator, but inferior to the McCoy–Walden wavelet estimator, for the processes simulated. However, given the simplicity of programming and running the wavelet OLS estimator, and its straightforward statistical inference for the long-memory parameter, we feel the general practitioner will be attracted to it. Copyright © 1999 John Wiley & Sons, Ltd.
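A minimal wavelet-variance regression in this spirit (not the authors' exact estimator) can be sketched as follows. The wavelet family, the level-indexing convention (under which the slope of log2 variance on level is roughly 2d), and the simulated test series are all assumptions made for the sketch.

```python
import numpy as np
import pywt  # pip install PyWavelets

def wavelet_ols_d(x, wavelet="db4", max_level=None):
    """Estimate the long-memory parameter d by OLS on log2 wavelet variances.

    Under a common convention, the detail-coefficient variance of an I(d)
    process grows roughly like 2**(2*d*j) across dyadic levels j, so the
    slope of log2(variance) on j is approximately 2*d.
    """
    coeffs = pywt.wavedec(x, wavelet, level=max_level)
    details = coeffs[1:]                 # coeffs[1] is the coarsest detail level
    n_levels = len(details)
    levels = np.arange(n_levels, 0, -1)  # level j = J (coarsest) ... 1 (finest)
    log_var = np.array([np.log2(np.var(c)) for c in details])
    slope, _intercept = np.polyfit(levels, log_var, 1)
    return slope / 2.0

# Quick check on a simulated fractionally integrated series with d = 0.3.
rng = np.random.default_rng(1)
n, d = 4096, 0.3
w = np.ones(n)
for k in range(1, n):
    w[k] = w[k - 1] * (k - 1 + d) / k    # MA(inf) weights of (1 - L)**(-d)
x = np.convolve(rng.normal(size=n), w)[:n]
print("estimated d:", wavelet_ols_d(x))
```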

6.
    
Modeling online auction prices is a popular research topic among statisticians and marketing analysts. Recent research mainly focuses on two directions: one is the functional data analysis (FDA) approach, in which the price–time relationship is modeled by a smooth curve, and the other is the point process approach, which directly models the arrival process of bidders and bids. In this paper, a novel model for the bid arrival process using a self-exciting point process (SEPP) is proposed and applied to forecast auction prices. The FDA and point process approaches are linked together by using functional data analysis techniques to describe the intensity of the bid arrival point process. Using the SEPP to model the bid arrival process, many stylized facts in online auction data can be captured. We also develop a simulation-based forecasting procedure using the estimated SEPP intensity and historical bidding increments. In particular, prediction intervals for the terminal price of the merchandise can be constructed. Applications to eBay auction data on Harry Potter books and Microsoft Xbox show that the SEPP model provides more accurate and more informative forecasts than traditional methods. Copyright © 2014 John Wiley & Sons, Ltd.
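To give the flavour of the self-exciting machinery, the sketch below implements a generic exponential-kernel intensity and an Ogata-thinning simulator. The kernel form, parameter values and time unit are assumptions; the paper instead builds the intensity from functional data analysis.

```python
import numpy as np

def hawkes_intensity(t, events, mu, alpha, beta):
    """Self-exciting intensity with an exponential kernel:
    lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate event times on [0, horizon] by Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < horizon:
        # Upper bound for the intensity until the next event (conservative).
        lam_bar = hawkes_intensity(t, np.array(events), mu, alpha, beta) + alpha
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            break
        if rng.uniform() * lam_bar <= hawkes_intensity(t, np.array(events), mu, alpha, beta):
            events.append(t)
    return np.array(events)

# Toy usage: bid arrivals over a hypothetical 7-day auction, measured in hours.
bids = simulate_hawkes(mu=0.1, alpha=0.5, beta=1.0, horizon=7 * 24.0)
print(len(bids), "simulated bid arrivals")
```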

7.
    
This paper develops a state space framework for the statistical analysis of a class of locally stationary processes. The proposed Kalman filter approach provides a numerically efficient methodology for estimating and predicting locally stationary models and allows for the handling of missing values. It provides both exact and approximate maximum likelihood estimates. Furthermore, as suggested by the Monte Carlo simulations reported in this work, the performance of the proposed methodology is very good, even for relatively small sample sizes. Copyright © 2011 John Wiley & Sons, Ltd.
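As a generic illustration of the state-space machinery (a local-level model rather than the authors' locally stationary class), a minimal Kalman filter with missing-value handling might look like this; the variances and toy data are assumptions.

```python
import numpy as np

def kalman_loglik(y, sigma_eps2, sigma_eta2, a0=0.0, p0=1e7):
    """Log-likelihood of a local-level model y_t = a_t + eps_t,
    a_t = a_{t-1} + eta_t, computed with the Kalman filter.
    Missing observations (np.nan) are skipped in the update step."""
    a, p, loglik = a0, p0, 0.0
    for obs in y:
        # Prediction step
        a_pred, p_pred = a, p + sigma_eta2
        if np.isnan(obs):
            a, p = a_pred, p_pred          # no update when the value is missing
            continue
        # Update step
        f = p_pred + sigma_eps2            # prediction-error variance
        v = obs - a_pred                   # prediction error
        k = p_pred / f                     # Kalman gain
        a = a_pred + k * v
        p = p_pred * (1.0 - k)
        loglik += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
    return loglik

# Toy usage: likelihood of a noisy random walk with a block of missing values.
rng = np.random.default_rng(2)
level = np.cumsum(rng.normal(0, 0.5, 200))
y = level + rng.normal(0, 1.0, 200)
y[50:60] = np.nan
print(kalman_loglik(y, sigma_eps2=1.0, sigma_eta2=0.25))
```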

8.
This paper proposes a procedure to make efficient predictions in a nearly non-stationary process. The method is based on the adaptation of the theory of optimal combination of forecasts to nearly non-stationary processes. The proposed combination method is simple to apply and has a better performance than classical combination procedures. It also has better average performance than a differenced predictor, a fractional differenced predictor, or an optimal unit-root pretest predictor. In the case of a process that has a zero mean, only the non-differenced predictor is slightly better than the proposed combination method. In the general case of a non-zero mean, the proposed combination method has a better overall performance than all its competitors. Copyright © 2002 John Wiley & Sons, Ltd.
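The "theory of optimal combination of forecasts" invoked above is, in its classical Bates–Granger form, the variance-minimizing weighting shown below; the paper adapts this idea to nearly non-stationary processes rather than applying it verbatim.

```latex
\hat{y}^{\,c}_{t+h} = \mathbf{w}^{\top}\hat{\mathbf{y}}_{t+h},
\qquad
\mathbf{w}^{*} = \frac{\Sigma^{-1}\mathbf{1}}{\mathbf{1}^{\top}\Sigma^{-1}\mathbf{1}},
```

where the vector of individual forecasts is combined with weights w*, Σ is the covariance matrix of the individual forecast errors and 1 is a vector of ones; the weights minimize the combined forecast-error variance subject to summing to one.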

9.
    

10.
    
This article introduces a novel framework for analysing long-horizon forecasting of the near non-stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long-horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity 'drift' to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these 'forecasting thresholds', while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
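A small Monte Carlo along these lines can be sketched as follows. The zero-mean AR(1) without intercept, the sample size, the horizon and the drift values are assumptions made for the sketch, not the paper's exact design.

```python
import numpy as np

def forecast_mse(c, T=100, h=20, n_rep=2000, seed=0):
    """Compare h-step forecast MSEs of an OLS-estimated AR(1) and a random walk
    when the true root is local to unity: rho = 1 + c/T (c <= 0)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    se_ols, se_rw = 0.0, 0.0
    for _ in range(n_rep):
        e = rng.normal(size=T + h)
        y = np.zeros(T + h)
        for t in range(1, T + h):
            y[t] = rho * y[t - 1] + e[t]
        # OLS estimate of the AR(1) coefficient on the first T observations.
        rho_hat = np.dot(y[:T - 1], y[1:T]) / np.dot(y[:T - 1], y[:T - 1])
        f_ols = (rho_hat ** h) * y[T - 1]   # OLS AR(1) h-step forecast
        f_rw = y[T - 1]                     # random-walk (no-change) forecast
        se_ols += (y[T + h - 1] - f_ols) ** 2
        se_rw += (y[T + h - 1] - f_rw) ** 2
    return se_ols / n_rep, se_rw / n_rep

for c in (0.0, -5.0, -20.0):
    mse_ols, mse_rw = forecast_mse(c)
    print(f"c = {c:6.1f}:  MSE(OLS AR(1)) = {mse_ols:8.2f}   MSE(RW) = {mse_rw:8.2f}")
```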

11.
    
We introduce a long-memory autoregressive conditional Poisson (LMACP) model to model highly persistent time series of counts. The model is applied to forecast quoted bid–ask spreads, a key parameter in stock trading operations. It is shown that the LMACP nicely captures salient features of bid–ask spreads such as the strong autocorrelation and discreteness of observations. We discuss theoretical properties of LMACP models and evaluate rolling-window forecasts of quoted bid–ask spreads for stocks traded at NYSE and NASDAQ. We show that Poisson time series models significantly outperform forecasts from AR, ARMA, ARFIMA, ACD and FIACD models. The economic significance of our results is supported by the evaluation of a trade schedule: scheduling trades according to spread forecasts, we realize cost savings of up to 14% of spread transaction costs. Copyright © 2013 John Wiley & Sons, Ltd.
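For orientation, the short-memory autoregressive conditional Poisson recursion on which the LMACP builds can be written as below (standard ACP notation; the long-memory version replaces this recursion with one whose coefficients decay hyperbolically).

```latex
y_t \mid \mathcal{F}_{t-1} \sim \operatorname{Poisson}(\lambda_t),
\qquad
\lambda_t = \omega + \alpha\, y_{t-1} + \beta\, \lambda_{t-1},
```

with ω > 0, α, β ≥ 0 and α + β < 1 for stationarity; the conditional mean λ_t drives the spread counts.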

12.
    
To forecast realized volatility, this paper introduces a multiplicative error model that incorporates heterogeneous components: weekly and monthly realized volatility measures. While the model captures the long-memory property, estimation proceeds simply by quasi-maximum likelihood. This paper investigates its forecasting ability using the realized kernels of 34 different assets provided by the Oxford-Man Institute's Realized Library. The model outperforms benchmark models such as ARFIMA, HAR, Log-HAR and HEAVY-RM in in-sample fitting and out-of-sample (1-, 10- and 22-step) forecasts. It performs best in both pointwise and cumulative comparisons of multi-step-ahead forecasts, regardless of the loss function (QLIKE or MSE). Copyright © 2015 John Wiley & Sons, Ltd.
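The heterogeneous daily, weekly and monthly components mentioned above are the same regressors used in the familiar HAR model. The sketch below fits that linear HAR benchmark by OLS; the paper's own model is a multiplicative error model estimated by QMLE, not this regression, and the realized-volatility series here is a placeholder.

```python
import numpy as np

def har_forecast(rv, h=1):
    """One-step HAR-RV forecast: regress RV_{t+h} on the daily value, the
    weekly (5-day) average and the monthly (22-day) average of past RV."""
    rv = np.asarray(rv, dtype=float)
    daily = rv[21:-h]
    weekly = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv) - h)])
    monthly = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv) - h)])
    y = rv[21 + h:]
    X = np.column_stack([np.ones_like(daily), daily, weekly, monthly])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    # Forecast from the most recent daily, weekly and monthly components.
    x_last = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()])
    return x_last @ beta

# Toy usage with a placeholder realized-volatility series.
rng = np.random.default_rng(3)
rv = np.abs(rng.normal(1.0, 0.2, 1000))
print("next-day RV forecast:", har_forecast(rv))
```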

13.
    
Let {Xt} be a stationary process with spectral density g(λ). Often the true structure g(λ) is not completely specified. This paper discusses the problem of misspecified prediction when a conjectured spectral density fθ(λ), θ ∈ Θ, is fitted to g(λ). Constructing the best linear predictor based on fθ(λ), we can evaluate the prediction error M(θ). Since θ is unknown, we estimate it by a quasi-MLE, and the second-order asymptotic approximation of the prediction error evaluated at this estimator is given. This result is extended to the case when Xt contains a trend, i.e. a time series regression model. These results are very general. Furthermore, we evaluate this second-order asymptotic approximation for a time series regression model having a long-memory residual process with true spectral density g(λ). Since the general formulae for the approximated prediction error are complicated, we provide some numerical examples, which illuminate unexpected effects of the misspecification of the spectra. Copyright © 2001 John Wiley & Sons, Ltd.
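As background, a Whittle-type contrast is a common way to define the quasi-MLE referred to above; the form below is one standard version and not necessarily the exact criterion used in the paper.

```latex
\hat{\theta} \;=\; \arg\min_{\theta\in\Theta}\;
\frac{1}{4\pi}\int_{-\pi}^{\pi}
\left\{\log f_{\theta}(\lambda) + \frac{I_n(\lambda)}{f_{\theta}(\lambda)}\right\}\mathrm{d}\lambda ,
```

where I_n(λ) is the periodogram of the observed series; the prediction error of the best linear predictor built from the fitted spectral density is then evaluated at this estimator.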

14.
In this paper we consider some of the prominent methods available in the literature for disaggregating annual time-series data to quarterly figures. The procedures are briefly described and illustrated on a real data set. The performances of the methods are compared in a Monte Carlo study. The results indicate that the more elaborate model-based procedure is usually superior to the non-model-based alternatives in large-sample situations. Based on the simulation results, we make some recommendations regarding the use of these methods.

15.
We provide an overview of the papers contained in this Special Issue of the Journal of Forecasting and also discuss some new models for analysing financial time series that have recently been proposed. These are illustrated by empirical examples using 60 years of daily data on the London Stock Exchange's FT30 index.

16.
    
This paper proposes the use of the bias-corrected bootstrap for interval forecasting of an autoregressive time series with an arbitrary number of deterministic components. We use the bias-corrected bootstrap based on two alternative bias-correction methods: the bootstrap and an analytic formula based on asymptotic expansion. We also propose a new stationarity-correction method, based on stable spectral factorization, as an alternative to Kilian's method, which has been used exclusively in past studies. A Monte Carlo experiment is conducted to compare the small-sample properties of the prediction intervals. The results show that the bias-corrected bootstrap prediction intervals proposed in this paper exhibit desirable small-sample properties. It is also found that the bootstrap bias-corrected prediction intervals based on stable spectral factorization are tighter and more stable than those based on Kilian's stationarity correction. The proposed methods are applied to interval forecasting for the number of tourist arrivals in Hong Kong. Copyright © 2010 John Wiley & Sons, Ltd.
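The sketch below illustrates a plain percentile-bootstrap prediction interval for an AR(1) with intercept. It omits the bias correction and the stationarity correction that are the paper's actual contributions, and the lag order, horizon and coverage level are assumptions.

```python
import numpy as np

def ar1_bootstrap_pi(y, h=12, n_boot=999, alpha=0.05, seed=0):
    """Percentile bootstrap prediction intervals for an AR(1) with intercept.
    (No bias correction or stationarity correction is applied here.)"""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    (c, phi), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    resid = y[1:] - (c + phi * y[:-1])
    resid = resid - resid.mean()
    paths = np.empty((n_boot, h))
    for b in range(n_boot):
        last = y[-1]
        for j in range(h):
            # Iterate the fitted AR(1) forward with resampled residuals.
            last = c + phi * last + rng.choice(resid)
            paths[b, j] = last
    lower = np.percentile(paths, 100 * alpha / 2, axis=0)
    upper = np.percentile(paths, 100 * (1 - alpha / 2), axis=0)
    return lower, upper

# Toy usage on a simulated persistent series (stand-in for tourist arrivals).
rng = np.random.default_rng(4)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 2.0 + 0.9 * y[t - 1] + rng.normal()
lo, hi = ar1_bootstrap_pi(y)
print("12-step 95% interval:", lo[-1], hi[-1])
```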

17.
    
This paper stresses the restrictive nature of the standard unit root/cointegration assumptions and examines a more general type of time heterogeneity, which might characterize a number of economic variables, and which results in parameter time dependence and misleading statistical inference. We show that in such cases 'operational' models cannot be obtained, and the estimation of time-varying parameter models becomes necessary. For instance, economic processes subject to endemic change can only be adequately modelled in a state space form. This is a very important point, because unstable models will break down when used for forecasting purposes. We also discuss a new test for the null of cointegration developed by Quintos and Phillips (1993), which is based on parameter constancy in cointegrating regressions. Finally, we point out that, if it is possible to condition on a subset of superexogenous variables, parameter instability can be handled by estimating a restricted system. Copyright © 2002 John Wiley & Sons, Ltd.

18.
    
Use of monthly data for economic forecasting purposes is typically constrained by the absence of monthly estimates of GDP. Such data can be interpolated but are then prone to measurement error. However, the variance matrix of the measurement errors is typically known. We present a technique for estimating a VAR on monthly data, making use of interpolated estimates of GDP and correcting for the impact of measurement error. We then address the question of how to establish whether the model estimated from the interpolated monthly data contains information absent from the analogous quarterly VAR. The techniques are illustrated using a bivariate VAR modelling GDP growth and inflation. It is found that, using inflation data adjusted to remove seasonal effects and the impact of changes to indirect taxes, the monthly model has little to add to a quarterly model when projecting one quarter ahead. However, the monthly model has an important role to play in building up a picture of the current quarter once one or two months' hard data become available. Copyright © 1999 John Wiley & Sons, Ltd.

19.
    
Fractionally integrated models with the disturbances following a Bloomfield (1973) exponential spectral model are proposed in this article for modelling UK unemployment. This gives us a better understanding of the low-frequency dynamics affecting the series without relying on any particular ARMA specification for its short-run components, which, in general, would require many more parameters to estimate. The results indicate that this exponential model, confounded with fractional integration, may be a feasible way of modelling unemployment. They also show that the order of integration is much higher than one, and thus lead to the conclusion that the standard practice of taking first differences may produce erroneous results. Copyright © 2001 John Wiley & Sons, Ltd.
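In standard notation, the model class described above combines fractional integration of order d with disturbances following the Bloomfield (1973) exponential spectral model; m is the number of exponential terms.

```latex
(1-L)^{d} x_t = u_t,
\qquad
f_u(\lambda;\sigma^2,\theta) \;=\; \frac{\sigma^{2}}{2\pi}
\exp\!\Big(2\sum_{r=1}^{m}\theta_r \cos(\lambda r)\Big).
```

The exponential form approximates ARMA-type short-run dynamics with only m parameters, which is why it is attractive as a parsimonious companion to fractional integration.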

20.
    
In this paper we present an extensive study of annual GNP data for five European countries. We look for intercountry dependence and analyse how the different economies interact, using several univariate ARIMA and unobserved components models and a multivariate model for the GNP incorporating all the common information among the variables. We use a dynamic factor model to take account of the common dynamic structure of the variables. This common dynamic structure can be non-stationary (i.e. common trends) or stationary (i.e. common cycles). Comparisons of the models are made in terms of the root mean square error (RMSE) for one-step-ahead forecasts. For this particular group of European countries, the factor model outperforms the remaining ones. Copyright © 2002 John Wiley & Sons, Ltd.
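In generic notation, a dynamic factor model of the kind described above takes the following form, where the factors may be non-stationary (common trends) or stationary (common cycles); dimensions and lag orders are left unspecified.

```latex
\mathbf{y}_t = \Lambda\, \mathbf{f}_t + \boldsymbol{\varepsilon}_t,
\qquad
\mathbf{f}_t = \Phi_1 \mathbf{f}_{t-1} + \cdots + \Phi_p \mathbf{f}_{t-p} + \boldsymbol{\eta}_t,
```

with y_t the vector of (log) GNP series, f_t the vector of common factors, Λ the loading matrix, and ε_t and η_t mutually uncorrelated disturbances.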
