Similar Documents (20 results)
1.
Financial data series are often described as exhibiting two non‐standard time series features. First, variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA–ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1‐month Euromarket interest rate during the period 1986–1989, the ARFIMA–ARCH (5,d,2/4) model with non‐integer d is selected by AIC. Model‐based out‐of‐sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA–ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.
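Both features are easy to reproduce in simulation. A minimal Python sketch, assuming toy parameter values (d = 0.3 and an ARCH(1) recursion with a0 = 0.1, a1 = 0.6; none of these come from the paper), generates a fractionally integrated series driven by ARCH innovations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, a0, a1 = 1000, 0.3, 0.1, 0.6       # assumed toy parameters

# ARCH(1) innovations: sigma_t^2 = a0 + a1 * e_{t-1}^2
e = np.zeros(n)
for t in range(1, n):
    e[t] = rng.standard_normal() * np.sqrt(a0 + a1 * e[t - 1] ** 2)

# Fractional integration (1 - B)^(-d): expanding MA filter with weights
# psi_0 = 1, psi_k = psi_{k-1} * (k - 1 + d) / k
psi = np.ones(n)
for k in range(1, n):
    psi[k] = psi[k - 1] * (k - 1 + d) / k
x = np.array([psi[: t + 1][::-1] @ e[: t + 1] for t in range(n)])
```

The resulting x shows volatility clustering from the ARCH part together with the slowly decaying autocorrelations characteristic of long memory.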

2.
A mean square error criterion is proposed in this paper to provide a systematic approach to approximate a long‐memory time series by a short‐memory ARMA(1, 1) process. Analytic expressions are derived to assess the effect of such an approximation. These results are established not only for the pure fractional noise case, but also for a general autoregressive fractional moving average long‐memory time series. Performances of the ARMA(1,1) approximation as compared to using an ARFIMA model are illustrated by both computations and an application to the Nile river series. Results derived in this paper shed light on the forecasting issue of a long‐memory process. Copyright © 2001 John Wiley & Sons, Ltd.
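The flavour of such an approximation can be seen by matching autocorrelations. The sketch below is not the paper's mean square error criterion, only an illustrative least-squares fit of the ARMA(1,1) autocorrelation shape rho_k = phi^(k-1) * rho_1 to that of fractional noise, with an assumed d = 0.3:

```python
import numpy as np

d, K = 0.3, 200   # assumed memory parameter and number of matched lags

# Autocorrelations of fractional noise ARFIMA(0, d, 0):
# rho_0 = 1, rho_k = rho_{k-1} * (k - 1 + d) / (k - d)
rho = np.ones(K + 1)
for k in range(1, K + 1):
    rho[k] = rho[k - 1] * (k - 1 + d) / (k - d)

# ARMA(1,1) has rho_k = phi**(k-1) * rho_1 for k >= 1; grid/least-squares fit
best = (np.inf, None, None)
for phi in np.linspace(0.01, 0.999, 500):
    g = phi ** np.arange(K)             # phi**(k-1) for k = 1..K
    rho1 = (rho[1:] @ g) / (g @ g)      # optimal rho_1 given phi
    err = np.sum((rho[1:] - rho1 * g) ** 2)
    if err < best[0]:
        best = (err, phi, rho1)
print("phi = %.3f, rho_1 = %.3f" % (best[1], best[2]))
```

The slow hyperbolic ACF decay forces phi close to one, which is exactly why a single ARMA(1,1) can mimic long memory over moderate horizons.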

3.
This paper presents an autoregressive fractionally integrated moving‐average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long‐memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long‐memory model is more efficient than the random walk model in steps‐ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi‐step‐ahead forecasts. This new finding strongly suggests that the long‐memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd.
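The Harvey, Leybourne and Newbold (1997) statistic is a small-sample correction of the Diebold–Mariano test of equal forecast accuracy. A sketch of the squared-error-loss version (function and variable names are ours):

```python
import numpy as np
from scipy import stats

def hln_dm(e1, e2, h):
    """Harvey-Leybourne-Newbold corrected Diebold-Mariano statistic for
    equal accuracy of two h-step forecast error series (squared-error loss)."""
    d = e1 ** 2 - e2 ** 2                     # loss differential
    n = len(d)
    dbar = d.mean()
    # long-run variance of dbar from the first h-1 autocovariances
    gamma = [np.sum((d[: n - k] - dbar) * (d[k:] - dbar)) / n for k in range(h)]
    var = (gamma[0] + 2.0 * sum(gamma[1:])) / n
    dm = dbar / np.sqrt(var)
    hln = dm * np.sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n)  # HLN correction
    return hln, 2 * stats.t.sf(abs(hln), df=n - 1)             # stat, p-value
```

The correction factor and the Student-t reference distribution are what distinguish the HLN version from the original DM test at multi-step horizons.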

4.
We propose two methods to predict nonstationary long‐memory time series. In the first we estimate the long‐range dependence parameter d using tapered data; we then apply the nonstationary fractional filter to obtain a stationary, short‐memory time series. In the second method, we take successive differences to obtain a stationary but possibly long‐memory time series. For both methods the forecasts are based on those obtained from the stationary components. Copyright © 2007 John Wiley & Sons, Ltd.
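The fractional filter (1 − B)^d at the heart of the first method is a binomial-weight convolution. A minimal sketch, assuming the memory parameter d has already been estimated:

```python
import numpy as np

def frac_diff(x, d):
    """Apply the fractional filter (1 - B)^d using binomial weights
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return np.array([w[: t + 1] @ x[t::-1] for t in range(n)])
```

One would forecast the filtered (now short-memory) series with a standard ARMA model and invert the filter to recover forecasts of the original series.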

5.
Most long memory forecasting studies assume that long memory is generated by the fractional difference operator. We argue that the most cited theoretical arguments for the presence of long memory do not imply the fractional difference operator and assess the performance of the autoregressive fractionally integrated moving average (ARFIMA) model when forecasting series with long memory generated by nonfractional models. We find that ARFIMA models dominate in forecast performance regardless of the long memory generating mechanism and forecast horizon. Nonetheless, forecasting uncertainty at the shortest forecast horizon could make short memory models provide suitable forecast performance, particularly for smaller degrees of memory. Additionally, we analyze the forecasting performance of the heterogeneous autoregressive (HAR) model, which imposes restrictions on high-order AR models. We find that the structure imposed by the HAR model produces better short and medium horizon forecasts than unconstrained AR models of the same order. Our results have implications for, among others, climate econometrics and financial econometrics models dealing with long memory series at different forecast horizons.
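The HAR restriction mentioned above amounts to regressing a series on daily, weekly and monthly averages of its own past. A minimal OLS sketch (the 1/5/22-day component structure is the standard HAR convention, not something taken from this paper's data):

```python
import numpy as np

def har_fit(rv):
    """OLS fit of the HAR model:
    RV_t = b0 + b1*RV_{t-1} + b2*mean(RV_{t-5..t-1}) + b3*mean(RV_{t-22..t-1})."""
    n = len(rv)
    rows, y = [], []
    for t in range(22, n):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    X, y = np.array(rows), np.array(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

Only four coefficients are estimated, yet the three averaging horizons reproduce much of the slowly decaying autocorrelation that an unconstrained AR(22) would need 22 free parameters to capture.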

6.
We investigate the dynamic properties of the realized volatility of five agricultural commodity futures by employing high‐frequency data from Chinese markets and find that the realized volatility exhibits both long memory and regime switching. To capture these properties simultaneously, we utilize a Markov switching autoregressive fractionally integrated moving average (MS‐ARFIMA) model to forecast the realized volatility by combining the long memory process with a regime switching component, and compare its forecast performance with that of competing models at various horizons. The full‐sample estimation results show that the dynamics of the realized volatility of agricultural commodity futures are characterized by two levels of long memory: one associated with the low‐volatility regime and the other with the high‐volatility regime, and the probability of staying in the low‐volatility regime is higher than that of staying in the high‐volatility regime. The out‐of‐sample results show that combining long memory with switching regimes improves realized volatility forecasts, and the proposed model outperforms the competing models out of sample. Copyright © 2016 John Wiley & Sons, Ltd.

7.
Let {Xt} be a stationary process with spectral density g(λ). It is often the case that the true structure g(λ) is not completely specified. This paper discusses the problem of misspecified prediction when a conjectured spectral density fθ(λ), θ ∈ Θ, is fitted to g(λ). Then, constructing the best linear predictor based on fθ(λ), we can evaluate the prediction error M(θ). Since θ is unknown we estimate it by a quasi‐MLE θ̂. The second‐order asymptotic approximation of M(θ̂) is given. This result is extended to the case when Xt contains some trend, i.e. a time series regression model. These results are very general. Furthermore, we evaluate the second‐order asymptotic approximation of M(θ̂) for a time series regression model having a long‐memory residual process with the true spectral density g(λ). Since the general formulae of the approximated prediction error are complicated, we provide some numerical examples. These illuminate unexpected effects of misspecifying the spectra. Copyright © 2001 John Wiley & Sons, Ltd.

8.
We consider the linear time‐series model y_t = d_t + u_t (t = 1, ..., n), where d_t is the deterministic trend and u_t the stochastic term, which follows an AR(1) process u_t = θu_{t−1} + ε_t with normal innovations ε_t. Various assumptions about the start‐up will be made. Our main interest lies in the behaviour of the l‐period‐ahead forecast of y_{n+l} near θ = 1. Unlike other studies of the AR(1) unit root process, we do not ask whether θ = 1 but are concerned with the behaviour of the forecast estimate near and at θ = 1. For this purpose we define the sth‐order (s = 1, 2) sensitivity measure λ_l(s) of the forecast of y_{n+l} near θ = 1, which measures the sensitivity of the forecast at the unit root. In this study we consider two deterministic trends: a constant, d_t = α, and a linear trend, d_t = α + βt. The forecast is the best linear unbiased forecast. We show that, for the constant trend, the number of observations has no effect on forecast sensitivity; when the deterministic trend is linear, the sensitivity is zero. We also develop a large‐sample procedure to measure the forecast sensitivity when we are uncertain whether to include the linear trend. Our analysis suggests that, depending on the initial conditions, it is better to include a linear trend for reduced sensitivity of the medium‐term forecast. Copyright © 2001 John Wiley & Sons, Ltd.
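For intuition, a first-order sensitivity can be checked numerically with a plug-in version of the forecast (a simplification of the paper's best linear unbiased forecast; the trend coefficients and horizon below are assumed toy values):

```python
import numpy as np

rng = np.random.default_rng(1)
n, l, alpha, beta = 100, 10, 2.0, 0.5        # assumed toy setting
t = np.arange(1, n + 1)
d = alpha + beta * t                          # linear deterministic trend
y = d + np.cumsum(rng.standard_normal(n))     # u_t is a random walk (theta = 1)

def plug_in_forecast(theta):
    """l-step forecast d_{n+l} + theta**l * (y_n - d_n)."""
    return alpha + beta * (n + l) + theta ** l * (y[-1] - d[-1])

# first-order sensitivity at the unit root, by central difference
h = 1e-5
sens = (plug_in_forecast(1 + h) - plug_in_forecast(1 - h)) / (2 * h)
print(sens, l * (y[-1] - d[-1]))              # agrees with the analytic l * u_n
```

Since the derivative of theta**l at theta = 1 is l, the forecast's sensitivity grows linearly with the horizon and with the last deviation from trend, which is why the unit-root region matters most for medium-term forecasts.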

9.
To forecast realized volatility, this paper introduces a multiplicative error model that incorporates heterogeneous components: weekly and monthly realized volatility measures. While the model captures the long‐memory property, estimation proceeds simply by quasi‐maximum likelihood. This paper investigates its forecasting ability using the realized kernels of 34 different assets provided by the Oxford‐Man Institute's Realized Library. The model outperforms benchmark models such as ARFIMA, HAR, Log‐HAR and HEAVY‐RM in within‐sample fitting and out‐of‐sample (1‐, 10‐ and 22‐step) forecasts. It performs best in both pointwise and cumulative comparisons of multi‐step‐ahead forecasts, regardless of the loss function (QLIKE or MSE). Copyright © 2015 John Wiley & Sons, Ltd.
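The structure can be sketched compactly: a conditional-mean recursion with weekly and monthly realized-volatility components, estimated by exponential QMLE. Everything below (parameter names, 5/22-day windows, starting values) is an assumed illustration, not the paper's exact specification:

```python
import numpy as np
from scipy.optimize import minimize

def mem_qmle(rv):
    """Exponential QMLE of a multiplicative error model RV_t = mu_t * eps_t,
    mu_t = w + a*RV_{t-1} + b*mu_{t-1} + cw*RVweek_{t-1} + cm*RVmonth_{t-1}."""
    rvw = np.array([rv[max(0, t - 4): t + 1].mean() for t in range(len(rv))])
    rvm = np.array([rv[max(0, t - 21): t + 1].mean() for t in range(len(rv))])

    def nll(p):
        w, a, b, cw, cm = p
        mu = np.full(len(rv), rv.mean())
        for t in range(1, len(rv)):
            mu[t] = (w + a * rv[t - 1] + b * mu[t - 1]
                     + cw * rvw[t - 1] + cm * rvm[t - 1])
        if np.any(mu <= 0):
            return np.inf
        return np.sum(np.log(mu) + rv / mu)   # QLIKE-type quasi-likelihood

    start = [0.05 * rv.mean(), 0.2, 0.5, 0.1, 0.1]
    return minimize(nll, start, method="Nelder-Mead").x
```

The exponential quasi-likelihood coincides with the QLIKE loss used in the paper's forecast evaluation, which is one reason QMLE is attractive for this model class.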

10.
An approach is proposed for obtaining estimates of the basic (disaggregated) series, x_i, when only an aggregate series, y_t, of non‐overlapping k‐period sums of the x_i's is available. The approach is based on casting the problem in dynamic linear model form; estimates of x_i can then be obtained by applying Kalman filtering techniques. An ad hoc procedure is introduced for deriving a model for the unobserved basic series from the observed model of the aggregates. An application of this approach to a set of real data is given.
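A minimal state-space sketch of the idea, assuming (purely for illustration) that the basic series is a random walk: the state carries the current value and a running within-block sum, and the aggregate is observed, without noise, at the end of each block:

```python
import numpy as np

def disaggregate(y, k, sigma2=1.0):
    """Kalman-filter estimates of a basic series x_i observed only through
    non-overlapping k-period sums y_t.  Assumes (for illustration) that x_i
    is a random walk.  State = [x_i, c_i], c_i = running sum within block."""
    n = len(y) * k
    H = np.array([[0.0, 1.0]])                # we observe c_i at block ends
    s = np.array([y[0] / k, y[0] / k])        # rough start-up values
    P = np.eye(2) * 1e6                       # diffuse prior
    G = np.array([[1.0], [1.0]])              # innovation loads on x and c
    Q = sigma2 * (G @ G.T)
    xhat = np.empty(n)
    for i in range(n):
        reset = (i % k == 0)                  # first period of a new block
        F = np.array([[1.0, 0.0],
                      [1.0, 0.0 if reset else 1.0]])
        s, P = F @ s, F @ P @ F.T + Q         # prediction step
        if i % k == k - 1:                    # block end: y = c, noiselessly
            v = y[i // k] - (H @ s)[0]        # innovation
            S = (H @ P @ H.T)[0, 0]
            K = (P @ H.T)[:, 0] / S           # Kalman gain, shape (2,)
            s = s + K * v
            P = P - np.outer(K, H @ P)
        xhat[i] = s[0]
    return xhat
```

For example, disaggregate(np.array([10.0, 12.0, 9.0]), k=3) yields within-quarter estimates; a smoothing pass (not shown) would be needed for estimates that exactly respect each block total.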

11.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.

12.
In this paper we show that optimal trading results can be achieved if we can forecast a key summary statistic of future prices. Consider the following optimization problem. Let the return r_i for the ith day (i = 1, 2, ..., n) be given; the investor must make an investment decision d_i on the ith day, with d_i = 1 representing a 'long' position and d_i = 0 a 'neutral' position. The investment return is given by r = Σ_{i=1}^{n} r_i d_i − c Σ_{i=1}^{n+1} |d_i − d_{i−1}|, where c is the transaction cost. The mathematical programming problem of choosing d_1, ..., d_n to maximize r under a given transaction cost c is shown to have an analytic solution, which is a function of a key summary statistic called the largest change before reversal. The largest change before reversal is recommended as an output in a neural network for the generation of trading signals. When neural network forecasting is applied to a dataset of Hang Seng Index Futures contracts traded in Hong Kong, it is shown that forecasting the largest change before reversal outperforms the k‐step‐ahead forecast in achieving higher trading profits. Copyright © 2000 John Wiley & Sons, Ltd.
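The optimization itself is a small dynamic program over the two position states. The generic solver below is our illustration (it assumes d_0 = d_{n+1} = 0, i.e. the investor starts and ends flat); the paper instead derives a closed-form solution via the largest change before reversal:

```python
import numpy as np

def best_positions(r, c):
    """Choose d_i in {0, 1} maximizing sum(r_i * d_i) - c * sum(|d_i - d_{i-1}|),
    with d_0 = d_{n+1} = 0, by dynamic programming over position states."""
    V = {0: 0.0, 1: -np.inf}           # value of ending "today" flat / long
    choice = []                         # best previous state for each (day, state)
    for ri in r:
        prev, V, arg = V, {}, {}
        for s in (0, 1):
            cands = [prev[p] - c * abs(s - p) for p in (0, 1)]
            p = int(np.argmax(cands))
            V[s] = cands[p] + ri * s
            arg[s] = p
        choice.append(arg)
    V[1] -= c                           # must close any open position after day n
    s = 0 if V[0] >= V[1] else 1
    d = []
    for arg in reversed(choice):        # backtrack the optimal decisions
        d.append(s)
        s = arg[s]
    return d[::-1], max(V.values())
```

For example, best_positions([0.01, -0.02, 0.03], c=0.005) returns the optimal long/neutral sequence together with the net return after transaction costs.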

13.
Exploring Granger‐causation relationships is an important and interesting topic in econometrics. Traditional models usually assume a short‐memory style of relationship, but in practice other influence patterns are possible. Besides the short‐memory relationship, Chen (2006) demonstrates a long‐memory relationship, providing a useful estimation approach in which the time series are not necessarily fractionally co‐integrated. In that paper two different relationships (short‐memory and long‐memory) are considered, whereby the influence flow decays according to geometric, cut‐off, or harmonic sequences. However, that model is limited to stationary relationships. This paper extends the influence flow to a non‐stationary relationship, with the restriction −0.5 ≤ d ≤ 1.0, and it can be used to detect whether the influence dies out (−0.5 ≤ d < 0.5) or is permanent (0.5 ≤ d ≤ 1.0). Copyright © 2008 John Wiley & Sons, Ltd.

14.
We introduce a long‐memory autoregressive conditional Poisson (LMACP) model for highly persistent time series of counts. The model is applied to forecast quoted bid–ask spreads, a key parameter in stock trading operations. It is shown that the LMACP nicely captures salient features of bid–ask spreads like the strong autocorrelation and discreteness of observations. We discuss theoretical properties of LMACP models and evaluate rolling‐window forecasts of quoted bid–ask spreads for stocks traded at NYSE and NASDAQ. We show that Poisson time series models significantly outperform forecasts from AR, ARMA, ARFIMA, ACD and FIACD models. The economic significance of our results is supported by the evaluation of a trade schedule: scheduling trades according to spread forecasts, we realize cost savings of up to 14% of spread transaction costs. Copyright © 2013 John Wiley & Sons, Ltd.
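The short-memory core of such a model, an autoregressive conditional Poisson recursion estimated by maximum likelihood, fits in a few lines. The LMACP's long-memory extension (fractional weights in the intensity recursion) is omitted here, and the parameter names are ours:

```python
import numpy as np
from scipy.optimize import minimize

def acp_fit(y):
    """MLE of a (short-memory) autoregressive conditional Poisson model
    lambda_t = w + a*y_{t-1} + b*lambda_{t-1}; the LMACP replaces this
    recursion with a fractionally weighted one to add long memory."""
    def nll(p):
        w, a, b = p
        if w <= 0 or a < 0 or b < 0 or a + b >= 1:
            return np.inf                       # keep the intensity stationary
        lam = np.full(len(y), float(y.mean()))
        for t in range(1, len(y)):
            lam[t] = w + a * y[t - 1] + b * lam[t - 1]
        return -np.sum(y * np.log(lam) - lam)   # Poisson log-lik, const dropped
    return minimize(nll, [0.2 * y.mean(), 0.3, 0.5], method="Nelder-Mead").x
```

Because the conditional distribution is Poisson, the model respects the discreteness of observed spreads, which continuous models such as ARFIMA cannot.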

15.
This article describes Bayesian inference for autoregressive fractionally integrated moving average (ARFIMA) models using Markov chain Monte Carlo methods. The posterior distribution of the model parameters, corresponding to the exact likelihood function, is obtained through the partial linear regression coefficients of the ARFIMA process. A Metropolis–Rao–Blackwellization approach is used for implementing sampling‐based Bayesian inference. Bayesian model selection is discussed and implemented.

16.
A forecasting model for yt based on its relationship to an exogenous variable xt must use x̂t, the forecast of xt. An example is given where commercially available x̂t's are sufficiently inaccurate that a univariate model for yt appears preferable. For a variety of types of models, inclusion of an exogenous variable xt is shown to worsen the yt forecasts whenever xt must itself be replaced by a forecast x̂t and MSE(x̂t) > Var(xt). Tests with forecasts from a variety of sources indicate that, with a few notable exceptions, MSE(x̂t) > Var(xt) is common for macroeconomic forecasts more than a quarter or two ahead. Thus, either:
  • (a) available medium‐range forecasts for many macroeconomic variables (e.g. the GNP growth rate) are not an improvement over the sample mean (so that such variables are not useful explanatory variables in forecasting models), and/or
  • (b) the suboptimization involved in directly replacing xt by x̂t is a luxury that we cannot afford.
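The MSE(x̂t) > Var(xt) threshold is easy to verify by simulation. A toy sketch (all values assumed): y depends on x with unit variance, and the x-forecast error variance is swept through the threshold:

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta = 200_000, 1.0
x = rng.standard_normal(n)                  # Var(x) = 1
y = beta * x + 0.5 * rng.standard_normal(n)

for mse_xhat in (0.5, 1.0, 2.0):            # forecast quality for x
    xhat = x + np.sqrt(mse_xhat) * rng.standard_normal(n)
    mse_with_x = np.mean((y - beta * xhat) ** 2)   # model using forecasted x
    mse_mean = np.mean((y - y.mean()) ** 2)        # univariate benchmark
    print(mse_xhat, round(mse_with_x, 3), round(mse_mean, 3))
```

Once MSE(x̂) exceeds Var(x) = 1, the exogenous-variable model's error (0.25 + β²·MSE(x̂)) overtakes the univariate benchmark's Var(y) = 1.25, exactly as the condition above predicts.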

17.
The hedging of weather risks has become extremely relevant in recent years, promoting the diffusion of weather‐derivative contracts. The pricing of such contracts requires the development of appropriate models for the prediction of the underlying weather variables. Within this framework, a commonly used specification is the ARFIMA‐GARCH. We provide a generalization of such a model, introducing time‐varying memory coefficients. Our model is consistent with the empirical evidence of changing memory levels observed in average temperature series, and provides useful improvements in the forecasting, simulation, and pricing issues related to weather derivatives. We present an application related to the forecast and simulation of a temperature index density, which is then used for the pricing of weather options. Copyright © 2011 John Wiley & Sons, Ltd.

18.
In this article we model the log of the US inflation rate by means of fractionally integrated processes. We use the tests of Robinson (1994) for testing this type of hypothesis, which include, as particular cases, the I(0) and I(1) specifications, and which also, unusually, have standard null and local limit distributions. A model selection criterion is established to determine which may be the best model specification of the series, and the forecasting properties of the selected models are also examined. The results vary substantially depending on how we specify the disturbances. Thus, if they are white noise, the series is I(d) with d fluctuating around 0.25; however, imposing autoregressive disturbances, the log of the US inflation rate seems to be anti‐persistent, with an order of integration smaller than zero. Looking at the forecasting properties, those models based on autocorrelated disturbances (with d < 0) predict better over a short horizon, while those based on white noise disturbances (with d > 0) seem to predict better over longer periods of time. Copyright © 2005 John Wiley & Sons, Ltd.
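Robinson's (1994) tests are not reproduced here, but a log-periodogram (GPH) regression gives a quick, standard way to gauge the order of integration d of such a series (the bandwidth choice m = √n below is an arbitrary illustration):

```python
import numpy as np

def gph_d(x, m=None):
    """Log-periodogram (GPH) estimate of the memory parameter d --
    a simpler illustration than the Robinson (1994) tests in the paper."""
    n = len(x)
    m = m or int(n ** 0.5)                 # number of low frequencies used
    lam = 2 * np.pi * np.arange(1, m + 1) / n
    I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
    X = -2 * np.log(2 * np.sin(lam / 2))   # regressor whose slope estimates d
    X = np.column_stack([np.ones(m), X])
    beta, *_ = np.linalg.lstsq(X, np.log(I), rcond=None)
    return beta[1]
```

A value near 0.25 would match the paper's white-noise-disturbance finding, while a negative estimate would point to the anti-persistence found under autoregressive disturbances.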

19.
Changes in mortality rates have an impact on the life insurance industry, the financial sector (as a significant proportion of the financial markets is driven by pension funds), governmental agencies, and decision makers and policymakers. The pricing of financial, pension and insurance products that are contingent upon survival or death, which depends on the accuracy of central mortality rates, is therefore of key importance. Recently, a temperature‐related mortality (TRM) model was proposed by Seklecka et al. (Journal of Forecasting, 2017, 36(7), 824–841); it has shown evidence of outperforming the Lee and Carter (Journal of the American Statistical Association, 1992, 87, 659–671) model and several of its extensions when mortality‐experience data from the UK are used. When fitting the TRM model, there is a need for awareness of model risk in assessing longevity‐related liabilities, especially when pricing long‐term annuities and pensions. In this paper, the impact of uncertainty in the various parameters involved in the model is examined. We demonstrate a number of ways to quantify model risk in the estimation of the temperature‐related parameters, the choice of the forecasting methodology, the structure of the actuarial products chosen (e.g., annuity, endowment and life insurance), and the actuarial reserve. Several tables and figures illustrate the main findings of this paper.

20.
Given the evidence that an infinite‐order vector autoregression setting is more realistic for time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h‐step‐ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT, under the assumption that an infinite‐order vector autoregression can be approximated, under suitable conditions, by a sequence of truncated models, where nT increases with the sample size. Using finite‐order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small‐sample distribution of the lag‐order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, since they yield the lowest h‐step‐ahead forecast mean squared errors for the individual components of the held‐out pseudo‐data to be forecast. Thus estimating the actual autoregressive order and selecting the best forecasting model can be achieved with the same procedure. These results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy of strongly consistent criteria such as the Schwarz information criterion is overstated in the literature. Our criteria extend those previously existing in the literature and can suitably be used in various practical situations. Copyright © 2015 John Wiley & Sons, Ltd.
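The paper's criteria are analytic, but their target, the h-step-ahead forecast MSE as a function of the truncation order, can be illustrated directly with a least-squares VAR and a holdout comparison (all function and parameter names below are ours, not the paper's):

```python
import numpy as np

def var_fit(Y, p):
    """Least-squares VAR(p): Y_t = c + A_1 Y_{t-1} + ... + A_p Y_{t-p} + u_t."""
    T = len(Y)
    X = np.column_stack([np.ones(T - p)] +
                        [Y[p - j:T - j] for j in range(1, p + 1)])
    B, *_ = np.linalg.lstsq(X, Y[p:], rcond=None)
    return B

def var_forecast(Y, B, p, h):
    """Iterate the fitted VAR h steps ahead from the end of Y."""
    hist = list(Y[-p:])
    for _ in range(h):
        x = np.concatenate([[1.0]] + [hist[-j] for j in range(1, p + 1)])
        hist.append(x @ B)
    return hist[-1]

def select_order(Y, h, pmax, holdout):
    """Order minimizing out-of-sample h-step forecast MSE over a holdout."""
    best, T = (np.inf, None), len(Y)
    for p in range(1, pmax + 1):
        errs = [Y[s + h - 1] - var_forecast(Y[:s], var_fit(Y[:s], p), p, h)
                for s in range(T - holdout - h + 1, T - h + 1)]
        mse = np.mean(np.square(errs))
        if mse < best[0]:
            best = (mse, p)
    return best[1]
```

With h = 1 this reduces to ordinary one-step selection; at larger h the chosen order can differ, which is precisely the point of horizon-specific criteria.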
