20 matching records found.
1.
We present a methodology for estimation, prediction, and model assessment of vector autoregressive moving-average (VARMA) models in the Bayesian framework using Markov chain Monte Carlo algorithms. The sampling-based Bayesian framework for inference allows for the incorporation of parameter restrictions, such as stationarity restrictions or zero constraints, through appropriate prior specifications. It also facilitates extensive posterior and predictive analyses through the use of numerical summary statistics and graphical displays, such as box plots and density plots for estimated parameters. We present a method for computationally feasible evaluation of the joint posterior density of the model parameters using the exact likelihood function, and discuss the use of backcasting to approximate the exact likelihood function in certain cases. We also show how to incorporate indicator variables as additional parameters for use in coefficient selection. The sampling is facilitated through a Metropolis–Hastings algorithm. Graphical techniques based on predictive distributions are used for informal model assessment. The methods are illustrated using two data sets from business and economics. The first example consists of quarterly fixed investment, disposable income, and consumption rates for West Germany, which are known to have correlation and feedback relationships between series. The second example consists of monthly revenue data from seven different geographic areas of IBM. The revenue data exhibit seasonality, strong inter-regional dependence, and feedback relationships between certain regions. © 1997 John Wiley & Sons, Ltd.
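As a rough illustration of the sampling machinery described above, the sketch below runs a random-walk Metropolis–Hastings sampler for a single autoregressive coefficient with a flat prior restricted to the stationarity region. It is a deliberately simplified stand-in (univariate AR(1), simulated data, known error variance) rather than the authors' VARMA procedure; all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated AR(1) data standing in for one VARMA component (illustration only).
T, phi_true, sigma = 300, 0.6, 1.0
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal(scale=sigma)

def log_posterior(phi):
    """Flat prior on (-1, 1) enforces stationarity; Gaussian conditional likelihood."""
    if abs(phi) >= 1.0:
        return -np.inf
    resid = y[1:] - phi * y[:-1]
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis-Hastings.
draws, phi, current_lp = [], 0.0, log_posterior(0.0)
for _ in range(5000):
    proposal = phi + rng.normal(scale=0.05)
    lp = log_posterior(proposal)
    if np.log(rng.uniform()) < lp - current_lp:
        phi, current_lp = proposal, lp
    draws.append(phi)

posterior = np.array(draws[1000:])          # drop burn-in draws
print(posterior.mean(), np.quantile(posterior, [0.025, 0.975]))
```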
2.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non-linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non-linearity in the unemployment series. Only recently have there been some developments in applying non-linear models to estimate and forecast unemployment rates. A major concern of non-linear modelling is the model specification problem; it is very hard to test all possible non-linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non-linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back-propagation model and a generalized regression neural network model, to estimate and forecast post-war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out-of-sample forecast results obtained by the ANN models with those obtained by several linear and non-linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
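The following sketch shows the general shape of such an exercise: a small back-propagation network regressing a series on its own lags and evaluated out of sample. It uses synthetic data and scikit-learn's MLPRegressor as an assumed stand-in; it is not the paper's models or data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Synthetic "unemployment rate" series standing in for the paper's data (illustration only).
T = 240
u = 6 + np.cumsum(rng.normal(scale=0.1, size=T))

# Build lagged inputs: predict u[t] from the previous p observations.
p = 4
X = np.column_stack([u[i:T - p + i] for i in range(p)])
y = u[p:]
X_train, X_test, y_train, y_test = X[:-24], X[-24:], y[:-24], y[-24:]

# A small feed-forward network trained by back-propagation.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

rmse = np.sqrt(np.mean((net.predict(X_test) - y_test) ** 2))
print(f"out-of-sample RMSE: {rmse:.3f}")
```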
3.
Martin R. Young, Journal of Forecasting, 1996, 15(5): 355-367
Akaike's BAYSEA approach to seasonal decomposition is designed to capture the respective merits of several pre-existing adjustment techniques. BAYSEA is computationally efficient, requires only weak assumptions about the data-generating process, and is based on solid inferential (namely, Bayesian) foundations. We present a model similar to that used in BAYSEA, but based on a double exponential rather than a Gaussian error model. The resulting procedure has the advantages of Akaike's method, but in addition is resistant to outliers. The optimal decomposition is obtained rapidly using a sparse linear programming code. Confidence bands and predictive intervals can be obtained using Gibbs sampling.
4.
The problem of estimating unknown observational variances in multivariate dynamic linear models is considered. Conjugate procedures are possible for univariate models and also for some special, very restrictive common-components models, but they are not generally applicable. However, for clarity of operation and in order to avoid numerical integration, it is desirable to have conjugacy or approximate conjugacy. Such an approximate procedure is proposed, based upon a simple analytic approximation. It is exact for the sub-class of conjugate models and improves on a previous procedure based upon the robust filter.
5.
Chris Brooks, Journal of Forecasting, 2001, 20(2): 135-143
This paper combines and generalizes a number of recent time series models of daily exchange rate series by using a SETAR model which also allows the variance equation of a GARCH specification for the error terms to be drawn from more than one regime. An application of the model to the French Franc/Deutschmark exchange rate demonstrates that out-of-sample forecasts for the exchange rate volatility are also improved when the restriction that the data are drawn from a single regime is removed. This result highlights the importance of considering both types of regime shift (i.e. thresholds in variance as well as in mean) when analysing financial time series. Copyright © 2000 John Wiley & Sons, Ltd.
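A minimal sketch of the threshold idea follows: a two-regime SETAR process whose error variance also changes by regime, a crude stand-in for the regime-switching GARCH variance equation (all parameter values are assumed).

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a two-regime SETAR(2;1,1) process with regime-dependent error variance,
# a stylized version of a threshold model with shifts in both mean and variance.
T, threshold = 1000, 0.0
phi = {0: 0.5, 1: -0.4}     # AR(1) coefficient per regime (assumed values)
sig = {0: 0.5, 1: 1.5}      # error standard deviation per regime (assumed values)

y = np.zeros(T)
for t in range(1, T):
    regime = 0 if y[t - 1] <= threshold else 1
    y[t] = phi[regime] * y[t - 1] + rng.normal(scale=sig[regime])

# One-step-ahead point forecast given the last observation.
regime = 0 if y[-1] <= threshold else 1
print("forecast:", phi[regime] * y[-1], "| forecast std:", sig[regime])
```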
6.
Michael J. Artis, Anindya Banerjee, Massimiliano Marcellino, Journal of Forecasting, 2005, 24(4): 279-298
Data are now readily available for a very large number of macroeconomic variables that are potentially useful when forecasting. We argue that recent developments in the theory of dynamic factor models enable such large data sets to be summarized by relatively few estimated factors, which can then be used to improve forecast accuracy. In this paper we construct a large macroeconomic data set for the UK, with about 80 variables, model it using a dynamic factor model, and compare the resulting forecasts with those from a set of standard time-series models. We find that just six factors are sufficient to explain 50% of the variability of all the variables in the data set. These factors can be shown to be related to key variables in the economy, and their use leads to considerable improvements over standard time-series benchmarks in terms of forecasting performance. Copyright © 2005 John Wiley & Sons, Ltd.
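A compact sketch of the principal-components route to factor estimation and factor-augmented forecasting follows, on a synthetic panel standing in for the roughly 80-variable UK data set. The PCA extraction and the simple one-step forecasting regression are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic panel of standardized macro indicators driven by a few common factors
# (stands in for the ~80-variable UK data set; illustration only).
T, N, K = 200, 80, 6
factors_true = rng.normal(size=(T, K))
loadings = rng.normal(size=(K, N))
panel = factors_true @ loadings + rng.normal(scale=2.0, size=(T, N))
panel = (panel - panel.mean(0)) / panel.std(0)     # standardize each series

# Estimate factors by principal components, the standard large-N approach.
pca = PCA(n_components=K)
factors_hat = pca.fit_transform(panel)
print("variance explained by 6 factors:", pca.explained_variance_ratio_.sum())

# Factor-augmented forecasting regression: target regressed on lagged factors.
target = panel[:, 0]                               # pretend the first series is the target
X, y = factors_hat[:-1], target[1:]
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
forecast = np.r_[1.0, factors_hat[-1]] @ beta      # one-step-ahead forecast
print("one-step-ahead forecast:", forecast)
```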
7.
A non-linear dynamic model is introduced for multiplicative seasonal time series that follows and extends the X-11 paradigm where the observed time series is a product of trend, seasonal and irregular factors. A selection of standard seasonal and trend component models used in additive dynamic time series models are adapted for the multiplicative framework and a non-linear filtering procedure is proposed. The results are illustrated and compared to X-11 and log-additive models using real data. In particular it is shown that the new procedures do not suffer from the trend bias present in log-additive models. Copyright © 2002 John Wiley & Sons, Ltd.
8.
Paul Newbold, Journal of Forecasting, 1983, 2(1): 23-35
This paper reviews the approach to forecasting based on the construction of ARIMA time series models. Recent developments in this area are surveyed, and the approach is related to other forecasting methodologies.
9.
This paper presents short-term forecasting methods applied to electricity consumption in Brazil. The focus is on comparing the results obtained after using two distinct approaches: dynamic non-linear models and econometric models. The first method, which we propose, is based on structural statistical models for multiple time series analysis and forecasting. It involves non-observable components of locally linear trends for each individual series and a shared multiplicative seasonal component described by dynamic harmonics. The second method, adopted by the electric power utilities in Brazil, consists of extrapolation of the past data and is based on statistical relations of simple or multiple regression type. To illustrate the proposed methodology, a numerical application is considered with real data. The data represent the monthly industrial electricity consumption in Brazil from the three main power utilities: Eletropaulo, Cemig and Light, situated in the major energy-consuming states, Sao Paulo, Rio de Janeiro and Minas Gerais, respectively, in the Brazilian Southeast region. The chosen time period, January 1990 to September 1994, corresponds to an economically unstable period just before the beginning of the Brazilian Privatization Program. Implementation of the algorithms considered in this work was made via the statistical software S-PLUS. Copyright © 1999 John Wiley & Sons, Ltd.
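For the structural (unobserved-components) side of the comparison, a local linear trend plus a trigonometric seasonal component can be fitted with off-the-shelf state-space tools. The sketch below uses statsmodels' UnobservedComponents on synthetic monthly data as an assumed stand-in for the approach described, not the authors' S-PLUS implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Synthetic monthly consumption-like series with trend and seasonality
# (stands in for the Brazilian utility data; illustration only).
T = 120
t = np.arange(T)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=2.0, size=T)

# Structural model: local linear trend plus a trigonometric (harmonic) seasonal
# component, in the spirit of the paper's first approach.
model = sm.tsa.UnobservedComponents(
    y,
    level='local linear trend',
    freq_seasonal=[{'period': 12, 'harmonics': 3}],
)
result = model.fit(disp=False)
print(result.forecast(steps=12))   # 12-month-ahead forecasts
```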
10.
An underlying assumption in Multivariate Singular Spectrum Analysis (MSSA) is that the time series are governed by a linear recurrent continuation. However, in the presence of a structural break, multiple series can be transferred from one homogeneous state to another over a comparatively short time, breaking this assumption. As a consequence, forecasting performance can degrade significantly. In this paper, we propose a state-dependent model to incorporate the movement of states in the linear recurrent formula, called a State-Dependent Multivariate SSA (SD-MSSA) model. The proposed model is examined for its reliability in the presence of a structural break by conducting an empirical analysis covering both synthetic and real data. Comparison with standard MSSA, BVAR, VAR and VECM models shows that the proposed model outperforms these benchmark models significantly.
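As background for the method, the following sketch implements the basic univariate SSA steps (embedding, SVD, diagonal averaging) that MSSA builds on; it is not the state-dependent multivariate variant proposed in the paper, and the window length and component count are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Basic univariate SSA decomposition (a stripped-down building block of MSSA;
# this is not the paper's state-dependent variant).
T, L = 200, 40                       # series length and window length (assumed)
t = np.arange(T)
y = np.sin(2 * np.pi * t / 20) + 0.3 * rng.normal(size=T)

# 1. Embedding: form the L x K trajectory (Hankel) matrix.
K = T - L + 1
X = np.column_stack([y[i:i + L] for i in range(K)])

# 2. SVD of the trajectory matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# 3. Reconstruct the signal from the leading r components by diagonal averaging.
r = 2
X_r = (U[:, :r] * s[:r]) @ Vt[:r]
recon = np.zeros(T)
counts = np.zeros(T)
for j in range(K):
    recon[j:j + L] += X_r[:, j]
    counts[j:j + L] += 1
recon /= counts

print("reconstruction RMSE vs noisy series:", np.sqrt(np.mean((recon - y) ** 2)))
```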
11.
Victor M. Guerrero, Journal of Forecasting, 1993, 12(1): 37-48
This paper presents some procedures aimed at helping an applied time-series analyst in the use of power transformations. Two methods are proposed for selecting a variance-stabilizing transformation and another for bias-reduction of the forecast in the original scale. Since these methods are essentially model-independent, they can be employed with practically any type of time-series model. Some comparisons are made with other methods currently available and it is shown that those proposed here are either easier to apply or are more general, with a performance similar to or better than other competing procedures.
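A small sketch of the two ingredients discussed follows: selecting a power transformation (here by the usual Box–Cox maximum-likelihood criterion, a common default rather than the paper's own selection rule) and correcting the bias of a back-transformed forecast in the log case via the standard lognormal adjustment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Positive series whose variance grows with its level (illustration only).
T = 200
level = np.exp(0.01 * np.arange(T) + rng.normal(scale=0.1, size=T).cumsum())

# Select a variance-stabilizing power transformation by maximum likelihood
# (a common choice; the paper proposes its own model-independent selection rules).
z, lam = stats.boxcox(level)
print("estimated Box-Cox lambda:", lam)

# Forecast on the transformed scale (here a naive mean forecast for illustration),
# then back-transform. For the log case (lambda = 0), the naive back-transform
# exp(m) is biased; the standard lognormal correction multiplies by exp(s2 / 2).
log_y = np.log(level)
m, s2 = log_y.mean(), log_y.var(ddof=1)
naive_back = np.exp(m)
bias_corrected = np.exp(m + s2 / 2)
print("naive back-transform:", naive_back, "bias-corrected:", bias_corrected)
```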
12.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log-likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy-tailed densities such as the t or the mixture of normals. For this class of non-Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.
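The Gaussian baseline referred to above, the Airline model ARIMA(0,1,1)(0,1,1)_12, can be fitted directly with standard software; the sketch below does so on synthetic monthly data. The paper's outlier- and break-robust extension, which needs the canonical unobserved-components form and simulation methods, is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(7)

# Synthetic monthly series with trend and seasonality (illustration only).
T = 144
t = np.arange(T)
y = 50 + 0.3 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(scale=1.5, size=T)

# The Airline model: ARIMA(0,1,1)(0,1,1)_12, i.e. a moving average of order one
# in both the regular and the seasonal differences, with two MA coefficients
# and a fixed innovation variance.
model = ARIMA(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12))
result = model.fit()
print(result.params)          # the two MA coefficients and the innovation variance
print(result.forecast(12))    # 12-month-ahead forecasts
```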
13.
Umberto Triacca, Journal of Forecasting, 2002, 21(8): 595-599
In time series analysis, a vector Y is often called causal for another vector X if the former helps to improve the k-step-ahead forecast of the latter. If this holds for k=1, vector Y is commonly called Granger-causal for X. It has been shown in several studies that the finding of causality between two (vectors of) variables is not robust to changes of the information set. In this paper, using the concept of Hilbert spaces, we derive a condition under which the predictive relationships between two vectors are invariant to the selection of a bivariate or trivariate framework. In more detail, we provide a condition under which the finding of causality (improved predictability at forecast horizon 1) or non-causality of Y for X is unaffected if the information set is either enlarged or reduced by the information in a third vector Z. This result has practical usefulness since it provides guidance for validating the choice of the bivariate system {X, Y} in place of {X, Y, Z}. In fact, to test the 'goodness' of {X, Y} we only need to test whether Z Granger-causes X, without requiring the joint analysis of all variables in {X, Y, Z}. Copyright © 2002 John Wiley & Sons, Ltd.
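The empirical counterpart of the notion used above, Granger causality at horizon 1, can be checked with a standard test; the sketch below simulates a bivariate system and applies statsmodels' grangercausalitytests. The paper's contribution is the theoretical invariance condition, which this sketch does not implement.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(8)

# Simulate a bivariate system in which y Granger-causes x (illustration only).
T = 500
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()
    x[t] = 0.3 * x[t - 1] + 0.4 * y[t - 1] + rng.normal()

# grangercausalitytests checks whether the second column helps predict the first.
data = np.column_stack([x, y])
grangercausalitytests(data, maxlag=2)
```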
14.
The Poisson integer-valued auto-regressive process of order 1 (PINAR(1)), due to Al-Osh and Alzaid (Journal of Time Series Analysis 1987; 8(3): 261–275) and McKenzie (Advances in Applied Probability 1988; 20(4): 822–835), has received significant attention in modelling low-count time series during the last two decades because of its simplicity. But in many practical scenarios the process appears to be inadequate, especially when data are overdispersed in nature. This overdispersion occurs mainly for three reasons: the presence of some extreme values, a large number of zeros, or the presence of both extreme values and a large number of zeros. In this article, we develop a zero-inflated Poisson INAR(1) process as an alternative to the PINAR(1) process when the number of zeros in the data is larger than the number of zeros expected under the Poisson process. We investigate some important properties such as stationarity, ergodicity, autocorrelation structure, and conditional distribution, with a detailed study on h-step-ahead coherent forecasting. A comparative study among different methods of parameter estimation is carried out using some simulated data. One real dataset is analysed for practical illustration. Copyright © 2015 John Wiley & Sons, Ltd.
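A minimal simulation of the process described follows, with binomial thinning and zero-inflated Poisson innovations, plus the h-step-ahead conditional mean forecast of an INAR(1) process (parameter values are assumed for illustration).

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulate a zero-inflated Poisson INAR(1) process:
#   X_t = alpha o X_{t-1} + eps_t,
# where "o" is binomial thinning and eps_t is 0 with probability rho,
# otherwise Poisson(lam). (Parameter values are assumed for illustration.)
T, alpha, rho, lam = 500, 0.5, 0.3, 2.0
x = np.zeros(T, dtype=int)
for t in range(1, T):
    survivors = rng.binomial(x[t - 1], alpha)               # binomial thinning
    innovation = 0 if rng.uniform() < rho else rng.poisson(lam)
    x[t] = survivors + innovation

# h-step-ahead conditional mean forecast of an INAR(1) process.
h = 3
mean_eps = (1 - rho) * lam
forecast = alpha ** h * x[-1] + mean_eps * (1 - alpha ** h) / (1 - alpha)
print("last count:", x[-1], f"| {h}-step-ahead conditional mean:", forecast)
```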
15.
Rob J. Hyndman, Journal of Forecasting, 1995, 14(5): 431-441
Forecast regions are a common way to summarize forecast accuracy. They usually consist of an interval symmetric about the forecast mean. However, symmetric intervals may not be appropriate forecast regions when the forecast density is not symmetric and unimodal. With many modern time series models, such as those which are non-linear or have non-normal errors, the forecast densities are often asymmetric or multimodal. The problem of obtaining forecast regions in such cases is considered and it is proposed that highest-density forecast regions be used. A graphical method for presenting the results is discussed.
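A short sketch of how a highest-density forecast region can be computed from simulated forecast draws when the density is bimodal, so the region splits into two disjoint intervals. The density estimator and coverage level are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(10)

# Simulated draws from a bimodal forecast density, where a symmetric interval
# around the mean would be a poor summary (illustration only).
draws = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(2, 0.5, 5000)])

# Estimate the density, then find the threshold f_alpha such that the region
# {y : f(y) >= f_alpha} contains 90% of the probability mass. One simple
# estimator of f_alpha is the 10% quantile of the density evaluated at the draws.
kde = gaussian_kde(draws)
f_alpha = np.quantile(kde(draws), 0.10)

# Report the highest-density region on a grid; it splits into two intervals here.
grid = np.linspace(draws.min(), draws.max(), 2000)
inside = kde(grid) >= f_alpha
edges = grid[np.flatnonzero(np.diff(inside.astype(int))) + 1]
print("90% HDR boundaries (approx.):", edges)
```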
16.
Artificial neural network modelling has recently attracted much attention as a new technique for estimation and forecasting in economics and finance. The chief advantages of this new approach are that such models can usually find a solution for very complex problems, and that they are free from the assumption of linearity that is often adopted to make the traditional methods tractable. In this paper we compare the performance of Back-Propagation Artificial Neural Network (BPN) models with the traditional econometric approaches to forecasting the inflation rate. Of the traditional econometric models we use a structural reduced-form model, an ARIMA model, a vector autoregressive model, and a Bayesian vector autoregression model. We compare each econometric model with a hybrid BPN model which uses the same set of variables. Dynamic forecasts are compared for three different horizons: one, three and twelve months ahead. Root mean squared errors and mean absolute errors are used to compare quality of forecasts. The results show the hybrid BPN models are able to forecast as well as all the traditional econometric methods, and to outperform them in some cases. Copyright © 2000 John Wiley & Sons, Ltd.
17.
One important aspect concerning the analysis and forecasting of time series that is sometimes neglected is the relationship between a model and the sampling interval, in particular when the observation is cumulative over the sampling period. This paper intends to study temporal aggregation in Bayesian dynamic linear models (DLM). Suppose that a time series Y_t is observed at time units t and the observations of the process are aggregated over r units of time, defining a new time series Z_k = Σ_{i=1}^{r} Y_{rk+i}. The relevant factors explaining the variation of Z_k can, and in general will, be different, depending on how the sampling interval r is chosen. It is shown that if Y_t follows certain dynamic linear models, then the aggregated series can also be described by a possibly different DLM. In the examples, the industrial production of Brazil is analysed under various aggregation periods and the results are compared. © 1997 John Wiley & Sons, Ltd.
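A tiny numerical illustration of the aggregation scheme Z_k = Σ_{i=1}^{r} Y_{rk+i}, turning a monthly flow series into quarterly totals (synthetic data, with r = 3 assumed):

```python
import numpy as np

rng = np.random.default_rng(11)

# Monthly flow series Y_t (illustration only) aggregated to quarterly totals:
#   Z_k = Y_{3k+1} + Y_{3k+2} + Y_{3k+3}  (here with r = 3).
r, T = 3, 120
y = 100 + np.cumsum(rng.normal(scale=2.0, size=T)) + rng.normal(scale=5.0, size=T)

z = y.reshape(-1, r).sum(axis=1)    # non-overlapping sums of r consecutive observations
print("monthly observations:", len(y), "| quarterly aggregates:", len(z))
```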
18.
This study addresses, for the first time, the systematic evaluation of a widely used class of forecasts: regional economic forecasts. Ex ante regional structural equation model forecasts are analysed for 19 metropolitan areas. One- to ten-quarter-ahead forecasts are considered and the seven-year sample spans a complete business cycle. Counter to previous speculation in the literature, (1) dependency on macroeconomic forecasting model inputs does not substantially erode accuracy relative to univariate extrapolative methodologies and (2) stochastic time series models do not, on average, yield more accurate regional economic predictions than structural models. Similar to findings in other studies, clear preferences among extrapolative methodologies do not emerge. Most general conclusions, however, are subject to caveats based on step-length effects and region-specific effects.
19.
Chahid Ahabchane, Tolga Cenesizoglu, Gunnar Grass, Sanjay Dominik Jena, Journal of Forecasting, 2024, 43(8): 2982-3008
Market participants who need to trade a significant number of securities within a given period can face high transaction costs. In this paper, we document how improvements in intraday liquidity forecasts can help reduce total transaction costs. We compare various approaches for forecasting intraday transaction costs, including autoregressive and machine learning models, using comprehensive ultra-high-frequency limit order book data for a sample of NYSE stocks from 2002 to 2012. Our results indicate that improved liquidity forecasts can significantly decrease total transaction costs. Simple models capturing seasonality in market liquidity tend to outperform alternative models.
20.
Bovas Abraham, Journal of Forecasting, 1993, 12(5): 449-458
The practice of modelling the components of a vector time series to arrive at a joint model for the vector is considered. It is shown that in some cases this is not unreasonable. A vector ARMA model is used to model the Canadian money and income data. We also use these data to discuss the issue of differencing a multiple time series. Finally, models based on first and second differences are compared using forecasts.