Similar Documents (20 results)
1.
This article develops and extends previous investigations on the temporal aggregation of ARMA predictions. Given a basic ARMA model for disaggregated data, two sets of predictors may be constructed for future temporal aggregates: predictions based on models utilizing aggregated data, or on models constructed from disaggregated data whose forecasts are updated as soon as new information becomes available. We show that considerable gains in efficiency, based on mean‐square‐error‐type criteria, can be obtained for short‐term predictions when using models based on updated disaggregated data. However, as the prediction horizon increases, the gain from using updated disaggregated data diminishes substantially. In addition to theoretical results on the forecast efficiency of ARMA models, we illustrate our findings with two well‐known time series. Copyright © 2004 John Wiley & Sons, Ltd.
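The efficiency gain described above can be illustrated with a minimal sketch, assuming an AR(1) data-generating process for the disaggregated series (the abstract covers general ARMA models); the function name and all parameter values are invented for illustration:

```python
def forecast_sum_ar1(y_last, phi, k):
    # For an AR(1) y_t = phi * y_{t-1} + e_t, E[y_{T+j} | y_T] = phi**j * y_T,
    # so the predictor of the k-period temporal aggregate sums these terms.
    return y_last * sum(phi ** j for j in range(1, k + 1))

# Forecast of a 4-period aggregate made before any intra-period data arrive.
full = forecast_sum_ar1(2.0, 0.5, 4)            # 2 * (0.5 + 0.25 + 0.125 + 0.0625) = 1.875

# After the first sub-period value (1.2) is observed, the updated predictor
# adds the known value to a shorter-horizon forecast of the remainder.
updated = 1.2 + forecast_sum_ar1(1.2, 0.5, 3)   # 1.2 + 1.2 * 0.875 = 2.25
```

Each new disaggregated observation shortens the effective forecast horizon, which is the source of the short-term efficiency gain the article documents.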

2.
An approach is proposed for obtaining estimates of the basic (disaggregated) series, x_t, when only an aggregate series, y_t, of k‐period non‐overlapping sums of the x_t's is available. The approach is based on casting the problem in dynamic linear model form, so that estimates of x_t can be obtained by applying Kalman filtering techniques. An ad hoc procedure is introduced for deriving a model for the unobserved basic series from the observed model of the aggregates. An application of this approach to a set of real data is given.

3.
This paper examines the problem of forecasting macro‐variables which are observed monthly (or quarterly) and result from geographical and sectorial aggregation. The aim is to formulate a methodology whereby all relevant information gathered in this context could provide more accurate forecasts, be frequently updated, and include a disaggregated explanation as useful information for decision‐making. The appropriate treatment of the resulting disaggregated data set requires vector modelling, which captures the long‐run restrictions between the different time series and the short‐term correlations existing between their stationary transformations. Frequently, due to a lack of degrees of freedom, the vector model must be restricted to a block‐diagonal vector model. This methodology is applied in this paper to inflation in the euro area, and shows that disaggregated models with cointegration restrictions improve accuracy in forecasting aggregate macro‐variables. Copyright © 2007 John Wiley & Sons, Ltd.

4.
The paper presents a comparative real‐time analysis of alternative indirect estimates of monthly euro area employment. In the experiment, quarterly employment is temporally disaggregated using monthly unemployment as the related series. The strategies under comparison exploit sectoral data for the euro area and its six largest member states. The comparison is carried out among univariate temporal disaggregations of the Chow and Lin type and multivariate structural time series models of small and medium size. Specifications in logarithms are also systematically assessed. All multivariate set‐ups, with up to 49 series modelled simultaneously, are estimated via the EM algorithm. The main conclusions are that mean revision errors of the disaggregated estimates are overall small, that a gain is obtained when the model strategy takes into account information by both sector and member state, and that larger multivariate set‐ups perform very well, with several advantages over simpler models. Copyright © 2014 John Wiley & Sons, Ltd.

5.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non‐linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non‐linearity in the unemployment series. Only recently have there been some developments in applying non‐linear models to estimate and forecast unemployment rates. A major concern of non‐linear modelling is the model specification problem; it is very hard to test all possible non‐linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non‐linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back‐propagation model and a generalized regression neural network model, to estimate and forecast post‐war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out‐of‐sample forecast results obtained by the ANN models with those obtained by several linear and non‐linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.
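A generalized regression neural network reduces to a Gaussian-kernel-weighted average of training targets, which makes the idea easy to sketch in a few lines. The toy one-lag example below is only an illustration of the mechanism, not the paper's specification; the data, the bandwidth sigma and the single-lag input are all invented:

```python
import math

def grnn_predict(x_train, y_train, x_new, sigma=0.5):
    # GRNN output: kernel-weighted average of the training targets, with
    # weights given by a Gaussian kernel centred on the query point.
    weights = [math.exp(-((x_new - xi) ** 2) / (2 * sigma ** 2)) for xi in x_train]
    return sum(w * y for w, y in zip(weights, y_train)) / sum(weights)

# Toy one-lag setup: predict next period's unemployment rate from the current one.
rates = [5.0, 5.2, 5.6, 6.1, 6.0, 5.7, 5.3]      # invented data
x_train, y_train = rates[:-1], rates[1:]          # training pairs (u_t, u_{t+1})
forecast = grnn_predict(x_train, y_train, rates[-1])
```

Because the output is a convex combination of observed targets, the forecast is always bounded by the training data, one reason GRNNs are stable without classical regression assumptions.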

6.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log‐likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy‐tailed densities such as the t or the mixture of normals. For this class of non‐Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.
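The "in levels and seasons" differencing the Airline model applies can be sketched directly. The toy monthly series below (a linear trend plus a fixed seasonal pattern, invented for illustration) is annihilated by the double difference (1 − B)(1 − B^12); in real data only the MA(1)×(1)_12 disturbance structure would remain:

```python
def difference(y, lag=1):
    # Apply (1 - B^lag): y_t - y_{t-lag}.
    return [y[t] - y[t - lag] for t in range(lag, len(y))]

# Invented monthly series: linear trend plus a fixed 12-month seasonal.
seasonal = [3, 1, -2, 0, 4, -1, 2, -3, 1, 0, -2, -3]
y = [0.5 * t + seasonal[t % 12] for t in range(48)]

# The Airline model is specified for (1 - B)(1 - B^12) y_t; the deterministic
# trend-plus-seasonal component is reduced exactly to zero here.
dd = difference(difference(y, lag=1), lag=12)
```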

7.
Since volatility is perceived as an explicit measure of risk, financial economists have long been concerned with accurate measures and forecasts of future volatility, and the Generalized Autoregressive Conditional Heteroscedasticity (GARCH) model has been widely used for this purpose. It appears from some empirical studies, however, that the GARCH model tends to provide poor volatility forecasts in the presence of additive outliers. To overcome this limitation, this paper proposes a robust GARCH model (RGARCH) using least absolute deviation estimation and introduces an estimation method that is valuable from a practical point of view. Extensive Monte Carlo experiments substantiate our conjectures: as the magnitude of the outliers increases, the one‐step‐ahead forecasting performance of the RGARCH model improves more markedly, on two forecast evaluation criteria, over both the standard GARCH and random walk models. Strong evidence in favour of the RGARCH model over the competing models also emerges from an empirical application: using two daily exchange rate series, we find that the out‐of‐sample volatility forecasts of the RGARCH model are clearly superior to those of the competing models. Copyright © 2002 John Wiley & Sons, Ltd.
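The one-step-ahead variance forecast that both the standard GARCH and the RGARCH produce follows the usual GARCH(1,1) recursion; the paper's robustification lies in how the parameters are estimated (least absolute deviation rather than the usual quasi-maximum likelihood), not in the recursion itself. A sketch with purely illustrative parameter values:

```python
def garch_one_step(omega, alpha, beta, resid_last, var_last):
    # GARCH(1,1) one-step-ahead conditional variance:
    # sigma^2_{t+1} = omega + alpha * e_t^2 + beta * sigma^2_t
    return omega + alpha * resid_last ** 2 + beta * var_last

# A large residual (e.g. an additive outlier) enters squared and inflates the
# variance forecast, which is what motivates robust parameter estimation.
v = garch_one_step(0.1, 0.05, 0.90, resid_last=2.0, var_last=1.5)   # 1.65
```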

8.
We investigate the optimal structure of dynamic regression models used in multivariate time series prediction and propose a scheme to form the lagged variable structure called Backward‐in‐Time Selection (BTS), which takes into account feedback and multicollinearity, often present in multivariate time series. We compare BTS to other known methods, also in conjunction with regularization techniques used for the estimation of model parameters, namely principal components, partial least squares and ridge regression estimation. The predictive efficiency of the different models is assessed by means of Monte Carlo simulations for different settings of feedback and multicollinearity. The results show that BTS has consistently good prediction performance, while other popular methods have varying and often inferior performance. The prediction performance of BTS was also found the best when tested on human electroencephalograms of an epileptic seizure, and for the prediction of returns of indices of world financial markets. Copyright © 2013 John Wiley & Sons, Ltd.

9.
In this paper we present an intelligent decision‐support system based on neural network technology for model selection and forecasting. While most of the literature on the application of neural networks in forecasting addresses the use of neural network technology as an alternative forecasting tool, limited research has focused on its use for selection of forecasting methods based on time‐series characteristics. In this research, a neural network‐based decision support system is presented as a method for forecast model selection. The neural network approach provides a framework for directly incorporating time‐series characteristics into the model‐selection phase. Using a neural network, a forecasting group is initially selected for a given data set, based on a set of time‐series characteristics. Then, using an additional neural network, a specific forecasting method is selected from a pool of three candidate methods. The results of training and testing of the networks are presented along with conclusions. Copyright © 1999 John Wiley & Sons, Ltd.

10.
The growing affluence of the East and Southeast Asian economies has come about through a substantial increase in their economic links with the rest of the world, the OECD economies in particular. Econometric studies that try to quantify these links face a severe shortage of high‐frequency time series data for China and the group of ASEAN4 (Indonesia, Malaysia, Philippines and Thailand). In this paper we provide quarterly real GDP estimates for these countries derived by applying the Chow–Lin related series technique to annual real GDP series. The quality of the disaggregated series is evaluated through a number of indirect methods. Some potential problems of using readily available univariate disaggregation techniques are also highlighted. Copyright © 2004 John Wiley & Sons, Ltd.
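For intuition only, the sketch below performs simple pro-rata benchmarking with a quarterly indicator series. This is deliberately not the Chow–Lin estimator used in the paper, which instead fits a GLS regression of the annual aggregates on annualized indicators with serially correlated errors; the data and function name here are invented:

```python
def pro_rata_disaggregate(annual_totals, indicator):
    # Split each annual total across its four quarters in proportion to a
    # quarterly indicator series. Unlike Chow-Lin, no regression error
    # structure is modelled; the quarterly values simply sum to the annual total.
    quarterly = []
    for year, total in enumerate(annual_totals):
        z = indicator[4 * year: 4 * year + 4]
        share = sum(z)
        quarterly.extend(total * zi / share for zi in z)
    return quarterly

q = pro_rata_disaggregate([100.0], [20.0, 25.0, 25.0, 30.0])
```

Both methods share the benchmarking property that the disaggregated values aggregate back to the observed annual figure; Chow–Lin additionally distributes the regression residual optimally across quarters.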

11.
Multifractal models have recently been introduced as a new type of data‐generating process for asset returns and other financial data. Here we propose an adaptation of this model for realized volatility. We estimate this new model via generalized method of moments and perform forecasting by means of best linear forecasts derived via the Levinson–Durbin algorithm. Its out‐of‐sample performance is compared against other popular time series specifications. Using an intra‐day dataset for five major international stock market indices, we find that the multifractal model for realized volatility improves upon forecasts of its earlier counterparts based on daily returns and of many other volatility models. While the more traditional RV‐ARFIMA model comes out as the most successful model (in terms of the number of cases in which it has the best forecasts for all combinations of forecast horizons and evaluation criteria), the new model often performs significantly better during the turbulent times of the recent financial crisis. Copyright © 2014 John Wiley & Sons, Ltd.
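The Levinson–Durbin recursion mentioned above solves the Yule–Walker equations for the coefficients of the best linear predictor in O(p²) operations. This is a generic stdlib sketch of that recursion, not the paper's GMM-estimated multifractal model; the AR(1) autocovariance input is an illustrative test case:

```python
def levinson_durbin(acov, order):
    # Recursively solve the Yule-Walker equations for the best linear
    # predictor coefficients, given autocovariances acov[0..order].
    a = [0.0] * (order + 1)
    err = acov[0]
    for m in range(1, order + 1):
        k = (acov[m] - sum(a[j] * acov[m - j] for j in range(1, m))) / err
        new_a = a[:]
        new_a[m] = k
        for j in range(1, m):
            new_a[j] = a[j] - k * a[m - j]
        a, err = new_a, err * (1 - k * k)
    return a[1:], err            # AR coefficients and prediction error variance

# For AR(1) autocovariances gamma_k proportional to phi**k (phi = 0.7),
# the order-2 recursion recovers coefficients [phi, 0].
coeffs, err = levinson_durbin([1.0, 0.7, 0.49], 2)
```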

12.
Using a structural time‐series model, the forecasting accuracy of a wide range of macroeconomic variables is investigated. Of specific importance is whether the Henderson moving‐average procedure distorts the underlying time‐series properties of the data for forecasting purposes. Given the weight of attention in the literature to the seasonal adjustment process used by various statistical agencies, this study hopes to address the dearth of literature on 'trending' procedures. Forecasts using both the trended and untrended series are generated. The forecasts are then made comparable by 'detrending' the trended forecasts, and comparing both series to the realised values. Forecasting accuracy is measured by a suite of common methods, and a test of significance of difference is applied to the respective root mean square errors. It is found that the Henderson procedure does not lead to deterioration in forecasting accuracy in Australian macroeconomic variables on most occasions, though the conclusions are very different between the one‐step‐ahead and multi‐step‐ahead forecasts. Copyright © 2011 John Wiley & Sons, Ltd.

13.
This paper considers the problem of forecasting high‐dimensional time series. It employs a robust clustering approach to perform classification of the component series. Each series within a cluster is assumed to follow the same model and the data are then pooled for estimation. The classification is model‐based and robust to outlier contamination. The robustness is achieved by using the intrinsic mode functions of the Hilbert–Huang transform at lower frequencies. These functions are found to be robust to outlier contamination. The paper also compares out‐of‐sample forecast performance of the proposed method with several methods available in the literature. The other forecasting methods considered include vector autoregressive models with/without LASSO, group LASSO, principal component regression, and partial least squares. The proposed method is found to perform well in out‐of‐sample forecasting of the monthly unemployment rates of 50 US states. Copyright © 2013 John Wiley & Sons, Ltd.

14.
We propose a wavelet neural network (neuro‐wavelet) model for the short‐term forecast of stock returns from high‐frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non‐stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non‐decimated wavelet‐based multi‐resolution decomposition of an exogenous signal showed a consistently superior forecasting performance. Reasonable forecasting accuracy for the one‐, three‐ and five‐step‐ahead horizons was achieved by the proposed model. The procedure used to build the neuro‐wavelet model is reusable and can be applied to any high‐frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.

15.
We present the results on the comparison of efficiency of approximate Bayesian methods for the analysis and forecasting of non‐Gaussian dynamic processes. A numerical algorithm based on MCMC methods has been developed to carry out the Bayesian analysis of non‐linear time series. Although the MCMC‐based approach is not fast, it allows us to study the efficiency, in predicting future observations, of approximate propagation procedures that, being algebraic, have the practical advantage of being very quick. Copyright © 2000 John Wiley & Sons, Ltd.

16.
Although both direct multi‐step‐ahead forecasting and iterated one‐step‐ahead forecasting are two popular methods for predicting future values of a time series, it is not clear that the direct method is superior in practice, even though from a theoretical perspective it has lower mean squared error (MSE). A given model can be fitted according to either a multi‐step or a one‐step forecast error criterion, and we show here that discrepancies in performance between direct and iterative forecasting arise chiefly from the method of fitting, and are dictated by the nuances of the model's misspecification. We derive new formulas for quantifying iterative forecast MSE, and present a new approach for assessing asymptotic forecast MSE. Finally, the direct and iterative methods are compared on a retail series, which illustrates the strengths and weaknesses of each approach. Copyright © 2015 John Wiley & Sons, Ltd.
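The distinction can be sketched with a correctly specified AR(1), where the two approaches agree asymptotically; the paper's interest is precisely the misspecified case, so this is only a baseline illustration with invented data and parameters:

```python
import random

random.seed(0)
phi = 0.6
y = [0.0]
for _ in range(600):                       # simulate an AR(1): y_t = phi*y_{t-1} + e_t
    y.append(phi * y[-1] + random.gauss(0.0, 1.0))

def ols_slope(x, z):
    # No-intercept OLS slope (the simulated process has mean zero).
    return sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)

h = 2
phi_hat = ols_slope(y[:-1], y[1:])         # fit by the one-step criterion
iterated = phi_hat ** h * y[-1]            # iterate the one-step model h times
direct_slope = ols_slope(y[:-h], y[h:])    # fit the h-step regression directly
direct = direct_slope * y[-1]
```

Under correct specification the direct slope estimates phi**2, the same quantity the iterated forecast uses; under misspecification the two fitting criteria optimize different objectives, which is the source of the performance gap the paper analyses.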

17.
Adaptive exponential smoothing methods allow a smoothing parameter to change over time, in order to adapt to changes in the characteristics of the time series. However, these methods have tended to produce unstable forecasts and have performed poorly in empirical studies. This paper presents a new adaptive method, which enables a smoothing parameter to be modelled as a logistic function of a user‐specified variable. The approach is analogous to that used to model the time‐varying parameter in smooth transition models. Using simulated data, we show that the new approach has the potential to outperform existing adaptive methods and constant parameter methods when the estimation and evaluation samples both contain a level shift or both contain an outlier. An empirical study, using the monthly time series from the M3‐Competition, gave encouraging results for the new approach. Copyright © 2004 John Wiley & Sons, Ltd.
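A minimal sketch of the core idea: simple exponential smoothing whose parameter is a logistic function of a user-specified variable v_t, analogous to the transition function in smooth transition models. The function names, the choice of v_t and the constant-v usage example are invented for illustration:

```python
import math

def logistic(u):
    return 1.0 / (1.0 + math.exp(-u))

def adaptive_ses(y, v, a, b, level0):
    # Simple exponential smoothing with a time-varying parameter:
    # alpha_t = logistic(a + b * v_t), so alpha_t is bounded in (0, 1).
    level, fitted = level0, []
    for yt, vt in zip(y, v):
        fitted.append(level)                 # one-step-ahead forecast of y_t
        alpha = logistic(a + b * vt)
        level += alpha * (yt - level)
    return fitted, level

# With v_t = 0 throughout, alpha_t is constant at logistic(0) = 0.5 and the
# method collapses to ordinary SES; the level converges towards the data.
fitted, final = adaptive_ses([10.0] * 20, [0.0] * 20, a=0.0, b=1.0, level0=0.0)
```

In practice v_t might be chosen as a function of recent forecast errors, so that alpha rises after a level shift and falls back in stable periods; that choice is the user's, per the abstract.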

18.
It is investigated whether euro area variables can be forecast better based on synthetic time series for the pre‐euro period or by using just data from Germany for the pre‐euro period. Our forecast comparison is based on quarterly data for the period 1970Q1–2003Q4 for 10 macroeconomic variables. The years 2000–2003 are used as the forecasting period. A range of different univariate forecasting methods is applied. Some of them are based on linear autoregressive models and we also use some nonlinear or time‐varying coefficient models. It turns out that most variables which have a similar level for Germany and the euro area, such as prices, can be better predicted based on German data, while aggregated European data are preferable for forecasting variables which need considerable adjustments in their levels when German and European Monetary Union (EMU) data are joined. These results suggest that for variables which have a similar level for Germany and the euro area it may be reasonable to consider the German pre‐EMU data for studying economic problems in the euro area. Copyright © 2008 John Wiley & Sons, Ltd.

19.
Tests of forecast encompassing are used to evaluate one‐step‐ahead forecasts of S&P Composite index returns and volatility. It is found that forecasts over the 1990s made from models that include macroeconomic variables tend to be encompassed by those made from a benchmark model which does not include macroeconomic variables. However, macroeconomic variables are found to add significant information to forecasts of returns and volatility over the 1970s. Often in empirical research on forecasting stock index returns and volatility, in‐sample information criteria are used to rank potential forecasting models. Here, none of the forecasting models for the 1970s that include macroeconomic variables are, on the basis of information criteria, preferred to the relevant benchmark specification. Thus, had investors used information criteria to choose between the models used for forecasting over the 1970s considered in this paper, the predictability that tests of encompassing reveal would not have been exploited. Copyright © 2005 John Wiley & Sons, Ltd.

20.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modelling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To handle over‐dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density function of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and the predictive density. We present an empirical application using weekly product sales time series from a store, comparing the proposed models accommodating over‐dispersion with alternative models that ignore it, by several model selection criteria including in‐sample fit, out‐of‐sample forecasting errors and information criteria. The empirical results show that the proposed over‐dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
