Similar Literature
20 similar documents found.
1.
We are grateful to the twelve discussants for their many insightful and constructive comments on our paper, although we are surprised by both the number of discussants and their near unanimity that the paper was ‘provocative’. We have organized our reply under seven headings, concerned, respectively, with method comparison versus model comparison; the role of the forecast horizon; the choice of loss function; the GFESM measure; the choice of information set; truth versus congruence; and the issue of testing versus comparisons.

2.
This paper examines a strategy for structuring one type of domain knowledge for use in extrapolation. It does so by representing information about causality and using this domain knowledge to select and combine forecasts. We use five categories to express causal impacts upon trends: growth, decay, supporting, opposing, and regressing. An identification of causal forces aided in the determination of weights for combining extrapolation forecasts. These weights improved average ex ante forecast accuracy when tested on 104 annual economic and demographic time series. Gains in accuracy were greatest when (1) the causal forces were clearly specified and (2) stronger causal effects were expected, as in longer-range forecasts. One rule suggested by this analysis was: ‘Do not extrapolate trends if they are contrary to causal forces.’ We tested this rule by comparing forecasts from a method that implicitly assumes supporting trends (Holt's exponential smoothing) with forecasts from the random walk. Use of the rule improved accuracy for 20 series where the trends were contrary; the MdAPE (Median Absolute Percentage Error) was 18% less for the random walk on 20 one-year-ahead forecasts and 40% less for 20 six-year-ahead forecasts. We then applied the rule to four other data sets. Here, the MdAPE for the random walk forecasts was 17% less than Holt's error for 943 short-range forecasts and 43% less for 723 long-range forecasts. Our study suggests that the causal assumptions implicit in traditional extrapolation methods are inappropriate for many applications.
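A minimal Python sketch of the benchmark comparison described above, using simulated data in place of the paper's 104-series dataset; the series, horizon, and parameters are illustrative only.

```python
import numpy as np
from statsmodels.tsa.holtwinters import Holt

rng = np.random.default_rng(0)

def mdape(actual, forecast):
    # Median Absolute Percentage Error, as defined in the abstract.
    return np.median(np.abs((actual - forecast) / actual)) * 100

# Hypothetical series whose recent behaviour runs contrary to its long-run
# drift, the situation where the abstract's rule favours the random walk.
n = 66
series = 100 + np.cumsum(rng.normal(-0.5, 1.0, n))
train, test = series[:-6], series[-6:]

# Random walk: the last observation is the forecast at every horizon.
rw_forecast = np.repeat(train[-1], 6)

# Holt's exponential smoothing implicitly extrapolates a supporting trend.
holt_forecast = np.asarray(Holt(train).fit(optimized=True).forecast(6))

print("MdAPE, random walk:", round(mdape(test, rw_forecast), 2))
print("MdAPE, Holt       :", round(mdape(test, holt_forecast), 2))
```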

3.
This paper introduces the idea of adjusting forecasts from a linear time series model where the adjustment relies on the assumption that this linear model is an approximation of a nonlinear time series model. This way of creating forecasts could be convenient when inference for a nonlinear model is impossible, complicated or unreliable in small samples. The size of the forecast adjustment can be based on the estimation results for the linear model and on other data properties such as the first few moments or autocorrelations. An illustration is given for a first‐order diagonal bilinear time series model, which in certain properties can be approximated by a linear ARMA(1, 1) model. For this case, the forecast adjustment is easy to derive, which is convenient as the particular bilinear model is indeed cumbersome to analyze in practice. An application to a range of inflation series for low‐income countries shows that such adjustment can lead to some improved forecasts, although the gain is small for this particular bilinear time series model.

4.
In their seminal book Time Series Analysis: Forecasting and Control, Box and Jenkins (1976) introduce the Airline model, which is still routinely used for the modelling of economic seasonal time series. The Airline model is for a differenced time series (in levels and seasons) and constitutes a linear moving average of lagged Gaussian disturbances which depends on two coefficients and a fixed variance. In this paper a novel approach to seasonal adjustment is developed that is based on the Airline model and that accounts for outliers and breaks in time series. For this purpose we consider the canonical representation of the Airline model. It takes the model as a sum of trend, seasonal and irregular (unobserved) components which are uniquely identified as a result of the canonical decomposition. The resulting unobserved components time series model is extended by components that allow for outliers and breaks. When all components depend on Gaussian disturbances, the model can be cast in state space form and the Kalman filter can compute the exact log‐likelihood function. Related filtering and smoothing algorithms can be used to compute minimum mean squared error estimates of the unobserved components. However, the outlier and break components typically rely on heavy‐tailed densities such as the t or the mixture of normals. For this class of non‐Gaussian models, Monte Carlo simulation techniques will be used for estimation, signal extraction and seasonal adjustment. This robust approach to seasonal adjustment allows outliers to be accounted for, while keeping the underlying structures that are currently used to aid reporting of economic time series data. Copyright © 2006 John Wiley & Sons, Ltd.  相似文献
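As a rough illustration, the Gaussian backbone of this approach can be reproduced with standard state-space tools. The sketch below (simulated monthly data, statsmodels) fits the Airline model and a trend-plus-seasonal unobserved-components decomposition; it omits the paper's non-Gaussian outlier and break components, which require Monte Carlo methods.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical monthly series with trend and seasonality (stand-in data).
n = 144
t = np.arange(n)
y = 0.05 * t + 2 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, n)

# The Airline model: ARIMA(0,1,1)x(0,1,1) with seasonal period 12, i.e. a
# moving average of lagged disturbances for the doubly differenced series.
airline = sm.tsa.SARIMAX(y, order=(0, 1, 1), seasonal_order=(0, 1, 1, 12)).fit(disp=False)
print(airline.params)  # the two MA coefficients and the disturbance variance

# A Gaussian unobserved-components analogue: trend + seasonal + irregular,
# estimated exactly by the Kalman filter; smoothing gives minimum mean
# squared error estimates of the components.
uc = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12).fit(disp=False)
seasonally_adjusted = y - uc.seasonal.smoothed
```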

5.
We analyse the forecasting attributes of trend- and difference-stationary representations of the U.S. macroeconomic time series studied by Nelson and Plosser (1982). Predictive densities based on models estimated for these series (which terminate in 1970) are compared with subsequent realizations compiled by Schotman and van Dijk (1991), which terminate in 1988. Predictive densities obtained using the extended series are also derived to assess the impact of the subsequent realizations on long-range forecasts. Of particular interest are comparisons of the average intervals of predictive densities corresponding to the competing specifications. In general, we find that coverage intervals based on difference-stationary specifications are far wider than those based on trend-stationary specifications for the real series, and slightly wider for the nominal series. This additional width is often a virtue in forecasting nominal series over the 1971-1988 period, as the inflation experienced during this time was unprecedented in the 1900s. However, the evolution of the real series has been relatively stable in the 1900s, hence the uncertainty associated with difference-stationary specifications generally seems excessive for these data.
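A small sketch of the central comparison, assuming simulated data in place of the Nelson-Plosser series: fit trend-stationary and difference-stationary specifications to the same series and compare the widths of their long-range predictive intervals.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Hypothetical annual log-level macroeconomic series (stand-in data).
n = 80
y = 4.0 + 0.02 * np.arange(n) + 0.5 * np.cumsum(rng.normal(0, 0.02, n))

# Trend-stationary: AR(1) deviations around a deterministic linear trend.
ts_fit = sm.tsa.ARIMA(y, order=(1, 0, 0), trend="ct").fit()

# Difference-stationary: ARIMA(1,1,0) with drift; its forecast-error
# variance grows with the horizon instead of levelling off.
ds_fit = sm.tsa.ARIMA(y, order=(1, 1, 0), trend="t").fit()

# Compare 95% predictive interval widths 18 periods ahead (roughly the
# 1971-1988 span considered in the paper).
for name, fit in [("trend-stationary", ts_fit), ("difference-stationary", ds_fit)]:
    ci = fit.get_forecast(18).conf_int(alpha=0.05)
    print(name, "width at h=18:", round(ci[-1, 1] - ci[-1, 0], 3))
```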

6.
It is investigated whether euro area variables can be forecast better based on synthetic time series for the pre‐euro period or by using just data from Germany for the pre‐euro period. Our forecast comparison is based on quarterly data for the period 1970Q1–2003Q4 for 10 macroeconomic variables. The years 2000–2003 are used as forecasting period. A range of different univariate forecasting methods is applied. Some of them are based on linear autoregressive models and we also use some nonlinear or time‐varying coefficient models. It turns out that most variables which have a similar level for Germany and the euro area such as prices can be better predicted based on German data, while aggregated European data are preferable for forecasting variables which need considerable adjustments in their levels when joining German and European Monetary Union (EMU) data. These results suggest that for variables which have a similar level for Germany and the euro area it may be reasonable to consider the German pre‐EMU data for studying economic problems in the euro area. Copyright © 2008 John Wiley & Sons, Ltd.

7.
In this paper we present an intelligent decision‐support system based on neural network technology for model selection and forecasting. While most of the literature on the application of neural networks in forecasting addresses the use of neural network technology as an alternative forecasting tool, limited research has focused on its use for selection of forecasting methods based on time‐series characteristics. In this research, a neural network‐based decision support system is presented as a method for forecast model selection. The neural network approach provides a framework for directly incorporating time‐series characteristics into the model‐selection phase. Using a neural network, a forecasting group is initially selected for a given data set, based on a set of time‐series characteristics. Then, using an additional neural network, a specific forecasting method is selected from a pool of three candidate methods. The results of training and testing of the networks are presented along with conclusions. Copyright © 1999 John Wiley & Sons, Ltd.
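A minimal sketch of the second selection stage, with hypothetical features and labels: a small neural network maps time-series characteristics to one of three candidate methods. The feature set and training labels below are illustrative, not the paper's.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)

def features(series):
    # Simple time-series characteristics (illustrative choices, not the
    # paper's exact feature set): trend strength, lag-1 autocorrelation,
    # coefficient of variation, and a seasonality proxy at lag 12.
    t = np.arange(len(series))
    trend = abs(np.corrcoef(t, series)[0, 1])
    acf1 = np.corrcoef(series[:-1], series[1:])[0, 1]
    cv = np.std(series) / abs(np.mean(series))
    seas = np.corrcoef(series[:-12], series[12:])[0, 1]
    return [trend, acf1, cv, seas]

# Hypothetical training set: characteristics of past series, each labelled
# with the index (0/1/2) of the method that forecast it best. Real labels
# would come from evaluating the candidate methods on historical data;
# random labels stand in here purely to make the sketch runnable.
train_series = [np.cumsum(rng.normal(0, 1, 60)) + 50 for _ in range(200)]
X = np.array([features(s) for s in train_series])
labels = rng.integers(0, 3, 200)

selector = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=0).fit(X, labels)

new_series = np.sin(np.arange(48) * 2 * np.pi / 12) + rng.normal(0, 0.2, 48)
print("recommended method:", selector.predict([features(new_series)])[0])
```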

8.
We analyse the price movement of the S&P 500 futures market for violations of the efficient market hypothesis on a short-term basis. To assess market inefficiency we construct a model and find that the returns, i.e. the difference in the logarithm of closing prices on consecutive days, exhibit the usual conditional heteroscedasticity behaviour typical of long series of financial data. To account for this non-linear behaviour we scale the returns by a volatility factor which depends on the daily high, low, and closing price. The rescaled series, which may be interpreted as the trend-countertrend component of the time series, is modelled using Box and Jenkins techniques. The resulting model is an ARMA(1,1). The scale factors are assumed to form a time series and are modelled using a semi-non-parametric method which avoids the restrictive assumptions of most ARCH or GARCH models. Using the combined model we perform 1000 simulations of market data, each simulation comprising 250 days (approximately one year). We then formulate a naive trading strategy which is based on the ratio of the one-day-ahead expected return to its one-day-ahead expected conditional standard deviation. The trading strategy has four adjustable parameters which are set to maximize profits for the simulation data. Next, we apply the trading strategy to one year of recent out-of-sample data. Our conclusion is that the S&P 500 futures market exhibits only slight inefficiencies, but that there exist, in principle, better trading strategies which take account of risk than the benchmark strategy of buy-and-hold. We have also constructed a linear model for the return series. Using the linear model, we have simulated returns and determined the optimum values for the adjustable parameters of the trading strategy. In this case, the optimum trading strategy is the same as the benchmark strategy, buy-and-hold. Finally, we have compared the profitability of the optimized trading strategy, based on the non-linear model, to three ad hoc trading strategies using the out-of-sample data. The three ad hoc strategies are more profitable than the optimized strategy.
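A sketch of the volatility-rescaling step under stated assumptions: the paper does not give the exact form of its high/low/close volatility factor, so a Parkinson-style range estimator is substituted here, and the data are simulated stand-ins for S&P 500 futures prices.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)

# Hypothetical daily close/high/low paths (stand-ins for market data).
n = 500
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n)))
high = close * (1 + np.abs(rng.normal(0, 0.005, n)))
low = close * (1 - np.abs(rng.normal(0, 0.005, n)))

# Returns: difference in the logarithm of consecutive closing prices.
returns = np.diff(np.log(close))

# A volatility factor built from the daily high and low; a Parkinson-style
# range estimator is one plausible choice (the paper does not commit to
# this exact functional form).
vol = (np.log(high / low) / (2 * np.sqrt(np.log(2))))[1:]
scaled = returns / vol  # the trend-countertrend component of the series

# Box-Jenkins step: an ARMA(1,1) for the rescaled returns.
arma = sm.tsa.ARIMA(scaled, order=(1, 0, 1)).fit()

# The naive strategy trades on the ratio of the one-day-ahead expected
# return to its conditional standard deviation (thresholds omitted here).
expected = arma.forecast(1)[0] * vol[-1]
print("one-day-ahead expected return:", round(float(expected), 6))
```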

9.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum of variables, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over‐dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density function of compound distributions, we propose a data augmentation approach for more efficient posterior computations in terms of the generated augmented variables, particularly for generating forecasts and predictive density. We present the empirical application using weekly product sales time series in a store to compare the proposed models accommodating over‐dispersion with alternative models without over‐dispersion by several model selection criteria, including in‐sample fit, out‐of‐sample forecasting errors and an information criterion. The empirical results show that the proposed modeling works well for the over‐dispersed models based on compound Poisson variables, and that they provide improved results compared with models that do not account for over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
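A generative sketch of the hierarchical over-dispersion structure the abstract describes, with illustrative parameter values throughout (this simulates the data-generating mechanism only, not the paper's state-space priors or posterior computations).

```python
import numpy as np

rng = np.random.default_rng(5)

weeks = 52

# Over-dispersed total category sales: gamma-compound-Poisson, i.e. a
# Poisson whose rate is itself gamma-distributed (marginally negative
# binomial).
rate = rng.gamma(shape=5.0, scale=8.0, size=weeks)
totals = rng.poisson(rate)

# Over-dispersed shares: Dirichlet-compound-multinomial. Product shares are
# drawn from a Dirichlet, then counts from a multinomial given the total,
# mirroring the Poisson-multinomial decomposition used in the paper.
alpha = np.array([2.0, 5.0, 3.0])  # hypothetical competitiveness parameters
shares = rng.dirichlet(alpha, size=weeks)
sales = np.array([rng.multinomial(t, p) for t, p in zip(totals, shares)])

# Over-dispersion check: variance well above the mean for each product.
print("mean:    ", sales.mean(axis=0).round(1))
print("variance:", sales.var(axis=0).round(1))
```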

10.
Multivariate time series describing relative contributions to a total (like proportional data) are called compositional time series. They need to be transformed first to the usual Euclidean geometry before a time series model is fitted. It is shown how an appropriate transformation can be chosen, resulting in coordinates with respect to the Aitchison geometry of compositional data. Using vector autoregressive models, the standard approach based on raw data is compared with the compositional approach based on transformed data. The results from the compositional approach are consistent with the relative nature of the observations, while the analysis of the raw data leads to several inconsistencies and artifacts. The compositional approach is extended to the case when the total of the compositional parts is also of interest. Moreover, a concise methodology for an interpretation of the coordinates in the transformed space together with the corresponding statistical inference (such as hypothesis testing) is provided. Copyright © 2015 John Wiley & Sons, Ltd.
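The transform-then-model workflow can be sketched directly: compute isometric log-ratio (ilr) coordinates, one standard choice of orthonormal basis in the Aitchison geometry, and fit a VAR in the transformed space. The data below are simulated market shares; forecasts in coordinates can be mapped back to compositions by the inverse transform.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(6)

def ilr(x):
    # Isometric log-ratio coordinates for compositions in the rows of x,
    # using the pivot basis z_i = sqrt(i/(i+1)) * log(gm(x_1..x_i) / x_{i+1}).
    D = x.shape[1]
    z = np.empty((x.shape[0], D - 1))
    for i in range(1, D):
        gm = np.exp(np.mean(np.log(x[:, :i]), axis=1))  # geometric mean
        z[:, i - 1] = np.sqrt(i / (i + 1)) * np.log(gm / x[:, i])
    return z

# Hypothetical compositional time series: three shares summing to one.
logits = np.cumsum(rng.normal(0, 0.05, size=(120, 3)), axis=0)
comp = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Transform first, then fit the VAR in real coordinates, as the paper advises.
coords = ilr(comp)
var_fit = VAR(coords).fit(maxlags=2)
forecast_coords = var_fit.forecast(coords[-var_fit.k_ar:], steps=4)
print(forecast_coords.round(3))
```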

11.
This paper focuses on the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models. We compare the forecasts obtained directly from the aggregated series based on its univariate model with the aggregation of the forecasts obtained for each component of the aggregate. Within this framework (first obtain the forecasts for the component series and then aggregate the forecasts), we try two different approaches: (i) generate forecasts from the multivariate dynamic factor model and (ii) generate the forecasts from univariate models for each component of the aggregate. In this regard, we provide analytical conditions for the equality of forecasts. The results are applied to quarterly gross domestic product (GDP) data of several European countries of the euro area and to their aggregated GDP. This will be compared to the prediction obtained directly from modeling and forecasting the aggregate GDP of these European countries. In particular, we would like to check whether long‐run relationships between the levels of the components are useful for improving the forecasting accuracy of the aggregate growth rate. We will make forecasts at the country level and then pool them to obtain the forecast of the aggregate. The empirical analysis suggests that forecasts built by aggregating the country‐specific models are more accurate than forecasts constructed using the aggregated data. Copyright © 2014 John Wiley & Sons, Ltd.
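The direct-versus-indirect comparison can be illustrated with univariate component models (approach (ii) in the abstract); the data below are simulated stand-ins for country-level GDP, and the ARIMA orders are illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical quarterly GDP-like levels for three countries sharing a
# common stochastic trend.
n = 100
common = np.cumsum(rng.normal(0.5, 1.0, n))
components = [common + np.cumsum(rng.normal(0, 0.3, n)) for _ in range(3)]
aggregate = np.sum(components, axis=0)

h = 4  # forecast horizon

# (a) Direct approach: a univariate model for the aggregate itself.
direct = sm.tsa.ARIMA(aggregate, order=(1, 1, 0)).fit().forecast(h)

# (b) Indirect approach: model each component, then aggregate the forecasts
# (the paper also considers a multivariate dynamic factor model here).
indirect = sum(sm.tsa.ARIMA(c, order=(1, 1, 0)).fit().forecast(h)
               for c in components)

print("direct  :", np.round(direct, 2))
print("indirect:", np.round(indirect, 2))
```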

12.
The conventional growth rate measures (such as month‐on‐month, year‐on‐year growth rates and the 6‐month smoothed annualized rate adopted by the US Bureau of Labor Statistics and the Economic Cycle Research Institute) are popular and can be easily obtained by computing the growth rate for monthly data based on a fixed comparison benchmark, although they do not make good use of the information underlying the economic series. By focusing on the monthly data, this paper proposes the k‐month kernel‐weighted annualized rate (k‐MKAR), which includes most existing growth rate measures as special cases. The proposed k‐MKAR measure involves the selection of smoothing parameters that are associated with the accuracy and timeliness for detecting changes in business turning points. That is, the comparison base is flexible and is likely to vary for different series under consideration. A data‐driven procedure depending upon the stepwise multiple reality check test for choosing the smoothing parameters is also suggested in this paper. A simple numerical evaluation and a Monte Carlo experiment are conducted to confirm that our measures (in particular the two‐parameter k‐MKAR) improve timeliness while maintaining a given degree of accuracy. The business cycle signals issued by the Council for Economic Planning and Development over the period from 1998 to 2009 in Taiwan are taken as an example to illustrate the empirical application of our method. The empirical results show that the k‐MKAR‐based score lights are more capable of reflecting turning points earlier than the conventional year‐on‐year measure without sacrificing accuracy. Copyright © 2016 John Wiley & Sons, Ltd.
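A sketch of one plausible reading of the k-MKAR idea, under stated assumptions: compare the current month against a kernel-weighted average of the previous k months and annualize by the effective comparison lag. The exact weighting scheme in the paper may differ; the kernel, bandwidth, and data here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)

def kernel_annualized_rate(y, k=12, bandwidth=4.0):
    # Kernel-weighted annualized growth rate (hypothetical form). With a
    # flat kernel this collapses to a fixed-benchmark smoothed rate; the
    # bandwidth plays the role of an accuracy/timeliness smoothing
    # parameter.
    lags = np.arange(1, k + 1)
    w = np.exp(-0.5 * (lags / bandwidth) ** 2)  # Gaussian kernel weights
    w /= w.sum()
    base = np.array([w @ y[t - lags] for t in range(k, len(y))])
    avg_lag = w @ lags                          # effective comparison distance
    return 100 * ((y[k:] / base) ** (12 / avg_lag) - 1)

# Hypothetical monthly coincident indicator.
y = 100 * np.exp(np.cumsum(rng.normal(0.002, 0.01, 120)))
print(kernel_annualized_rate(y)[-3:].round(2))
```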

13.
We consider finite state-space non-homogeneous hidden Markov models for forecasting univariate time series. Given a set of predictors, the time series are modeled via predictive regressions with state-dependent coefficients and time-varying transition probabilities that depend on the predictors via a logistic/multinomial function. In a hidden Markov setting, inference for logistic regression coefficients becomes complicated and in some cases impossible due to convergence issues. In this paper, we aim to address this problem utilizing the recently proposed Pólya-Gamma latent variable scheme. Also, we allow for model uncertainty regarding the predictors that affect the series both linearly — in the mean — and non-linearly — in the transition matrix. Predictor selection and inference on the model parameters are based on an automatic Markov chain Monte Carlo scheme with reversible jump steps. Hence the proposed methodology can be used as a black box for predicting time series. Using simulation experiments, we illustrate the performance of our algorithm in various setups, in terms of mixing properties, model selection and predictive ability. An empirical study on realized volatility data shows that our methodology gives improved forecasts compared to benchmark models.
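A minimal sketch of the model structure, assuming two states and one predictor: state-dependent regressions plus logistic time-varying transition probabilities, with the likelihood evaluated by the standard forward filter. This shows only the likelihood layer; the paper's Pólya-Gamma and reversible-jump MCMC machinery sits on top of it.

```python
import numpy as np
from scipy.special import expit
from scipy.stats import norm

rng = np.random.default_rng(9)

T = 200
x = rng.normal(size=T)                         # predictor
beta = np.array([[0.1, 0.5], [-0.2, 1.5]])     # per-state intercept, slope (hypothetical)
sigma = np.array([0.5, 1.2])                   # per-state noise scale
gamma = np.array([[1.0, -2.0], [-1.0, 2.0]])   # logistic coefficients for staying put

def forward_loglik(y, x, beta, sigma, gamma):
    # Forward filter: P(stay in state s at t) = expit(gamma[s,0] + gamma[s,1]*x[t]).
    alpha = np.array([0.5, 0.5])               # initial state distribution
    ll = 0.0
    for t in range(len(y)):
        if t == 0:
            prior = alpha
        else:
            stay = expit(gamma[:, 0] + gamma[:, 1] * x[t])
            P = np.array([[stay[0], 1 - stay[0]],
                          [1 - stay[1], stay[1]]])
            prior = alpha @ P                  # predicted state probabilities
        like = norm.pdf(y[t], loc=beta[:, 0] + beta[:, 1] * x[t], scale=sigma)
        joint = prior * like
        ll += np.log(joint.sum())
        alpha = joint / joint.sum()            # filtered state probabilities
    return ll

# Simulate from the model, then evaluate its likelihood at the truth.
s, y = 0, np.empty(T)
for t in range(T):
    if t > 0:
        stay = expit(gamma[s, 0] + gamma[s, 1] * x[t])
        s = s if rng.random() < stay else 1 - s
    y[t] = beta[s, 0] + beta[s, 1] * x[t] + sigma[s] * rng.normal()
print("log-likelihood at true parameters:",
      round(forward_loglik(y, x, beta, sigma, gamma), 2))
```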

14.
In this study, time series analysis is applied to the problem of forecasting state income tax receipts. The data series is of special interest since it exhibits a strong trend with a high multiplicative seasonal component. An appropriate model is identified by simultaneous estimation of the parameters of the power transformation and the ARMA model using the Schwarz (1978) Bayesian information criterion. The forecasting performance of the time series model obtained from this procedure is compared with alternative time series and regression models. The study illustrates how an information criterion can be employed for identifying time series models that require a power transformation, as exemplified by state tax receipts. It also establishes time series analysis as a viable technique for forecasting state tax receipts.
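A sketch of joint selection of the power transformation and model orders by minimum BIC, in the spirit of the procedure described; the data, candidate lambdas, and orders are illustrative. Note the Jacobian correction that puts the BICs of differently transformed data on a common scale.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)

# Hypothetical quarterly tax-receipt-like series: strong trend with a
# multiplicative seasonal component.
n = 80
t = np.arange(n)
y = (50 + 2 * t) * (1 + 0.2 * np.sin(2 * np.pi * t / 4)) * np.exp(rng.normal(0, 0.03, n))

best = None
for lam in [0.0, 0.25, 0.5, 1.0]:
    z = np.log(y) if lam == 0.0 else (y ** lam - 1) / lam  # Box-Cox transform
    for order in [(1, 1, 0), (0, 1, 1)]:
        fit = sm.tsa.SARIMAX(z, order=order, seasonal_order=(0, 1, 1, 4)).fit(disp=False)
        # Comparable BIC on the original scale: add the log-Jacobian of the
        # transformation, (lam - 1) * sum(log y), back into the likelihood.
        bic = fit.bic - 2 * (lam - 1) * np.log(y).sum()
        if best is None or bic < best[0]:
            best = (bic, lam, order)

print("selected lambda and order:", best[1], best[2])
```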

15.
An approach is proposed for obtaining estimates of the basic (disaggregated) series, x_t, when only an aggregate series, y_t, of k-period non-overlapping sums of the x_t's is available. The approach is based on casting the problem in a dynamic linear model form. Then estimates of x_t can be obtained by application of the Kalman filtering techniques. An ad hoc procedure is introduced for deriving a model form for the unobserved basic series from the observed model of the aggregates. An application of this approach to a set of real data is given.
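A minimal Kalman-filter sketch of this disaggregation problem, assuming the basic series follows a random walk (a simple stand-in for the model form the paper derives from the aggregates). The state carries the basic series and a cumulator that resets every k periods; the aggregate is observed only at block ends.

```python
import numpy as np

rng = np.random.default_rng(11)

# Monthly basic series observed only through quarterly (k = 3) sums.
k, n_agg = 3, 40
x_true = 50 + np.cumsum(rng.normal(0, 1.0, k * n_agg))
y = x_true.reshape(n_agg, k).sum(axis=1)       # observed aggregates only

q = 1.0                                        # random-walk innovation variance
a = np.array([y[0] / k, y[0] / k])             # crude initialization of [x_t, c_t]
P = np.eye(2) * 1e3
Q = np.array([[q, q], [q, q]])                 # the same shock w_t enters both states
est = []

for t in range(k * n_agg):
    psi = 0.0 if t % k == 0 else 1.0           # cumulator resets at each block start
    T = np.array([[1.0, 0.0], [1.0, psi]])     # x_t = x_{t-1}+w_t; c_t = psi*c_{t-1}+x_t
    a, P = T @ a, T @ P @ T.T + Q              # prediction step
    if t % k == k - 1:                         # aggregate observed at block end
        Z = np.array([0.0, 1.0])               # y = c_t, no measurement noise
        v = y[t // k] - Z @ a
        F = Z @ P @ Z
        K = P @ Z / F
        a, P = a + K * v, P - np.outer(K, Z @ P)  # update step
    est.append(a[0])                           # filtered estimate of x_t

print("correlation with the true basic series:",
      round(np.corrcoef(est, x_true)[0, 1], 3))
```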

16.
Reid (1972) was among the first to argue that the relative accuracy of forecasting methods changes according to the properties of the time series. Comparative analyses of forecasting performance such as the M‐Competition tend to support this argument. The issue addressed here is the usefulness of statistics summarizing the data available in a time series in predicting the relative accuracy of different forecasting methods. Nine forecasting methods are described and the literature suggesting summary statistics for choice of forecasting method is summarized. Based on this literature and further argument a set of these statistics is proposed for the analysis. These statistics are used as explanatory variables in predicting the relative performance of the nine methods using a set of simulated time series with known properties. These results are evaluated on observed data sets, the M‐Competition data and Fildes Telecommunications data. The general conclusion is that the summary statistics can be used to select a good forecasting method (or set of methods) but not necessarily the best. Copyright © 2000 John Wiley & Sons, Ltd.

17.
Wind power production data at temporal resolutions of a few minutes exhibit successive periods with fluctuations of various dynamic nature and magnitude, which cannot be explained (so far) by the evolution of some explanatory variable. Our proposal is to capture this regime‐switching behaviour with an approach relying on Markov‐switching autoregressive (MSAR) models. An appropriate parameterization of the model coefficients is introduced, along with an adaptive estimation method allowing accommodation of long‐term variations in the process characteristics. The objective criterion to be recursively optimized is based on penalized maximum likelihood, with exponential forgetting of past observations. MSAR models are then employed for one‐step‐ahead point forecasting of 10 min resolution time series of wind power at two large offshore wind farms. They are favourably compared against persistence and autoregressive models. It is finally shown that the main interest of MSAR models lies in their ability to generate interval/density forecasts of significantly higher skill. Copyright © 2010 John Wiley & Sons, Ltd.
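An off-the-shelf sketch of the MSAR building block on simulated regime-switching fluctuations; note this is a standard (non-adaptive) fit, whereas the paper additionally uses recursive estimation with exponential forgetting.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)

# Hypothetical 10-min wind power fluctuation series alternating between a
# calm regime (small fluctuations) and a turbulent one (large fluctuations).
segments = [rng.normal(0, 0.02 if i % 2 == 0 else 0.10, 72) for i in range(10)]
y = np.concatenate(segments)

# Markov-switching AR(1) with regime-specific dynamics and variances.
msar = sm.tsa.MarkovAutoregression(
    y, k_regimes=2, order=1, switching_ar=True, switching_variance=True
).fit()

# Smoothed probability of the high-volatility regime over the first hour.
print(msar.smoothed_marginal_probabilities[:6, 1].round(2))
```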

18.
Based on the concept of ‘decomposition and ensemble’, a novel ensemble forecasting approach is proposed for complex time series by coupling sparse representation (SR) and feedforward neural network (FNN), i.e. the SR‐based FNN approach. Three main steps are involved: data decomposition via SR, individual forecasting via FNN and ensemble forecasting via a simple addition method. In particular, to capture various coexisting hidden factors, the effective decomposition tool of SR with its unique virtues of flexibility and generalization is introduced to formulate an overcomplete dictionary covering diverse bases, e.g. exponential basis for main trend, Fourier basis for cyclical (and seasonal) features and wavelet basis for transient actions, different from other techniques with a single basis. Using crude oil price (a typical complex time series) as sample data, the empirical study statistically confirms the superiority of the SR‐based FNN method over some other popular forecasting models and similar ensemble models (with other decomposition tools). Copyright © 2016 John Wiley & Sons, Ltd.
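A compressed sketch of the three steps under stated assumptions: the dictionary below mixes polynomial-trend and Fourier atoms only (the paper also uses wavelet atoms), sparse coding is done with a Lasso rather than any particular SR solver, and the data are simulated stand-ins for crude oil prices.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(13)

# Hypothetical oil-price-like series: trend + cycle + irregular.
n = 300
t = np.arange(n)
y = 40 + 0.05 * t + 8 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1.0, n)

# Step 1: decomposition via sparse representation over an overcomplete
# dictionary; a Lasso fit selects a sparse combination of atoms.
trend_atoms = np.vstack([t ** p for p in (0, 1, 2)])
fourier_atoms = np.vstack([f(2 * np.pi * t * k / n)
                           for k in range(1, 25) for f in (np.sin, np.cos)])
D = np.vstack([trend_atoms, fourier_atoms]).T.astype(float)
D /= np.linalg.norm(D, axis=0)
coef = Lasso(alpha=0.1, fit_intercept=False, max_iter=50000).fit(D, y).coef_
components = {"trend": D[:, :3] @ coef[:3],
              "cycle": D[:, 3:] @ coef[3:]}
components["irregular"] = y - sum(components.values())

# Step 2: individual forecasting, one feedforward network per component,
# trained on lagged values. Step 3: ensemble by simple addition.
def fnn_forecast(series, lags=12):
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(X, series[lags:])
    return net.predict(series[-lags:].reshape(1, -1))[0]

print("ensemble forecast:",
      round(sum(fnn_forecast(c) for c in components.values()), 2))
```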

19.
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio‐economic sectors. Freeland and McCabe (2004) introduced a Gaussian-type stationary correlation model‐based forecasting approach which appears to work well for stationary time series of low counts. In practice, however, it may happen that the time series of counts will be non‐stationary and also the series may contain over‐dispersed counts. To develop the forecasting functions for this type of non‐stationary over‐dispersed data, the paper provides an extension of the stationary correlation models for Poisson counts to the non‐stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas the existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.

20.
Subnational regional jurisdictions rarely have at their disposal a reasonable array of timely statistics to monitor their economic condition. In light of this, we develop a procedure that simultaneously estimates a quarterly time series for all regions of a country based upon quarterly national and annual regional data. While other such techniques exist, we suggest a temporal error structure that eliminates possible spurious jumps. Using our approach, regional analysts should now be able to distribute national growth among regions as soon as quarterly national figures are released. In a Spanish application, we detail some practicalities of the process and show that our proposal produces better estimates than the uniregional methods often used. Copyright © 2007 John Wiley & Sons, Ltd.

