Similar articles
20 similar articles found
1.
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio-economic sectors. Freeland and McCabe (2004) introduced a Gaussian-type stationary correlation model-based forecasting approach which appears to work well for stationary time series of low counts. In practice, however, the time series of counts may be non-stationary and may also contain over-dispersed counts. To develop forecasting functions for this type of non-stationary over-dispersed data, the paper extends the stationary correlation models for Poisson counts to non-stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.
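To make the over-dispersion point concrete, the sketch below fits a simple negative binomial regression with a time trend and a lagged count to a simulated low-count series. It is only an illustration of handling over-dispersed counts, not the non-stationary correlation-model forecast developed in the paper; the "polio-like" data and all parameter values are made up.

```python
# Minimal sketch: forecasting an over-dispersed count series with a negative
# binomial regression on a time trend and the lagged count. NOT the paper's
# non-stationary correlation-model forecast; the simulated series is hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
trend = 0.3 * np.sin(2 * np.pi * np.arange(T) / 50)          # slowly varying mean
y = rng.negative_binomial(n=2, p=2 / (2 + np.exp(trend)))    # over-dispersed counts

# Design matrix: intercept, time trend, lagged count
X = sm.add_constant(np.column_stack([np.arange(1, T), y[:-1]]))
model = sm.GLM(y[1:], X, family=sm.families.NegativeBinomial(alpha=0.5))
fit = model.fit()

# One-step-ahead forecast of the conditional mean
x_next = np.array([[1.0, T, y[-1]]])
print("one-step-ahead mean forecast:", fit.predict(x_next)[0])
```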

2.
In this study, time series analysis is applied to the problem of forecasting state income tax receipts. The data series is of special interest since it exhibits a strong trend with a high multiplicative seasonal component. An appropriate model is identified by simultaneous estimation of the parameters of the power transformation and the ARMA model using the Schwarz (1978) Bayesian information criterion. The forecasting performance of the time series model obtained from this procedure is compared with alternative time series and regression models. The study illustrates how an information criterion can be employed for identifying time series models that require a power transformation, as exemplified by state tax receipts. It also establishes time series analysis as a viable technique for forecasting state tax receipts.
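A minimal sketch of the selection idea follows: candidate Box-Cox power parameters and ARIMA orders are scored with a BIC that includes the Jacobian of the transformation, so likelihoods computed on different scales are comparable. The monthly "tax receipts" series and the candidate grids are hypothetical, and the search is far coarser than a full joint estimation.

```python
# Minimal sketch: choose a Box-Cox power parameter and ARIMA order jointly by BIC,
# adding the Jacobian term so likelihoods on different scales are comparable.
# The trending, seasonal "receipts" series here is simulated, not the paper's data.
import numpy as np
from scipy import stats
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(120)
y = np.exp(0.02 * t) * (10 + np.sin(2 * np.pi * t / 12)) + rng.gamma(2, 0.3, 120)

best = None
for lam in [0.0, 0.25, 0.5, 1.0]:                       # candidate power parameters
    z = stats.boxcox(y, lmbda=lam)                      # transformed series
    for order in [(1, 1, 1), (2, 1, 0), (0, 1, 1)]:     # candidate ARIMA orders
        res = ARIMA(z, order=order, seasonal_order=(0, 1, 1, 12)).fit()
        # Log-likelihood on the original scale = transformed llf + Jacobian term
        llf = res.llf + (lam - 1.0) * np.log(y).sum()
        bic = -2 * llf + len(res.params) * np.log(len(y))
        if best is None or bic < best[0]:
            best = (bic, lam, order)

print("selected (BIC, lambda, order):", best)
```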

3.
Forecasting for nonlinear time series is an important topic in time series analysis. Existing numerical algorithms for multi-step-ahead forecasting ignore accuracy checking, while alternative Monte Carlo methods are computationally very demanding and their accuracy is difficult to control. In this paper a numerical forecasting procedure for nonlinear autoregressive time series models is proposed. The forecasting procedure can be used to obtain approximate m-step-ahead predictive probability density functions, predictive distribution functions, predictive means and variances, etc. for a range of nonlinear autoregressive time series models. Examples in the paper show that the forecasting procedure works very well, both in terms of the accuracy of the results and in its ability to deal with different nonlinear autoregressive time series models. Copyright © 2005 John Wiley & Sons, Ltd.
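The numerical idea can be illustrated with a grid-based Chapman-Kolmogorov recursion for a nonlinear AR(1) model: the one-step predictive density is propagated m-1 times through the transition density to obtain the m-step-ahead predictive density, mean and variance. The model, noise level and grid below are chosen for illustration only and this is not the paper's specific algorithm.

```python
# Minimal sketch: m-step-ahead predictive density for x_t = g(x_{t-1}) + e_t,
# e_t ~ N(0, s^2), obtained by iterating the Chapman-Kolmogorov equation on a grid.
import numpy as np
from scipy.stats import norm

def g(x):                      # example nonlinear AR function (exponential AR)
    return 0.8 * x + 2.0 * np.exp(-x**2)

s = 1.0                        # innovation standard deviation
grid = np.linspace(-10, 10, 801)
dx = grid[1] - grid[0]

# Transition density matrix: K[i, j] = p(x_t = grid[i] | x_{t-1} = grid[j])
K = norm.pdf(grid[:, None], loc=g(grid)[None, :], scale=s)

x_now = 1.5                                    # last observed value
dens = norm.pdf(grid, loc=g(x_now), scale=s)   # one-step predictive density

m = 5
for _ in range(m - 1):                         # propagate to the m-step density
    dens = K @ dens * dx

mean = np.sum(grid * dens) * dx
var = np.sum((grid - mean) ** 2 * dens) * dx
print(f"{m}-step-ahead predictive mean {mean:.3f}, variance {var:.3f}")
```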

4.
We propose a wavelet neural network (neuro-wavelet) model for the short-term forecast of stock returns from high-frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non-stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non-decimated wavelet-based multi-resolution decomposition of an exogenous signal showed a consistently superior forecasting performance. Reasonable forecasting accuracy for the one-, three- and five-step-ahead horizons was achieved by the proposed model. The procedure used to build the neuro-wavelet model is reusable and can be applied to any high-frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.
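The hybrid idea can be sketched by feeding non-decimated (stationary) wavelet coefficients to a neural network. The sketch below uses PyWavelets' `swt` and a plain scikit-learn MLP in place of the paper's Jordan recurrent net, on a simulated return series; the wavelet, level, network size and train/test split are all illustrative assumptions, and the in-sample decomposition ignores real-time data availability.

```python
# Minimal sketch of the neuro-wavelet idea: decompose a simulated return series
# with a non-decimated wavelet transform and feed the coefficients to a neural net.
# An MLPRegressor stands in for the Jordan net; pywt and scikit-learn assumed.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
n = 512                                   # length must be divisible by 2**level for swt
returns = 0.3 * np.sin(np.arange(n) / 8) + rng.normal(0, 0.5, n)

level = 2
coeffs = pywt.swt(returns, "db4", level=level)     # list of (approx, detail) per level
features = np.column_stack([c for pair in coeffs for c in pair])

# One-step-ahead target: predict the next return from the current coefficients
X, y = features[:-1], returns[1:]
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X[:400], y[:400])                          # simple split, no look-ahead handling
print("out-of-sample R^2:", net.score(X[400:], y[400:]))
```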

5.
A short‐term mixed‐frequency model is proposed to estimate and forecast Italian economic activity fortnightly. We introduce a dynamic one‐factor model with three frequencies (quarterly, monthly, and fortnightly) by selecting indicators that show significant coincident and leading properties and are representative of both demand and supply. We conduct an out‐of‐sample forecasting exercise and compare the prediction errors of our model with those of alternative models that do not include fortnightly indicators. We find that high‐frequency indicators significantly improve the real‐time forecasts of Italian gross domestic product (GDP); this result suggests that models exploiting the information available at different lags and frequencies provide forecasting gains beyond those based on monthly variables alone. Moreover, the model provides a new fortnightly indicator of GDP, consistent with the official quarterly series.

6.
Artificial neural networks (ANNs) combined with signal decomposition methods are effective for long-term streamflow time series forecasting. The ANN is a machine learning method widely used for streamflow time series, and it performs well in forecasting nonstationary series without requiring physical analysis of complex and dynamic hydrological processes. Most studies take multiple factors determining the streamflow, such as rainfall, as inputs. In this study, a long-term streamflow forecasting model depending only on the historical streamflow data is proposed. Various preprocessing techniques, including empirical mode decomposition (EMD), ensemble empirical mode decomposition (EEMD) and discrete wavelet transform (DWT), are first used to decompose the streamflow time series into simple components with different timescale characteristics, and the relation between these components and the original streamflow at the next time step is analyzed by an ANN. Hybrid models EMD-ANN, EEMD-ANN and DWT-ANN are developed in this study for long-term daily streamflow forecasting, and the performance measures root mean square error (RMSE), mean absolute percentage error (MAPE) and Nash–Sutcliffe efficiency (NSE) indicate that the proposed EEMD-ANN method performs better than the EMD-ANN and DWT-ANN models, especially in high-flow forecasting.
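A minimal sketch of the EEMD-ANN variant is given below: a simulated daily streamflow series is decomposed into intrinsic mode functions (IMFs) with the PyEMD package, and a small MLP maps the current IMF values to the next day's flow. The package names, network size, number of EEMD trials and the streamflow series are all assumptions, and the full-series decomposition ignores real-time issues; this is not the paper's exact configuration.

```python
# Minimal sketch of EEMD-ANN: decompose a simulated daily streamflow series into
# IMFs and let an ANN map current IMF values to next-day streamflow.
# Assumes PyEMD (pip install EMD-signal) and scikit-learn.
import numpy as np
from PyEMD import EEMD
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
t = np.arange(1000)
flow = (50 + 30 * np.sin(2 * np.pi * t / 365)        # annual cycle
        + 5 * np.sin(2 * np.pi * t / 30)             # monthly cycle
        + rng.gamma(2, 2, 1000))                     # hypothetical noise

eemd = EEMD(trials=50)
imfs = eemd.eemd(flow)                               # shape: (n_imfs, n_samples)

# Inputs: IMF values at time t; target: streamflow at t+1 (in-sample decomposition)
X, y = imfs.T[:-1], flow[1:]
split = 800
ann = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
ann.fit(X[:split], y[:split])
pred = ann.predict(X[split:])
print("test RMSE:", round(float(np.sqrt(np.mean((pred - y[split:]) ** 2))), 2))
```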

7.
This paper proposes a new forecasting method in which the cointegration rank switches at unknown times. In this method, time series observations are divided into several segments, and a cointegrated vector autoregressive model is fitted to each segment. The goodness of fit of the global model, consisting of local models with different cointegration ranks, is evaluated using the information criterion (IC). The division that minimizes the IC defines the best model. The results of an empirical application to the US term structure of interest rates and a Monte Carlo simulation suggest the efficacy as well as the limitations of the proposed method. Copyright © 2010 John Wiley & Sons, Ltd.
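The segmentation idea can be sketched as follows: for each candidate break date, fit a VECM with a Johansen-selected cointegration rank to each segment and score the split with an approximate Gaussian BIC built from the segment residual covariances. The data are simulated, rank zero is crudely forced to one to keep the code short, and both the criterion and the search are much simpler than the paper's.

```python
# Minimal sketch: score candidate break dates by fitting a VECM with a
# Johansen-selected rank to each segment and summing approximate segment BICs.
# Simulated bivariate data; not the paper's exact criterion or search.
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

rng = np.random.default_rng(4)
T, k = 300, 2
common = np.cumsum(rng.normal(size=T))                     # shared stochastic trend
y = np.column_stack([common + rng.normal(size=T),
                     0.5 * common + rng.normal(size=T)])   # cointegrated system

def segment_score(seg, k_ar_diff=1):
    """Approximate Gaussian BIC of a VECM fitted to one segment."""
    rank = select_coint_rank(seg, det_order=0, k_ar_diff=k_ar_diff,
                             method="trace", signif=0.05).rank
    rank = max(rank, 1)                                    # keep the sketch simple
    res = VECM(seg, k_ar_diff=k_ar_diff, coint_rank=rank, deterministic="co").fit()
    n = len(seg) - k_ar_diff - 1                           # effective sample size
    llf = -0.5 * n * (k * np.log(2 * np.pi) + np.log(np.linalg.det(res.sigma_u)) + k)
    n_par = res.alpha.size + res.beta.size + res.gamma.size
    return -2 * llf + n_par * np.log(n)

scores = {b: segment_score(y[:b]) + segment_score(y[b:]) for b in range(100, 201, 25)}
print("break date minimizing the approximate BIC:", min(scores, key=scores.get))
```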

8.
Trend and seasonality are the most prominent features of economic time series that are observed at the subannual frequency. Modeling these components serves a variety of analytical purposes, including seasonal adjustment and forecasting. In this paper we introduce unobserved components models for which both the trend and seasonal components arise from systematically sampling a multivariate transition equation, according to which each season evolves as a random walk with a drift. By modeling the disturbance covariance matrix we can encompass traditional models for seasonal time series, like the basic structural model, and can formulate more elaborate ones, dealing with season specific features, such as seasonal heterogeneity and correlation, along with the different role of the nonstationary cycles defined at the fundamental and the harmonic frequencies in determining the shape of the seasonal pattern.
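For reference, the basic structural model that the proposed framework encompasses can be estimated directly with statsmodels' unobserved components routine, as in the sketch below; the season-specific heterogeneity and correlation features of the paper are not reproduced, and the monthly series is simulated.

```python
# Minimal sketch of the encompassed basic structural model: local linear trend
# plus a stochastic seasonal, estimated on a simulated monthly series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 144                                           # 12 years of monthly data
trend = np.cumsum(0.1 + rng.normal(0, 0.05, n))
seasonal = np.tile(2 * np.sin(2 * np.pi * np.arange(12) / 12), 12)
y = trend + seasonal + rng.normal(0, 0.3, n)

mod = sm.tsa.UnobservedComponents(y, level="local linear trend", seasonal=12)
res = mod.fit(disp=False)
print(res.summary().tables[1])                    # estimated disturbance variances
print("12-step-ahead forecasts:", res.forecast(12).round(2))
```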

9.
We develop a model to forecast the Federal Open Market Committee's (FOMC's) interest rate setting behavior using the nonstationary discrete choice framework of Hu and Phillips (2004). We find that if the model selection criterion is strictly empirical, correcting for nonstationarity is extremely important, whereas it may not be an issue if one has an a priori model. Evaluating an array of models in terms of their out-of-sample forecasting ability, we find that those favored by the in-sample criteria perform worst, while theory-based models perform best. We find the best model for forecasting the FOMC's behavior is a forward-looking Taylor rule model. Copyright © 2008 John Wiley & Sons, Ltd.

10.
Conventional wisdom holds that restrictions on low-frequency dynamics among cointegrated variables should provide more accurate short- to medium-term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long-term forecasting. But the inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and the role of integration and cointegration in forecasting accuracy. We evaluate the short- and medium-term forecasting accuracy of univariate Box–Jenkins type ARIMA techniques that imply only integration against multivariate cointegration models that contain both integration and cointegration for a system of five cointegrated Asian exchange rate time series. We use a rolling-window technique to make multiple out-of-sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving-average terms of order >1. ECMs perform better over medium-term horizons for series with no moving-average terms. The results suggest a need to distinguish between ‘sequential’ and ‘synchronous’ forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
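The rolling-window design can be sketched with the univariate side of the comparison only: re-estimate an ARIMA on a moving window and produce 1- to 40-step-ahead forecasts at each origin, accumulating squared errors by horizon. The error-correction competitor and the five Asian exchange rates are not reproduced; the series, window length and origin spacing below are illustrative assumptions.

```python
# Minimal sketch of the rolling-window exercise with a univariate ARIMA only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(6)
rate = np.cumsum(rng.normal(0, 0.01, 400)) + 5.0      # hypothetical log exchange rate

window, horizon = 250, 40
sq_err = np.zeros(horizon)
n_origins = 0
for start in range(0, len(rate) - window - horizon, 20):
    train = rate[start:start + window]
    actual = rate[start + window:start + window + horizon]
    fcast = ARIMA(train, order=(0, 1, 1)).fit().forecast(horizon)
    sq_err += (fcast - actual) ** 2
    n_origins += 1

rmse = np.sqrt(sq_err / n_origins)
print("RMSE at horizons 1, 10, 40:", rmse[[0, 9, 39]].round(4))
```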

11.
This paper examines the relative importance of allowing for time‐varying volatility and country interactions in a forecast model of economic activity. Allowing for these issues is done by augmenting autoregressive models of growth with cross‐country weighted averages of growth and the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one‐period‐ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.

12.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To address over-dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density function of compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series in a store to compare the proposed models accommodating over-dispersion with alternatives that do not, using several model selection criteria, including in-sample fit, out-of-sample forecasting errors and an information criterion. The empirical results show that the proposed modeling works well for the over-dispersed models based on compound Poisson variables and that they provide improved results compared with models that do not account for over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
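The generative decomposition can be illustrated on its own: a gamma-compound Poisson (negative binomial) draw for the weekly category total, split across products by a Dirichlet-compound multinomial. The sketch below only simulates this hierarchy with made-up parameters; the state-space dynamics, tree structure and posterior computation of the paper are not reproduced.

```python
# Minimal sketch of the over-dispersed decomposition: negative binomial category
# totals split across products via a Dirichlet-compound multinomial.
import numpy as np

rng = np.random.default_rng(7)
weeks = 52
r, p = 5.0, 0.1                      # negative binomial parameters for the total
alpha = np.array([4.0, 2.0, 1.0])    # Dirichlet concentration = product competitiveness

totals = rng.negative_binomial(r, p, size=weeks)          # over-dispersed category totals
shares = rng.dirichlet(alpha, size=weeks)                 # week-specific share draws
sales = np.array([rng.multinomial(n, w) for n, w in zip(totals, shares)])

print("mean / variance of totals:", totals.mean().round(1), totals.var().round(1))
print("average product shares:", (sales.sum(0) / sales.sum()).round(3))
```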

13.
This paper examines the problem of forecasting macro‐variables which are observed monthly (or quarterly) and result from geographical and sectorial aggregation. The aim is to formulate a methodology whereby all relevant information gathered in this context could provide more accurate forecasts, be frequently updated, and include a disaggregated explanation as useful information for decision‐making. The appropriate treatment of the resulting disaggregated data set requires vector modelling, which captures the long‐run restrictions between the different time series and the short‐term correlations existing between their stationary transformations. Frequently, due to a lack of degrees of freedom, the vector model must be restricted to a block‐diagonal vector model. This methodology is applied in this paper to inflation in the euro area, and shows that disaggregated models with cointegration restrictions improve accuracy in forecasting aggregate macro‐variables. Copyright © 2007 John Wiley & Sons, Ltd.
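The bottom-up idea behind the comparison can be sketched in its simplest univariate form: forecast each (simulated) component index separately, aggregate with fixed weights, and compare with a direct model on the aggregate. The block-diagonal vector model with cointegration restrictions used in the paper is not reproduced; weights, orders and data are illustrative assumptions.

```python
# Minimal sketch: bottom-up aggregation of component ARIMA forecasts versus a
# direct ARIMA on the aggregate index.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(8)
T, weights = 200, np.array([0.5, 0.3, 0.2])
components = np.cumsum(rng.normal(0.1, 0.4, (T, 3)), axis=0)   # component price indices
aggregate = components @ weights

train, h = slice(0, 188), 12
bottom_up = sum(w * ARIMA(c[train], order=(1, 1, 0)).fit().forecast(h)
                for w, c in zip(weights, components.T))
direct = ARIMA(aggregate[train], order=(1, 1, 0)).fit().forecast(h)

actual = aggregate[188:]
print("bottom-up RMSE:", np.sqrt(np.mean((bottom_up - actual) ** 2)).round(3))
print("direct RMSE:   ", np.sqrt(np.mean((direct - actual) ** 2)).round(3))
```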

14.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they nicely represent the two opposing forecasting philosophies: the DSGE model has a strong theoretical economic background, while the factor model is mainly data-driven. We show that incorporating a large information set using factor analysis can indeed improve short-horizon predictive ability, as claimed by many researchers. The micro-founded DSGE model can provide reasonable forecasts for US inflation, especially at growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short-horizon forecasting and structural models in long-horizon forecasting. Our paper compares both state-of-the-art data-driven and theory-based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.
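The factor side of the comparison can be sketched in a few lines: extract principal components from a large (here simulated) macro panel and use them in a factor-augmented autoregression for inflation. The DSGE competitor is not reproduced, and the panel dimensions, factor count and lag structure are assumptions.

```python
# Minimal sketch of a factor forecast: PCA factors from a simulated panel feed a
# factor-augmented autoregression for inflation.
import numpy as np
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(9)
T, N = 200, 60
factors_true = rng.normal(size=(T, 2))                       # two latent common factors
panel = factors_true @ rng.normal(size=(2, N)) + rng.normal(0, 0.5, (T, N))
inflation = 0.5 * factors_true[:, 0] + rng.normal(0, 0.3, T)

standardized = (panel - panel.mean(0)) / panel.std(0)
factors = PCA(n_components=2).fit_transform(standardized)    # estimated factors

# Factor-augmented regression: inflation on its own lag and the lagged factors
X = sm.add_constant(np.column_stack([inflation[:-1], factors[:-1]]))
res = sm.OLS(inflation[1:], X).fit()
x_next = np.concatenate([[1.0, inflation[-1]], factors[-1]])[None, :]
print("one-step-ahead inflation forecast:", res.predict(x_next)[0].round(3))
```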

15.
This paper models bond term premia empirically in terms of the maturity composition of the federal debt and other observable economic variables in a time-varying framework with potential regime shifts. We present regression and out-of-sample forecasting results demonstrating that information on the age composition of the federal debt is useful for forecasting term premia. We show that the multiprocess mixture model, a multi-state time-varying parameter model, outperforms the commonly used GARCH model in out-of-sample forecasts of term premia. The results underscore the importance of modelling term premia as a function of economic variables rather than just as a function of asset covariances, as in conditional heteroscedasticity models. Copyright © 2001 John Wiley & Sons, Ltd.

16.
Multi-process models are particularly useful when observations appear extreme relative to their forecasts, because they allow any behaviour of a time series to be explained by considering several generating sources simultaneously. In this paper, the multi-process approach is extended by developing a dynamic procedure to assess the weights of the various sources, i.e. the prior probabilities of the rival models that compete in the collection to make forecasts. The new criterion helps the forecasting system learn the most plausible scenarios for the time series by treating the weights of all combinations of consecutive models as a function of the magnitude of the one-step-ahead forecast error. Throughout the paper, the different treatments of outliers and structural changes are highlighted using the concepts of robustness and sensitivity. Finally, the dynamic selection procedure is tested on the CP6 dataset, showing an effective improvement in the overall predictive ability of multi-process models whenever anomalous observations occur. © 1997 John Wiley & Sons, Ltd.
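A toy version of the weighting idea is sketched below: two rival observation models (a standard one and an "outlier" one with inflated variance) have their probabilities updated each period from the Gaussian likelihood of the one-step-ahead forecast error. This is only an illustration of error-driven model weights, not the West-Harrison multi-process machinery, the paper's dynamic selection procedure, or the CP6 analysis; all numbers are made up.

```python
# Minimal sketch: update the probabilities of a "standard" and an "outlier" model
# from the likelihood of the one-step-ahead forecast error.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(10)
y = 10 + rng.normal(0, 1, 60)
y[30] += 8                                     # one outlying observation

sigmas = np.array([1.0, 5.0])                  # standard vs outlier model std. dev.
prior = np.array([0.9, 0.1])                   # baseline model probabilities
weights = prior.copy()
level = y[0]
history = []
for obs in y[1:]:
    err = obs - level                          # one-step-ahead forecast error
    lik = norm.pdf(err, scale=sigmas)          # predictive likelihood under each model
    weights = weights * lik / np.sum(weights * lik)
    weights = 0.8 * weights + 0.2 * prior      # keep both models alive over time
    level += 0.3 * err                         # simple exponential-smoothing update
    history.append(weights[1])

print("outlier-model probability around t=30:", np.round(history[27:32], 2))
```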

17.
Many publications on tourism forecasting have appeared during the past twenty years. The purpose of this article is to organize and summarize that scattered literature. General conclusions are also drawn from the studies to help those wishing to develop tourism forecasts of their own. The forecasting techniques discussed include time series models, econometric causal models, the gravity model and expert-opinion techniques. The major conclusions are that time series models are the simplest and least costly (and therefore most appropriate for practitioners); the gravity model is best suited to handle international tourism flows (and will be most useful to governments and tourism agencies); and expert-opinion methods are useful when data are unavailable. Further research is needed on the use of economic indicators in tourism forecasting, on the development of attractivity and emissiveness indexes for use in gravity and econometric models and on empirical comparisons among the different methods.

18.
The purpose of this paper is to apply the Box–Jenkins methodology to ARIMA models and determine the reasons why, in empirical tests, the post-sample forecasting accuracy of such models is generally found to be worse than that of much simpler time series methods. The paper concludes that the major problem is the way of making the series stationary in its mean (i.e. the method of differencing) proposed by Box and Jenkins. If alternative approaches are utilized to remove and extrapolate the trend in the data, ARMA models outperform the models selected through the Box–Jenkins methodology. In addition, it is shown that applying ARMA models to seasonally adjusted data slightly improves post-sample accuracy while simplifying the use of ARMA models. It is also confirmed that transformations slightly improve post-sample forecasting accuracy, particularly for long forecasting horizons. Finally, it is demonstrated that AR(1), AR(2) and ARMA(1,1) models can produce more accurate post-sample forecasts than those found through the application of the Box–Jenkins methodology. © 1997 John Wiley & Sons, Ltd.
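The contrast the paper draws can be illustrated on a simulated trending series: one route makes the series stationary by differencing inside an ARIMA, the other removes a fitted linear trend, models the residuals with an ARMA(1,1) and extrapolates the trend at forecast time. The series, orders and split below are illustrative assumptions, not the paper's empirical evidence.

```python
# Minimal sketch: ARIMA with differencing versus detrend-then-ARMA forecasts.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(11)
T, h = 120, 12
t = np.arange(T)
y = 50 + 0.8 * t + 0.3 * np.cumsum(rng.normal(0, 0.5, T)) + rng.normal(0, 1, T)
train, actual = y[:T - h], y[T - h:]

# Route 1: differencing inside an ARIMA(1,1,1)
f_diff = ARIMA(train, order=(1, 1, 1)).fit().forecast(h)

# Route 2: remove a linear trend, fit ARMA(1,1) to residuals, extrapolate the trend
trend_fit = sm.OLS(train, sm.add_constant(t[:T - h])).fit()
resid_fc = ARIMA(trend_fit.resid, order=(1, 0, 1)).fit().forecast(h)
f_detrend = trend_fit.predict(sm.add_constant(t[T - h:])) + resid_fc

for name, f in [("ARIMA (differencing)", f_diff), ("detrend + ARMA", f_detrend)]:
    print(name, "RMSE:", np.sqrt(np.mean((f - actual) ** 2)).round(3))
```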

19.
This paper presents a methodology for modelling and forecasting multivariate time series with linear restrictions using the constrained structural state‐space framework. The model has natural applications to forecasting time series of macroeconomic/financial identities and accounts. The explicit modelling of the constraints ensures that model parameters dynamically satisfy the restrictions among items of the series, leading to more accurate and internally consistent forecasts. It is shown that the constrained model offers superior forecasting efficiency. A testable identification condition for state space models is also obtained and applied to establish the identifiability of the constrained model. The proposed methods are illustrated on Germany's quarterly monetary accounts data. Results show significant improvement in the predictive efficiency of forecast estimators for the monetary account with an overall efficiency gain of 25% over unconstrained modelling. Copyright © 2002 John Wiley & Sons, Ltd.
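The simplest way to see the role of the linear restriction is the reconciliation step below: unconstrained forecasts of the items of an accounting identity are projected onto the constraint set so the identity holds exactly. This least-squares adjustment is far cruder than the constrained structural state-space estimator of the paper, and the identity and numbers are made up.

```python
# Minimal sketch: minimum-norm reconciliation of forecasts under a linear identity
# item1 + item2 - total = 0, i.e. A @ f = 0.
import numpy as np

A = np.array([[1.0, 1.0, -1.0]])                    # restriction matrix

f_unconstrained = np.array([40.2, 60.9, 100.0])     # independent model forecasts
violation = A @ f_unconstrained                     # identity error

# Minimum-norm adjustment: f* = f - A'(AA')^{-1} A f
adjust = A.T @ np.linalg.solve(A @ A.T, violation)
f_constrained = f_unconstrained - adjust

print("reconciled forecasts:", f_constrained.round(3))
print("identity residual:", (A @ f_constrained).round(10))
```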

20.
In this paper we forecast daily returns of crypto‐currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non‐normality of the measurement errors and sharply increasing trends, we develop a time‐varying parameter VAR with t‐distributed measurement errors and stochastic volatility. To control for overparametrization, we rely on the Bayesian literature on shrinkage priors, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real‐time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we, moreover, run a simple trading exercise.
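The evaluation design can be sketched with a much simpler forecaster: recursive one-day-ahead AR(1) forecasts of a simulated fat-tailed return series, compared against the random-walk (zero-expected-return) benchmark, plus a naive sign-based trading rule. The Bayesian TVP-VAR with t errors, stochastic volatility and shrinkage priors is not reproduced; the data, sample split and trading rule are assumptions.

```python
# Minimal sketch of the evaluation: recursive AR(1) forecasts vs a random-walk
# benchmark for simulated daily crypto returns, plus a sign-based trading rule.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
ret = 0.001 + rng.standard_t(4, 365) * 0.03          # fat-tailed daily returns

start = 200
ar_fc, actual = [], []
for t in range(start, len(ret) - 1):
    y, x = ret[1:t], sm.add_constant(ret[:t - 1])    # AR(1) regression up to day t
    beta = sm.OLS(y, x).fit().params
    ar_fc.append(beta[0] + beta[1] * ret[t])         # forecast for day t+1
    actual.append(ret[t + 1])

ar_fc, actual = np.array(ar_fc), np.array(actual)
rw_fc = np.zeros_like(actual)                        # random walk: zero expected return
print("RMSE AR(1):", np.sqrt(np.mean((ar_fc - actual) ** 2)).round(4),
      " RW:", np.sqrt(np.mean((rw_fc - actual) ** 2)).round(4))
print("trading-rule return (long if forecast > 0):",
      np.sum(np.where(ar_fc > 0, actual, 0)).round(3))
```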
