Similar Articles
20 similar articles found.
1.
This paper derives the best linear unbiased prediction (BLUP) for an unbalanced panel data model. Starting with a simple error component regression model with unbalanced panel data and random effects, it generalizes the BLUP derived by Taub (Journal of Econometrics, 1979, 10, 103–108) to unbalanced panels. Next it derives the BLUP for an unequally spaced panel data model with serial correlation of the AR(1) type in the remainder disturbances considered by Baltagi and Wu (Econometric Theory, 1999, 15, 814–823). This in turn extends the BLUP for a panel data model with AR(1) type remainder disturbances derived by Baltagi and Li (Journal of Forecasting, 1992, 11, 561–567) from the balanced to the unequally spaced panel data case. The derivations are easily implemented and reduce to tractable expressions using an extension of the Fuller and Battese (Journal of Econometrics, 1974, 2, 67–78) transformation from the balanced to the unbalanced panel data case.
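The Fuller–Battese-type transformation referenced above amounts to quasi-demeaning each unit's data by a unit-specific fraction; in the unbalanced case the fraction depends on each unit's own number of observations T_i. A minimal sketch (the function name is ours, and the variance components are taken as known here, whereas in practice they are estimated):

```python
import numpy as np

def fuller_battese_transform(y, ids, sigma_mu2, sigma_nu2):
    """Quasi-demean each cross-sectional unit i by
    theta_i = 1 - sqrt(sigma_nu2 / (T_i * sigma_mu2 + sigma_nu2)).

    For unbalanced panels T_i varies across units, so theta_i is unit-specific;
    the transformed errors are then spherical and OLS on transformed data is GLS.
    """
    y = np.asarray(y, dtype=float)
    ids = np.asarray(ids)
    out = np.empty_like(y)
    for i in np.unique(ids):
        mask = ids == i
        Ti = int(mask.sum())
        theta = 1.0 - np.sqrt(sigma_nu2 / (Ti * sigma_mu2 + sigma_nu2))
        out[mask] = y[mask] - theta * y[mask].mean()
    return out
```

With sigma_mu2 = 0 (no random effect) the transform collapses to the identity, as it should.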

2.
This paper gives a brief survey of forecasting with panel data. It begins with a simple error component regression model and surveys the best linear unbiased prediction under various assumptions of the disturbance term. This includes various ARMA models as well as spatial autoregressive models. The paper also surveys how these forecasts have been used in panel data applications, running horse races between heterogeneous and homogeneous panel data models using out-of-sample forecasts. Copyright © 2008 John Wiley & Sons, Ltd.

3.
This paper considers the problem of forecasting in a panel data model with random individual effects and MA(q) remainder disturbances. It utilizes a recursive transformation for the MA(q) process derived by Baltagi and Li (Econometric Theory 1994; 10: 396–408) which yields a simple generalized least-squares estimator for this model. This recursive transformation is used in conjunction with Goldberger's result (Journal of the American Statistical Association 1962; 57: 369–375) to derive an analytic expression for the best linear unbiased predictor, for the ith cross-sectional unit, s periods ahead. Copyright © 2011 John Wiley & Sons, Ltd.
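Goldberger's result has a compact generic form: the GLS point prediction plus the projection of the future disturbance on the in-sample GLS residuals. A sketch that takes the disturbance covariance `Omega` and the future-disturbance covariance vector `w` as given (in the paper they arise from the error-components-plus-MA(q) structure; the names are ours):

```python
import numpy as np

def goldberger_blup(X, y, x_future, w, Omega):
    """Best linear unbiased predictor a la Goldberger (1962):
    x_future' beta_GLS + w' Omega^{-1} (y - X beta_GLS),
    where w = Cov(future disturbance, in-sample disturbances)."""
    Oinv = np.linalg.inv(Omega)
    beta = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
    return float(x_future @ beta + w @ Oinv @ (y - X @ beta))
```

When w = 0 (future disturbance uncorrelated with the sample), the BLUP reduces to the plain GLS prediction.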

4.
In this paper, we make multi-step forecasts of the annual growth rates of the real gross regional product (GRP) for each of the 31 Chinese provinces simultaneously. Besides the usual panel data models, we use panel models that explicitly account for spatial dependence between the GRP growth rates. In addition, the possibility of spatial effects being different for different groups of provinces (Interior and Coast) is allowed for. We find that both pooling and accounting for spatial effects help substantially to improve the forecast performance compared to the benchmark models estimated for each of the provinces separately. It is also shown that the effect of accounting for spatial dependence is even more pronounced at longer forecasting horizons (the forecast accuracy gain as measured by the root mean squared forecast error is about 8% at the 1-year horizon and exceeds 25% at the 13- and 14-year horizons). Copyright © 2010 John Wiley & Sons, Ltd.
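Spatial dependence of this kind typically enters through a spatial lag Wg: each province's neighbour-weighted average growth rate under a row-normalized contiguity matrix W. A toy illustration (the matrix below is made up, not the paper's Interior/Coast structure):

```python
import numpy as np

def row_normalize(W):
    """Row-normalize a binary contiguity matrix so each row sums to 1
    (empty rows are left as zeros)."""
    W = np.asarray(W, dtype=float)
    rs = W.sum(axis=1, keepdims=True)
    rs[rs == 0] = 1.0
    return W / rs

# Three hypothetical provinces: unit 0 borders units 1 and 2.
W = row_normalize([[0, 1, 1],
                   [1, 0, 0],
                   [1, 0, 0]])
g = np.array([2.0, 4.0, 6.0])   # growth rates
Wg = W @ g                       # spatial lag: neighbours' average growth
```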

5.
This paper investigates the trade-off between timeliness and quality in nowcasting practices. This trade-off arises when the frequency of the variable to be nowcast, such as gross domestic product (GDP), is quarterly, while that of the underlying panel data is monthly, and the latter contains both survey and macroeconomic data. These two categories of data have different properties regarding timeliness and quality: the survey data are timely available (but might possess less predictive power), while the macroeconomic data possess more predictive power (but are not timely available because of their publication lags). In our empirical analysis, we use a modified dynamic factor model which incorporates three refinements of the standard dynamic factor model of Stock and Watson (Journal of Business and Economic Statistics, 2002, 20, 147–162): mixed frequency, preselection and cointegration among the economic variables. Our main finding from a historical nowcasting simulation based on euro area GDP is that the predictive power of the survey data depends on the economic circumstances: survey data are more useful in tranquil times, and less so in times of turmoil.

6.
Micro panels characterized by large numbers of individuals observed over a short time period provide a rich source of information, but as yet there is only limited experience in using such data for forecasting. Existing simulation evidence supports the use of a fixed-effects approach when forecasting, but it is not based on a truly micro panel set-up. In this study, we exploit the linkage of a representative survey of more than 250,000 Australians aged 45 and over to 4 years of hospital, medical and pharmaceutical records. The availability of panel health cost data allows the use of predictors based on fixed-effects estimates designed to guard against possible omitted variable biases associated with unobservable individual-specific effects. We demonstrate that the preference for fixed-effects-based predictors is unlikely to hold in many practical situations, including our models of health care costs. Simulation evidence with a micro panel set-up adds support and additional insights to the results obtained in the application. These results are supportive of the use of the ordinary least squares predictor in a wide range of circumstances. Copyright © 2016 John Wiley & Sons, Ltd.

7.
The increasing amount of attention paid to longevity risk and funding for old age has created the need for precise mortality models and accurate future mortality forecasts. Orthogonal polynomials have been widely used in technical fields and there have also been applications in mortality modeling. In this paper we adopt a flexible functional form approach using two-dimensional Legendre orthogonal polynomials to fit and forecast mortality rates. Unlike some of the existing mortality models in the literature, the model we propose does not impose any restrictions on the age, time or cohort structure of the data and thus allows for different model designs for different countries' mortality experience. We conduct an empirical study using male mortality data from a range of developed countries and explore the possibility of using age–time effects to capture cohort effects in the underlying mortality data. It is found that, for some countries, cohort dummies still need to be incorporated into the model. Moreover, when comparing the proposed model with well-known mortality models in the literature, we find that our model provides comparable fitting but with a much smaller number of parameters. Based on 5-year-ahead mortality forecasts, it can be concluded that the proposed model improves the overall accuracy of the future mortality projection. Copyright © 2016 John Wiley & Sons, Ltd.
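A two-dimensional Legendre basis of the kind described can be assembled with NumPy's polynomial module and fitted by least squares; the degrees and the toy mortality surface below are purely illustrative, not the paper's specification:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Rescale (age, year) to [-1, 1]^2, the natural domain of Legendre polynomials.
age = np.linspace(-1, 1, 20)
year = np.linspace(-1, 1, 10)
A, Y = np.meshgrid(age, year, indexing="ij")

# Tensor-product Legendre design matrix: degrees (3, 2) give 4 * 3 = 12 basis terms.
V = L.legvander2d(A.ravel(), Y.ravel(), [3, 2])

# Toy "log-mortality" surface; a genuine study would use observed rates.
z = (A**2 + 0.5 * A * Y).ravel()
coef, *_ = np.linalg.lstsq(V, z, rcond=None)
fitted = V @ coef
```

Because the toy surface is itself a low-degree polynomial, the fit here is exact up to rounding; real mortality data would leave a residual.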

8.
This paper derives the best linear unbiased predictor for an unbalanced nested error components panel data model. This predictor is useful in many econometric applications that are usually based on unbalanced panel data and have a nested (hierarchical) structure. Examples include predicting student performance in a class in a school, or house prices in a neighborhood in a county or a state. Using Monte Carlo simulations, we show that this predictor is better in root mean square error performance than the usual fixed- or random-effects predictors ignoring the nested structure of the data. This is applied to forecasting the productivity of public capital in the private sector using nested panel data of 48 contiguous American states. Copyright © 2013 John Wiley & Sons, Ltd.

9.
This paper first constructs a framework for enterprise innovation, dividing it into independent R&D and collaborative innovation. Based on science and technology input–output data for high-technology industries, it uses panel data models and panel threshold models to analyze the elasticity coefficients of independent R&D and collaborative innovation and their threshold characteristics, building five threshold models: independent R&D with respect to its own level, collaborative innovation with respect to its own level, innovation output with respect to independent R&D, innovation output with respect to collaborative innovation, and science and technology human resources with respect to independent R&D. The results show that independent R&D performs relatively well and exhibits economies of scale; collaborative innovation performance is generally low, but higher in less developed regions; and the performance of science and technology human resources is generally modest, with independent R&D performing better in regions with lower investment in such human resources.

10.
A new clustered correlation multivariate generalized autoregressive conditional heteroskedasticity (CC-MGARCH) model that allows conditional correlations to form clusters is proposed. This model generalizes the time-varying correlation structure of Tse and Tsui (2002, Journal of Business and Economic Statistics 20: 351–361) by classifying the correlations among the series into groups. To estimate the proposed model, Markov chain Monte Carlo methods are adopted. Two efficient sampling schemes for drawing discrete indicators are also developed. Simulations show that these efficient sampling schemes can lead to substantial savings in computation time in Monte Carlo procedures involving discrete indicators. Empirical examples using stock market and exchange rate data are presented in which two-cluster and three-cluster models are selected using posterior probabilities. This implies that the conditional correlation equation is likely to be governed by more than one set of decaying parameters. Copyright © 2011 John Wiley & Sons, Ltd.

11.
In examining stochastic models for commodity prices, central questions often revolve around time-varying trend, stochastic convenience yield and volatility, and mean reversion. This paper seeks to assess and compare alternative approaches to modelling these effects, with focus on forecast performance. Three specifications are considered: (i) random-walk models with GARCH and normal or Student-t innovations; (ii) Poisson-based jump-diffusion models with GARCH and normal or Student-t innovations; and (iii) mean-reverting models that allow for uncertainty in equilibrium price. Our empirical application makes use of aluminium spot and futures price series at daily and weekly frequencies. Results show: (i) models with stochastic convenience yield outperform all other competing models, and for all forecast horizons; (ii) the use of futures prices does not always yield lower forecast error values compared to the use of spot prices; and (iii) within the class of (G)ARCH random-walk models, no model uniformly dominates the others. Copyright © 2008 John Wiley & Sons, Ltd.
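The GARCH component shared by specifications (i) and (ii) is the standard conditional-variance recursion; a sketch with illustrative parameter values (not estimates from the paper):

```python
import numpy as np

def garch11_var(eps, omega, alpha, beta):
    """GARCH(1,1) conditional variance: h_t = omega + alpha*eps_{t-1}^2 + beta*h_{t-1},
    started at the unconditional variance omega / (1 - alpha - beta)."""
    eps = np.asarray(eps, dtype=float)
    h = np.empty(len(eps))
    h[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(eps)):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    return h

# With zero shocks the recursion decays geometrically toward omega / (1 - beta).
h = garch11_var(np.zeros(3), omega=0.1, alpha=0.1, beta=0.8)
```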

12.
This paper presents a new spatial dependence model with an adjustment of feature difference. The model accounts for the spatial autocorrelation in both the outcome variables and residuals. The feature difference adjustment in the model helps to emphasize feature changes across neighboring units, while suppressing unobserved covariates that are present in the same neighborhood. The prediction at a given unit incorporates components that depend on the differences between the values of its main features and those of its neighboring units. In contrast to conventional spatial regression models, our model does not require a comprehensive list of global covariates necessary to estimate the outcome variable at the unit, as common macro-level covariates are differenced away in the regression analysis. Using the real estate market data in Hong Kong, we applied Gibbs sampling to determine the posterior distribution of each model parameter. The result of our empirical analysis confirms that the adjustment of feature difference with an inclusion of the spatial error autocorrelation produces better out-of-sample prediction performance than other conventional spatial dependence models. In addition, our empirical analysis can identify components with more significant contributions.

13.
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately, the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
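The BIC-versus-AIC point can be made concrete with the Gaussian-likelihood forms of the criteria: both add a penalty to n·log(RSS/n), but BIC's k·log(n) term punishes extra parameters harder than AIC's 2k whenever n > e² ≈ 7.4. A sketch on a deliberately overfitted regression (the data-generating process and settings are ours):

```python
import numpy as np

def aic_bic(rss, n, k):
    """Gaussian-likelihood information criteria for a model with k parameters:
    AIC = n*log(RSS/n) + 2k, BIC = n*log(RSS/n) + k*log(n)."""
    ll_term = n * np.log(rss / n)
    return ll_term + 2 * k, ll_term + k * np.log(n)

# Deliberate overfit: the true model is linear, the rival adds x^2 and x^3 terms.
rng = np.random.default_rng(0)
n = 200
x = rng.standard_normal(n)
y = 1.0 + 0.5 * x + rng.standard_normal(n)
X3 = np.column_stack([np.ones(n), x, x**2, x**3])
results = {}
for name, X in {"linear": X3[:, :2], "cubic": X3}.items():
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ b) ** 2).sum())
    results[name] = aic_bic(rss, n, X.shape[1])
```

The cubic model mechanically achieves a lower RSS (better within-sample fit), which is exactly why a penalized criterion, and ultimately out-of-sample accuracy, is needed to adjudicate.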

14.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state-space models. Out-of-sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean-squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.
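Static principal-components factor extraction, the first of the three approaches, reduces to an SVD of the standardized panel. A generic sketch (not the authors' implementation; the toy panel is ours):

```python
import numpy as np

def extract_factors(X, r):
    """Static PCA factors from a T x N panel: standardize columns, then SVD.
    Returns the T x r factor estimates and N x r loadings."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] * S[:r], Vt[:r].T

# Toy panel driven by a single common factor plus small idiosyncratic noise.
rng = np.random.default_rng(1)
f = rng.standard_normal(120)                       # latent common factor
lam = rng.standard_normal(8)                       # loadings
X = np.outer(f, lam) + 0.01 * rng.standard_normal((120, 8))
F, Lam = extract_factors(X, 1)
```

In a forecasting application the estimated factors F would then feed a factor-augmented regression for the target variable.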

15.
In this paper, we examine the use of non-parametric Neural Network Regression (NNR) and Recurrent Neural Network (RNN) regression models for forecasting and trading currency volatility, with an application to the GBP/USD and USD/JPY exchange rates. Both the results of the NNR and RNN models are benchmarked against the simpler GARCH alternative and implied volatility. Two simple model combinations are also analysed. The intuitively appealing idea of developing a nonlinear nonparametric approach to forecast FX volatility, identify mispriced options and subsequently develop a trading strategy based upon this process is implemented for the first time on a comprehensive basis. Using daily data from December 1993 through April 1999, we develop alternative FX volatility forecasting models. These models are then tested out-of-sample over the period April 1999–May 2000, not only in terms of forecasting accuracy but also in terms of trading efficiency: in order to do so, we apply a realistic volatility trading strategy using FX option straddles once mispriced options have been identified. Allowing for transaction costs, most trading strategies retained produce positive returns. RNN models appear to be the best single modelling approach; yet, somewhat surprisingly, model combination, which has the best overall performance in terms of forecasting accuracy, fails to improve the RNN-based volatility trading results. Another conclusion from our results is that, for the period and currencies considered, the currency option market was inefficient and/or the pricing formulae applied by market participants were inadequate. Copyright © 2002 John Wiley & Sons, Ltd.

16.
Guesstimation     
Macroeconomic model builders attempting to construct forecasting models frequently face constraints of data scarcity in terms of short time series of data, and also of parameter non-constancy and underspecification. Hence, a realistic alternative is often to guess rather than to estimate parameters of such models. This paper concentrates on repetitive guessing (drawing) of parameters from iteratively changing distributions, with the straightforward objective function being the minimization of squared ex-post prediction errors, weighted by penalty weights and subject to a learning process. The examples given are a Monte Carlo analysis of a regression problem, a dynamic disequilibrium model, and an empirical econometric model of the Polish economy. Copyright © 2002 John Wiley & Sons, Ltd.
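The repetitive guessing scheme can be caricatured as a random search whose proposal distribution re-centres on the best draw so far and gradually shrinks — a crude stand-in for the paper's penalty-weighted learning process, with all names and settings below our own:

```python
import numpy as np

def guesstimate(loss, center, scale, draws=200, rounds=20, seed=0):
    """Repeatedly draw parameter guesses, keep the guess with the smallest
    squared ex-post prediction error, re-centre the proposal distribution
    on it, and shrink the spread (a simple 'learning' rule)."""
    rng = np.random.default_rng(seed)
    best = np.asarray(center, dtype=float)
    best_loss = loss(best)
    for _ in range(rounds):
        cand = best + scale * rng.standard_normal((draws, best.size))
        losses = np.apply_along_axis(loss, 1, cand)
        j = int(losses.argmin())
        if losses[j] < best_loss:
            best, best_loss = cand[j], losses[j]
        scale *= 0.7  # shrink the guessing distribution each round
    return best, best_loss

# Toy problem: guess (a, b) minimizing squared prediction errors of y = a + b*x.
x = np.linspace(0.0, 1.0, 50)
y = 1.0 + 2.0 * x
sse = lambda p: float(((y - p[0] - p[1] * x) ** 2).sum())
p_hat, loss_hat = guesstimate(sse, center=[0.0, 0.0], scale=1.0)
```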

17.
This paper compares various ways of extracting macroeconomic information from a data-rich environment for forecasting the yield curve using the Nelson–Siegel model. Five issues in extracting factors from a large panel of macro variables are addressed; namely, selection of a subset of the available information, incorporation of the forecast objective in constructing factors, specification of a multivariate forecast objective, data grouping before constructing factors, and selection of the number of factors in a data-driven way. Our empirical results show that each of these features helps to improve forecast accuracy, especially for the shortest and longest maturities. Factor-augmented methods perform well in relatively volatile periods, including the crisis period in 2008–9, when simpler models do not suffice. The macroeconomic information is exploited best by partial least squares methods, with principal component methods ranking second best. Reductions of mean squared prediction errors of 20–30% are attained, compared to the Nelson–Siegel model without macro factors. Copyright © 2011 John Wiley & Sons, Ltd.

18.
In recent years, factor models have received increasing attention from both econometricians and practitioners in the forecasting of macroeconomic variables. In this context, Bai and Ng (Journal of Econometrics 2008; 146: 304–317) find an improvement in selecting indicators according to the forecast variable prior to factor estimation (targeted predictors). In particular, they propose using the LARS-EN algorithm to remove irrelevant predictors. In this paper, we adapt the Bai and Ng procedure to a setup in which data releases are delayed and staggered. In the pre-selection step, we replace actual data with estimates obtained on the basis of past information, where the structure of the available information replicates the one a forecaster would face in real time. We estimate on the reduced dataset the dynamic factor model of Giannone et al. (Journal of Monetary Economics 2008; 55: 665–676) and Doz et al. (Journal of Econometrics 2011; 164: 188–205), which is particularly suitable for the very short-term forecast of GDP. A pseudo real-time evaluation on French data shows the potential of our approach. Copyright © 2013 John Wiley & Sons, Ltd.

19.
This paper uses the dynamic factor model framework, which accommodates a large cross-section of macroeconomic time series, for forecasting regional house price inflation. In this study, we forecast house price inflation for five metropolitan areas of South Africa using principal components obtained from 282 quarterly macroeconomic time series in the period 1980:1 to 2006:4. The results, based on the root mean square errors of one- to four-quarters-ahead out-of-sample forecasts over the period 2001:1 to 2006:4, indicate that, in the majority of cases, the dynamic factor model statistically outperforms the vector autoregressive models, using both the classical and the Bayesian treatments. We also consider spatial and non-spatial specifications. Our results indicate that macroeconomic fundamentals are important in forecasting house price inflation. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Financial data series are often described as exhibiting two non-standard time series features. First, variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA–ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1-month Euromarket interest rate during the period 1986–1989, the ARFIMA–ARCH(5,d,2/4) model with non-integer d is selected by AIC. Model-based out-of-sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA–ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.
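The fractional-integration part of ARFIMA rests on the binomial expansion of (1 − L)^d, whose weights follow a simple recursion; for 0 < d < 1 they decay hyperbolically, which is what produces the slow autocorrelation decay of long memory. A minimal sketch:

```python
import numpy as np

def fracdiff_weights(d, n):
    """First n weights of the binomial expansion of (1 - L)**d:
    pi_0 = 1, pi_j = pi_{j-1} * (j - 1 - d) / j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w
```

For integer d = 1 the weights collapse to ordinary first differencing (1, −1, 0, 0, …); for fractional d every lag receives a non-zero, slowly decaying weight.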
