Similar Articles
20 similar articles found.
1.
The growing affluence of the East and Southeast Asian economies has come about through a substantial increase in their economic links with the rest of the world, the OECD economies in particular. Econometric studies that try to quantify these links face a severe shortage of high‐frequency time series data for China and the group of ASEAN4 (Indonesia, Malaysia, Philippines and Thailand). In this paper we provide quarterly real GDP estimates for these countries derived by applying the Chow–Lin related series technique to annual real GDP series. The quality of the disaggregated series is evaluated through a number of indirect methods. Some potential problems of using readily available univariate disaggregation techniques are also highlighted. Copyright © 2004 John Wiley & Sons, Ltd.

2.
Long series of quarterly GDP figures are still not available for many countries. This paper suggests an empirical procedure adapted from Chow and Lin (1971) to derive quarterly estimates from annual GDP figures and produces quarterly GDP by sectors for Malaysia from 1973Q1 onwards. A comparison of these estimates with some univariate interpolations using published quarterly figures for recent years shows that the use of related series can produce substantially superior estimates of GDP compared to univariate methods. The data set is available from the authors. Copyright © 1998 John Wiley & Sons, Ltd.
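Since items 1 and 2 both rest on the Chow–Lin related-series technique, a minimal sketch may help fix ideas: annual totals are regressed on annualized indicator series by GLS under an AR(1) quarterly disturbance, and the annual residuals are then distributed across quarters. The data, the fixed AR parameter rho, and the function name chow_lin below are all illustrative; the papers estimate rho rather than fixing it.

```python
# Minimal Chow-Lin temporal disaggregation sketch (hypothetical data; rho is
# fixed for illustration rather than estimated by ML as the papers would do).
import numpy as np

def chow_lin(y_annual, X_quarterly, rho=0.8):
    """Distribute annual totals y_annual over quarters using related series X_quarterly."""
    n_q = X_quarterly.shape[0]
    n_a = y_annual.shape[0]
    assert n_q == 4 * n_a
    # C aggregates quarters into years (annual total = sum of four quarters).
    C = np.kron(np.eye(n_a), np.ones((1, 4)))
    # AR(1) covariance of the quarterly disturbance, up to scale.
    idx = np.arange(n_q)
    V = rho ** np.abs(idx[:, None] - idx[None, :])
    Xa, Va = C @ X_quarterly, C @ V @ C.T
    Va_inv = np.linalg.inv(Va)
    # GLS estimate of beta from the annualized regression.
    beta = np.linalg.solve(Xa.T @ Va_inv @ Xa, Xa.T @ Va_inv @ y_annual)
    # Quarterly fit plus a smooth distribution of the annual residuals.
    resid_a = y_annual - Xa @ beta
    return X_quarterly @ beta + V @ C.T @ Va_inv @ resid_a

# Toy check: the disaggregated quarters add back up to the annual figures.
rng = np.random.default_rng(0)
X_q = np.column_stack([np.ones(40), rng.normal(10, 1, 40).cumsum()])
y_q_true = X_q @ np.array([2.0, 1.5]) + rng.normal(0, 1, 40)
y_a = y_q_true.reshape(10, 4).sum(axis=1)
y_q_hat = chow_lin(y_a, X_q)
print(np.allclose(y_q_hat.reshape(10, 4).sum(axis=1), y_a))  # True
```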

3.
The paper presents a comparative real‐time analysis of alternative indirect estimates relative to monthly euro area employment. In the experiment quarterly employment is temporally disaggregated using monthly unemployment as the related series. The strategies under comparison make use of the contribution of sectoral data for the euro area and its six largest member states. The comparison is carried out among univariate temporal disaggregations of the Chow and Lin type and multivariate structural time series models of small and medium size. Specifications in logarithms are also systematically assessed. All multivariate set‐ups, up to 49 series modelled simultaneously, are estimated via the EM algorithm. The main conclusions are that mean revision errors of the disaggregated estimates are overall small, that a gain is obtained when the model strategy takes into account the information by both sector and member state, and that larger multivariate set‐ups perform very well, with several advantages with respect to simpler models. Copyright © 2014 John Wiley & Sons, Ltd.

4.
Many applications in science involve finding estimates of unobserved variables from observed data, by combining model predictions with observations. The sequential Monte Carlo (SMC) is a well‐established technique for estimating the distribution of unobserved variables that are conditional on current observations. While the SMC is very successful at estimating the first central moments, estimating the extreme quantiles of a distribution via the current SMC methods is computationally very expensive. The purpose of this paper is to develop a new framework using probability distortion. We use an SMC with distorted weights in order to make computationally efficient inferences about tail probabilities of future interest rates using the Cox–Ingersoll–Ross (CIR) model, as well as with an observed yield curve. We show that the proposed method yields acceptable estimates about tail quantiles at a fraction of the computational cost of the full Monte Carlo.
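For orientation, here is a brute-force Monte Carlo estimate of a CIR tail probability under a full-truncation Euler scheme. All parameter values below are hypothetical, and this baseline deliberately omits the paper's contribution, the distorted SMC weights that make such tail estimates cheap.

```python
# Plain Monte Carlo baseline for a CIR tail probability (hypothetical parameters).
# The paper reweights (distorts) SMC particles so the tail is sampled efficiently;
# this brute-force version just shows the target quantity P(r_T > threshold).
import numpy as np

def cir_tail_prob(r0=0.03, kappa=0.5, theta=0.04, sigma=0.1,
                  T=1.0, steps=250, n_paths=200_000, threshold=0.06, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / steps
    r = np.full(n_paths, r0)
    for _ in range(steps):
        # Full-truncation Euler keeps the square-root argument nonnegative.
        r_pos = np.maximum(r, 0.0)
        r = r + kappa * (theta - r_pos) * dt \
              + sigma * np.sqrt(r_pos * dt) * rng.standard_normal(n_paths)
    return np.mean(r > threshold)

print(cir_tail_prob())
```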

5.
This paper presents a new spatial dependence model with an adjustment of feature difference. The model accounts for the spatial autocorrelation in both the outcome variables and residuals. The feature difference adjustment in the model helps to emphasize feature changes across neighboring units, while suppressing unobserved covariates that are present in the same neighborhood. The prediction at a given unit incorporates components that depend on the differences between the values of its main features and those of its neighboring units. In contrast to conventional spatial regression models, our model does not require a comprehensive list of global covariates necessary to estimate the outcome variable at the unit, as common macro-level covariates are differenced away in the regression analysis. Using the real estate market data in Hong Kong, we applied Gibbs sampling to determine the posterior distribution of each model parameter. The result of our empirical analysis confirms that the adjustment of feature difference with an inclusion of the spatial error autocorrelation produces better out-of-sample prediction performance than other conventional spatial dependence models. In addition, our empirical analysis can identify components with more significant contributions.

6.
In this paper we present an extensive study of annual GNP data for five European countries. We look for intercountry dependence and analyse how the different economies interact, using several univariate ARIMA and unobserved components models and a multivariate model for the GNP incorporating all the common information among the variables. We use a dynamic factor model to take account of the common dynamic structure of the variables. This common dynamic structure can be non‐stationary (i.e. common trends) or stationary (i.e. common cycles). Comparisons of the models are made in terms of the root mean square error (RMSE) for one‐step‐ahead forecasts. For this particular group of European countries, the factor model outperforms the remaining ones. Copyright © 2002 John Wiley & Sons, Ltd.

7.
In Of Quadrature by Ordinates (1695), Isaac Newton tried two methods for obtaining the Newton–Cotes formulae. The first method is extrapolation; the second is the method of undetermined coefficients using the quadrature of monomials. The first method provides $n$-ordinate Newton–Cotes formulae only for the cases $n = 3, 4$ and $5$. However, this method provides other important formulae if the ratios of errors are corrected. It is proved that the second method is correct and provides the Newton–Cotes formulae. The present-day significance of each method is also discussed.
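Newton's second method is straightforward to reproduce: requiring the rule to be exact on the monomials $1, x, \ldots, x^{n-1}$ gives a linear (Vandermonde) system for the weights. A short sketch, with the interval and $n$ chosen only for illustration:

```python
# Method of undetermined coefficients: choose weights w_i so that the rule
# integrates 1, x, ..., x^(n-1) exactly on n equally spaced ordinates.
import numpy as np

def newton_cotes_weights(n, a=0.0, b=1.0):
    x = np.linspace(a, b, n)                      # n equally spaced ordinates
    # Row k of the system enforces sum_i w_i * x_i^k = integral of x^k over [a, b].
    A = np.vander(x, n, increasing=True).T
    rhs = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(A, rhs)

# n = 3 on [0, 1] recovers Simpson's rule weights (1/6, 4/6, 1/6).
print(newton_cotes_weights(3))
```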

8.
The power of Chow, linear, predictive failure and CUSUM-of-squares tests to detect structural change is compared in a two-variable random walk model and a once-for-all parameter shift model. In each case the linear test has the greatest power, followed by the Chow test. It is suggested that the linear test be used as the basic general test for structural change in time series data, and that tests of forecasting performance be confined to the last few observations. Analysis of recursive residuals and recursive parameter estimates should be regarded as a form of exploratory data analysis and a tool for understanding discrepancies with previous results, rather than a basis for formal tests of structural change.
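A minimal illustration of the Chow test compared above, on simulated data with a known break: the F statistic compares the pooled sum of squared errors against the sum from the two subsample fits. Sample size, break point and coefficients are invented for the example.

```python
# Chow test sketch: F statistic for a structural break at a known point.
import numpy as np

def sse(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rng = np.random.default_rng(2)
T, k, brk = 120, 2, 60
X = np.column_stack([np.ones(T), rng.normal(size=T)])
beta1, beta2 = np.array([1.0, 0.5]), np.array([1.0, 1.5])   # slope shifts at brk
y = np.where(np.arange(T) < brk, X @ beta1, X @ beta2) + rng.normal(0, 1, T)

sse_pooled = sse(X, y)
sse_split = sse(X[:brk], y[:brk]) + sse(X[brk:], y[brk:])
F = ((sse_pooled - sse_split) / k) / (sse_split / (T - 2 * k))
print(F)   # compare with the F(k, T - 2k) critical value
```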

9.
In this paper we propose and evaluate two new methods for the quantification of business surveys concerning the qualitative assessment of the state of the economy. The first is a nonparametric method based on the spectral envelope, originally proposed by Stoffer, Tyler and McDougall (Spectral analysis for categorical time series: scaling and the spectral envelope, Biometrika 80: 611–622), applied to the multivariate time series of the counts in each response category. Secondly, we fit by maximum likelihood a cumulative logit unobserved components model featuring a common cycle. The conditional mean of the cycle, which can be evaluated by importance sampling, offers the required quantification. We assess the validity of the two methods by comparing the results with a standard quantification based on the balance of opinions and with a quantitative economic indicator. Copyright © 2010 John Wiley & Sons, Ltd.

10.
We use state space methods to estimate a large dynamic factor model for the Norwegian economy involving 93 variables for 1978Q2–2005Q4. The model is used to obtain forecasts for 22 key variables that can be derived from the original variables by aggregation. To investigate the potential gain in using such a large information set, we compare the forecasting properties of the dynamic factor model with those of univariate benchmark models. We find that there is an overall gain in using the dynamic factor model, but that the gain is notable only for a few of the key variables. Copyright © 2009 John Wiley & Sons, Ltd.

11.
We present a methodology for estimation, prediction, and model assessment of vector autoregressive moving-average (VARMA) models in the Bayesian framework using Markov chain Monte Carlo algorithms. The sampling-based Bayesian framework for inference allows for the incorporation of parameter restrictions, such as stationarity restrictions or zero constraints, through appropriate prior specifications. It also facilitates extensive posterior and predictive analyses through the use of numerical summary statistics and graphical displays, such as box plots and density plots for estimated parameters. We present a method for computationally feasible evaluation of the joint posterior density of the model parameters using the exact likelihood function, and discuss the use of backcasting to approximate the exact likelihood function in certain cases. We also show how to incorporate indicator variables as additional parameters for use in coefficient selection. The sampling is facilitated through a Metropolis–Hastings algorithm. Graphical techniques based on predictive distributions are used for informal model assessment. The methods are illustrated using two data sets from business and economics. The first example consists of quarterly fixed investment, disposable income, and consumption rates for West Germany, which are known to have correlation and feedback relationships between series. The second example consists of monthly revenue data from seven different geographic areas of IBM. The revenue data exhibit seasonality, strong inter-regional dependence, and feedback relationships between certain regions. © 1997 John Wiley & Sons, Ltd.

12.
In this paper different ways to identify the order of the Box–Jenkins transfer function model are discussed. The discussion concerns estimation of the impulse response weight function in the case of more than one input variable. It is found that most of the existing methods are either unsuitable when there is more than one input variable, or expensive or difficult to use. To overcome these deficiencies an extended regression method is proposed. The new method is based on the solution of some problems in connection with the use of the regression method. The impulse response weights are estimated by a biased regression estimator on variables transformed with respect to the noise model. To test the new approach a small simulation experiment has been performed. The results from the simulations indicate that the proposed method may be of value to the practitioner.

13.
This paper compares various ways of extracting macroeconomic information from a data‐rich environment for forecasting the yield curve using the Nelson–Siegel model. Five issues in extracting factors from a large panel of macro variables are addressed; namely, selection of a subset of the available information, incorporation of the forecast objective in constructing factors, specification of a multivariate forecast objective, data grouping before constructing factors, and selection of the number of factors in a data‐driven way. Our empirical results show that each of these features helps to improve forecast accuracy, especially for the shortest and longest maturities. Factor‐augmented methods perform well in relatively volatile periods, including the crisis period in 2008–9, when simpler models do not suffice. The macroeconomic information is exploited best by partial least squares methods, with principal component methods ranking second best. Reductions of mean squared prediction errors of 20–30% are attained, compared to the Nelson–Siegel model without macro factors. Copyright © 2011 John Wiley & Sons, Ltd.
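As a sketch of the baseline factor-extraction step, principal components of a standardized panel computed via SVD, with a simulated panel standing in for the paper's macro data:

```python
# Principal-component factor extraction from a standardized T x N macro panel.
import numpy as np

def pc_factors(X, n_factors):
    """Return the first n_factors principal components of a T x N panel."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)       # standardize each series
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_factors] * s[:n_factors]        # T x n_factors factor estimates

rng = np.random.default_rng(3)
T, N = 200, 50
common = rng.normal(size=(T, 2))                   # two latent common factors
loadings = rng.normal(size=(2, N))
panel = common @ loadings + rng.normal(0, 0.5, (T, N))
F = pc_factors(panel, 2)
print(F.shape)   # (200, 2); F can then augment a Nelson-Siegel forecasting regression
```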

14.
Based on Shannon information theory, the directional feature of a texture is defined as the behaviour of the direction variable at points where the image signal takes randomly distributed singular values. Following this definition, and drawing on the computation of the Tamura directionality feature, the paper studies the directional probability distribution of Contourlet transform coefficients and concludes that directional features propagate between parent and child subbands of the Contourlet transform. On this basis, combined with the Contourlet hidden Markov tree (HMT) model, a probability distribution model of the hidden direction variable conditioned on the distribution of the hidden state variables is established, i.e., a Contourlet HMT model with directional features, and the structure and training method of the model are given. Furthermore, the effectiveness of the proposed model is verified by target segmentation experiments on synthetic and remote sensing images, using an unsupervised, context-based image segmentation algorithm built on the proposed model.

15.
Reid (1972) was among the first to argue that the relative accuracy of forecasting methods changes according to the properties of the time series. Comparative analyses of forecasting performance such as the M‐Competition tend to support this argument. The issue addressed here is the usefulness of statistics summarizing the data available in a time series in predicting the relative accuracy of different forecasting methods. Nine forecasting methods are described and the literature suggesting summary statistics for the choice of forecasting method is reviewed. Based on this literature and further argument, a set of these statistics is proposed for the analysis. These statistics are used as explanatory variables in predicting the relative performance of the nine methods on a set of simulated time series with known properties. The results are then evaluated on observed data sets: the M‐Competition data and the Fildes Telecommunications data. The general conclusion is that the summary statistics can be used to select a good forecasting method (or set of methods) but not necessarily the best. Copyright © 2000 John Wiley & Sons, Ltd.

16.
Temperature changes are known to affect the social and environmental determinants of health in various ways. Consequently, excess deaths as a result of extreme weather conditions may increase over the coming decades because of climate change. In this paper, the relationship between trends in mortality and trends in temperature change (as a proxy) is investigated using annual data and for specified (warm and cold) periods during the year in the UK. A thoughtful statistical analysis is implemented and a new stochastic, central mortality rate model is proposed. The new model encompasses the good features of the Lee and Carter (Journal of the American Statistical Association, 1992, 87: 659–671) model and its recent extensions, and for the very first time includes an exogenous factor which is a temperature‐related factor. The new model is shown to provide a significantly better‐fitting performance and more interpretable forecasts. An illustrative example of pricing a life insurance product is provided and discussed.
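The Lee–Carter structure the paper extends, log m(x,t) = a_x + b_x k_t, can be fitted by an SVD of the centered log rates. The sketch below uses simulated rates and omits the paper's temperature-related factor.

```python
# Lee-Carter sketch: log m(x,t) = a_x + b_x * k_t, fitted by SVD on centered
# log mortality rates (simulated data standing in for observed rates).
import numpy as np

rng = np.random.default_rng(4)
ages, years = 40, 30
true_a = np.linspace(-7, -2, ages)                 # log mortality rises with age
true_k = -0.05 * np.arange(years) + rng.normal(0, 0.05, years)
log_m = true_a[:, None] + np.linspace(0.5, 1.5, ages)[:, None] * true_k[None, :] \
        + rng.normal(0, 0.02, (ages, years))

a_x = log_m.mean(axis=1)                           # age effect
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x, k_t = U[:, 0], s[0] * Vt[0]
b_x, k_t = b_x / b_x.sum(), k_t * b_x.sum()        # usual normalization sum(b_x) = 1
print(k_t[:5])   # the period index k_t is what a time series model then forecasts
```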

17.
In this paper, we propose a multivariate time series model for over‐dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over‐dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum of variables, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over‐dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density function of compound distributions, we propose a data augmentation approach for more efficient posterior computations in terms of the generated augmented variables, particularly for generating forecasts and predictive density. We present an empirical application using weekly product sales time series in a store to compare the proposed models accommodating over‐dispersion with alternative models without over‐dispersion, using several model selection criteria: in‐sample fit, out‐of‐sample forecasting errors and an information criterion. The empirical results show that the proposed over‐dispersed models based on compound Poisson variables work well and provide improved results compared with models that take no account of over‐dispersion. Copyright © 2014 John Wiley & Sons, Ltd.

18.
The use of linear error correction models based on stationarity and cointegration analysis, typically estimated with least squares regression, is a common technique for financial time series prediction. In this paper, the same formulation is extended to a nonlinear error correction model using the idea of a kernel‐based implicit nonlinear mapping to a high‐dimensional feature space in which linear model formulations are specified. Practical expressions for the nonlinear regression are obtained in terms of the positive definite kernel function by solving a linear system. The nonlinear least squares support vector machine model is designed within the Bayesian evidence framework that allows us to find appropriate trade‐offs between model complexity and in‐sample model accuracy. From straightforward primal–dual reasoning, the Bayesian framework allows us to derive error bars on the prediction in a similar way as for linear models and to perform hyperparameter and input selection. Starting from the results of the linear modelling analysis, the Bayesian kernel‐based prediction is successfully applied to out‐of‐sample prediction of an aggregated equity price index for the European chemical sector. Copyright © 2006 John Wiley & Sons, Ltd.
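The core computation, a kernel regression obtained by solving one linear system, can be sketched as follows. For brevity the LS-SVM bias term is dropped (reducing the fit to kernel ridge regression), and the regularization gamma and the bandwidth are fixed by hand rather than chosen by the Bayesian evidence framework, so treat every value below as illustrative.

```python
# LS-SVM-style kernel regression sketch: with an RBF kernel the fit reduces to
# one linear system, (K + I/gamma) alpha = y (toy data; no bias term).
import numpy as np

def rbf_kernel(A, B, bandwidth=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    return np.exp(-d2 / (2 * bandwidth ** 2))

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, (100, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 100)

gamma = 10.0
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + np.eye(100) / gamma, y)       # dual weights
X_new = np.array([[0.5]])
print(rbf_kernel(X_new, X) @ alpha, np.sin(0.5))          # prediction vs truth
```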

19.
In this paper I take a sceptical view of the standard cosmological model and its variants, mainly on the following grounds: (i) The method of mathematical modelling that characterises modern natural philosophy—as opposed to Aristotle's—goes well with the analytic, piecemeal approach to physical phenomena adopted by Galileo, Newton and their followers, but it is hardly suited for application to the whole world. (ii) Einstein's first cosmological model (1917) was not prompted by the intimations of experience but by a desire to satisfy Mach's Principle. (iii) The standard cosmological model—a Friedmann–Lemaître–Robertson–Walker spacetime expanding with or without end from an initial singularity—is supported by the phenomena of redshifted light from distant sources and very nearly isotropic thermal background radiation provided that two mutually inconsistent physical theories are jointly brought to bear on these phenomena, viz the quantum theory of elementary particles and Einstein's theory of gravity. (iv) While the former is certainly corroborated by high-energy experiments conducted under conditions allegedly similar to those prevailing in the early world, precise tests of the latter involve applications of the Schwarzschild solution or the PPN formalism for which there is no room in a Friedmann–Lemaître–Robertson–Walker spacetime.

20.
This paper proposes new methods for ‘targeting’ factors estimated from a big dataset. We suggest that forecasts of economic variables can be improved by tuning factor estimates: (i) so that they are both more relevant for a specific target variable; and (ii) so that variables with considerable idiosyncratic noise are down‐weighted prior to factor estimation. Existing targeted factor methodologies are limited to estimating the factors with only one of these two objectives in mind. We therefore combine these ideas by providing new weighted principal components analysis (PCA) procedures and a targeted generalized PCA (TGPCA) procedure. These methods offer a flexible combination of both types of targeting that is new to the literature. We illustrate this empirically by forecasting a range of US macroeconomic variables, finding that our combined approach yields important improvements over competing methods, consistently surviving elimination in the model confidence set procedure. Copyright © 2016 John Wiley & Sons, Ltd.
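To illustrate the two kinds of targeting, the sketch below (i) screens predictors by a marginal-relevance statistic for the target and (ii) re-weights the survivors before PCA. The threshold, weights and relevance statistic are invented for the example and are not the paper's TGPCA procedure.

```python
# Targeted-factor sketch: screen predictors by marginal relevance for the target,
# down-weight the noisier survivors, then extract one principal component.
import numpy as np

rng = np.random.default_rng(6)
T, N = 150, 60
F_true = rng.normal(size=(T, 1))
X = F_true @ rng.normal(size=(1, N)) + rng.normal(0, 1, (T, N))
y = F_true[:, 0] + rng.normal(0, 0.5, T)            # target depends on the factor

Z = (X - X.mean(0)) / X.std(0)
relevance = np.abs(Z.T @ y) / np.sqrt(T)            # marginal relevance per series
keep = relevance > np.quantile(relevance, 0.25)     # drop the least relevant quartile
w = relevance[keep] / relevance[keep].max()         # weight survivors by relevance
U, s, Vt = np.linalg.svd(Z[:, keep] * w, full_matrices=False)
factor = U[:, 0] * s[0]
print(np.abs(np.corrcoef(factor, F_true[:, 0])[0, 1]))  # close to 1
```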
