Similar Documents
20 similar documents found (search time: 93 ms)
1.
This paper gives a brief survey of forecasting with panel data. It begins with a simple error component regression model and surveys the best linear unbiased prediction under various assumptions of the disturbance term. This includes various ARMA models as well as spatial autoregressive models. The paper also surveys how these forecasts have been used in panel data applications, running horse races between heterogeneous and homogeneous panel data models using out‐of‐sample forecasts. Copyright © 2008 John Wiley & Sons, Ltd.
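For the simple one-way error component model y_it = x_it'β + μ_i + ν_it that this survey starts from, the BLUP has a well-known closed form. As a sketch (the standard Goldberger/Taub random-effects predictor, stated from general knowledge rather than quoted from the paper):

```latex
\hat{y}_{i,T+S} \;=\; x_{i,T+S}'\,\hat{\beta}_{\mathrm{GLS}}
  \;+\; \frac{T\sigma_{\mu}^{2}}{T\sigma_{\mu}^{2} + \sigma_{\nu}^{2}}\,\bar{u}_{i}
```

where \bar{u}_i is the average of unit i's GLS residuals over the T observed periods. The second term shrinks the estimated individual effect toward zero according to the signal-to-noise ratio of the two error components; the survey's ARMA and spatial extensions modify this shrinkage term.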

2.
Improving the prediction accuracy of agricultural product futures prices is important for investors, agricultural producers, and policymakers: accurate forecasts help market participants evade risk and enable government departments to formulate appropriate agricultural regulations and policies. This study employs the ensemble empirical mode decomposition (EEMD) technique to decompose six different categories of agricultural futures prices. Subsequently, three models—support vector machine (SVM), neural network (NN), and autoregressive integrated moving average (ARIMA)—are used to predict the decomposition components. The final hybrid model is then constructed by comparing the prediction performance on the decomposition components. The forecasting performance of the combined model is then compared with that of the benchmark individual models: SVM, NN, and ARIMA. Our main interest in this study is short-term forecasting, and thus we only consider 1-day and 3-day forecast horizons. The results indicate that the prediction performance of the EEMD combined model is better than that of the individual models, especially for the 3-day forecasting horizon. The study also concludes that machine learning methods outperform statistical methods in forecasting high-frequency volatile components, whereas there is no obvious difference between individual models in predicting low-frequency components.
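The decompose–forecast–recombine pattern described above can be sketched compactly. A genuine EEMD needs a dedicated library, so the decomposition below is a deliberately simplified stand-in (moving-average trend plus residual), and a least-squares AR(1) stands in for the SVM/NN/ARIMA component models; all names are illustrative, not from the paper.

```python
import numpy as np

def decompose(series, window=5):
    """Stand-in for EEMD: split a series into a smooth 'low-frequency'
    trend (trailing moving average) and a 'high-frequency' residual."""
    kernel = np.ones(window) / window
    padded = np.pad(np.asarray(series, dtype=float), (window - 1, 0), mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")
    return trend, np.asarray(series, dtype=float) - trend

def ar1_forecast(component, horizon):
    """Placeholder component model: AR(1) fitted by least squares,
    iterated forward `horizon` steps."""
    x, y = component[:-1], component[1:]
    phi = (x @ y) / (x @ x) if (x @ x) > 0 else 0.0
    preds, last = [], component[-1]
    for _ in range(horizon):
        last = phi * last
        preds.append(last)
    return np.array(preds)

def hybrid_forecast(series, horizon=3):
    """Decompose, forecast each component separately, recombine by summing."""
    trend, resid = decompose(series)
    return ar1_forecast(trend, horizon) + ar1_forecast(resid, horizon)
```

Swapping `ar1_forecast` for a different model per component, chosen by comparing prediction performance per component, is the essence of the hybrid construction the study describes.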

3.
The use of linear error correction models based on stationarity and cointegration analysis, typically estimated with least squares regression, is a common technique for financial time series prediction. In this paper, the same formulation is extended to a nonlinear error correction model using the idea of a kernel‐based implicit nonlinear mapping to a high‐dimensional feature space in which linear model formulations are specified. Practical expressions for the nonlinear regression are obtained in terms of the positive definite kernel function by solving a linear system. The nonlinear least squares support vector machine model is designed within the Bayesian evidence framework that allows us to find appropriate trade‐offs between model complexity and in‐sample model accuracy. From straightforward primal–dual reasoning, the Bayesian framework allows us to derive error bars on the prediction in a similar way as for linear models and to perform hyperparameter and input selection. Starting from the results of the linear modelling analysis, the Bayesian kernel‐based prediction is successfully applied to out‐of‐sample prediction of an aggregated equity price index for the European chemical sector. Copyright © 2006 John Wiley & Sons, Ltd.
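The "practical expressions obtained by solving a linear system" admit a compact sketch: the standard LS-SVM dual system for regression with an RBF kernel. This is a minimal version without the Bayesian evidence framework; the hyperparameters `gamma` and `sigma` are simply fixed here rather than selected.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM dual linear system
        [0   1'        ] [b    ]   [0]
        [1   K + I/gamma] [alpha] = [y]
    for the dual weights alpha and bias b."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]          # alpha, bias

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    """Prediction: f(x) = sum_i alpha_i k(x, x_i) + b."""
    return rbf_kernel(Xte, Xtr, sigma) @ alpha + b
```

The Bayesian evidence framework described in the paper would sit on top of this, tuning `gamma` and `sigma` across inference levels and attaching error bars to the predictions.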

4.
This research proposes a prediction model of multistage financial distress (MSFD) after considering contextual and methodological issues regarding sampling, feature and model selection criteria. Financial distress is defined as a three‐stage process showing different nature and intensity of financial problems. It is argued that the applied definition of distress is independent of the legal framework and that its predictability would provide more practical solutions. The final sample is selected after industry adjustments and oversampling of the data. A wrapper subset data mining approach is applied to extract the most relevant features from financial statement and stock market indicators. An ensemble approach combining DTNB (decision table and naïve Bayes hybrid model), LMT (logistic model tree) and A2DE (averaged 2-dependence estimators) Bayesian models is used to develop the final prediction model. The performance of all the models is evaluated using 10‐fold cross‐validation. Results showed that the proposed model predicted MSFD with 84.06% accuracy, which increased to 89.57% when a 33.33% cut‐off value was used. Hence the proposed model is accurate and reliable for identifying the true nature and intensity of financial problems regardless of the contextual legal framework.

5.
For leverage heterogeneous autoregressive (LHAR) models with jumps and other covariates, called LHARX models, multistep forecasts are derived. Some optimal properties of the forecasts in terms of conditional volatilities are discussed; these tell us to model conditional volatility for the return but not for the LHARX regression error and the other covariates. Forecast standard errors are constructed, for which conditional volatilities need to be modeled both for the return and for the LHARX regression error and other covariates. The proposed methods are illustrated by forecast analysis of the realized volatilities of US stock price indexes: the S&P 500, NASDAQ, DJIA, and RUSSELL.

6.
We consider the problem of online prediction when it is uncertain which prediction model is best to use. We develop a method called dynamic latent class model averaging, which combines a state‐space model for the parameters of each candidate model of the system with a Markov chain model for the best model. We propose a polychotomous regression model for the transition weights, under which the probability of a model change at a given time depends on the past through the values of the most recent time periods and on spatial correlation among the regions. The evolution of the parameters in each submodel is governed by exponential forgetting. This structure allows the 'correct' model to vary over both time and regions. In contrast to existing methods, the proposed model naturally incorporates clustering and prediction analysis in a single unified framework. We develop an efficient Gibbs algorithm for computation, and we demonstrate the value of our framework on simulated experiments and on a real‐world problem: forecasting IBM's corporate revenue. Copyright © 2014 John Wiley & Sons, Ltd.

7.
This paper proposes to forecast foreign exchange rates by means of an error components‐seemingly unrelated nonlinear regression (EC‐SUNR) model and, simultaneously, explore the interrelationships among currencies from newly industrializing economies with those of highly industrialized countries. Based on the empirical results, we find that the EC‐SUNR model improves on the performance of forecasting foreign exchange rates in comparison with an intrinsically nonlinear dynamic speed of adjustment model that has been shown to outperform several other important models in the forecasting literature. We also find evidence showing that the foreign exchange markets of the newly industrializing countries are influenced by those of the highly industrialized countries and vice versa, and that such interrelationships affect the accuracy of currency forecasting. Copyright © 2005 John Wiley & Sons, Ltd.

8.
In this paper, we make multi‐step forecasts of the annual growth rates of the real gross regional product (GRP) for each of the 31 Chinese provinces simultaneously. Besides the usual panel data models, we use panel models that explicitly account for spatial dependence between the GRP growth rates. In addition, we allow for the possibility of spatial effects differing between groups of provinces (Interior and Coast). We find that both pooling and accounting for spatial effects help substantially to improve forecast performance compared to the benchmark models estimated for each province separately. It is also shown that the effect of accounting for spatial dependence is even more pronounced at longer forecasting horizons (the forecast accuracy gain, as measured by the root mean squared forecast error, is about 8% at the 1‐year horizon and exceeds 25% at the 13‐ and 14‐year horizons). Copyright © 2010 John Wiley & Sons, Ltd.

9.
With the development of artificial intelligence, deep learning is widely used in nonlinear time series forecasting, and practice has shown that deep learning models achieve higher forecasting accuracy than traditional linear econometric models and machine learning models. To further improve the forecasting accuracy of financial time series, we propose the WT-FCD-MLGRU model, which combines wavelet transform, filter cycle decomposition, and multilag neural networks. Four major stock indices are chosen to compare the forecasting performance of a traditional econometric model, a machine learning model, and deep learning models. The empirical analysis shows that deep learning models outperform both the traditional econometric model (autoregressive integrated moving average) and the improved machine learning model (SVR). Moreover, our proposed model has the minimum forecasting error in stock index prediction.

10.
This paper considers the generalized spatial panel data model with serial correlation proposed by Lee and Yu (Spatial panels: random components versus fixed effects. International Economic Review 2012; 53: 1369–1412), which encompasses many of the spatial panel data models considered in the literature, and derives the best linear unbiased predictor (BLUP) for that model. This in turn provides valuable BLUPs for several spatial panel models as special cases. Copyright © 2016 John Wiley & Sons, Ltd.

11.
Guesstimation     
Macroeconomic model builders attempting to construct forecasting models frequently face constraints of data scarcity, in terms of short time series of data, as well as parameter non‐constancy and underspecification. Hence a realistic alternative is often to guess, rather than estimate, the parameters of such models. This paper concentrates on repetitive guessing (drawing) of parameters from iteratively changing distributions, with the straightforward objective function of minimizing the squares of ex‐post prediction errors, weighted by penalty weights and subject to a learning process. The examples are a Monte Carlo analysis of a regression problem and a dynamic disequilibrium model; a further example applies the approach to an empirical econometric model of the Polish economy. Copyright © 2002 John Wiley & Sons, Ltd.

12.
The Ohlson model is evaluated using quarterly data from stocks in the Dow Jones Index. A hierarchical Bayesian approach is developed to simultaneously estimate the unknown coefficients in the time series regression model for each company by pooling information across firms. Both estimation and prediction are carried out by the Markov chain Monte Carlo (MCMC) method. Our empirical results show that our forecast based on the hierarchical Bayes method is generally adequate for future prediction, and improves upon the classical method. Copyright © 2005 John Wiley & Sons, Ltd.

13.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high‐dimensional market data, and which thereby lets us incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology to predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, and to principal component regression (PCR) and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

14.
In this article, we propose a regression model for sparse high‐dimensional data from aggregated store‐level sales data. The modeling procedure includes two sub‐models of topic model and hierarchical factor regressions. These are applied in sequence to accommodate high dimensionality and sparseness and facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub‐sales for several topics by allocating each unit sale (“word” in text analysis) in a day (“document”) into a topic based on joint‐purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into smaller numbers of topics. Next, the market response regression model for the topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for original high‐dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic‐wise market responses according to the particular context, and (ii) it performs better than do conventional category regressions in both in‐sample and out‐of‐sample forecasts.

15.
S. J. Arnold, Experientia 1985, 41(10): 1296–1310
Quantitative genetic models of sexual selection have disproven some of the central tenets of both the handicap mechanism and the 'sexy son' hypothesis. These results suggest that the 'good genes' approach to sexual selection may generally lead to erroneous results. Runaway sexual selection seems possible under a wide variety of circumstances. Quantitative genetic models have revealed runaway processes for sexually selected attributes expressed in both sexes and for attributes of parental care. Furthermore, the runaway could occur simultaneously in a series of populations that straddle an environmental gradient. While the models support the feasibility of runaway processes, empirical studies are needed to evaluate whether runaways actually happen. Estimates of critical genetic parameters are particularly needed, as well as measures of natural and sexual selection acting on the same population. The models also show that sexual selection has tremendous potential to produce population differentiation, particularly in epigamic traits. Differentiation is promoted by indeterminacy of evolutionary outcome, transient differences among populations during the final slow approach to equilibrium, sampling drift among equilibrium populations, and the tendency of sexual selection to amplify geographic variation arising from spatial differences in natural selection. Recent work with two- and three-locus models of sexual selection has produced results that parallel the results of the polygenic models. Thus the feature of indeterminate equilibria (outcome dependent on initial conditions) is common to both types of model.

16.
Observing that a sequence of negative logarithms of 1‐year survival probabilities displays a linear relationship with the sequence of corresponding terms lagged by a certain number of years, we propose a simple linear regression to model and forecast mortality rates. Because our model assumes only linearity between two mortality sequences separated by a time lag, it does not need to formulate the time trends of mortality rates across ages for mortality prediction. Moreover, the parameters of our model for a given age depend on the mortality rates for that age only, so widening or shortening the span of ages studied does not affect the results of mortality fitting and forecasting for that age. In the empirical testing, the regression results using mortality data for the UK, USA and Japan show a satisfactory goodness of fit, which convinces us of the appropriateness of the linearity assumption. Empirical illustrations further show that our model's fitting and forecasting performance is quite satisfactory compared with existing well‐known mortality models. Copyright © 2015 John Wiley & Sons, Ltd.
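The lagged-linearity idea is simple enough to sketch directly: for one age, regress the current negative log survival probability on its own value `lag` years earlier, then iterate the fitted relation forward. A minimal illustration (variable and function names are mine, not the paper's):

```python
import numpy as np

def fit_lag_regression(neg_log_surv, lag):
    """Regress -log 1-year survival probabilities on their own values
    `lag` years earlier: z_t = a + b * z_{t-lag}, for a single age."""
    z = np.asarray(neg_log_surv, dtype=float)
    x, y = z[:-lag], z[lag:]
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

def forecast(neg_log_surv, lag, a, b, steps):
    """Iterate the fitted linear relation forward `steps` years."""
    z = list(neg_log_surv)
    for _ in range(steps):
        z.append(a + b * z[-lag])
    return z[len(neg_log_surv):]
```

Because the regression for each age uses only that age's own series, adding or dropping other ages leaves the fit unchanged, which is the property the abstract emphasizes.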

17.
Four options for modeling and forecasting time series data containing increasing seasonal variation are discussed, including data transformations, double seasonal difference models and two kinds of transfer function-type ARIMA models employing seasonal dummy variables. An explanation is given for the typical ARIMA model identification analysis failing to identify double seasonal difference models for this kind of data. A logical process of selecting one option for a particular case is outlined, focusing on issues of linear versus non-linear increasing seasonal variation, and the level of stochastic versus deterministic behavior in a time series. Example models for the various options are presented for six time series, with point forecast and interval forecast comparisons. Interval forecasts from data-transformation models are found to generally be too wide and sometimes illogical in the dependence of their width on the point forecast level. Suspicion that maximum likelihood estimation of ARIMA models leads to excessive indications of unit roots in seasonal moving-average operators is reported.
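Of the four options, the double seasonal difference is the most compact to illustrate: applying the seasonal difference operator (1 - B^s) twice removes seasonal variation whose amplitude grows linearly in time. A minimal sketch:

```python
import numpy as np

def seasonal_diff(x, period):
    """Single seasonal difference: (1 - B^s) x."""
    x = np.asarray(x, dtype=float)
    return x[period:] - x[:-period]

def double_seasonal_diff(x, period):
    """Double seasonal difference (1 - B^s)^2 x, suited to series whose
    seasonal variation grows roughly linearly over time."""
    return seasonal_diff(seasonal_diff(x, period), period)
```

For such a series the first seasonal difference leaves a constant-amplitude seasonal pattern, and the second removes it entirely; this is the linear-versus-nonlinear distinction the selection process in the paper turns on.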

18.
This paper uses the dynamic factor model framework, which accommodates a large cross‐section of macroeconomic time series, for forecasting regional house price inflation. In this study, we forecast house price inflation for five metropolitan areas of South Africa using principal components obtained from 282 quarterly macroeconomic time series in the period 1980:1 to 2006:4. The results, based on the root mean square errors of one- to four-quarters-ahead out‐of‐sample forecasts over the period 2001:1 to 2006:4, indicate that, in the majority of cases, the dynamic factor model statistically outperforms the vector autoregressive models under both classical and Bayesian treatments. We also consider spatial and non‐spatial specifications. Our results indicate that macroeconomic fundamentals are important in forecasting house price inflation. Copyright © 2010 John Wiley & Sons, Ltd.

19.
Two related‐variables selection methods for temporal disaggregation are proposed. In the first method, the hypothesis tests for a common feature (cointegration or serial correlation) are first performed. If there is a common feature between observed aggregated series and related variables, the conventional Chow–Lin procedure is applied. In the second method, alternative Chow–Lin disaggregating models with and without related variables are first estimated and the corresponding values of the Bayesian information criterion (BIC) are stored. It is determined on the basis of the selected model whether related variables should be included in the Chow–Lin model. The efficacy of these methods is examined via simulations and empirical applications. Copyright © 2008 John Wiley & Sons, Ltd.

20.
We investigate the predictive performance of various classes of value‐at‐risk (VaR) models in several dimensions—unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. By using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997–1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)‐based models do better in the crisis period. Filtering often appears to be useful for some models, particularly for the EVT models, though it could be harmful for some other models. The CaViaR quantile regression models of Engle and Manganelli (2004) have shown some success in predicting the VaR risk measure for various periods, generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
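The unfiltered-versus-filtered distinction can be sketched with the simplest member of each family: an empirical-quantile (historical simulation) VaR on raw returns, and the same quantile applied to returns standardized by a RiskMetrics-style EWMA volatility and rescaled by the latest volatility forecast. This is an illustrative sketch under my own simplifying assumptions, not the paper's full model set.

```python
import numpy as np

def historical_var(returns, alpha=0.05):
    """Unfiltered VaR: empirical lower-tail quantile of raw returns,
    reported as a positive loss number."""
    return -np.quantile(returns, alpha)

def ewma_vol(returns, lam=0.94):
    """RiskMetrics-style EWMA conditional volatility path."""
    var = np.empty(len(returns))
    var[0] = np.var(returns)          # initialize at the sample variance
    for t in range(1, len(returns)):
        var[t] = lam * var[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return np.sqrt(var)

def filtered_var(returns, alpha=0.05, lam=0.94):
    """Filtered VaR: quantile of volatility-standardized residuals,
    rescaled by the next-period conditional volatility."""
    vol = ewma_vol(returns, lam)
    z = returns / np.where(vol > 0, vol, 1.0)
    next_vol = np.sqrt(lam * vol[-1] ** 2 + (1 - lam) * returns[-1] ** 2)
    return -np.quantile(z, alpha) * next_vol
```

Replacing the empirical quantile of `z` with an EVT tail estimate gives the filtered EVT variant that the paper finds helpful in the crisis period.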


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号