Similar Documents
20 similar documents found (search time: 31 ms)
1.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which avoids multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability also lets us incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of the proposed methodology by predicting the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that the new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
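The core mechanism that lets PLS cope with multicollinearity can be illustrated with a minimal one-component PLS1 fit in pure Python. This is a hedged sketch of ordinary (non-functional) PLS on centered data, not the authors' FPLS estimator; the toy data are illustrative.

```python
# Minimal sketch of a single-component PLS regression (PLS1) on centered
# data: a predictive direction is extracted from correlated regressors,
# so perfectly collinear columns (which break OLS) pose no problem.

def pls1_one_component(X, y):
    """Return the first PLS weight vector, scores, and fitted slope."""
    n, p = len(X), len(X[0])
    # weight vector w proportional to X'y (covariance of each column with y)
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # score t = Xw, then regress y on the score
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return w, t, b

# Two perfectly collinear predictors: OLS would be ill-posed, PLS is not.
X = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [-6.0, -12.0]]
y = [1.0, 2.0, 3.0, -6.0]
w, t, b = pls1_one_component(X, y)
fitted = [b * ti for ti in t]
```

Because the second column is exactly twice the first, the single extracted component already reproduces the target here.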

2.
The aim of this study was to forecast the Singapore gross domestic product (GDP) growth rate by employing the mixed‐data sampling (MIDAS) approach using mixed and high‐frequency financial market data from Singapore, and to examine whether the high‐frequency financial variables could better predict the macroeconomic variables. We adopt different time‐aggregating methods to handle the high‐frequency data in order to match the sampling rate of lower‐frequency data in our regression models. Our results showed that MIDAS regression using high‐frequency stock return data produced a better forecast of GDP growth rate than the other models, and the best forecasting performance was achieved by using weekly stock returns. The forecasting result was further improved by performing intra‐period forecasting.
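The aggregation step at the heart of MIDAS can be sketched in a few lines: a parsimonious weight function (here a one-parameter exponential Almon lag, a standard choice in the MIDAS literature) collapses the high-frequency observations of one low-frequency period into a single regressor. The parameter value and the weekly returns below are illustrative, not from the study.

```python
import math

# Exponential Almon lag weighting: a one-parameter curve governs how K
# high-frequency observations enter a low-frequency regression, avoiding
# one free coefficient per lag.

def exp_almon_weights(theta, K):
    """Normalized exponential Almon weights, w_k proportional to exp(theta * k^2)."""
    raw = [math.exp(theta * (k + 1) ** 2) for k in range(K)]
    s = sum(raw)
    return [r / s for r in raw]

def midas_aggregate(high_freq, theta):
    """Weighted sum of one low-frequency period's high-frequency data."""
    w = exp_almon_weights(theta, len(high_freq))
    return sum(wi * xi for wi, xi in zip(w, high_freq))

weekly_returns = [0.01, -0.02, 0.005, 0.015]   # one quarter of weekly data
x = midas_aggregate(weekly_returns, theta=-0.1)
```

With a negative theta, recent observations receive more weight; theta = 0 recovers a flat average.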

3.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they represent nicely the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

4.
Value‐at‐risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram‐Charlier expansion (GCE), augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset GCE outperforms other estimates that are generated by both constant and time‐varying higher‐moments models. Copyright © 2009 John Wiley & Sons, Ltd.
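The effect of letting skewness and kurtosis enter the VaR quantile can be sketched with the Cornish-Fisher expansion, the quantile counterpart of the Gram-Charlier density expansion. This is a related substitution for illustration, not the authors' exact estimator, and the moment values are invented.

```python
# Moment-adjusted VaR sketch: the Gaussian quantile z is corrected for
# skewness s and excess kurtosis k via the Cornish-Fisher expansion,
# then rescaled by the conditional mean and volatility.

def cornish_fisher_var(mu, sigma, s, k_excess, z):
    """One-sided VaR quantile adjusted for higher moments."""
    z_cf = (z
            + (z ** 2 - 1) * s / 6
            + (z ** 3 - 3 * z) * k_excess / 24
            - (2 * z ** 3 - 5 * z) * s ** 2 / 36)
    return mu + sigma * z_cf

z99 = -2.326  # approximate 1% Gaussian quantile
var_normal = cornish_fisher_var(0.0, 0.02, 0.0, 0.0, z99)
var_fat = cornish_fisher_var(0.0, 0.02, -0.5, 1.5, z99)  # skewed, fat-tailed
```

Negative skewness and excess kurtosis push the 1% VaR further into the left tail than the Gaussian benchmark, which is exactly the correction a constant-moment model misses.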

5.
Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting we propose a dynamic factor model of mortality that fits and forecasts age‐specific mortality rates parsimoniously. We compare the forecasting quality of this model against the Lee–Carter model and its variants. Our results show the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit but their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be further improved upon by applying an appropriate stopping rule for the number of static and dynamic factors. Copyright © 2013 John Wiley & Sons, Ltd.
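The Lee–Carter benchmark mentioned above decomposes log mortality as log m(x,t) = a(x) + b(x)k(t). The sketch below uses the simple row-mean/column-sum approximation of that fit rather than the SVD of the original method, with a random-walk-with-drift forecast for the time index; the data layout is illustrative.

```python
# Minimal Lee-Carter sketch: a_x is the average age pattern, k_t the
# mortality time index, b_x the age-specific sensitivity to k_t.

def lee_carter_fit(log_m):
    """log_m[x][t]: log mortality rate for age group x in year t."""
    ages, years = len(log_m), len(log_m[0])
    a = [sum(row) / years for row in log_m]                    # age pattern
    centered = [[log_m[x][t] - a[x] for t in range(years)] for x in range(ages)]
    k = [sum(centered[x][t] for x in range(ages)) for t in range(years)]
    denom = sum(kt * kt for kt in k)
    b = [sum(centered[x][t] * k[t] for t in range(years)) / denom
         for x in range(ages)]
    return a, b, k

def forecast_k(k, h):
    """Random walk with drift: the standard Lee-Carter time-index forecast."""
    drift = (k[-1] - k[0]) / (len(k) - 1)
    return [k[-1] + drift * (j + 1) for j in range(h)]

log_m = [[-2.0, -3.0, -4.0],   # age group 1, three years of log-rates
         [-1.0, -2.0, -3.0]]   # age group 2
a, b, k = lee_carter_fit(log_m)
k_future = forecast_k(k, h=2)
```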

6.
We look at the problem of forecasting time series which are not normally distributed. An overall approach is suggested which works both on simulated data and on real data sets. The idea is intuitively attractive and has the considerable advantage that it can readily be understood by non-specialists. Our approach is based on ARMA methodology and our models are estimated via a likelihood procedure which takes into account the non-normality of the data. We examine in some detail the circumstances in which taking explicit account of the non-normality improves the forecasting process in a significant way. Results from several simulated and real series are included.

7.
This paper studies some forms of LASSO‐type penalties in time series to reduce the dimensionality of the parameter space as well as to improve out‐of‐sample forecasting performance. In particular, we propose a method that we call WLadaLASSO (weighted lag adaptive LASSO), which assigns not only different weights to each coefficient but also further penalizes coefficients of higher‐lagged covariates. In our Monte Carlo implementation, the WLadaLASSO is superior in terms of covariate selection, parameter estimation precision and forecasting, when compared to both LASSO and adaLASSO, especially for a higher number of candidate lags and a stronger linear dependence between predictors. Empirical studies illustrate our approach for US risk premium and US inflation forecasting with good results. Copyright © 2016 John Wiley & Sons, Ltd.
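The lag-weighted penalty idea can be sketched with a plain coordinate-descent lasso in which each coefficient j carries its own penalty weight w_j, so that longer lags (larger w_j) are shrunk harder. This is a hedged illustration of the weighting principle, not the authors' WLadaLASSO estimator; the weights and data are invented.

```python
# Weighted lasso via coordinate descent: coefficient j is soft-thresholded
# at lam * w[j], so predictors with larger weights need stronger signal
# to enter the model.

def soft_threshold(z, gamma):
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def weighted_lasso(X, y, lam, w, n_iter=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding coefficient j's contribution
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam * w[j]) / z
    return beta

# Lag 1 drives y; lag 2 is noise and is penalized four times as hard.
X = [[1.0, 0.1], [2.0, -0.1], [3.0, 0.1], [4.0, -0.1]]
y = [1.0, 2.0, 3.0, 4.0]
beta = weighted_lasso(X, y, lam=0.5, w=[1.0, 4.0])
```

The heavily weighted second coefficient is shrunk exactly to zero while the relevant first coefficient survives almost unshrunk.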

8.
We propose a quantile regression approach to equity premium forecasting. Robust point forecasts are generated from a set of quantile forecasts using both fixed and time‐varying weighting schemes, thereby exploiting the entire distributional information associated with each predictor. Further gains are achieved by incorporating the forecast combination methodology into our quantile regression setting. Our approach using a time‐varying weighting scheme delivers statistically and economically significant out‐of‐sample forecasts relative to both the historical average benchmark and the combined predictive mean regression modeling approach. Copyright © 2014 John Wiley & Sons, Ltd.
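The step from quantile forecasts to a robust point forecast can be sketched directly: forecasts made at several quantile levels are combined with a weighting scheme, and each quantile forecast itself would be fitted and judged with the check (pinball) loss. The equal weights and numbers below are an illustrative assumption, not the paper's estimated time-varying weights.

```python
# Combining quantile forecasts into a point forecast (fixed-weight case),
# plus the pinball loss used to fit/evaluate quantile forecasts.

def combine_quantiles(q_forecasts, weights=None):
    """Weighted average of forecasts made at several quantile levels."""
    if weights is None:
        weights = [1.0 / len(q_forecasts)] * len(q_forecasts)
    return sum(w * q for w, q in zip(weights, q_forecasts))

def pinball_loss(y, q_hat, tau):
    """Check (pinball) loss: asymmetric penalty around quantile level tau."""
    u = y - q_hat
    return tau * u if u >= 0 else (tau - 1) * u

# hypothetical equity premium forecasts at tau = 0.1, 0.3, 0.5, 0.7, 0.9
qs = [-0.020, -0.005, 0.003, 0.010, 0.024]
point = combine_quantiles(qs)
```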

9.
In this paper we compare several multi‐period volatility forecasting models, specifically from the MIDAS and HAR families. We perform our comparisons in terms of out‐of‐sample volatility forecasting accuracy. We also consider combinations of the models' forecasts. Using intra‐daily returns of the BOVESPA index, we calculate volatility measures such as realized variance, realized power variation and realized bipower variation to be used as regressors in both models. Further, we use a nonparametric procedure for separately measuring the continuous sample path variation and the discontinuous jump part of the quadratic variation process. Thus MIDAS and HAR specifications with the continuous sample path and jump variability measures as separate regressors are estimated. Our results in terms of mean squared error suggest that regressors involving volatility measures which are robust to jumps (i.e. realized bipower variation and realized power variation) are better at forecasting future volatility. However, we find that, in general, the forecasts based on these regressors are not statistically different from those based on realized variance (the benchmark regressor). Moreover, we find that, in general, the relative forecasting performances of the three approaches (i.e. MIDAS, HAR and forecast combinations) are statistically equivalent. Copyright © 2014 John Wiley & Sons, Ltd.
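Two of the realized measures used as regressors can be computed directly from intraday returns: realized variance sums squared returns, while bipower variation, built from products of adjacent absolute returns, is robust to jumps, so their (truncated) difference estimates the jump contribution. The return series below are invented for illustration.

```python
import math

# Realized variance vs jump-robust bipower variation from intraday returns.

def realized_variance(r):
    return sum(x * x for x in r)

def bipower_variation(r):
    scale = math.pi / 2  # 1 / mu_1^2, with mu_1 = sqrt(2 / pi)
    return scale * sum(abs(r[i]) * abs(r[i - 1]) for i in range(1, len(r)))

smooth = [0.01] * 10              # no jumps
jumpy = [0.01] * 9 + [0.10]       # one large jump in the last interval
jump_part = max(realized_variance(jumpy) - bipower_variation(jumpy), 0.0)
```

On the jumpy series the squared jump dominates realized variance but barely moves bipower variation, so the truncated difference isolates the jump component.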

10.
This article presents a novel neural network‐based approach to the intra‐day forecasting of call arrivals in call centres. We apply the method to individual time series of arrivals for different customer call groups. To train the model, we use historical call data from three months and, for each day, we aggregate the call volume in 288 intervals of 5 minutes. With these data, our method can be used for predicting the call volume in the next 5‐minute interval using either previous real data or previous predictions to iteratively produce multi‐step‐ahead forecasts. We compare our approach with other conventional forecasting techniques. Experimental results provide factual evidence in favour of our approach. Copyright © 2013 John Wiley & Sons, Ltd.
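The iterative multi-step scheme described above, i.e. feeding each prediction back in as an input for the next interval, can be sketched with a simple AR(1) recursion standing in for the paper's neural network; the coefficients and call volume are illustrative.

```python
# Iterative multi-step forecasting: each one-step-ahead forecast becomes
# the input for the next step, here with a linear AR(1) map in place of
# a trained neural network.

def iterate_forecasts(last_value, phi, intercept, steps):
    forecasts, x = [], last_value
    for _ in range(steps):
        x = intercept + phi * x   # one-step model applied to its own output
        forecasts.append(x)
    return forecasts

calls_last = 100.0   # calls observed in the most recent 5-minute interval
preds = iterate_forecasts(calls_last, phi=0.5, intercept=20.0, steps=3)
```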

11.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting and nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.

12.
Including disaggregate variables, or information extracted from them, in a forecasting model for an economic aggregate may improve forecasting accuracy. In this paper we suggest using the boosting method to select the disaggregate variables which are most helpful in predicting an aggregate of interest. We conduct a simulation study to investigate the variable selection ability of this method. To assess the forecasting performance, a recursive pseudo‐out‐of‐sample forecasting experiment for six key euro area macroeconomic variables is conducted. The results suggest that using boosting to select relevant predictors is a feasible and competitive approach in forecasting an aggregate. Copyright © 2016 John Wiley & Sons, Ltd.
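Componentwise L2 boosting as a variable selector can be sketched compactly: at each step, the single predictor that best fits the current residuals is chosen and a shrunken update applied, so irrelevant disaggregates are simply never selected. The data and step size below are illustrative, not the paper's setup.

```python
# Componentwise L2 boosting: greedy selection of one predictor per step,
# with shrinkage factor nu, so only repeatedly useful predictors
# accumulate nonzero coefficients.

def l2_boost(X, y, n_steps=50, nu=0.1):
    n, p = len(X), len(X[0])
    resid = list(y)
    coef = [0.0] * p
    selected = set()
    for _ in range(n_steps):
        best_j, best_b, best_ss = None, 0.0, None
        for j in range(p):
            num = sum(X[i][j] * resid[i] for i in range(n))
            den = sum(X[i][j] ** 2 for i in range(n))
            b = num / den
            ss = sum((resid[i] - b * X[i][j]) ** 2 for i in range(n))
            if best_ss is None or ss < best_ss:
                best_j, best_b, best_ss = j, b, ss
        coef[best_j] += nu * best_b
        selected.add(best_j)
        resid = [resid[i] - nu * best_b * X[i][best_j] for i in range(n)]
    return coef, selected

# The aggregate depends only on the first disaggregate; the second is noise.
X = [[1.0, 0.3], [2.0, -0.2], [3.0, 0.1], [4.0, -0.4]]
y = [2.0, 4.0, 6.0, 8.0]
coef, selected = l2_boost(X, y)
```

Only the relevant predictor is ever selected, and its coefficient converges toward the true value of 2.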

13.
Time series of categorical data are not a widely studied research topic; in particular, there is no available work on the Bayesian analysis of categorical time series processes. To fill that gap, in the present paper we consider Bayesian analysis, including Bayesian forecasting, for time series of categorical data modelled by Pegram's mixing operator, which is applicable to both ordinal and nominal data structures. In particular, we consider Pegram's operator‐based autoregressive process for the analysis. Real datasets on infant sleep status are analysed for illustration. We also illustrate that the Bayesian forecasts are more accurate than those of the corresponding frequentist approach when we intend to forecast a large time gap ahead. Copyright © 2016 John Wiley & Sons, Ltd.
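A Pegram-operator AR(1) admits a simple mixture interpretation that is easy to simulate: with probability phi the previous category is retained, otherwise a fresh draw is taken from the marginal distribution. The sleep-status labels and parameter values below are illustrative stand-ins, not the paper's dataset or estimates.

```python
import random

# Simulating a Pegram-operator AR(1) for nominal data via its mixture
# form: keep the last category with probability phi, else redraw from
# the marginal distribution.

def pegram_ar1(categories, marginal, phi, n, seed=42):
    rng = random.Random(seed)
    path = [rng.choices(categories, weights=marginal)[0]]
    for _ in range(n - 1):
        if rng.random() < phi:
            path.append(path[-1])   # retain the previous category
        else:
            path.append(rng.choices(categories, weights=marginal)[0])
    return path

states = ["quiet", "active", "awake"]
path = pegram_ar1(states, marginal=[0.5, 0.3, 0.2], phi=0.7, n=500)
```

With phi = 0.7 the simulated path is strongly persistent while its marginal distribution is preserved, which is what makes the operator attractive for both ordinal and nominal data.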

14.
We question the ability of macroeconomic data to predict risk appetite and ‘flight‐to‐quality’ periods in the European credit market using a model inspired by the Markov switching literature. This model allows for a direct mapping of exogenous variables into state probabilities. We find that various surveys and transformed hard data have forecasting power. We show that despite its depth, the 2008–2009 crisis should not be regarded as an unusual episode that would have to be modelled by an additional state. Finally, we show that our model outperforms a pure Markov switching model in terms of forecasting accuracy, thus clearly indicating that economic figures are helpful in forecasting the credit cycle. Copyright © 2011 John Wiley & Sons, Ltd.
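The paper's key device, mapping exogenous variables directly into state probabilities, can be sketched with a logistic link from a survey indicator to the probability of the flight-to-quality state. The functional form and coefficients here are an illustrative assumption, not the authors' estimated model.

```python
import math

# Logistic mapping from an exogenous indicator (e.g. a survey balance)
# to the probability of being in the flight-to-quality state; a negative
# slope means weaker sentiment raises the stress-state probability.

def state_probability(survey, alpha=-1.0, beta=-0.8):
    """P(flight-to-quality) as a function of an exogenous survey reading."""
    return 1.0 / (1.0 + math.exp(-(alpha + beta * survey)))

p_good_times = state_probability(survey=2.0)    # strong sentiment
p_bad_times = state_probability(survey=-2.0)    # weak sentiment
```

Unlike a pure Markov switching model, whose state probabilities evolve only through fixed transition probabilities, this mapping lets current economic figures shift the state probabilities directly.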

15.
Micro panels characterized by large numbers of individuals observed over a short time period provide a rich source of information, but as yet there is only limited experience in using such data for forecasting. Existing simulation evidence supports the use of a fixed‐effects approach when forecasting, but it is not based on a truly micro panel set‐up. In this study, we exploit the linkage of a representative survey of more than 250,000 Australians aged 45 and over to 4 years of hospital, medical and pharmaceutical records. The availability of panel health cost data allows the use of predictors based on fixed‐effects estimates designed to guard against possible omitted variable biases associated with unobservable individual‐specific effects. We demonstrate that the preference towards fixed‐effects‐based predictors is unlikely to hold in many practical situations, including our models of health care costs. Simulation evidence with a micro panel set‐up adds support and additional insights to the results obtained in the application. These results are supportive of the use of the ordinary least squares predictor in a wide range of circumstances. Copyright © 2016 John Wiley & Sons, Ltd.

16.
17.
In this paper we consider a novel procedure for forecasting US zero‐coupon bond yields for a continuum of maturities using the methodology of nonparametric functional data analysis (NP‐FDA). We interpret the US yields as curves, since the term structure of interest rates defines a relation between the yield of a bond and its maturity. Within the NP‐FDA approach, each curve is viewed as a functional random variable and the dynamics present in the sample are modeled without imposing any parametric structure. To evaluate the forecast performance of the proposed estimator, we consider forecast horizons h = 1,3,6,12… months and compare the results with widely known benchmark models. Our NP‐FDA estimates show predictive performance superior to that of the competitors in many of the situations considered, especially for short‐term maturities. Copyright © 2016 John Wiley & Sons, Ltd.

18.
We introduce a new strategy for the prediction of linear temporal aggregates; we call it ‘hybrid’ and study its performance using asymptotic theory. This scheme consists of carrying out model parameter estimation with data sampled at the highest available frequency and the subsequent prediction with data and models aggregated according to the forecasting horizon of interest. We develop explicit expressions that approximately quantify the mean square forecasting errors associated with the different prediction schemes and that take into account the estimation error component. These approximate estimates indicate that the hybrid forecasting scheme tends to outperform the so‐called ‘all‐aggregated’ approach and, in some instances, the ‘all‐disaggregated’ strategy that is known to be optimal when model selection and estimation errors are neglected. Unlike other related approximate formulas existing in the literature, those proposed in this paper are totally explicit and require neither assumptions on the second‐order stationarity of the sample nor Monte Carlo simulations for their evaluation. Copyright © 2014 John Wiley & Sons, Ltd.
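The aggregation step underlying all three schemes can be made concrete: a linear temporal aggregate is, in the simplest case, a non-overlapping sum of consecutive high-frequency observations (e.g. monthly values summed into quarters). In the hybrid scheme, estimation would use the monthly data while prediction targets the aggregate. The series below is illustrative.

```python
# Linear temporal aggregation: non-overlapping sums of m consecutive
# high-frequency observations (here monthly -> quarterly).

def aggregate(series, m):
    """Sum each complete block of m consecutive observations."""
    return [sum(series[i:i + m]) for i in range(0, len(series) - m + 1, m)]

monthly = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
quarterly = aggregate(monthly, 3)
```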

19.
Although both direct multi‐step‐ahead forecasting and iterated one‐step‐ahead forecasting are two popular methods for predicting future values of a time series, it is not clear that the direct method is superior in practice, even though from a theoretical perspective it has lower mean squared error (MSE). A given model can be fitted according to either a multi‐step or a one‐step forecast error criterion, and we show here that discrepancies in performance between direct and iterative forecasting arise chiefly from the method of fitting and are dictated by the nuances of the model's misspecification. We derive new formulas for quantifying iterative forecast MSE, and present a new approach for assessing asymptotic forecast MSE. Finally, the direct and iterative methods are compared on a retail series, which illustrates the strengths and weaknesses of each approach. Copyright © 2015 John Wiley & Sons, Ltd.
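The two schemes being compared can be sketched for an AR(1): the iterated h-step forecast applies the one-step map h times, while the direct forecast regresses y(t+h) on y(t), which under correct specification yields the same phi^h mapping — the paper's point is that they diverge under misspecification and differing fitting criteria. The numbers below are illustrative.

```python
# Direct vs iterated h-step forecasting for an AR(1) with coefficient phi.

def iterated_forecast(y_t, phi, h):
    """Apply the one-step map h times, feeding each forecast back in."""
    x = y_t
    for _ in range(h):
        x = phi * x
    return x

def direct_forecast(y_t, phi_h):
    """phi_h: coefficient from regressing y_{t+h} directly on y_t."""
    return phi_h * y_t

h, phi = 3, 0.8
# Under correct specification the direct coefficient is phi**h, and the
# two schemes agree (up to floating-point error).
same = abs(direct_forecast(5.0, phi ** h) - iterated_forecast(5.0, phi, h)) < 1e-9
```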

20.
It is investigated whether euro area variables can be forecast better based on synthetic time series for the pre‐euro period or by using just data from Germany for the pre‐euro period. Our forecast comparison is based on quarterly data for the period 1970Q1–2003Q4 for 10 macroeconomic variables. The years 2000–2003 are used as forecasting period. A range of different univariate forecasting methods is applied. Some of them are based on linear autoregressive models and we also use some nonlinear or time‐varying coefficient models. It turns out that most variables which have a similar level for Germany and the euro area such as prices can be better predicted based on German data, while aggregated European data are preferable for forecasting variables which need considerable adjustments in their levels when joining German and European Monetary Union (EMU) data. These results suggest that for variables which have a similar level for Germany and the euro area it may be reasonable to consider the German pre‐EMU data for studying economic problems in the euro area. Copyright © 2008 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号