Similar Articles
20 similar articles found (search time: 296 ms)
1.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high‐dimensional market data. This ability also allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.
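As an illustration of the core extraction step behind PLS-type methods such as the FPLS traces above, here is a minimal NumPy sketch of a single partial least squares component (NIPALS with one response); the data-generating process and all variable names are illustrative, not the authors' setup.

```python
import numpy as np

def pls_component(X, y):
    """Extract one partial least squares component: the direction in X
    whose scores have maximal covariance with y (NIPALS, single response)."""
    Xc = X - X.mean(axis=0)        # center predictors
    yc = y - y.mean()              # center response
    w = Xc.T @ yc                  # covariance-based weight vector
    w = w / np.linalg.norm(w)      # normalize to unit length
    t = Xc @ w                     # component scores (a "trace")
    return t, w

# Illustrative data: only the first two predictors carry signal.
rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)

t, w = pls_component(X, y)
print(t.shape, round(float(np.linalg.norm(w)), 6))  # -> (200,) 1.0
```

The scores `t` would then replace the raw high-dimensional regressors in the downstream forecasting model, which is how PLS sidesteps multicollinearity.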

2.
Recently developed structural models of the global crude oil market imply that the surge in the real price of oil between mid 2003 and mid 2008 was driven by repeated positive shocks to the demand for all industrial commodities, reflecting unexpectedly high growth mainly in emerging Asia. We evaluate this proposition using an alternative data source and a different econometric methodology. Rather than inferring demand shocks from an econometric model, we utilize a direct measure of global demand shocks based on revisions of professional real gross domestic product (GDP) growth forecasts. We show that forecast surprises during 2003–2008 were associated primarily with unexpected growth in emerging economies (in conjunction with much smaller positive GDP‐weighted forecast surprises in the major industrialized economies), that markets were repeatedly surprised by the strength of this growth, that these surprises were associated with a hump‐shaped response of the real price of oil that reaches its peak after 12–16 months, and that news about global growth predicts much of the surge in the real price of oil from mid 2003 until mid 2008 and much of its subsequent decline. Copyright © 2012 John Wiley & Sons, Ltd.

3.
The difficulty in modelling inflation and the significance of discovering the underlying data‐generating process of inflation are expressed in an extensive literature regarding inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies in forecasting US inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric least absolute shrinkage and selection operator (LASSO) and the machine‐learning support vector regression (SVR) method. The SVR method has not previously been applied to inflation forecasting with the term spread as a regressor. In doing so, we use a long monthly dataset spanning the period 1871:1–2015:3 that covers the entire history of inflation in the US economy. For comparison purposes we also use ordinary least squares regression models as a benchmark. In order to evaluate the contribution of the term spread to inflation forecasting in different time periods, we measure the out‐of‐sample forecasting performance of all models using rolling window regressions. Considering various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the estimation method. Thus we conclude that the term spread models are not more accurate than autoregressive models in inflation forecasting. Copyright © 2016 John Wiley & Sons, Ltd.
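The rolling-window, out-of-sample evaluation scheme described above can be sketched in a few lines of NumPy. This is a simplified stand-in (OLS instead of LASSO/SVR, simulated data instead of the 1871–2015 inflation series); all names, parameters, and the data-generating process are illustrative.

```python
import numpy as np

def rolling_forecasts(y, x, window):
    """One-step-ahead rolling-window forecasts from an AR(1) model and from
    an AR(1) model augmented with an extra predictor x (e.g. a term spread)."""
    f_ar, f_aug, actual = [], [], []
    for t in range(window, len(y) - 1):
        ylag = y[t - window:t]              # window of lagged values
        ynow = y[t - window + 1:t + 1]      # corresponding one-step targets
        xlag = x[t - window:t]
        A = np.column_stack([np.ones(window), ylag])
        b = np.linalg.lstsq(A, ynow, rcond=None)[0]       # AR(1) fit
        f_ar.append(b[0] + b[1] * y[t])
        A2 = np.column_stack([np.ones(window), ylag, xlag])
        b2 = np.linalg.lstsq(A2, ynow, rcond=None)[0]     # augmented fit
        f_aug.append(b2[0] + b2[1] * y[t] + b2[2] * x[t])
        actual.append(y[t + 1])
    actual = np.array(actual)
    rmse_ar = np.sqrt(np.mean((np.array(f_ar) - actual) ** 2))
    rmse_aug = np.sqrt(np.mean((np.array(f_aug) - actual) ** 2))
    return rmse_ar, rmse_aug

# Illustrative data: y genuinely depends on the lagged predictor.
rng = np.random.default_rng(1)
T = 300
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + 0.1 * rng.standard_normal()

rmse_ar, rmse_aug = rolling_forecasts(y, x, window=100)
print(rmse_aug < rmse_ar)  # the augmented model should win on this data
```

The paper's finding is the opposite on real inflation data: the term-spread-augmented models do not beat the autoregressive benchmark, which is precisely what this evaluation scheme is designed to detect.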

4.
In this paper we forecast daily returns of crypto‐currencies using a wide variety of different econometric models. To capture salient features commonly observed in financial time series like rapid changes in the conditional variance, non‐normality of the measurement errors and sharply increasing trends, we develop a time‐varying parameter VAR with t‐distributed measurement errors and stochastic volatility. To control for overparametrization, we rely on the Bayesian literature on shrinkage priors, which enables us to shrink coefficients associated with irrelevant predictors and/or perform model specification in a flexible manner. Using around one year of daily data, we perform a real‐time forecasting exercise and investigate whether any of the proposed models is able to outperform the naive random walk benchmark. To assess the economic relevance of the forecasting gains produced by the proposed models we, moreover, run a simple trading exercise.

5.
Successful market timing strategies depend on superior forecasting ability. We use a sentiment index model, a kitchen sink logistic regression model, and a machine learning model (least absolute shrinkage and selection operator, LASSO) to forecast 1‐month‐ahead S&P 500 Index returns. In order to determine how successful each strategy is at forecasting the market direction, a “beta optimization” strategy is implemented. We find that the LASSO model outperforms the other models with consistently higher annual returns and lower monthly drawdowns.

6.
In this paper, we forecast EU area inflation with many predictors using time‐varying parameter models. The facts that time‐varying parameter models are parameter rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time‐varying parameter models. Our approach allows for the coefficient on each predictor to be: (i) time varying; (ii) constant over time; or (iii) shrunk to zero. The econometric methodology decides automatically to which category each coefficient belongs. Our empirical results indicate the benefits of such an approach. Copyright © 2013 John Wiley & Sons, Ltd.

7.
In this paper, we forecast local currency debt of five major emerging market countries (Brazil, Indonesia, Mexico, South Africa, and Turkey) over the period January 2010 to January 2019 (with an in-sample period: March 2005 to December 2009). We exploit information from a large set of economic and financial time series to assess the importance not only of “own-country” factors (derived from principal component and partial least squares approaches), but also create “global” predictors by combining the country-specific variables across the five emerging economies. We find that, while information on own-country factors can outperform the historical average model, global factors tend to produce not only greater statistical and economic gains, but also enhance market timing ability of investors, especially when we use the target variable (bond premium) approach under the partial least squares method to extract our factors. Our results have important implications not only for fund managers but also for policymakers.

8.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models in forecasting US gross national product (GNP) growth at different forecasting horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period from 1919:Q2 until 2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures, such as the volatility index and geopolitical risk, in predicting US recessions. While the Markov switching time‐varying parameter vector autoregressive model yields the lowest values for the root mean squared error in most cases, we observe relatively low values for the log predictive density score when using the Bayesian vector autoregressive model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.
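The equal predictive ability (EPA) comparison mentioned above is commonly carried out with a Diebold–Mariano-type test. Below is a minimal sketch under squared-error loss with no autocorrelation (HAC) correction of the long-run variance, applied to simulated forecast errors; all inputs are illustrative, not the paper's data.

```python
import numpy as np
from math import erf, sqrt

def diebold_mariano(e1, e2):
    """Diebold-Mariano test of equal predictive ability for two forecast
    error series, under squared-error loss and assuming no serial
    correlation in the loss differential (no HAC correction)."""
    d = e1 ** 2 - e2 ** 2                        # loss differential
    dm = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
    p = 1 - erf(abs(dm) / sqrt(2))               # two-sided normal p-value
    return dm, p

# Illustrative errors: the second forecaster is genuinely more accurate.
rng = np.random.default_rng(3)
n = 500
e_bad = rng.standard_normal(n)
e_good = 0.5 * rng.standard_normal(n)

dm, p = diebold_mariano(e_bad, e_good)
print(dm > 0, p < 0.05)  # -> True True (positive dm: first model's loss is larger)
```

SPA-type tests extend this idea to many competing models at once while controlling for data snooping; the two-model statistic above is the building block.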

9.
We compare the predictive ability of Bayesian methods which deal simultaneously with model uncertainty and correlated regressors in the framework of cross‐country growth regressions. In particular, we assess methods with spike and slab priors combined with different prior specifications for the slope parameters in the slab. Our results indicate that moving away from Gaussian g‐priors towards Bayesian ridge, LASSO or elastic net specifications has clear advantages for prediction when dealing with datasets of (potentially highly) correlated regressors, a pervasive characteristic of the data used hitherto in the econometric literature. Copyright © 2015 John Wiley & Sons, Ltd.

10.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting and nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.

11.
The aim of this study is to forecast the Singapore gross domestic product (GDP) growth rate by employing the mixed‐data sampling (MIDAS) approach using mixed and high‐frequency financial market data from Singapore, and to examine whether the high‐frequency financial variables can better predict the macroeconomic variables. We adopt different time‐aggregating methods to handle the high‐frequency data in order to match the sampling rate of lower‐frequency data in our regression models. Our results show that MIDAS regression using high‐frequency stock return data produces better forecasts of the GDP growth rate than the other models, with the best forecasting performance achieved by using weekly stock returns. The forecasting results are further improved by performing intra‐period forecasting.
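A common way to handle the frequency mismatch in MIDAS regressions is a parametric lag polynomial. The sketch below implements the exponential Almon weighting scheme and uses it to collapse weekly observations into a single low-frequency regressor; the parameter values and data are illustrative, not those estimated in the study.

```python
import numpy as np

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag polynomial used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalized to sum to 1."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_high, n_lags, theta1, theta2):
    """Collapse the most recent n_lags high-frequency observations into a
    single low-frequency regressor; the newest observation receives w_1."""
    w = exp_almon_weights(n_lags, theta1, theta2)
    return w @ x_high[-n_lags:][::-1]

# Illustrative parameters: 12 weekly lags feeding one quarterly regressor.
w = exp_almon_weights(12, 0.05, -0.01)
x_weekly = np.linspace(1.0, 2.0, 52)     # a year of weekly observations
agg = midas_aggregate(x_weekly, 12, 0.05, -0.01)
print(round(float(w.sum()), 6))          # -> 1.0
print(x_weekly[-12:].min() <= agg <= x_weekly[-12:].max())  # -> True
```

Because only the two theta parameters are estimated rather than one coefficient per lag, MIDAS keeps the regression parsimonious even with many high-frequency lags.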

12.
The forecasting of prices for electricity balancing reserve power can substantially improve the trading positions of market participants in competitive auctions. Having identified a lack of literature on forecasting balancing reserve prices, we deploy approaches originating from econometrics and artificial intelligence and set up a forecasting framework based on autoregressive and exogenous factors. We use SARIMAX models as well as neural networks with different structures and produce rolling one-step-ahead forecasts with re-estimation of the models. It turns out that the naive forecast performs reasonably well but is outperformed by the more advanced models. In addition, neural network approaches outperform the econometric approach in terms of forecast quality, whereas for the further use of the generated models the econometric approach has advantages in terms of explaining price drivers. For the present application, more advanced configurations of the neural networks are not able to further improve the forecasting performance.

13.
The purpose of this paper is to investigate the applicability of a contemporary time series forecasting technique, transfer function modeling, to the problem of forecasting sectoral employment levels in small regional economies. The specific sectoral employment levels to be forecast are manufacturing, durable manufacturing, non-durable manufacturing and non-manufacturing employment. Due to data constraints at the small region level, construction of traditional causal econometric models is often very difficult; thus time series approaches become particularly attractive. The results suggest that transfer function models using readily available national indicator series as drivers can provide more accurate forecasts of small region sectoral employment levels than univariate time series models.

14.
Predicting the future evolution of GDP growth and inflation is a central concern in economics. Forecasts are typically produced either from economic theory‐based models or from simple linear time series models. While a time series model can provide a reasonable benchmark to evaluate the value added of economic theory relative to the pure explanatory power of the past behavior of the variable, recent developments in time series analysis suggest that more sophisticated time series models could provide more serious benchmarks for economic models. In this paper we evaluate whether these complicated time series models can outperform standard linear models for forecasting GDP growth and inflation. We consider a large variety of models and evaluation criteria, using a bootstrap algorithm to evaluate the statistical significance of our results. Our main conclusion is that in general linear time series models can hardly be beaten if they are carefully specified. However, we also identify some important cases where the adoption of a more complicated benchmark can alter the conclusions of economic analyses about the driving forces of GDP growth and inflation. Copyright © 2008 John Wiley & Sons, Ltd.

15.
This paper investigates the time-varying volatility patterns of some major commodities as well as the potential factors that drive their long-term volatility component. For this purpose, we make use of a recently proposed generalized autoregressive conditional heteroskedasticity–mixed data sampling (GARCH-MIDAS) approach, which allows us to examine the role of economic and financial variables sampled at different frequencies. Using commodity futures for Crude Oil (WTI and Brent), Gold, Silver and Platinum, as well as a commodity index, our results show the necessity of disentangling the short-term and long-term components in modeling and forecasting commodity volatility. They also indicate that the long-term volatility of most commodity futures is significantly driven by the level of global real economic activity as well as changes in consumer sentiment, industrial production, and economic policy uncertainty. However, the forecasting results differ across commodity futures, as no single model fits all commodities.
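The two-component structure behind this approach can be sketched as a long-run variance level tau, built from MIDAS-weighted past realized variances, multiplied by a mean-one short-run GARCH(1,1) component. The following is a simulation sketch with illustrative parameter values and simulated realized variances, not the authors' estimated model.

```python
import numpy as np

def beta_weights(K, w2=4.0):
    """One-parameter beta lag polynomial (w1 = 1), the weighting scheme
    commonly used in GARCH-MIDAS: weights decline over K lags."""
    k = np.arange(1, K + 1) / K
    w = (1.0 - k) ** (w2 - 1.0)
    return w / w.sum()

def long_run_component(rv, K, m=0.0, theta=0.3):
    """Long-run variance component tau: exp of a MIDAS-weighted average of
    the last K realized variances (log specification keeps tau positive)."""
    w = beta_weights(K)
    return float(np.exp(m + theta * (w @ rv[-K:][::-1])))

rng = np.random.default_rng(2)
rv = rng.uniform(0.5, 1.5, size=36)       # 36 months of realized variance
tau = long_run_component(rv, K=12)        # slow-moving level of variance

# Short-run GARCH(1,1) component g_t, mean-reverting around 1.
alpha, beta = 0.08, 0.90
g = np.empty(250)
g[0] = 1.0
eps = rng.standard_normal(250)
for t in range(1, 250):
    r = np.sqrt(tau * g[t - 1]) * eps[t - 1]            # simulated daily return
    g[t] = (1 - alpha - beta) + alpha * r ** 2 / tau + beta * g[t - 1]

print(tau > 0, g.min() > 0)  # -> True True (total conditional variance is tau * g_t)
```

In estimation, explanatory variables such as economic activity or policy uncertainty enter through the long-run component tau, which is exactly how the paper links macro drivers to commodity volatility.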

16.
Mortality models used for forecasting are predominantly based on the statistical properties of time series and do not generally incorporate an understanding of the forces driving secular trends. This paper addresses three research questions: Can the factors found in stochastic mortality‐forecasting models be associated with real‐world trends in health‐related variables? Does inclusion of health‐related factors in models improve forecasts? Do resulting models give better forecasts than existing stochastic mortality models? We consider whether the space spanned by the latent factor structure in mortality data can be adequately described by developments in gross domestic product, health expenditure and lifestyle‐related risk factors using statistical techniques developed in macroeconomics and finance. These covariates are then shown to improve forecasts when incorporated into a Bayesian hierarchical model. Results are comparable or better than benchmark stochastic mortality models. Copyright © 2014 John Wiley & Sons, Ltd.

17.
This paper develops a dynamic factor model that uses euro area country-specific information on output and inflation to estimate an area-wide measure of the output gap. Our model assumes that output and inflation can be decomposed into country-specific stochastic trends and a common cyclical component. Comovement in the trends is introduced by imposing a factor structure on the shocks to the latent states. We moreover introduce flexible stochastic volatility specifications to control for heteroscedasticity in the measurement errors and innovations to the latent states. Carefully specified shrinkage priors allow for pushing the model towards a homoscedastic specification, if supported by the data. Our measure of the output gap closely tracks other commonly adopted measures, with small differences in magnitudes and timing. To assess whether the model-based output gap helps in forecasting inflation, we perform an out-of-sample forecasting exercise. The findings indicate that our approach yields superior inflation forecasts, both in terms of point and density predictions.

18.
This paper examines whether the disaggregation of consumer sentiment data into its sub‐components improves the real‐time capacity to forecast GDP and consumption. A Bayesian error correction approach augmented with the consumer sentiment index and permutations of the consumer sentiment sub‐indices is used to evaluate forecasting power. The forecasts are benchmarked against both composite forecasts and forecasts from standard error correction models. Using Australian data, we find that consumer sentiment data increase the accuracy of GDP and consumption forecasts, with certain components of consumer sentiment consistently providing better forecasts than aggregate consumer sentiment data. Copyright © 2009 John Wiley & Sons, Ltd.

19.
With the development of artificial intelligence, deep learning is widely used in nonlinear time series forecasting, and in practice deep learning models have achieved higher forecasting accuracy than traditional linear econometric models and machine learning models. To further improve the forecasting accuracy of financial time series, we propose the WT-FCD-MLGRU model, which combines wavelet transform, filter cycle decomposition and multilag neural networks. Four major stock indices are chosen to test the forecasting performance of traditional econometric, machine learning and deep learning models. The empirical results show that deep learning models perform better than the traditional econometric model (autoregressive integrated moving average) and the machine learning model (SVR), and that our proposed model has the minimum forecasting error in stock index prediction.
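Wavelet transforms like the WT step in the proposed model split a series into trend and detail components before forecasting. The sketch below implements one level of the Haar discrete wavelet transform, the simplest wavelet, chosen purely for illustration; the paper's actual wavelet and decomposition depth are not specified in the abstract.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar discrete wavelet transform: split a series of
    even length into approximation (trend) and detail (fluctuation) parts."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def haar_idwt(a, d):
    """Inverse transform: reconstruct the series from (a, d) exactly."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

t_grid = np.linspace(0.0, 1.0, 64)
x = np.sin(2 * np.pi * 4 * t_grid) + 0.1 * np.random.default_rng(4).standard_normal(64)
a, d = haar_dwt(x)
smoothed = haar_idwt(a, np.zeros_like(d))    # dropping detail = crude denoising

print(len(a), len(d))                        # -> 32 32
print(np.allclose(haar_idwt(a, d), x))       # -> True (perfect reconstruction)
```

In a wavelet-based forecasting pipeline, each component (trend, detail) is typically modeled separately and the component forecasts are recombined through the inverse transform.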

20.
We develop a Bayesian vector autoregressive (VAR) model with multivariate stochastic volatility that is capable of handling vast dimensional information sets. Three features are introduced to permit reliable estimation of the model. First, we assume that the reduced-form errors in the VAR feature a factor stochastic volatility structure, allowing for conditional equation-by-equation estimation. Second, we apply recently developed global–local shrinkage priors to the VAR coefficients to cure the curse of dimensionality. Third, we utilize recent innovations to sample efficiently from high-dimensional multivariate Gaussian distributions. This makes simulation-based fully Bayesian inference feasible when the dimensionality is large but the time series length is moderate. We demonstrate the merits of our approach in an extensive simulation study and apply the model to US macroeconomic data to evaluate its forecasting capabilities.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号