Similar articles
20 similar articles found (search time: 15 ms)
1.
We explore the benefits of forecast combinations based on forecast‐encompassing tests compared to simple averages and to Bates–Granger combinations. We also consider a new combination algorithm that fuses test‐based and Bates–Granger weighting. For a realistic simulation design, we generate multivariate time series samples from a macroeconomic DSGE‐VAR (dynamic stochastic general equilibrium–vector autoregressive) model. Results generally support Bates–Granger over uniform weighting, whereas benefits of test‐based weights depend on the sample size and on the prediction horizon. In a corresponding application to real‐world data, simple averaging performs best. Uniform averages may be the weighting scheme that is most robust to empirically observed irregularities. Copyright © 2016 John Wiley & Sons, Ltd.
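The Bates–Granger scheme referenced above weights each model in inverse proportion to its historical mean squared forecast error. A minimal sketch (function name and data are illustrative, not the authors' implementation):

```python
def bates_granger_weights(errors):
    """Inverse-MSE combination weights in the spirit of Bates & Granger (1969).

    errors: one sequence of past forecast errors per candidate model.
    Returns weights proportional to 1 / MSE_i, normalized to sum to one.
    """
    mses = [sum(e * e for e in errs) / len(errs) for errs in errors]
    inv = [1.0 / m for m in mses]
    total = sum(inv)
    return [v / total for v in inv]

# Model 1 has one quarter the error variance of model 2,
# so it receives four times the weight (0.8 vs. 0.2).
w = bates_granger_weights([[1.0, -1.0], [2.0, -2.0]])
```

Uniform weighting, the benchmark the abstract compares against, corresponds to skipping the inverse-MSE step and assigning 1/n to every model.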

2.
Point process models, such as Hawkes and recursive models, have recently been shown to offer improved accuracy over more traditional compartmental models for the purposes of modeling and forecasting the spread of disease epidemics. To explicitly test the performance of these two models in a real-world and ongoing epidemic, we compared the fit of Hawkes and recursive models to outbreak data on Ebola virus disease (EVD) in the Democratic Republic of the Congo in 2018–2020. The models were estimated, and the forecasts were produced, time-stamped, and stored in real time, so that their prospective value can be assessed and to guard against potential overfitting. The fit of the two models was similar, with both models resulting in much smaller errors in the beginning and waning phases of the epidemic and with slightly smaller error sizes on average for the Hawkes model compared with the recursive model. Our results suggest that both Hawkes and recursive point process models can be used in near real time during the course of an epidemic to help predict future cases and inform management and mitigation strategies.
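A Hawkes model of the kind described is self-exciting: each observed case temporarily raises the conditional intensity of new cases, which then decays. A minimal sketch with an exponential triggering kernel (all parameter values are illustrative, not estimates from the EVD data):

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity of a Hawkes process with exponential kernel:
        lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    mu is the baseline rate; each past event at t_i adds an exponentially
    decaying contribution, mimicking person-to-person transmission."""
    return mu + alpha * sum(math.exp(-beta * (t - ti)) for ti in events if ti < t)

# Baseline rate 0.5; two past cases at t = 1 and t = 2 excite the process,
# so the intensity at t = 3 exceeds the baseline.
lam = hawkes_intensity(3.0, [1.0, 2.0], mu=0.5, alpha=0.8, beta=1.0)
```

Forecasts of future case counts follow by integrating (or simulating from) this intensity forward in time.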

3.
A similarity‐based classification model is proposed whereby densities of positive and negative returns in a delay‐embedded input space are estimated from a graphical representation of the data using an eigenvector centrality measure, and subsequently combined under Bayes' theorem to predict the probability of upward/downward movements. Application to directional forecasting of the daily close price of the Dow Jones Industrial Average over a 20‐year out‐of‐sample period yields performance superior to random walk and logistic regression models, and on a par with that of multilayer perceptrons. A feature of the classifier is that it is parameter free, with parameters entering the model only via the measure used to determine pairwise similarity between data points. This allows intuitions about the nature of time series to be elegantly integrated into the model. The recursive nature of eigenvector centrality makes it better able to deal with sparsely populated input spaces than conventional approaches based on density estimation. Copyright © 2013 John Wiley & Sons, Ltd.
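Eigenvector centrality, the measure underlying the density estimates above, can be computed by power iteration on the similarity (adjacency) matrix: a node is central when its neighbours are central, which is the recursion the abstract alludes to. A small self-contained sketch on a toy graph (not the paper's delay-embedded similarity graph):

```python
def eigenvector_centrality(adj, iters=100):
    """Eigenvector centrality via power iteration on a symmetric,
    non-negative adjacency/similarity matrix (nested lists).
    Converges when the dominant eigenvalue is unique, e.g. for
    connected, non-bipartite graphs."""
    n = len(adj)
    x = [1.0] * n
    for _ in range(iters):
        y = [sum(adj[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in y) ** 0.5
        x = [v / norm for v in y]
    return x

# A triangle (nodes 0, 1, 2) with a pendant node 3 attached to node 0:
# node 0 is most central, nodes 1 and 2 tie, the pendant scores lowest.
c = eigenvector_centrality([[0, 1, 1, 1],
                            [1, 0, 1, 0],
                            [1, 1, 0, 0],
                            [1, 0, 0, 0]])
```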

4.
This paper investigates whether and to what extent multiple encompassing tests may help determine weights for forecast averaging in a standard vector autoregressive setting. To this end we consider a new test‐based procedure, which assigns non‐zero weights to candidate models that add information not covered by other models. The potential benefits of this procedure are explored in extensive Monte Carlo simulations using realistic designs that are adapted to UK and to French macroeconomic data, to which trivariate vector autoregressions (VAR) are fitted. Thus simulations rely on potential data‐generating mechanisms for macroeconomic data rather than on simple but artificial designs. We run two types of forecast ‘competitions’. In the first one, one of the model classes is the trivariate VAR, such that it contains the generating mechanism. In the second specification, none of the competing models contains the true structure. The simulation results show that the performance of test‐based averaging is comparable to uniform weighting of individual models. In one of our role model economies, test‐based averaging achieves advantages in small samples. In larger samples, pure prediction models outperform forecast averages. Copyright © 2010 John Wiley & Sons, Ltd.

5.
The motivation for this paper was the introduction of novel short‐term models to trade the FTSE 100 and DAX 30 exchange‐traded fund (ETF) indices. This paper makes several major contributions: the introduction of an input selection criterion when utilizing an expansive universe of inputs, a hybrid combination of particle swarm optimization (PSO) with radial basis function (RBF) neural networks, the application of a PSO algorithm to a traditional autoregressive moving average (ARMA) model, the application of a PSO algorithm to a higher‐order neural network and, finally, the introduction of a multi‐objective algorithm to optimize statistical and trading performance when trading an index. All the machine learning‐based methodologies and the conventional models are adapted and optimized to model the index. A PSO algorithm is used to optimize the weights in a traditional RBF neural network, in a higher‐order neural network (HONN) and the AR and MA terms of an ARMA model. In terms of checking the statistical and empirical accuracy of the novel models, we benchmark them against a traditional HONN, an ARMA model, a moving average convergence/divergence (MACD) model and a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the FTSE 100 and DAX 30 ETF time series over the period January 2004 to December 2015, using the last 3 years for out‐of‐sample testing. Finally, the empirical and statistical results indicate that the PSO‐RBF model outperforms all other examined models in terms of trading accuracy and profitability, even with mixed inputs and with only autoregressive inputs. Copyright © 2016 John Wiley & Sons, Ltd.

6.
The period of extraordinary volatility in euro area headline inflation starting in 2007 raised the question whether forecast combination methods can be used to hedge against bad forecast performance of single models during such periods and provide more robust forecasts. We investigate this issue for forecasts from a range of short‐term forecasting models. Our analysis shows that there is considerable variation of the relative performance of the different models over time. To take that into account we suggest employing performance‐based forecast combination methods—in particular, one that puts more weight on recent forecast performance. We compare such an approach with equal‐weighted forecast combination, which has been found to outperform more sophisticated forecast combination methods in the past, and investigate whether it can improve forecast accuracy over the single best model. The time‐varying weights also indicate how much each model's economic interpretation of the forecast matters at a given point in time. We also include a number of benchmark models in our analysis. The combination methods are evaluated for HICP headline inflation and HICP excluding food and energy. We investigate how forecast accuracy of the combination methods differs between pre‐crisis times, the period after the global financial crisis and the full evaluation period, including the global financial crisis with its extraordinary volatility in inflation. Overall, we find that forecast combination helps hedge against bad forecast performance and that performance‐based weighting outperforms simple averaging. Copyright © 2017 John Wiley & Sons, Ltd.
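Performance-based combination with more weight on recent performance can be sketched by discounting past squared errors geometrically before inverting them. This is a stylized version of such schemes, not the paper's exact weighting; the discount factor and data are illustrative:

```python
def discounted_mse_weights(errors, delta=0.9):
    """Combination weights proportional to the inverse of an exponentially
    discounted MSE: recent forecast errors count more (0 < delta < 1).

    errors: one sequence of past forecast errors per candidate model."""
    scores = []
    for errs in errors:
        T = len(errs)
        # most recent error gets discount delta**0, the oldest delta**(T-1)
        scores.append(sum((delta ** (T - 1 - t)) * errs[t] ** 2 for t in range(T)))
    inv = [1.0 / s for s in scores]
    total = sum(inv)
    return [v / total for v in inv]

# Model A was bad early but accurate recently; model B mirrors it.
# Discounting favours A even though both have the same undiscounted MSE.
w = discounted_mse_weights([[3.0, 0.5], [0.5, 3.0]], delta=0.5)
```

Setting delta = 1 recovers plain inverse-MSE weighting, and ignoring the scores altogether recovers the equal-weighted benchmark discussed in the abstract.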

7.
This paper examines the relationship between stock prices and commodity prices and whether this can be used to forecast stock returns. As both prices are linked to expected future economic performance they should exhibit a long‐run relationship. Moreover, changes in sentiment towards commodity investing may affect the nature of the response to disequilibrium. Results support cointegration between stock and commodity prices, while Bai–Perron tests identify breaks in the forecast regression. Forecasts are computed using a standard fixed (static) in‐sample/out‐of‐sample approach and by both recursive and rolling regressions, which incorporate the effects of changing forecast parameter values. A range of model specifications and forecast metrics are used. The historical mean model outperforms the forecast models in both the static and recursive approaches. However, in the rolling forecasts, those models that incorporate information from the long‐run stock price/commodity price relationship outperform both the historical mean and other forecast models. Of note, the historical mean still performs relatively well compared to standard forecast models that include the dividend yield and short‐term interest rates but not the stock/commodity price ratio. Copyright © 2014 John Wiley & Sons, Ltd.

8.
We investigate realized volatility forecasts of stock indices under structural breaks. We utilize a pure multiple mean break model to identify the possibility of structural breaks in the daily realized volatility series by employing the intraday high‐frequency data of the Shanghai Stock Exchange Composite Index and the five sectoral stock indices in Chinese stock markets for the period 4 January 2000 to 30 December 2011. We then conduct both in‐sample tests and out‐of‐sample forecasts to examine the effects of structural breaks on the performance of ARFIMAX‐FIGARCH models for the realized volatility forecast by utilizing a variety of estimation window sizes designed to accommodate potential structural breaks. The results of the in‐sample tests show that there are multiple breaks in all realized volatility series. The results of the out‐of‐sample point forecasts indicate that the combination forecasts with time‐varying weights across individual forecast models estimated with different estimation windows perform well. In particular, nonlinear combination forecasts with the weights chosen based on a non‐parametric kernel regression and linear combination forecasts with the weights chosen based on the non‐negative restricted least squares and Schwarz information criterion appear to be the most accurate methods in point forecasting for realized volatility under structural breaks. We also conduct an interval forecast of the realized volatility for the combination approaches, and find that the interval forecast for nonlinear combination approaches with the weights chosen according to a non‐parametric kernel regression performs best among the competing models. Copyright © 2014 John Wiley & Sons, Ltd.

9.
In this paper, we forecast stock returns using time‐varying parameter (TVP) models with parameters driven by economic conditions. An in‐sample specification test shows significant variation in the parameters. Out‐of‐sample results suggest that the TVP models outperform their constant coefficient counterparts. We also find significant return predictability from both statistical and economic perspectives with the application of TVP models. The out‐of‐sample R2 of an equal‐weighted combination of TVP models is as high as 2.672%, and the gains in the certainty equivalent return are 214.7 basis points. Further analysis indicates that the improvement in predictability comes from the use of information on economic conditions rather than simply from allowing the coefficients to vary with time.

10.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting and nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.
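MIDAS regressions map many high-frequency lags (for example, monthly factors feeding a quarterly GDP target as above) onto a handful of parameters through a tight weighting function. One common choice, shown here as a sketch with illustrative parameter values, is the exponential Almon lag polynomial:

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Exponential Almon lag weights used in MIDAS regressions:
        w_k proportional to exp(theta1 * k + theta2 * k**2), k = 1..n_lags,
    normalized to sum to one. Two parameters thus govern an arbitrarily
    long lag profile; theta2 < 0 makes the weights eventually decline."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(1, n_lags + 1)]
    total = sum(raw)
    return [r / total for r in raw]

# Twelve monthly lags compressed into two parameters (values illustrative):
# recent months receive most of the weight, distant months almost none.
w = exp_almon_weights(12, theta1=0.1, theta2=-0.05)
```

The high-frequency regressor entering the quarterly equation is then simply the weighted sum of its last `n_lags` monthly observations.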

11.
This paper applies a tightly parameterized pattern recognition algorithm, previously applied to earthquake prediction, to the problem of predicting recessions. Monthly data from 1962 to 1996 on six leading and coincident economic indicators for the USA are used. In the full sample, the model performs better than benchmark linear and non‐linear models with the same number of parameters. Subsample and recursive analysis indicates that the algorithm is stable and produces reasonably accurate forecasts even when estimated using a small number of recessions. Copyright © 2000 John Wiley & Sons, Ltd.

12.
This paper shows that a constrained autoregressive model that assigns linearly decreasing weights to past observations of a stationary time series has important links to the variance ratio methodology and trend stationary model. It is demonstrated that the proposed autoregressive model is asymptotically related to the variance ratio through the weighting schedules that these two tools use. It is also demonstrated that under a trend stationary time series process the proposed autoregressive model approaches a trend stationary model when the memory of the autoregressive model is increased. These links create a theoretical foundation for tests that confront the random walk model simultaneously against a trend stationary and a variety of short‐ and long‐memory autoregressive alternatives. Copyright © 2009 John Wiley & Sons, Ltd.
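The variance ratio referred to above compares the variance of q-period returns with q times the one-period variance; under a random walk it is close to one, while mean reversion pushes it below one. A minimal overlapping-sum sketch, without the finite-sample bias corrections used in formal tests:

```python
def variance_ratio(returns, q):
    """VR(q) = Var(sum of q consecutive returns) / (q * Var(one return)),
    computed from overlapping q-period sums. Close to 1 for a random walk;
    below 1 when returns are negatively autocorrelated (mean reversion)."""
    n = len(returns)
    mu = sum(returns) / n
    var1 = sum((r - mu) ** 2 for r in returns) / n
    sums = [sum(returns[t:t + q]) for t in range(n - q + 1)]
    varq = sum((s - q * mu) ** 2 for s in sums) / len(sums)
    return varq / (q * var1)

# Perfectly alternating returns are extremely mean-reverting:
# every 2-period sum is zero, so VR(2) collapses to 0.
vr = variance_ratio([1.0, -1.0] * 50, q=2)
```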

13.
This paper describes the application of space-time ARMA modelling to demand-related data from eight hotels from a single hotel chain in a large US city. Important spatial characteristics of the space-time process are incorporated into the model using a simple weighting matrix based on driving distances between the hotels. Using a hold-out sample, the forecasting performance of this space-time approach was found to be superior to eight separate univariate ARMA models.
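A simple weighting matrix of the kind described can be built from inverse driving distances, with each row normalized to sum to one so that the spatial lag of a site is a weighted average over the other sites. A sketch with made-up distances (the paper's actual matrix and distances are not reproduced here):

```python
def row_normalized_weights(dist):
    """Spatial weight matrix for a space-time (STARMA) model:
    inverse distance off the diagonal, zeros on the diagonal,
    each row rescaled to sum to one."""
    n = len(dist)
    w = [[0.0 if i == j else 1.0 / dist[i][j] for j in range(n)] for i in range(n)]
    for i in range(n):
        s = sum(w[i])
        w[i] = [v / s for v in w[i]]
    return w

# Three hypothetical hotels: hotel 0 is 2 km from hotel 1 and 4 km from
# hotel 2, so hotel 1 gets twice the weight of hotel 2 in row 0.
W = row_normalized_weights([[0, 2, 4],
                            [2, 0, 6],
                            [4, 6, 0]])
```

The spatially lagged demand series W·y then enters the ARMA equations alongside each hotel's own lags.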

14.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of the out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.

15.
In this paper, we introduce functional coefficients into heterogeneous autoregressive realized volatility (HAR‐RV) models to allow the parameters to change over time. A nonparametric statistic is developed to perform a specification test. The simulation results show that our test displays reliable size and good power. Using the proposed test, we find significant time variation in the coefficients of the HAR‐RV models. Time‐varying parameter (TVP) models can significantly outperform their constant‐coefficient counterparts for longer forecasting horizons. The predictive ability of TVP models can be improved by accounting for VIX information. Copyright © 2016 John Wiley & Sons, Ltd.
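The constant-coefficient HAR-RV benchmark regresses next-day realized volatility on its daily value and its weekly and monthly averages. A sketch of the design matrix and an OLS fit on synthetic data (the 5- and 22-day lag lengths follow the standard HAR convention; the series is simulated, not the paper's data):

```python
import numpy as np

def har_rv_design(rv):
    """Build HAR-RV regressors: intercept, daily RV, weekly (5-day) and
    monthly (22-day) average RV at time t, paired with next-day RV as the
    target. Returns (X, y) aligned for least squares."""
    rv = np.asarray(rv, dtype=float)
    rows, y = [], []
    for t in range(21, len(rv) - 1):
        daily = rv[t]
        weekly = rv[t - 4:t + 1].mean()
        monthly = rv[t - 21:t + 1].mean()
        rows.append([1.0, daily, weekly, monthly])
        y.append(rv[t + 1])
    return np.array(rows), np.array(y)

# Fit on a toy positive "volatility" series; beta then yields
# one-step-ahead forecasts as X_new @ beta.
rng = np.random.default_rng(0)
rv_series = 0.1 + 0.05 * rng.random(200)
X, y = har_rv_design(rv_series)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The TVP extension studied in the paper replaces the fixed `beta` with coefficient functions that evolve over time.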

16.
This paper compares the structure of three models for estimating future growth in a time series. It is shown that a regression model gives minimum weight to the last observed growth and maximum weight to the observed growth in the middle of the sample period. A first-order integrated ARIMA model, or I(1) model, gives uniform weights to all observed growths. Finally, a second-order integrated ARIMA model gives maximum weights to the last observed growth and minimum weights to the observed growths at the beginning of the sample period. The forecasting performance of these models is compared using annual output growth rates for seven countries.

17.
There exists theoretical and empirical evidence on the efficiency and robustness of Non-negativity Restricted Least Squares combinations of forecasts. However, the computational complexity of the method hinders its widespread use in practice. We examine various optimizing and heuristic computational algorithms for estimating NRLS combination models and provide certain CPU-time reducing implementations. We empirically compare the combination weights identified by the alternative algorithms and their computational demands based on a total of more than 66,000 models estimated to combine the forecasts of 37 firm-specific accounting earnings series. The ex ante prediction accuracies of combined forecasts from the optimizing versus heuristic algorithms are compared. The effects of fit sample size, model specification, multicollinearity, correlations of forecast errors, and series and forecast variances on the relative accuracy of the optimizing versus heuristic algorithms are analysed. The results reveal that, in general, the computationally simple heuristic algorithms perform as well as the optimizing algorithms. No generalizable conclusions could be reached, however, about which algorithm should be used based on series and forecast characteristics. © 1997 John Wiley & Sons, Ltd.
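One simple heuristic for the NRLS combination problem is projected gradient descent: take gradient steps on the squared-error loss and clip any negative weight to zero. A self-contained sketch on toy data (this is an illustration of the problem, not one of the specific algorithms benchmarked in the paper):

```python
def nnls_weights(F, y, iters=5000, lr=0.01):
    """Non-negativity Restricted Least Squares combination weights:
    minimize sum_t (y_t - sum_i w_i * F[t][i])**2 subject to w_i >= 0,
    via projected gradient descent from equal starting weights."""
    n_obs, n_models = len(F), len(F[0])
    w = [1.0 / n_models] * n_models
    for _ in range(iters):
        grad = [0.0] * n_models
        for t in range(n_obs):
            resid = sum(w[i] * F[t][i] for i in range(n_models)) - y[t]
            for i in range(n_models):
                grad[i] += 2.0 * resid * F[t][i]
        # gradient step, then project onto the non-negative orthant
        w = [max(0.0, w[i] - lr * grad[i] / n_obs) for i in range(n_models)]
    return w

# Forecast 1 equals the target; forecast 2 is its sign-flipped mirror.
# The non-negativity constraint drives forecast 2's weight to zero.
F = [[1.0, -1.0], [2.0, -2.0], [3.0, -3.0]]
y = [1.0, 2.0, 3.0]
w = nnls_weights(F, y)
```

Dedicated solvers (active-set NNLS, quadratic programming) handle the same problem far more efficiently, which is precisely the computational trade-off the paper studies.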

18.
This study first constructs a framework for firm innovation that divides it into independent R&D and collaborative innovation. Using science-and-technology input–output data for high-tech industries, panel data models and panel threshold models are then applied to estimate the elasticities of independent R&D and collaborative innovation and their threshold characteristics. Five threshold models are built: independent R&D with itself as the threshold variable, collaborative innovation with itself as the threshold variable, innovation output as the threshold for independent R&D, innovation output as the threshold for collaborative innovation, and S&T human resources as the threshold for independent R&D. The results show that independent R&D performs well and exhibits economies of scale; the performance of collaborative innovation is low overall but higher in less developed regions; and the performance of S&T human resources is generally modest, with independent R&D performing better in regions with lower S&T human resource investment.

19.
This paper develops a New‐Keynesian Dynamic Stochastic General Equilibrium (NKDSGE) model for forecasting the growth rate of output, inflation, and the nominal short‐term interest rate (91 days Treasury Bill rate) for the South African economy. The model is estimated via maximum likelihood technique for quarterly data over the period of 1970:1–2000:4. Based on a recursive estimation using the Kalman filter algorithm, out‐of‐sample forecasts from the NKDSGE model are compared with forecasts generated from the classical and Bayesian variants of vector autoregression (VAR) models for the period 2001:1–2006:4. The results indicate that in terms of out‐of‐sample forecasting, the NKDSGE model outperforms both the classical and Bayesian VARs for inflation, but not for output growth and nominal short‐term interest rate. However, differences in RMSEs are not significant across the models. Copyright © 2008 John Wiley & Sons, Ltd.

20.
In the present study we report on the development and test results of a Cartesian ARIMA Search Algorithm, designed for automatic generation of univariate models for time series data within specified parameter intervals of the identification and estimation stages. Model retention is determined within a preselected set of statistics. By interpreting these statistics as dimensions of the constructed criterion space, we obtain a subset of non-dominated models according to the rule of maximum dispersion over the efficient set. The CARIMA algorithm allows free specification of the number of criteria used in the runs. The algorithm was tested with both simulated and real economic data. The results based on simulated data indicate that the precision of the CARIMA algorithm is lower for seasonal models and higher for non-seasonal ones, thus suggesting an inverse relationship between algorithm performance and model complexity.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)