Similar documents
Found 20 similar documents (search time: 31 ms)
1.
We observe that daily highs and lows of stock prices do not diverge over time and, hence, adopt the cointegration concept and the related vector error correction model (VECM) to model the daily high, the daily low, and the associated daily range data. The in-sample results attest to the importance of incorporating high–low interactions in modeling the range variable. In evaluating the out-of-sample forecast performance using both mean-squared forecast error and direction of change criteria, it is found that the VECM-based low and high forecasts offer some advantages over alternative forecasts. The VECM-based range forecasts, on the other hand, do not always dominate—the forecast rankings depend on the choice of evaluation criterion and the variables being forecast. Copyright © 2008 John Wiley & Sons, Ltd.
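The paper's setup can be illustrated with a toy error-correction regression in Python (numpy only). Everything below is a hedged sketch on simulated data: the one-lag, OLS-estimated specification and all variable names are illustrative assumptions, not the authors' exact VECM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate cointegrated daily log-highs and log-lows: both share a random-walk
# "efficient price" component, so the high-low range is stationary.
n = 500
price = np.cumsum(rng.normal(0.0, 0.01, n))
high = price + 0.02 + rng.normal(0.0, 0.005, n)
low = price - 0.02 + rng.normal(0.0, 0.005, n)

# Error-correction term: the lagged range (high - low), i.e. the cointegrating
# residual. One-lag ECM estimated equation by equation with OLS:
#   d_high_t = a_h + g_h * range_{t-1} + e_t,  d_low_t = a_l + g_l * range_{t-1} + u_t
ect = (high - low)[:-1]
X = np.column_stack([np.ones(n - 1), ect])
coef_h, *_ = np.linalg.lstsq(X, np.diff(high), rcond=None)
coef_l, *_ = np.linalg.lstsq(X, np.diff(low), rcond=None)

# One-step-ahead forecasts: a wide range pulls tomorrow's high down (g_h < 0)
# and tomorrow's low up (g_l > 0).
x_last = np.array([1.0, high[-1] - low[-1]])
f_high = high[-1] + x_last @ coef_h
f_low = low[-1] + x_last @ coef_l
```

In this simulation the range mean-reverts to its equilibrium of about 0.04, which is exactly what the error-correction coefficients pick up.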

2.
In this paper we adopt a principal components analysis (PCA) to reduce the dimensionality of the term structure and employ autoregressive (AR) models to forecast principal components which, in turn, are used to forecast swap rates. Arguing in favour of structural variation, we propose data-driven, adaptive model selection strategies based on the PCA/AR model. To evaluate ex ante forecasting performance for particular rates, distinct forecast features, such as mean squared errors, directional accuracy and directional forecast value, are considered. It turns out that, relative to benchmark models, the adaptive approach offers additional forecast accuracy in terms of directional accuracy and directional forecast value. Copyright © 2009 John Wiley & Sons, Ltd.
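A minimal sketch of the PCA/AR pipeline on a simulated rate panel (the two-component truncation, the AR(1) lag choice, and all names are assumptions for illustration; the paper's adaptive strategies select such settings data-dependently):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated panel of 8 "swap rates": a common AR(1) level factor plus noise.
T, m = 300, 8
factor = np.zeros(T)
for t in range(1, T):
    factor[t] = 0.9 * factor[t - 1] + rng.normal(0.0, 0.1)
loadings = np.linspace(0.5, 1.5, m)
rates = np.outer(factor, loadings) + rng.normal(0.0, 0.02, (T, m))

# PCA via SVD of the demeaned panel.
mu = rates.mean(axis=0)
U, s, Vt = np.linalg.svd(rates - mu, full_matrices=False)
k = 2                              # keep two principal components
pcs = U[:, :k] * s[:k]             # component scores, T x k

# Fit an AR(1) to each component by OLS and forecast one step ahead.
pc_fc = np.empty(k)
for j in range(k):
    yj, xj = pcs[1:, j], pcs[:-1, j]
    phi = (xj @ yj) / (xj @ xj)
    pc_fc[j] = phi * pcs[-1, j]

# Map the component forecasts back into rate forecasts.
rate_fc = mu + pc_fc @ Vt[:k]
```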

3.
Although both direct multi-step-ahead forecasting and iterated one-step-ahead forecasting are popular methods for predicting future values of a time series, it is not clear that the direct method is superior in practice, even though from a theoretical perspective it has lower mean squared error (MSE). A given model can be fitted according to either a multi-step or a one-step forecast error criterion, and we show here that discrepancies in performance between direct and iterative forecasting arise chiefly from the method of fitting and are dictated by the nuances of the model's misspecification. We derive new formulas for quantifying iterative forecast MSE, and present a new approach for assessing asymptotic forecast MSE. Finally, the direct and iterative methods are compared on a retail series, which illustrates the strengths and weaknesses of each approach. Copyright © 2015 John Wiley & Sons, Ltd.
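The two approaches can be contrasted in a small simulation. With a correctly specified AR(1) the iterated and direct projections deliver similar MSE; the paper's point is that meaningful gaps open up under misspecification and depend on which criterion the model was fitted to. All settings below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) data, forecast horizon h = 3.
T, h, phi = 400, 3, 0.8
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()

train, test = y[:300], y[300:]

# Iterated: fit the one-step model by OLS, then apply it h times (phi^h).
x1, y1 = train[:-1], train[1:]
phi_iter = (x1 @ y1) / (x1 @ x1)

# Direct: regress y_{t+h} on y_t, one separate projection per horizon.
xh, yh = train[:-h], train[h:]
phi_dir = (xh @ yh) / (xh @ xh)

origins, targets = test[:-h], test[h:]
mse_iter = np.mean((targets - phi_iter ** h * origins) ** 2)
mse_dir = np.mean((targets - phi_dir * origins) ** 2)
```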

4.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of change. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold-out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction of change forecasts (model averaging). We finally apply the aforementioned recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.
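As a hedged sketch of the mechanics, the following weights a tiny model space by the closed-form Bayes factor of each model against the null under Zellner's g-prior (the g = n choice and the simulated data are assumptions for illustration; the paper's predictive-likelihood weighting, calibrated on a hold-out sample, replaces the marginal-likelihood weights computed here):

```python
import numpy as np

rng = np.random.default_rng(3)

# Data generated from predictor x1 alone; x2 is irrelevant.
n = 100
X = rng.normal(size=(n, 2))
y = 1.0 + 0.8 * X[:, 0] + rng.normal(0.0, 1.0, n)

def r_squared(y, X_sub):
    """R^2 of an OLS fit (with intercept) on the columns in X_sub."""
    Z = np.column_stack([np.ones(len(y)), X_sub])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1.0 - (resid @ resid) / tss

def log_bayes_factor(r2, k, n, g):
    """Log Bayes factor of a k-predictor model vs. the intercept-only null
    under Zellner's g-prior (closed form, as in Liang et al., 2008)."""
    return 0.5 * (n - 1 - k) * np.log1p(g) - 0.5 * (n - 1) * np.log(1.0 + g * (1.0 - r2))

g = float(n)  # unit-information choice, an assumption for this sketch
models = {"null": [], "x1": [0], "x2": [1], "x1+x2": [0, 1]}
log_bf = {name: log_bayes_factor(r_squared(y, X[:, cols]) if cols else 0.0,
                                 len(cols), n, g)
          for name, cols in models.items()}

# Normalise to posterior model probabilities under a flat model prior.
lb = np.array(list(log_bf.values()))
weights = np.exp(lb - lb.max())
weights /= weights.sum()
post = dict(zip(log_bf, weights))
```

The median probability model then keeps every predictor whose summed inclusion probability exceeds 0.5, while model averaging combines all four models with these weights.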

5.
This study establishes a benchmark for short-term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by the k-nearest neighbors method for 1 week ahead; a vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy for timing the sales based on price forecasts could increase the net profit of a salmon farmer by around 7%.
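The k-nearest-neighbours forecast is simple enough to sketch in a few lines (simulated price series; lags = 4 and k = 5 are arbitrary illustrative choices, not the study's tuned values):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated weekly log-price with mild mean reversion (a stand-in for the
# salmon spot price; all settings are illustrative).
T = 300
p = np.zeros(T)
for t in range(1, T):
    p[t] = 0.95 * p[t - 1] + rng.normal(0.0, 0.05)

def knn_forecast(series, lags=4, k=5):
    """Average the continuations of the k historical lag-patterns closest
    (in Euclidean distance) to the most recent pattern."""
    n = len(series)
    patterns = np.array([series[i:i + lags] for i in range(n - lags)])
    nxt = series[lags:]                      # value that followed each pattern
    d = np.linalg.norm(patterns - series[-lags:], axis=1)
    idx = np.argsort(d)[:k]
    return nxt[idx].mean()

fc = knn_forecast(p)
```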

6.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state-space models. Out-of-sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean-squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change severely. Copyright © 2007 John Wiley & Sons, Ltd.

7.
In examining stochastic models for commodity prices, central questions often revolve around time-varying trend, stochastic convenience yield and volatility, and mean reversion. This paper seeks to assess and compare alternative approaches to modelling these effects, with focus on forecast performance. Three specifications are considered: (i) random-walk models with GARCH and normal or Student-t innovations; (ii) Poisson-based jump-diffusion models with GARCH and normal or Student-t innovations; and (iii) mean-reverting models that allow for uncertainty in equilibrium price. Our empirical application makes use of aluminium spot and futures price series at daily and weekly frequencies. Results show: (i) models with stochastic convenience yield outperform all other competing models, and for all forecast horizons; (ii) the use of futures prices does not always yield lower forecast error values compared to the use of spot prices; and (iii) within the class of (G)ARCH random-walk models, no model uniformly dominates the other. Copyright © 2008 John Wiley & Sons, Ltd.

8.
This paper discusses the use of preliminary data in econometric forecasting. The standard practice is to ignore the distinction between preliminary and final data; forecasts that do so are termed naïve forecasts here. It is shown that in dynamic models a multistep-ahead naïve forecast can achieve a lower mean square error than a single-step-ahead one, as it is less affected by the measurement noise embedded in the preliminary observations. The minimum mean square error forecasts are obtained by optimally combining the information provided by the model and the new information contained in the preliminary data, which can be done within the state space framework as suggested in numerous papers. Here two simple, in general suboptimal, methods of combining the two sources of information are considered: modifying the forecast initial conditions by means of standard regressions and using intercept corrections. The issues are explored using Italian national accounts data and the Bank of Italy Quarterly Econometric Model. Copyright © 2006 John Wiley & Sons, Ltd.
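The gain from combining the two information sources can be seen in a stylised precision-weighting example (the AR(1) model, known error variances, and independence of model and measurement errors are simplifying assumptions; the paper handles the general case optimally in a state space framework):

```python
import numpy as np

rng = np.random.default_rng(5)

# True final figures follow an AR(1); the preliminary release adds noise.
T, phi = 2000, 0.7
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.normal()
prelim = y + rng.normal(0.0, 0.8, T)     # preliminary = final + measurement noise

# Two one-step-ahead views of y_t: the model forecast phi * y_{t-1}
# (error variance q = 1.0) and the preliminary figure itself (error
# variance r = 0.64). Their errors are independent, so the minimum-MSE
# combination weights each source by its precision 1/variance.
q, r = 1.0, 0.8 ** 2
w = (1.0 / q) / (1.0 / q + 1.0 / r)      # weight on the model forecast
f = phi * y[:-1]
z = prelim[1:]
combo = w * f + (1.0 - w) * z

target = y[1:]
mse_model = np.mean((target - f) ** 2)
mse_naive = np.mean((target - z) ** 2)
mse_combo = np.mean((target - combo) ** 2)
```

Because the two error sources are independent, the combined MSE is qr/(q+r), below both individual MSEs, which is the intuition behind the paper's optimal combination.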

9.
This paper examines the relationship between stock prices and commodity prices and whether this can be used to forecast stock returns. As both prices are linked to expected future economic performance they should exhibit a long-run relationship. Moreover, changes in sentiment towards commodity investing may affect the nature of the response to disequilibrium. Results support cointegration between stock and commodity prices, while Bai–Perron tests identify breaks in the forecast regression. Forecasts are computed using a standard fixed (static) in-sample/out-of-sample approach and by both recursive and rolling regressions, which incorporate the effects of changing forecast parameter values. A range of model specifications and forecast metrics are used. The historical mean model outperforms the forecast models in both the static and recursive approaches. However, in the rolling forecasts, those models that incorporate information from the long-run stock price/commodity price relationship outperform both the historical mean and other forecast models. Of note, the historical mean still performs relatively well compared to standard forecast models that include the dividend yield and short-term interest rates but not the stock/commodity price ratio. Copyright © 2014 John Wiley & Sons, Ltd.

10.
Financial market time series exhibit high degrees of non-linear variability, and frequently have fractal properties. When the fractal dimension of a time series is non-integer, this is associated with two features: (1) inhomogeneity—extreme fluctuations at irregular intervals, and (2) scaling symmetries—proportionality relationships between fluctuations over different separation distances. In multivariate systems such as financial markets, fractality is stochastic rather than deterministic, and generally originates as a result of multiplicative interactions. Volatility diffusion models with multiple stochastic factors can generate fractal structures. In some cases, such as exchange rates, the underlying structural equation also gives rise to fractality. Fractal principles can be used to develop forecasting algorithms. The forecasting method that yields the best results here is the state transition-fitted residual scale ratio (ST-FRSR) model. A state transition model is used to predict the conditional probability of extreme events. Ratios of rates of change at proximate separation distances are used to parameterize the scaling symmetries. Forecasting experiments are run using intraday exchange rate futures contracts measured at 15-minute intervals. The overall forecast error is reduced on average by up to 7% and in one instance by nearly a quarter. However, the forecast error during the outlying events is reduced by 39% to 57%. The ST-FRSR reduces the predictive error primarily by capturing extreme fluctuations more accurately. Copyright © 2004 John Wiley & Sons, Ltd.

11.
This paper proposes an adjustment of linear autoregressive conditional mean forecasts that exploits the predictive content of uncorrelated model residuals. The adjustment is motivated by non-Gaussian characteristics of model residuals, and implemented in a semiparametric fashion by means of conditional moments of simulated bivariate distributions. A pseudo ex ante forecasting comparison is conducted for a set of 494 macroeconomic time series recently collected by Dees et al. (Journal of Applied Econometrics 2007; 22: 1–38). In total, 10,374 time series realizations are contrasted against competing short-, medium- and longer-term purely autoregressive and adjusted predictors. With regard to all forecast horizons, the adjusted predictions consistently outperform conditionally Gaussian forecasts according to cross-sectional mean group evaluation of absolute forecast errors and directional accuracy. Copyright © 2012 John Wiley & Sons, Ltd.

12.
This paper estimates, using stochastic simulation and a multi-country macroeconometric model, the fraction of the forecast error variance of output changes and the fraction of the forecast error variance of inflation that are due to unpredictable asset price changes. The results suggest that between about 25% and 37% of the forecast error variance of output growth over eight quarters is due to asset price changes and between about 33% and 60% of the forecast error variance of inflation over eight quarters is due to asset price changes. These estimates provide limits to the accuracy that can be expected from macroeconomic forecasting. Copyright © 2011 John Wiley & Sons, Ltd.

13.
Most non-linear techniques give good in-sample fits to exchange rate data but are usually outperformed by random walks or random walks with drift when used for out-of-sample forecasting. In the case of regime-switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide some analytical results for the case of a simple switching model, the segmented trend model. It requires only a small misclassification, when forecasting which regime the world will be in, to lose any advantage from knowing the correct model specification. To illustrate this we discuss some results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations to the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that the mean square error is not an appropriate way to evaluate forecast performance for non-linear models. Copyright © 1999 John Wiley & Sons, Ltd.
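The misclassification argument can be reproduced with a stylised simulation (all parameters illustrative): with trend ±μ, noise variance σ² and misclassification probability p, the switching forecaster's MSE is approximately σ² + 4μ²p, while the no-change random walk forecast has MSE σ² + μ², so knowing the true model stops paying once p exceeds about 1/4:

```python
import numpy as np

rng = np.random.default_rng(6)

# Segmented-trend DGP: each period the trend is +mu (up regime) or -mu
# (down regime), plus Gaussian noise; regimes are equally likely.
T, mu, sigma = 100_000, 1.0, 1.0
regime = rng.integers(0, 2, T) * 2 - 1          # +1 or -1
dy = mu * regime + rng.normal(0.0, sigma, T)

def mse_switching(p_misclass):
    """Forecast MSE when the regime call is wrong with probability
    p_misclass (a wrong call flips the predicted trend sign)."""
    wrong = rng.random(T) < p_misclass
    predicted = np.where(wrong, -regime, regime)
    return np.mean((dy - mu * predicted) ** 2)

mse_perfect = mse_switching(0.0)     # ~ sigma^2
mse_p30 = mse_switching(0.30)        # ~ sigma^2 + 4 * mu^2 * 0.30
mse_rw = np.mean(dy ** 2)            # no-change forecast: ~ sigma^2 + mu^2
```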

14.
We extend the analysis of Christoffersen and Diebold (1998) on long-run forecasting in cointegrated systems to multicointegrated systems. For the forecast evaluation we consider several loss functions, each of which has a particular interpretation in the context of stock-flow models where multicointegration typically occurs. A loss function based on a standard mean square forecast error (MSFE) criterion focuses on the forecast errors of the flow variables alone. Likewise, a loss function based on the triangular representation of cointegrated systems (suggested by Christoffersen and Diebold) considers forecast errors associated with changes in both stock (modelled through the cointegrating restrictions) and flow variables. We suggest a new loss function based on the triangular representation of multicointegrated systems which further penalizes deviations from the long-run relationship between the levels of stock and flow variables as well as changes in the flow variables. Among other things, we show that if one is concerned with all possible long-run relations between stock and flow variables, this new loss function entails high and increasing forecasting gains compared to both the standard MSFE criterion and Christoffersen and Diebold's criterion. This paper demonstrates the importance of carefully selecting loss functions in forecast evaluation of models involving stock and flow variables. Copyright © 2004 John Wiley & Sons, Ltd.

15.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning-curve context. Copyright © 1999 John Wiley & Sons, Ltd.
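The decision aid given to subjects, a fitted learning curve, can be sketched as a log-linear power-law fit (simulated 80% curve; parameters and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Wright's learning curve: unit cost c(x) = c1 * x**b at cumulative unit x;
# b = log2(0.8) gives an "80% curve" (cost falls 20% per doubling of output).
c1, b = 100.0, np.log2(0.8)
x = np.arange(1, 41)
cost = c1 * x ** b * np.exp(rng.normal(0.0, 0.03, x.size))  # multiplicative noise

# Fit by OLS on logs: log c = log c1 + b * log x.
A = np.column_stack([np.ones(x.size), np.log(x)])
coef, *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)
c1_hat, b_hat = np.exp(coef[0]), coef[1]

# Decision-aid projection: unit costs for units 41-60.
x_new = np.arange(41, 61)
proj = c1_hat * x_new ** b_hat
```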

16.
This study investigates the forecasting performance of the GARCH(1,1) model by adding an effective covariate. Based on the assumption that many volatility predictors are available to help forecast the volatility of a target variable, this study shows how to construct a covariate from these predictors and plug it into the GARCH(1,1) model. This study presents a method of building a covariate such that it contains the maximum possible amount of information from the predictors for forecasting volatility. The loading of the covariate constructed by the proposed method is simply the eigenvector of a matrix. The proposed method enjoys the advantages of easy implementation and interpretation. Simulations and empirical analysis verify that the proposed method performs better than other methods for forecasting the volatility, and the results are quite robust to model misspecification. Specifically, the proposed method reduces the mean square error of the GARCH(1,1) model by 30% for forecasting the volatility of the S&P 500 index. The proposed method is also useful in improving the volatility forecasting of several GARCH-family models and for forecasting the value-at-risk. Copyright © 2013 John Wiley & Sons, Ltd.
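How a covariate enters the GARCH(1,1) variance recursion can be sketched as follows (parameters are assumed known for the illustration; the paper estimates them, and builds the covariate as an eigenvector-weighted combination of many predictors rather than the single raw series used here):

```python
import numpy as np

rng = np.random.default_rng(8)

# Simulate a GARCH(1,1)-X process: an exogenous covariate x shifts next
# period's conditional variance.
T = 2000
omega, alpha, beta, gamma = 0.05, 0.08, 0.85, 0.10
x = np.abs(rng.normal(0.0, 1.0, T))      # stand-in volatility predictor
eps = np.zeros(T)
sig2 = np.zeros(T)
sig2[0] = omega / (1.0 - alpha - beta)
for t in range(1, T):
    sig2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sig2[t - 1] + gamma * x[t - 1]
    eps[t] = np.sqrt(sig2[t]) * rng.normal()

def garch_filter(eps, omega, alpha, beta, gamma=0.0, x=None):
    """GARCH(1,1)(-X) variance recursion; returns fitted variances and the
    one-step-ahead variance forecast."""
    n = len(eps)
    h = np.empty(n)
    h[0] = eps.var()
    for t in range(1, n):
        h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
        if x is not None:
            h[t] += gamma * x[t - 1]
    fc = omega + alpha * eps[-1] ** 2 + beta * h[-1]
    if x is not None:
        fc += gamma * x[-1]
    return h, fc

h_plain, fc_plain = garch_filter(eps, omega, alpha, beta)
h_x, fc_x = garch_filter(eps, omega, alpha, beta, gamma, x)

# With the covariate, the filtered variance tracks the true variance better.
mse_plain = np.mean((h_plain - sig2) ** 2)
mse_x = np.mean((h_x - sig2) ** 2)
```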

17.
We evaluate forecasting models of US business fixed investment spending growth over the recent 1995:1–2004:2 out-of-sample period. The forecasting models are based on the conventional Accelerator, Neoclassical, Average Q, and Cash-Flow models of investment spending, as well as real stock prices and excess stock return predictors. The real stock price model typically generates the most accurate forecasts, and forecast-encompassing tests indicate that this model contains most of the information useful for forecasting investment spending growth relative to the other models at longer horizons. In a robustness check, we also evaluate the forecasting performance of the models over two alternative out-of-sample periods: 1975:1–1984:4 and 1985:1–1994:4. A number of different models produce the most accurate forecasts over these alternative out-of-sample periods, indicating that while the real stock price model appears particularly useful for forecasting the recent behavior of investment spending growth, it may not continue to perform well in future periods. Copyright © 2007 John Wiley & Sons, Ltd.
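A forecast-encompassing test of the kind used in the paper can be sketched as a combining regression with conventional OLS standard errors (simulated forecasts; all settings illustrative):

```python
import numpy as np

rng = np.random.default_rng(10)

# Two competing forecasts of the same target: f1 is informative, f2 adds
# little beyond what f1 already contains. In the encompassing regression
#   y_t = c + w1 * f1_t + w2 * f2_t + u_t
# f1 "encompasses" f2 if w2 is statistically indistinguishable from zero.
T = 500
signal = rng.normal(0.0, 1.0, T)
y = signal + rng.normal(0.0, 0.5, T)
f1 = signal + rng.normal(0.0, 0.3, T)        # tracks the signal closely
f2 = 0.2 * signal + rng.normal(0.0, 1.0, T)  # mostly noise

X = np.column_stack([np.ones(T), f1, f2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
s2 = resid @ resid / (T - 3)                 # residual variance
cov = s2 * np.linalg.inv(X.T @ X)            # conventional OLS covariance
t_stats = coef / np.sqrt(np.diag(cov))       # t-stats on the combining weights
```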

18.
Given the evidence that an infinite-order vector autoregression setting is more realistic in time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h-step-ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT under the assumption that an infinite-order vector autoregression can be approximated, under suitable conditions, with a sequence of truncated models, where nT is increasing with sample size. Using finite-order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small-sample distribution of the lag-order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, since they yield the lowest h-step-ahead forecast mean squared errors for the individual components of the held-out pseudo-data to be forecast. Thus estimating the actual autoregressive order as well as the best forecasting model can be achieved with the same selection procedure. Such results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy of strongly consistent criteria such as the Schwarz information criterion, as claimed in the literature, is overstated. Our criteria are new tools extending those previously existing in the literature and can suitably be used in a variety of practical situations. Copyright © 2015 John Wiley & Sons, Ltd.
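The idea of selecting the lag order against an h-step forecast MSE criterion, rather than a one-step information criterion, can be sketched on a finite-order AR. This hold-out version is a simplified stand-in for the paper's analytic criteria, with illustrative settings:

```python
import numpy as np

rng = np.random.default_rng(9)

# AR(2) data; select the AR order by minimising the 3-step-ahead forecast
# MSE on a validation segment instead of a one-step information criterion.
T, h = 600, 3
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.2 * y[t - 1] + 0.7 * y[t - 2] + rng.normal()

def fit_ar(series, p):
    """OLS AR(p) coefficients (no intercept; the process is zero-mean)."""
    n = len(series)
    X = np.column_stack([series[p - j:n - j] for j in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, series[p:], rcond=None)
    return coef

def iterate_forecast(coef, history, h):
    """Iterate the fitted AR recursion h steps past the end of history."""
    buf = list(history[-len(coef):])
    for _ in range(h):
        buf.append(sum(c * buf[-j] for j, c in enumerate(coef, start=1)))
    return buf[-1]

train, valid = y[:400], y[400:]
mse = {}
for p in range(1, 7):
    coef = fit_ar(train, p)
    errs = [valid[o + h] - iterate_forecast(coef, np.concatenate([train, valid[:o + 1]]), h)
            for o in range(len(valid) - h)]
    mse[p] = float(np.mean(np.square(errs)))

best_p = min(mse, key=mse.get)   # order with the lowest h-step forecast MSE
```

Under-fitting is costly at longer horizons here: the AR(1) truncation misses the strong second lag, so its 3-step MSE is clearly worse than the AR(2)'s.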

19.
A Bayesian structural model with two components, a multivariate mean-reverting diffusion process (MMRD) and a binary probit model with a latent Markov regime-switching process (BPMRS), is proposed to forecast the occurrence of algal blooms. The model has three features: (a) the occurrence probability of an algal bloom is forecast directly from oceanographic parameters, rather than through special indicators such as phytoplankton or chlorophyll-a as in traditional approaches; (b) daily oceanographic parameters are augmented from data collected every 2 weeks using the MMRD, which solves the practical problem that daily oceanographic parameters are unavailable; (c) the BPMRS captures unobservable factors that affect algal bloom occurrence and thereby improves forecast accuracy. We use panel data collected in Tolo Harbour, Hong Kong, to validate the model. The model demonstrates good performance in out-of-sample rolling forecasts, especially for algal blooms that persist for longer periods, which severely damage fisheries and the marine environment.

20.
Using the 'standard' approach to forecasting in the vector autoregressive moving average model, we establish basic general results on exact finite sample forecasts and their mean squared error matrices. Comparison between the exact and conditional methods of initiating the finite sample forecast calculations is presented, and a few illustrative cases are given.
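For the pure VAR(1) special case the point forecasts and their MSE matrices follow a simple recursion (a sketch only; the paper treats the general VARMA case and the exact versus conditional initialisation of finite-sample forecasts, which this ignores):

```python
import numpy as np

# Forecasting a bivariate VAR(1)  y_t = A y_{t-1} + e_t,  e_t ~ N(0, Sigma):
# the h-step point forecast is A^h y_T and its MSE matrix satisfies
#   M(1) = Sigma,   M(h) = Sigma + A @ M(h-1) @ A.T
A = np.array([[0.5, 0.1],
              [0.0, 0.7]])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])
y_T = np.array([2.0, -1.0])

def forecast_and_mse(A, Sigma, y_T, h):
    """Point forecasts and exact MSE matrices for horizons 1..h."""
    fcs, mses = [], []
    fc, M = y_T.copy(), np.zeros_like(Sigma)
    for _ in range(h):
        fc = A @ fc                   # advance the point forecast one step
        M = Sigma + A @ M @ A.T       # accumulate forecast error covariance
        fcs.append(fc.copy())
        mses.append(M.copy())
    return fcs, mses

fcs, mses = forecast_and_mse(A, Sigma, y_T, 8)
```

Because A is stable here, the MSE matrices grow monotonically towards the unconditional covariance of the process.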
