Similar Articles
20 similar articles found (search time: 31 ms)
1.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them focus on forecasting returns and evaluate performance solely on statistical criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods, ranging from simple techniques to time-varying and regression techniques based on the past performance of the single models. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis yields an important conclusion: although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
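
A minimal sketch of one simple performance-based scheme of the kind surveyed above: weight each model's current volatility forecast by the inverse of its past mean squared error. The weighting rule and all numbers here are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def inverse_mse_weights(past_errors):
    """Weight each model by the inverse of its past mean squared
    forecast error, normalised to sum to one (illustrative rule)."""
    mse = np.mean(np.asarray(past_errors) ** 2, axis=1)
    w = 1.0 / mse
    return w / w.sum()

def combine(forecasts, weights):
    """Combined forecast = weighted average of single-model forecasts."""
    return float(np.dot(weights, forecasts))

# Two hypothetical volatility models and their past one-step errors
past_errors = [[0.10, -0.20, 0.15],   # model A: small errors
               [0.40, 0.30, -0.50]]   # model B: large errors
w = inverse_mse_weights(past_errors)
combined = combine([0.012, 0.020], w)  # current-period forecasts
```

A model with a better track record receives a larger weight, so the combined forecast lies closer to its prediction.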

2.
The TFT‐LCD (thin‐film transistor–liquid crystal display) industry is one of the key global industries with products that have high clock speed. In this research, the LCD monitor market is considered for an empirical study on hierarchical forecasting (HF). The proposed HF methodology consists of five steps. First, the three hierarchical levels of the LCD monitor market are identified. Second, several exogenously driven factors that significantly affect the demand for LCD monitors are identified at each level of product hierarchy. Third, the three forecasting techniques—regression analysis, transfer function, and simultaneous equations model—are combined to forecast future demand at each hierarchical level. Fourth, various forecasting approaches and disaggregating proportion methods are adopted to obtain consistent demand forecasts at each hierarchical level. Finally, the forecast errors with different forecasting approaches are assessed in order to determine the best forecasting level and the best forecasting approach. The findings show that the best forecast results can be obtained by using the middle‐out forecasting approach. These results could guide LCD manufacturers and brand owners on ways to forecast future market demands. Copyright 2008 John Wiley & Sons, Ltd.
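
The bottom-up and top-down directions that such hierarchical forecasting reconciles can be sketched as follows. The average-historical-share rule is just one of the disaggregating-proportion methods studies like this compare, and the sales figures are invented.

```python
def bottom_up(item_forecasts):
    """Aggregate item-level forecasts up to the total."""
    return sum(item_forecasts.values())

def top_down(total_forecast, history):
    """Disaggregate a total forecast by each item's average
    historical share (one simple proportion method)."""
    grand_total = sum(sum(sales) for sales in history.values())
    return {item: total_forecast * sum(sales) / grand_total
            for item, sales in history.items()}

# Invented historical sales for two monitor sizes over two periods
history = {"17-inch": [30, 34], "19-inch": [90, 102]}
shares = top_down(256.0, history)
```

A middle-out approach, as in the study, forecasts at an intermediate level and then applies both directions: aggregating upwards and disaggregating downwards.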

3.
Many publications on tourism forecasting have appeared during the past twenty years. The purpose of this article is to organize and summarize that scattered literature. General conclusions are also drawn from the studies to help those wishing to develop tourism forecasts of their own. The forecasting techniques discussed include time series models, econometric causal models, the gravity model and expert-opinion techniques. The major conclusions are that time series models are the simplest and least costly (and therefore most appropriate for practitioners); the gravity model is best suited to handle international tourism flows (and will be most useful to governments and tourism agencies); and expert-opinion methods are useful when data are unavailable. Further research is needed on the use of economic indicators in tourism forecasting, on the development of attractivity and emissiveness indexes for use in gravity and econometric models and on empirical comparisons among the different methods.

4.
Hierarchical time series arise in various fields such as manufacturing and services when the products or services can be hierarchically structured. "Top-down" and "bottom-up" forecasting approaches are often used for forecasting such hierarchical time series. In this paper, we develop a new hybrid approach (HA) with step-size aggregation for hierarchical time series forecasting. The new approach is a weighted average of the two classical approaches with the weights being optimally chosen for all the series at each level of the hierarchy to minimize the variance of the forecast errors. The independent selection of weights for all the series at each level can make the HA inconsistent when aggregating across the hierarchy. To address this issue, we introduce a step-size aggregate factor that represents the relationship between forecasts of two consecutive levels of the hierarchy. The key advantage of the proposed HA is that it captures the structure of the hierarchy inherently, owing to the combination of the hierarchical approaches rather than independent forecasts of all the series at each level. We demonstrate the performance of the new approach by applying it to the monthly data of the 'Industrial' category of the M3-Competition as well as to Pakistan energy consumption data.
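
The idea of weighting the two classical approaches to minimise forecast-error variance can be illustrated with the textbook Bates–Granger weight for combining two unbiased forecasts. This is a sketch of the weighting principle only, not the paper's step-size aggregation itself, and the variances are invented.

```python
def optimal_weight(var_a, var_b, cov=0.0):
    """Variance-minimising weight on forecast A when combining two
    unbiased forecasts with error variances var_a and var_b."""
    return (var_b - cov) / (var_a + var_b - 2.0 * cov)

def hybrid_forecast(f_bottom_up, f_top_down, var_bu, var_td, cov=0.0):
    """Weighted average of the bottom-up and top-down forecasts."""
    w = optimal_weight(var_bu, var_td, cov)
    return w * f_bottom_up + (1.0 - w) * f_top_down

# Invented example: bottom-up errors are noisier (variance 3 vs 1),
# so the combination leans towards the top-down forecast
f = hybrid_forecast(120.0, 100.0, var_bu=3.0, var_td=1.0)
```

With equal error variances the weight collapses to a simple 50/50 average.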

5.
Forecast combination based on a model selection approach is discussed and evaluated. In addition, a combination approach based on ex ante predictive ability is outlined. The model selection approach which we examine is based on the Schwarz (SIC) or Akaike (AIC) information criteria. Monte Carlo experiments based on combination forecasts constructed using possibly misspecified models suggest that the SIC offers a potentially useful combination approach, and that further investigation is warranted. For example, combination forecasts from a simple averaging approach MSE‐dominate SIC combination forecasts less than 25% of the time in most cases, while other 'standard' combination approaches fare even worse. Alternative combination approaches are also compared by conducting forecasting experiments using nine US macroeconomic variables. In particular, artificial neural networks (ANN), linear models, and professional forecasts are used to form real‐time forecasts of the variables, and it is shown via a series of experiments that SIC, t‐statistic, and averaging combination approaches dominate various other combination approaches. An additional finding is that while ANN models may not MSE‐dominate simpler linear models, combinations of forecasts from these two models outperform either individual forecast, for a subset of the economic variables examined. Copyright © 2001 John Wiley & Sons, Ltd.
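
One common way to turn an information criterion into combination weights (akin to BIC model-averaging weights) is sketched below; the paper's own SIC combination rule may differ, and the residual sums of squares are invented.

```python
import math

def sic(n, rss, k):
    """Schwarz information criterion for a Gaussian model:
    n * ln(RSS / n) + k * ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

def sic_weights(sic_values):
    """Exponential-difference weights: models with lower SIC
    receive exponentially larger weight."""
    best = min(sic_values)
    raw = [math.exp(-0.5 * (s - best)) for s in sic_values]
    total = sum(raw)
    return [r / total for r in raw]

# Three candidate models fitted to n = 100 observations (invented RSS)
sics = [sic(100, 52.0, 2), sic(100, 50.0, 4), sic(100, 49.5, 8)]
weights = sic_weights(sics)
```

Here the most parsimonious model has the lowest SIC and therefore dominates the combination, even though the larger models fit slightly better.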

6.
In this paper we aim to improve existing empirical exchange rate models by accounting for uncertainty with respect to the underlying structural representation. Within a flexible Bayesian framework, our modeling approach assumes that different regimes are characterized by commonly used structural exchange rate models, with transitions across regimes being driven by a Markov process. We assume a time-varying transition probability matrix with transition probabilities depending on a measure of the monetary policy stance of the central bank at home and in the USA. We apply this model to a set of eight exchange rates against the US dollar. In a forecasting exercise, we show that model evidence varies over time, and a model approach that takes this empirical evidence seriously yields more accurate density forecasts for most currency pairs considered.

7.
The vector multiplicative error model (vector MEM) is capable of analyzing and forecasting multidimensional non‐negative valued processes. Usually its parameters are estimated by generalized method of moments (GMM) and maximum likelihood (ML) methods. However, the estimations could be heavily affected by outliers. To overcome this problem, in this paper an alternative approach, the weighted empirical likelihood (WEL) method, is proposed. This method uses moment conditions as constraints, and the outliers are detected automatically by performing a k‐means clustering on Oja depth values of innovations. The performance of WEL is evaluated against those of GMM and ML methods through extensive simulations, in which three different kinds of additive outliers are considered. Moreover, the robustness of WEL is demonstrated by comparing the volatility forecasts of the three methods on 10‐minute returns of the S&P 500 index. The results from both the simulations and the S&P 500 volatility forecasts favor the WEL method. Copyright © 2012 John Wiley & Sons, Ltd.

8.
Artificial neural network modelling has recently attracted much attention as a new technique for estimation and forecasting in economics and finance. The chief advantages of this new approach are that such models can usually find a solution for very complex problems, and that they are free from the assumption of linearity that is often adopted to make the traditional methods tractable. In this paper we compare the performance of Back‐Propagation Artificial Neural Network (BPN) models with the traditional econometric approaches to forecasting the inflation rate. Of the traditional econometric models we use a structural reduced‐form model, an ARIMA model, a vector autoregressive model, and a Bayesian vector autoregression model. We compare each econometric model with a hybrid BPN model which uses the same set of variables. Dynamic forecasts are compared for three different horizons: one, three and twelve months ahead. Root mean squared errors and mean absolute errors are used to compare quality of forecasts. The results show the hybrid BPN models are able to forecast as well as all the traditional econometric methods, and to outperform them in some cases. Copyright © 2000 John Wiley & Sons, Ltd.
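
The two accuracy measures used in the comparison are simple to state; a minimal sketch with invented forecast errors for two competing models:

```python
def rmse(errors):
    """Root mean squared error."""
    return (sum(e * e for e in errors) / len(errors)) ** 0.5

def mae(errors):
    """Mean absolute error."""
    return sum(abs(e) for e in errors) / len(errors)

# Invented one-month-ahead inflation forecast errors
errs_bpn = [0.1, -0.2, 0.1, 0.0]     # hypothetical hybrid BPN model
errs_arima = [0.3, -0.1, 0.2, -0.2]  # hypothetical ARIMA model
```

RMSE penalises large errors more heavily than MAE, which is why the two measures can rank competing models differently.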

9.
Longevity risk has become one of the major risks facing the insurance and pensions markets globally. The trade in longevity risk is underpinned by accurate forecasting of mortality rates. Using techniques from macroeconomic forecasting we propose a dynamic factor model of mortality that fits and forecasts age‐specific mortality rates parsimoniously. We compare the forecasting quality of this model against the Lee–Carter model and its variants. Our results show the dynamic factor model generally provides superior forecasts when applied to international mortality data. We also show that existing multifactorial models have superior fit but their forecasting performance worsens as more factors are added. The dynamic factor approach used here can potentially be further improved upon by applying an appropriate stopping rule for the number of static and dynamic factors. Copyright © 2013 John Wiley & Sons, Ltd.
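
The Lee–Carter benchmark mentioned above models log mortality as ln m(x,t) = a_x + b_x·k_t and is classically fitted by a singular value decomposition of the centred log-rates. A minimal sketch, using the usual normalisation that the b_x sum to one, with an invented rank-one example:

```python
import numpy as np

def lee_carter_fit(log_mx):
    """Fit ln m(x,t) = a_x + b_x * k_t: a_x is the age-specific mean,
    (b_x, k_t) come from the leading SVD component of the centred rates."""
    ax = log_mx.mean(axis=1)
    centred = log_mx - ax[:, None]
    U, s, Vt = np.linalg.svd(centred, full_matrices=False)
    bx = U[:, 0] / U[:, 0].sum()        # normalise so sum(b_x) = 1
    kt = s[0] * Vt[0] * U[:, 0].sum()   # rescale k_t to compensate
    return ax, bx, kt

# Synthetic, exactly rank-one mortality surface (invented numbers)
ax0 = np.array([-3.0, -4.0, -5.0])
bx0 = np.array([0.5, 0.3, 0.2])        # sums to one
kt0 = np.array([2.0, 0.0, -2.0, 0.0])  # sums to zero
log_mx = ax0[:, None] + np.outer(bx0, kt0)
ax, bx, kt = lee_carter_fit(log_mx)
```

Forecasting then reduces to extrapolating the single time index k_t, typically with a random walk with drift; the dynamic factor model in the paper generalises this one-factor structure.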

10.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of the out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher relative to the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.

11.
This study addresses problems concerning the forecasting of net migration in the preparation of population forecasts. "As the width of forecast intervals for migration in single years differs strongly from that of an interval for average migration during the forecast period, it is important that the forecaster indicates which type of interval is presented. A comparison of forecast intervals for net migration obtained from an ARIMA model to intervals in official Dutch national population forecasts shows that the uncertainty on migration has been underestimated in past official forecasts."

12.
‘Bayesian forecasting’ is a time series method of forecasting which (in the United Kingdom) has become synonymous with the state space formulation of Harrison and Stevens (1976). The approach is distinct from other time series methods in that it envisages changes in model structure. A disjoint class of models is chosen to encompass the changes. Each data point is retrospectively evaluated (using Bayes' theorem) to judge which of the models held. Forecasts are then derived conditional on an assumed model holding true. The final forecasts are weighted sums of these conditional forecasts. Few empirical evaluations have been carried out. This paper reports a large-scale comparison of time series forecasting methods including the Bayesian. The approach is twofold: a simulation study to examine parameter sensitivity and an empirical study which contrasts Bayesian with other time series methods.
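
The multi-state mechanics described above, evaluating each model by Bayes' theorem and then averaging the conditional forecasts, can be sketched in a few lines; the priors, likelihoods, and forecasts are invented numbers.

```python
def bayes_update(priors, likelihoods):
    """Posterior model probabilities after observing one data point."""
    unnorm = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def combined_forecast(model_probs, conditional_forecasts):
    """Final forecast: probability-weighted sum of the conditional
    forecasts, one per assumed model state."""
    return sum(w * f for w, f in zip(model_probs, conditional_forecasts))

# Two hypothetical states, e.g. 'no change' vs 'level shift'
post = bayes_update([0.9, 0.1], likelihoods=[0.2, 0.6])
forecast = combined_forecast(post, [100.0, 120.0])
```

A data point that is much more likely under the 'level shift' state pulls probability mass, and hence the final forecast, towards that state's conditional forecast.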

13.
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
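
The BIC-versus-AIC point is easy to see numerically: for n ≥ 8 the BIC penalty k·ln(n) exceeds the AIC penalty 2k, so the BIC more readily rejects a larger model whose fit improvement is modest. The residual sums of squares below are invented to show a case where the two criteria disagree.

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for a Gaussian model."""
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """Schwarz/Bayesian information criterion: penalty k * ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

n = 100
small = {"rss": 55.0, "k": 2}  # parsimonious model
big = {"rss": 48.7, "k": 6}    # better fit, four extra parameters
```

With these numbers the AIC prefers the larger model while the BIC prefers the smaller one, mirroring the tendency of complicated models to fit well in-sample yet forecast poorly.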

14.
The paper proposes a simulation‐based approach to multistep probabilistic forecasting, applied to predicting the probability and duration of negative inflation. The essence of this approach is counting simulated runs from a multivariate distribution representing the probabilistic forecasts that enter the negative-inflation regime. The marginal distributions of forecasts are estimated using the series of past forecast errors, and the joint distribution is obtained by a multivariate copula approach. This technique is applied to estimating the probability of negative inflation in China and its expected duration, with the marginal distributions computed by fitting weighted skew‐normal and two‐piece normal distributions to autoregressive moving average ex post forecast errors and using the multivariate Student t copula.
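
The run-counting idea can be sketched in its simplest form: simulate error paths around the point forecasts and count the paths that enter the negative regime. Independent normal errors stand in here for the paper's copula-linked skew-normal and two-piece-normal margins, and the forecasts are invented.

```python
import random

def prob_negative(point_forecasts, draw_error, n_sims=20000, seed=1):
    """Monte Carlo estimate of the probability that inflation is
    negative at any forecast horizon: add simulated errors to the
    point forecasts and count runs entering the negative regime."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        if any(f + draw_error(rng) < 0.0 for f in point_forecasts):
            hits += 1
    return hits / n_sims

# Invented quarterly inflation point forecasts (percent)
p_low = prob_negative([2.5, 2.4, 2.6, 2.5], lambda r: r.gauss(0.0, 0.3))
p_mid = prob_negative([0.0], lambda r: r.gauss(0.0, 1.0))
```

Expected duration can be estimated the same way, by averaging the length of negative spells across the simulated runs.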

15.
When managers make revisions to sales forecasts initially generated by a rational quantitative model, it is important that the particular forecasts selected for adjustment are those which would benefit most from the adjustment process (i.e., those with high errors). This study reports an empirical investigation of this issue, spanning six quarterly forecasting periods and incorporating forecasting data on over 850 products. The results show that the errors of the forecasts chosen for revision are, in general, higher than those which were not chosen. In addition, it is shown that managers tend to revise forecasts which are initially low, hence possibly introducing some degree of bias into the overall forecasts.

16.
This paper focuses on the effects of disaggregation on forecast accuracy for nonstationary time series using dynamic factor models. We compare the forecasts obtained directly from the aggregated series based on its univariate model with the aggregation of the forecasts obtained for each component of the aggregate. Within this framework (first obtain the forecasts for the component series and then aggregate the forecasts), we try two different approaches: (i) generate forecasts from the multivariate dynamic factor model and (ii) generate the forecasts from univariate models for each component of the aggregate. In this regard, we provide analytical conditions for the equality of forecasts. The results are applied to quarterly gross domestic product (GDP) data of several European countries of the euro area and to their aggregated GDP. This will be compared to the prediction obtained directly from modeling and forecasting the aggregate GDP of these European countries. In particular, we would like to check whether long‐run relationships between the levels of the components are useful for improving the forecasting accuracy of the aggregate growth rate. We will make forecasts at the country level and then pool them to obtain the forecast of the aggregate. The empirical analysis suggests that forecasts built by aggregating the country‐specific models are more accurate than forecasts constructed using the aggregated data. Copyright © 2014 John Wiley & Sons, Ltd.

17.
Methods of time series forecasting are proposed which can be applied automatically. However, they are not rote formulae, since they are based on a flexible philosophy which can provide several models for consideration. In addition it provides diverse diagnostics for qualitatively and quantitatively estimating how well one can forecast a series. The models considered are called ARARMA models (or ARAR models) because the model fitted to a long memory time series (t) is based on sophisticated time series analysis of AR (or ARMA) schemes (short memory models) fitted to residuals Y(t) obtained by parsimonious ‘best lag’ non-stationary autoregression. Both long range and short range forecasts are provided by an ARARMA model. Section 1 explains the philosophy of our approach to time series model identification. Sections 2 and 3 attempt to relate our approach to some standard approaches to forecasting; exponential smoothing methods are developed from the point of view of prediction theory (section 2) and extended (section 3). ARARMA models are introduced (section 4). Methods of ARARMA model fitting are outlined (sections 5, 6). Since ‘the proof of the pudding is in the eating’, the methods proposed are illustrated (section 7) using the classic example of international airline passengers.
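
The simple exponential smoothing recursion that sections 2 and 3 develop from prediction theory can be sketched as follows; α is the smoothing constant and the series values are invented.

```python
def simple_exponential_smoothing(series, alpha):
    """Level update: level_t = alpha * y_t + (1 - alpha) * level_{t-1};
    the one-step-ahead forecast is the current level."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1.0 - alpha) * level
    return level

forecast = simple_exponential_smoothing([10.0, 12.0, 11.0, 13.0], 0.5)
```

Larger α makes the forecast respond faster to recent observations; a constant series is reproduced exactly for any α.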

18.
The judgemental revision of sales forecasts is an issue which is receiving increasing attention in the forecasting literature. This paper compares the performance of forecasts after revision by managers with that of the forecasts which were accepted by them without revision. The data set consists of sales forecasting data from an industrial company, spanning six quarterly periods and relating to some 900 individual products. The findings show that, in general, the improvements made by managers bring the forecast errors of revised forecasts more into line with non-revised forecasts, but the change is often marginal, and the best result is equivalence between revised and non-revised forecasts.

19.
This paper investigates the forecasting ability of four different GARCH models and the Kalman filter method. The four GARCH models applied are the bivariate GARCH, BEKK GARCH, GARCH-GJR and the GARCH-X model. The paper also compares the forecasting ability of the non-GARCH model: the Kalman method. Forecast errors for the estimated time-varying betas of 20 UK companies' daily stock returns are employed to evaluate the out-of-sample forecasting ability of both the GARCH models and the Kalman method. Measures of forecast errors overwhelmingly support the Kalman filter approach. Among the GARCH models the GJR model appears to provide somewhat more accurate forecasts than the other bivariate GARCH models. Copyright © 2008 John Wiley & Sons, Ltd.
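
The Kalman filter alternative treats the market beta as a latent state; a textbook sketch with a random-walk state equation (not necessarily the paper's exact specification), run on invented noise-free data:

```python
def kalman_beta(stock, market, q=1e-4, r=1e-3, beta0=1.0, p0=1.0):
    """Scalar Kalman filter for beta_t in
    state:       beta_t = beta_{t-1} + w_t,      Var(w_t) = q
    observation: stock_t = beta_t * market_t + v_t,  Var(v_t) = r."""
    beta, p = beta0, p0
    path = []
    for y, x in zip(stock, market):
        p = p + q                          # predict: uncertainty grows
        k = p * x / (x * x * p + r)        # Kalman gain
        beta = beta + k * (y - beta * x)   # correct with forecast error
        p = (1.0 - k * x) * p
        path.append(beta)
    return path

# Invented returns with a true constant beta of 1.5 and no noise:
# the filtered beta should converge towards 1.5
market = [0.05, -0.04, 0.03, 0.06, -0.05] * 10
stock = [1.5 * x for x in market]
betas = kalman_beta(stock, market)
```

Because the state is allowed to drift (q > 0), the same filter also tracks a genuinely time-varying beta, which is the appeal of the method in this setting.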

20.
In this paper we make an empirical investigation of the relationship between the consistency, coherence and validity of probability judgements in a real-world forecasting context. Our results indicate that these measures of the adequacy of an individual's probability assessments are not as closely related as we anticipated. Twenty-nine of our thirty-six subjects were better calibrated in point probabilities than in odds, and our subjects were, in general, more coherent using point probabilities than odds forecasts. Contrary to our expectations we found very little difference in forecasting response and performance between simple and compound holistic forecasts. This result is evidence against the ‘divide-and-conquer’ rationale underlying most applications of normative decision theory. In addition, our recompositions of marginal and conditional assessments into compound forecasts were no better calibrated or resolved than their holistic counterparts. These findings convey two implications for forecasting. First, untrained judgemental forecasters should use point probabilities in preference to odds. Second, judgemental forecasts of complex compound probabilities may be as well assessed holistically as they are using methods of decomposition and recomposition. In addition, our study provides a paradigm for further studies of the relationship between consistency, coherence and validity in judgemental probability forecasting.
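
Calibration, the agreement between stated probabilities and observed frequencies that underlies the comparison of point-probability and odds forecasts above, can be tabulated as follows; a generic sketch with invented, perfectly calibrated judgements.

```python
def calibration_table(probs, outcomes, n_bins=5):
    """Bin probability judgements and compare, per bin, the mean stated
    probability with the observed relative frequency of the event."""
    bins = [[] for _ in range(n_bins)]
    for p, o in zip(probs, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)
        bins[idx].append((p, o))
    table = []
    for cell in bins:
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            freq = sum(o for _, o in cell) / len(cell)
            table.append((mean_p, freq, len(cell)))
    return table

# Invented judgements: '0.1' events occur 1 time in 10, '0.9' events 9 in 10
probs = [0.1] * 10 + [0.9] * 10
outcomes = [1] + [0] * 9 + [1] * 9 + [0]
table = calibration_table(probs, outcomes)
```

A well-calibrated forecaster produces rows where the mean stated probability and the observed frequency nearly coincide; systematic gaps indicate over- or under-confidence.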
