Similar articles (20 results)
1.
We investigate the accuracy of capital investment predictors from a national business survey of South African manufacturing. Based on data available to correspondents at the time of survey completion, we propose variables that might inform the confidence that can be attached to their predictions. Having calibrated the survey predictors' directional accuracy, we model the probability of a correct directional prediction using logistic regression with the proposed variables. For point forecasting, we compare the accuracy of rescaled survey forecasts with time series benchmarks and some survey/time series hybrid models. In addition, using the same set of variables, we model the magnitude of survey prediction errors. Directional forecast tests showed that three out of four survey predictors have value but are biased and inefficient. For shorter horizons we found that survey forecasts, enhanced by time series data, significantly improved point forecasting accuracy. For longer horizons the survey predictors were at least as accurate as alternatives. The usefulness of the more accurate of the predictors examined is enhanced by auxiliary information, namely the probability of directional accuracy and the estimated error magnitude.
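The directional-accuracy calibration above can be sketched minimally as follows. The data and the `directional_hit_rate` helper are purely illustrative, not the authors' procedure:

```python
import numpy as np

def directional_hit_rate(predicted_change, actual_change):
    """Share of periods in which the survey predictor got the sign right."""
    predicted_change = np.asarray(predicted_change, dtype=float)
    actual_change = np.asarray(actual_change, dtype=float)
    return float(np.mean(np.sign(predicted_change) == np.sign(actual_change)))

# Toy data: survey-predicted vs. realized changes in capital investment
pred = [0.4, -0.2, 0.1, -0.5, 0.3]
real = [0.6, -0.1, -0.2, -0.4, 0.2]
print(directional_hit_rate(pred, real))  # 4 of 5 signs agree -> 0.8
```

The logistic-regression step of the paper would then model the 0/1 "sign correct" indicator on the proposed auxiliary variables.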

2.
We provide a comprehensive study of out‐of‐sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. We use profit maximization measures based on directional accuracy and trading strategies in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, data snooping bias free tests are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, help to improve over benchmark trading strategies, although the excess return per unit of deviation is limited. Copyright © 2016 John Wiley & Sons, Ltd.
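Combining forecasts via their first principal component can be sketched as below. The synthetic forecasters and the sum-to-one weight normalization are one simple choice, not necessarily the paper's exact scheme:

```python
import numpy as np

def first_pc_combination(F):
    """Combine a T x N panel of individual forecasts using the first
    principal component of the (demeaned) forecast matrix."""
    F = np.asarray(F, dtype=float)
    Fc = F - F.mean(axis=0)
    _, _, Vt = np.linalg.svd(Fc, full_matrices=False)
    w = Vt[0]
    w = w / w.sum()            # rescale loadings to sum to one
    return F @ w

rng = np.random.default_rng(0)
truth = rng.normal(size=200)
# five noisy forecasters tracking the same target
F = truth[:, None] + 0.3 * rng.normal(size=(200, 5))
combined = first_pc_combination(F)
rmse_comb = np.sqrt(np.mean((combined - truth) ** 2))
rmse_ind = np.sqrt(np.mean((F - truth[:, None]) ** 2))
print(rmse_comb < rmse_ind)  # True: the combination beats the average forecaster
```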

3.
This study compares the performance of two forecasting models of the 10‐year Treasury rate: a random walk (RW) model and an augmented‐autoregressive (A‐A) model which utilizes the information in the expected inflation rate. For 1993–2008, the RW and A‐A forecasts (with different lead times and forecast horizons) are generally unbiased and accurately predict directional change under symmetric loss. However, the A‐A forecasts outperform the RW, suggesting that the expected inflation rate (as a leading indicator) helps improve forecast accuracy. This finding is important since bond market efficiency implies that the RW forecasts are optimal and cannot be improved. Copyright © 2009 John Wiley & Sons, Ltd.
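The RW versus augmented-AR comparison can be illustrated on synthetic data. Everything here (the data-generating process in which expected inflation genuinely leads the rate, the sample split, the parameter values) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 300
infl = rng.normal(2.0, 0.5, T)          # stand-in expected-inflation series
rate = np.zeros(T)
for t in range(1, T):
    # DGP in which lagged expected inflation drives the rate
    rate[t] = 0.5 * rate[t - 1] + 0.5 * infl[t - 1] + 0.05 * rng.normal()

train, errs_rw, errs_aa = 150, [], []
for t in range(train, T - 1):
    # random walk: tomorrow equals today
    errs_rw.append(rate[t + 1] - rate[t])
    # augmented AR: OLS of the rate on its own lag and lagged inflation
    X = np.column_stack([np.ones(t), rate[:t], infl[:t]])
    y = rate[1:t + 1]
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    errs_aa.append(rate[t + 1] - (b[0] + b[1] * rate[t] + b[2] * infl[t]))

rmse_rw = np.sqrt(np.mean(np.square(errs_rw)))
rmse_aa = np.sqrt(np.mean(np.square(errs_aa)))
print(rmse_aa < rmse_rw)  # True: the augmented model wins under this DGP
```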

4.
This paper discusses the forecasting performance of alternative factor models based on a large panel of quarterly time series for the German economy. One model extracts factors by static principal components analysis; the second model is based on dynamic principal components obtained using frequency domain methods; the third model is based on subspace algorithms for state‐space models. Out‐of‐sample forecasts show that the forecast errors of the factor models are on average smaller than the errors of a simple autoregressive benchmark model. Among the factor models, the dynamic principal component model and the subspace factor model outperform the static factor model in most cases in terms of mean‐squared forecast error. However, the forecast performance depends crucially on the choice of appropriate information criteria for the auxiliary parameters of the models. In the case of misspecification, rankings of forecast performance can change substantially. Copyright © 2007 John Wiley & Sons, Ltd.
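The static principal-components variant can be sketched as below, using a synthetic panel; this is a simplification of the paper's specification, not its full model:

```python
import numpy as np

def pca_factor_forecast(panel, target, k=2):
    """Extract k static principal-component factors from a (T x N)
    panel and forecast the target one step ahead by OLS on the factors."""
    X = np.asarray(panel, float)
    Xc = (X - X.mean(0)) / X.std(0)          # standardize the panel
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    F = Xc @ Vt[:k].T                        # T x k estimated factors
    # regress target_{t+1} on factors_t
    Z = np.column_stack([np.ones(len(F) - 1), F[:-1]])
    b = np.linalg.lstsq(Z, np.asarray(target, float)[1:], rcond=None)[0]
    return float(b[0] + F[-1] @ b[1:])

rng = np.random.default_rng(2)
f = rng.normal(size=120)                     # common factor
panel = np.outer(f, rng.uniform(0.5, 1.5, 30)) + 0.2 * rng.normal(size=(120, 30))
target = np.roll(f, 1) + 0.1 * rng.normal(size=120)   # target led by the factor
print(round(pca_factor_forecast(panel, target), 3))
```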

5.
Most economic forecast evaluations dating back 20 years show that professional forecasters add little to the forecasts generated by the simplest of models. Using various types of forecast error criteria, these evaluations usually conclude that the professional forecasts are little better than the no-change or ARIMA-type forecast. It is our contention that this conclusion is mistaken because the conventional error criteria may not capture why forecasts are made or how they are used. Using forecast directional accuracy, the criterion which has been found to be highly correlated with profits in an interest rate setting, we find that professional GNP forecasts dominate the cheaper alternatives. Moreover, there appears to be no systematic relationship between this preferred criterion and the error measures used in previous studies.

6.
In this paper we present results of a simulation study to assess and compare the accuracy of forecasting techniques for long‐memory processes in small sample sizes. We analyse differences between adaptive ARMA(1,1) L‐step forecasts, where the parameters are estimated by minimizing the sum of squares of L‐step forecast errors, and forecasts obtained by using long‐memory models. We compare widths of the forecast intervals for both methods, and discuss some computational issues associated with the ARMA(1,1) method. Our results illustrate the importance and usefulness of long‐memory models for multi‐step forecasting. Copyright © 1999 John Wiley & Sons, Ltd.
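Estimating parameters by minimizing the sum of squared L-step errors has a simple linear analogue, the "direct" h-step forecast sketched below; the series and horizon are illustrative, and this is not the paper's adaptive ARMA(1,1) estimator itself:

```python
import numpy as np

def direct_forecast(y, h):
    """'Direct' h-step forecast: fit the projection of y_{t+h} on
    (1, y_t) by least squares, i.e. choose the coefficients that
    minimize the in-sample sum of squared h-step forecast errors."""
    y = np.asarray(y, float)
    X = np.column_stack([np.ones(len(y) - h), y[:-h]])
    b = np.linalg.lstsq(X, y[h:], rcond=None)[0]
    return float(b[0] + b[1] * y[-1])

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(size=500)) * 0.01 + 5   # persistent toy series
print(round(direct_forecast(y, h=4), 2))
```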

7.
In this paper we propose and test a forecasting model on monthly and daily spot prices of five selected exchange rates. In doing so, we combine a novel smoothing technique (initially applied in signal processing) with a variable selection methodology and two regression estimation methodologies from the field of machine learning (ML). After the decomposition of the original exchange rate series using an ensemble empirical mode decomposition (EEMD) method into a smoothed and a fluctuation component, multivariate adaptive regression splines (MARS) are used to select the most appropriate variable set from a large set of explanatory variables that we collected. The selected variables are then fed into two distinct support vector regression (SVR) models that produce one‐period‐ahead forecasts for the two components. Neural networks (NN) are also considered as an alternative to SVR. The sum of the two forecast components is the final forecast of the proposed scheme. We show that the above implementation exhibits a superior in‐sample and out‐of‐sample forecasting ability when compared to alternative forecasting models. The empirical results provide evidence against the efficient market hypothesis for the selected foreign exchange markets. Copyright © 2015 John Wiley & Sons, Ltd.
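The decompose-forecast-recombine scheme can be sketched generically. Here a moving average stands in for the EEMD smoothed component and an AR(1) fit stands in for the SVR/NN component forecasters; both are deliberate simplifications of the paper's machinery:

```python
import numpy as np

def decompose_forecast(y, window=12):
    """Split a series into a smoothed and a fluctuation component,
    forecast each one step ahead, and return the sum of the two
    component forecasts (the recombination step of the scheme)."""
    y = np.asarray(y, float)
    kernel = np.ones(window) / window
    smooth = np.convolve(y, kernel, mode="same")   # stand-in for EEMD
    fluct = y - smooth

    def ar1_forecast(x):
        slope, intercept = np.polyfit(x[:-1], x[1:], 1)  # stand-in for SVR
        return slope * x[-1] + intercept

    return ar1_forecast(smooth) + ar1_forecast(fluct)

rng = np.random.default_rng(4)
y = np.sin(2 * np.pi * np.arange(240) / 24) + 0.1 * rng.normal(size=240)
print(round(decompose_forecast(y), 3))
```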

8.
In this study we evaluate the forecast performance of model‐averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of changes. Lastly, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold‐out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction‐of‐change forecasts (model averaging). We finally apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.
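The median probability model mentioned above keeps exactly the regressors whose posterior inclusion probability exceeds one half; a minimal sketch (the probability values are invented):

```python
def median_probability_model(inclusion_probs):
    """Return the indices of regressors with posterior inclusion
    probability above 0.5 (the 'median probability model')."""
    return [j for j, p in enumerate(inclusion_probs) if p > 0.5]

print(median_probability_model([0.9, 0.2, 0.55, 0.1]))  # [0, 2]
```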

9.
This paper proposes to forecast foreign exchange rates by means of an error components‐seemingly unrelated nonlinear regression (EC‐SUNR) model and, simultaneously, explore the interrelationships among currencies from newly industrializing economies with those of highly industrialized countries. Based on the empirical results, we find that the EC‐SUNR model improves on the performance of forecasting foreign exchange rates in comparison with an intrinsically nonlinear dynamic speed of adjustment model that has been shown to outperform several other important models in the forecasting literature. We also find evidence showing that the foreign exchange markets of the newly industrializing countries are influenced by those of the highly industrialized countries and vice versa, and that such interrelationships affect the accuracy of currency forecasting. Copyright © 2005 John Wiley & Sons, Ltd.

10.
Given the evidence that an infinite‐order vector autoregression setting is more realistic in time series models, we propose new model selection procedures for producing efficient multistep forecasts. They consist of order selection criteria involving the sample analog of the asymptotic approximation of the h‐step‐ahead forecast mean squared error matrix, where h is the forecast horizon. These criteria are minimized over a truncation order nT under the assumption that an infinite‐order vector autoregression can be approximated, under suitable conditions, with a sequence of truncated models, where nT is increasing with sample size. Using finite‐order vector autoregressive models with various persistence levels and realistic sample sizes, Monte Carlo simulations show that, overall, our criteria outperform conventional competitors. Specifically, they tend to yield a better small‐sample distribution of the lag‐order estimates around the true value, while estimating it with relatively satisfactory probabilities. They also produce more efficient multistep (and even stepwise) forecasts, since they yield the lowest h‐step‐ahead forecast mean squared errors for the individual components of the held‐out pseudo‐data to be forecast. Thus estimating the actual autoregressive order as well as the best forecasting model can be achieved with the same selection procedure. Such results stand in sharp contrast to the belief that parsimony is a virtue in itself, and suggest that the relative accuracy of strongly consistent criteria such as the Schwarz information criterion, as claimed in the literature, is overstated. Our criteria extend those previously existing in the literature and hence can suitably be used in various practical situations. Copyright © 2015 John Wiley & Sons, Ltd.

11.
Most long memory forecasting studies assume that long memory is generated by the fractional difference operator. We argue that the most cited theoretical arguments for the presence of long memory do not imply the fractional difference operator and assess the performance of the autoregressive fractionally integrated moving average (ARFIMA) model when forecasting series with long memory generated by nonfractional models. We find that ARFIMA models dominate in forecast performance regardless of the long memory generating mechanism and forecast horizon. Nonetheless, forecasting uncertainty at the shortest forecast horizon could make short memory models provide suitable forecast performance, particularly for smaller degrees of memory. Additionally, we analyze the forecasting performance of the heterogeneous autoregressive (HAR) model, which imposes restrictions on high-order AR models. We find that the structure imposed by the HAR model produces better short and medium horizon forecasts than unconstrained AR models of the same order. Our results have implications for, among others, climate econometrics and financial econometrics models dealing with long memory series at different forecast horizons.
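The fractional difference operator (1 − B)^d at the heart of ARFIMA has expansion coefficients that follow a simple recursion; a minimal sketch:

```python
def fracdiff_weights(d, n):
    """First n coefficients of the (1 - B)^d expansion:
    pi_0 = 1 and pi_k = pi_{k-1} * (k - 1 - d) / k."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

print(fracdiff_weights(0.4, 4))  # approximately [1.0, -0.4, -0.12, -0.064]
```

For 0 < d < 0.5 these weights decay hyperbolically rather than geometrically, which is exactly what distinguishes long memory from short-memory ARMA dynamics.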

12.
Forecast intervals typically depend upon an assumption of normal forecast errors due to lack of information concerning the distribution of the forecast. This article applies the bootstrap to the problem of estimating forecast intervals for an AR(p) model. Box-Jenkins intervals are compared to intervals produced from a naive bootstrap and a bias-correction bootstrap. Substantial differences between the three methods are found. Bootstrapping an AR(p) model requires use of the backward residuals which typically are not i.i.d. and hence inappropriate for bootstrap resampling. A recently developed method of obtaining i.i.d. backward residuals is employed and was found to affect the bootstrap prediction intervals.
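A naive residual-bootstrap interval for an AR(1) fit conveys the basic mechanism. This is a simplified sketch: the paper additionally treats AR(p), backward residuals, and bias correction, none of which is implemented here:

```python
import numpy as np

def ar1_bootstrap_interval(y, h=1, B=999, alpha=0.05, seed=0):
    """Naive residual-bootstrap forecast interval for an AR(1) fit:
    resample centered residuals and simulate h steps ahead."""
    y = np.asarray(y, float)
    slope, intercept = np.polyfit(y[:-1], y[1:], 1)
    resid = y[1:] - (intercept + slope * y[:-1])
    resid = resid - resid.mean()             # center the residuals
    rng = np.random.default_rng(seed)
    sims = np.empty(B)
    for b in range(B):
        x = y[-1]
        for _ in range(h):                   # simulate h steps ahead
            x = intercept + slope * x + rng.choice(resid)
        sims[b] = x
    return np.quantile(sims, [alpha / 2, 1 - alpha / 2])

rng = np.random.default_rng(5)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.7 * y[t - 1] + rng.normal()     # toy AR(1) data
lo, hi = ar1_bootstrap_interval(y, h=2)
print(round(lo, 2), round(hi, 2))
```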

13.
In the paper, we undertake a detailed empirical verification of wavelet scaling as a forecasting method through its application to a large set of noisy data. The method consists of two steps. In the first, the data are smoothed with the help of wavelet estimators of stochastic signals based on the idea of scaling, and, in the second, an AR(I)MA model is built on the estimated signal. This procedure is compared with some alternative approaches encompassing exponential smoothing, moving average, AR(I)MA and regularized AR models. Special attention is given to the ways of treating boundary regions in the wavelet signal estimation and to the use of biased, weakly biased and unbiased estimators of the wavelet variance. According to a collection of popular forecast accuracy measures, when applied to noisy time series with a high level of noise, wavelet scaling is able to outperform the other forecasting procedures, although this conclusion applies mainly to longer time series and not uniformly across all the examined accuracy measures.

14.
This paper applies a triple‐choice ordered probit model, corrected for nonstationarity, to forecast monetary decisions of the Reserve Bank of Australia. The forecast models incorporate a mix of monthly and quarterly macroeconomic time series. Forecast combination is used as an alternative to a single multivariate model to improve the accuracy of out‐of‐sample forecasts. This accuracy is evaluated with scoring functions, which are also used to construct adaptive weights for combining probability forecasts. This paper finds that combined forecasts outperform multivariate models. These results are robust to different sample sizes and estimation windows. Copyright © 2011 John Wiley & Sons, Ltd.
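Scoring-function-based combination weights can be sketched with the Brier score. Inverse-score weighting is one simple choice for the adaptive weights, not necessarily the paper's exact rule, and the two "models" below are invented:

```python
import numpy as np

def brier_score(p, y):
    """Mean squared error of probability forecasts p against 0/1 outcomes y."""
    return float(np.mean((np.asarray(p) - np.asarray(y)) ** 2))

def inverse_score_weights(scores):
    """Combination weights inversely proportional to each model's score
    (lower Brier score -> larger weight), normalized to sum to one."""
    inv = 1.0 / np.asarray(scores)
    return inv / inv.sum()

y = [1, 0, 1, 1, 0]
p_a = [0.9, 0.2, 0.8, 0.7, 0.1]   # well-calibrated model
p_b = [0.6, 0.5, 0.5, 0.6, 0.5]   # nearly uninformative model
w = inverse_score_weights([brier_score(p_a, y), brier_score(p_b, y)])
print(w[0] > w[1])  # True: the better-scoring model gets the larger weight
```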

15.
This paper is a critical review of exponential smoothing since the original work by Brown and Holt in the 1950s. Exponential smoothing is based on a pragmatic approach to forecasting which is shared in this review. The aim is to develop state-of-the-art guidelines for application of the exponential smoothing methodology. The first part of the paper discusses the class of relatively simple models which rely on the Holt-Winters procedure for seasonal adjustment of the data. Next, we review general exponential smoothing (GES), which uses Fourier functions of time to model seasonality. The research is reviewed according to the following questions. What are the useful properties of these models? What parameters should be used? How should the models be initialized? After the review of model-building, we turn to problems in the maintenance of forecasting systems based on exponential smoothing. Topics in the maintenance area include the use of quality control models to detect bias in the forecast errors, adaptive parameters to improve the response to structural changes in the time series, and two-stage forecasting, whereby we use a model of the errors or some other model of the data to improve our initial forecasts. Some of the major conclusions: the parameter ranges and starting values typically used in practice are arbitrary and may detract from accuracy. The empirical evidence favours Holt's model for trends over that of Brown. A linear trend should be damped at long horizons. The empirical evidence favours the Holt-Winters approach to seasonal data over GES. It is difficult to justify GES in standard form–the equivalent ARIMA model is simpler and more efficient. The cumulative sum of the errors appears to be the most practical forecast monitoring device. There is no evidence that adaptive parameters improve forecast accuracy. In fact, the reverse may be true.
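The recommendation that a linear trend be damped at long horizons can be sketched with Holt's method plus a damping parameter phi; the parameter values and the data below are illustrative only:

```python
def damped_holt_forecast(y, h, alpha=0.3, beta=0.1, phi=0.9):
    """Holt's linear method with damping: the trend contributes
    sum_{i=1..h} phi**i * b to the h-step forecast, so long-horizon
    forecasts flatten out instead of extrapolating a straight line."""
    level, trend = y[0], y[1] - y[0]
    for x in y[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
    damp = sum(phi ** i for i in range(1, h + 1))
    return level + damp * trend

y = [10.0, 10.5, 11.1, 11.4, 12.0, 12.3, 12.9]
print(round(damped_holt_forecast(y, 12), 2))
```

Because the damping sum converges to phi / (1 − phi), the forecast path approaches a finite limit as the horizon grows, which is the behaviour the review argues for.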

16.
This paper analyses the size and nature of the errors in GDP forecasts in the G7 countries from 1971 to 1995. These GDP short‐term forecasts are produced by the Organization for Economic Cooperation and Development and by the International Monetary Fund, and published twice a year in the Economic Outlook and in the World Economic Outlook, respectively. The evaluation of the accuracy of the forecasts is based on the properties of the difference between the realization and the forecast. A forecast is considered to be accurate if it is unbiased and efficient. A forecast is unbiased if its average deviation from the outcome is zero, and it is efficient if it reflects all the information that is available at the time the forecast is made. Finally, we also examine tests of directional accuracy and offer a non‐parametric method of assessment. Copyright © 2000 John Wiley & Sons, Ltd.
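The unbiasedness and efficiency notions defined above can be checked with the mean error and a Mincer-Zarnowitz-style regression; a minimal sketch on toy data (the testing conventions, not the paper's exact statistics):

```python
import numpy as np

def forecast_evaluation(actual, forecast):
    """Unbiasedness and weak-efficiency checks: the mean error should
    be zero, and in the regression actual = a + b * forecast the pair
    (a, b) should be (0, 1)."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    mean_error = float(np.mean(actual - forecast))
    X = np.column_stack([np.ones_like(forecast), forecast])
    a, b = np.linalg.lstsq(X, actual, rcond=None)[0]
    return mean_error, float(a), float(b)

# perfectly unbiased, efficient toy forecasts
me, a, b = forecast_evaluation([2.0, 3.0, 4.0, 5.0], [2.0, 3.0, 4.0, 5.0])
print(abs(me) < 1e-9, abs(a) < 1e-9, abs(b - 1) < 1e-9)  # True True True
```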

17.
Asymmetry has been well documented in the business cycle literature. The asymmetric business cycle suggests that major macroeconomic series, such as a country's unemployment rate, are non‐linear and, therefore, the use of linear models to explain their behaviour and forecast their future values may not be appropriate. Many researchers have focused on providing evidence for the non‐linearity in the unemployment series. Only recently have there been some developments in applying non‐linear models to estimate and forecast unemployment rates. A major concern of non‐linear modelling is the model specification problem; it is very hard to test all possible non‐linear specifications, and to select the most appropriate specification for a particular model. Artificial neural network (ANN) models provide a solution to the difficulty of forecasting unemployment over the asymmetric business cycle. ANN models are non‐linear, do not rely upon the classical regression assumptions, are capable of learning the structure of all kinds of patterns in a data set with a specified degree of accuracy, and can then use this structure to forecast future values of the data. In this paper, we apply two ANN models, a back‐propagation model and a generalized regression neural network model, to estimate and forecast post‐war aggregate unemployment rates in the USA, Canada, UK, France and Japan. We compare the out‐of‐sample forecast results obtained by the ANN models with those obtained by several linear and non‐linear time series models currently used in the literature. It is shown that the artificial neural network models are able to forecast the unemployment series as well as, and in some cases better than, the other univariate econometric time series models in our test. Copyright © 2004 John Wiley & Sons, Ltd.

18.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from the high‐dimensional market data. Exploiting this property, we can incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via the FPLS regression from both the crude oil returns and auxiliary variables of the exchange rates of major currencies. For forecast performance evaluation, we compare the out‐of‐sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform our competing models. The comparisons are also validated with the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

19.
The purpose of the paper is to analyse the accuracy and usefulness of household subjective forecasts of personal finance. We use non‐parametric directional analysis to assess the subjective forecasts, which are based on qualitative judgments. Using the British Household Panel Survey (BHPS) we are able to analyse a large number of individuals over a number of years. We also take into account individual characteristics such as gender, age, education and employment status when assessing their subjective forecasts. The paper extends the existing literature in two ways: first, the accuracy and usefulness of subjective forecasts, based on directional analysis, are assessed at the household level for the first time; second, we adapt and extend the methods of directional analysis, applying them to the household panel, or longitudinal, survey. Copyright © 2008 John Wiley & Sons, Ltd.

20.
This paper compares the out-of-sample forecasting accuracy of a wide class of structural, BVAR and VAR models for major sterling exchange rates over different forecast horizons. As representative structural models we employ a portfolio balance model and a modified uncovered interest parity model, with the latter producing the more accurate forecasts. Proper attention to the long-run properties and the short-run dynamics of structural models can improve on the forecasting performance of the random walk model. The structural model shows substantial improvement in medium-term forecasting accuracy, whereas the BVAR model is the more accurate in the short term. BVAR and VAR models in levels strongly outpredict these models formulated in difference form at all forecast horizons.
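Accuracy relative to the random walk benchmark, the recurring yardstick in exchange rate forecasting, is conveniently summarized by a Theil-type U ratio; a minimal sketch with invented error series:

```python
import numpy as np

def theil_u(errors_model, errors_rw):
    """RMSE of a candidate model relative to the random walk:
    values below one mean the model improves on the no-change benchmark."""
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rmse(errors_model) / rmse(errors_rw)

print(theil_u([0.1, -0.2, 0.1], [0.3, -0.4, 0.2]))  # below 1: model beats the random walk
```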
