Similar Literature
1.
Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series models, vector autoregressions and random walks serve as the comparison criteria for the structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than the univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive. In some cases, VAR benchmarks prove more reliable than RSEM income forecasts. In other cases, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.

2.
Using the method of ARIMA forecasting with benchmarks developed in this paper, it is possible to obtain forecasts which take into account the historical information of a series, captured by an ARIMA model (Box and Jenkins, 1970), as well as partial prior information about the forecasts. Prior information takes the form of benchmarks. These originate from the advice of experts, from forecasts of an annual econometric model or simply from pessimistic, realistic or optimistic scenarios contemplated by the analyst of the current economic situation. The benchmarks may represent annual levels to be achieved, neighbourhoods to be reached in a given time period, movements to be displayed or, more generally, any linear criteria to be satisfied by the forecasted values. The forecaster may then apply current economic evaluation and judgement to the fullest extent in deriving forecasts, avoiding the laborious ad hoc adjustments required in the absence of a systematic method.
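The abstract above describes benchmarks as linear criteria that the forecasts must satisfy. One simple way to impose such a criterion, sketched below as a minimal illustration (not necessarily the paper's exact procedure), is the minimum-norm adjustment: given unconstrained forecasts f and a linear benchmark A f = b, the closest adjusted path has the closed form f + Aᵀ(AAᵀ)⁻¹(b − Af). All numbers in the example are hypothetical.

```python
import numpy as np

def benchmark_forecasts(f, A, b):
    """Minimally adjust unconstrained forecasts f so that A @ f_adj = b.

    Solves min ||f_adj - f||^2 subject to A f_adj = b, whose closed form is
    f_adj = f + A' (A A')^{-1} (b - A f).
    """
    f = np.asarray(f, dtype=float)
    A = np.atleast_2d(A).astype(float)
    b = np.atleast_1d(b).astype(float)
    adj = A.T @ np.linalg.solve(A @ A.T, b - A @ f)
    return f + adj

# Example: four quarterly forecasts forced to hit an annual benchmark of 100.
f = np.array([23.0, 24.0, 25.0, 26.0])   # unconstrained quarterly path (sum = 98)
A = np.ones((1, 4))                       # annual-level constraint: sum of quarters
f_adj = benchmark_forecasts(f, A, [100.0])
print(f_adj)                              # each quarter raised by 0.5
```

With a sum constraint the shortfall is spread evenly across quarters; any other linear criterion (a movement between two quarters, say) just changes the rows of A.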

3.
This paper aims to assess whether Google search data are useful when predicting the US unemployment rate among other more traditional predictor variables. A weekly Google index is derived from the keyword “unemployment” and is used in diffusion index variants along with the weekly number of initial claims and monthly estimated latent factors. The unemployment rate forecasts are generated using MIDAS regression models that take into account the actual frequencies of the predictor variables. The forecasts are made in real time, and the best forecasting models achieve, for the most part, a lower root mean squared forecast error than two benchmarks. However, as the forecasting horizon increases, the forecasting performance of the best diffusion index variants deteriorates, which suggests that the forecasting methods proposed in this paper are most useful in the short term.
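The core of a MIDAS regression is a parsimonious weighting scheme that collapses high-frequency observations (here, weekly) into a single regressor for a low-frequency (monthly) equation. The sketch below shows the widely used exponential Almon weighting; the keyword series, lag length and parameter values are hypothetical, and the paper's exact specification may differ.

```python
import numpy as np

def exp_almon_weights(K, theta1, theta2):
    """Exponential Almon lag weights commonly used in MIDAS regressions:
    w_k proportional to exp(theta1*k + theta2*k^2), normalized to sum to one."""
    k = np.arange(1, K + 1, dtype=float)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()

def midas_aggregate(x_high, K, theta1, theta2):
    """Collapse the K most recent high-frequency observations (most recent
    first) into one regressor for the low-frequency regression."""
    w = exp_almon_weights(K, theta1, theta2)
    return float(w @ np.asarray(x_high[:K], dtype=float))

# Example: weight 4 weekly Google-index readings into one monthly regressor.
weeks = [1.2, 1.0, 0.8, 0.7]    # most recent week first (toy values)
z = midas_aggregate(weeks, K=4, theta1=0.1, theta2=-0.05)
print(round(z, 3))
```

In a full MIDAS model the theta parameters are estimated jointly with the regression coefficients by nonlinear least squares; here they are fixed only to show the declining weight pattern.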

4.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy‐to‐compute daily return intervals during the modeling, estimation and forecasting stage. They have to stand up to comparable point‐data models of the well‐known capital asset pricing model type—which employ single daily returns based on successive closing prices and might allow for GARCH effects—in a comprehensive out‐of‐sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point‐data competitors in the great majority of cases. This result does not only hold when one‐day‐ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point‐data volatility forecasting tools. Copyright © 2015 John Wiley & Sons, Ltd.
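One common linear regression for interval data is the center-and-range method: fit one OLS equation to the interval midpoints and another to the interval widths. The sketch below illustrates that idea on simulated return intervals; it is a simplified stand-in, not necessarily the specification used in the paper, and the data are synthetic.

```python
import numpy as np

def fit_ols(x, y):
    """Simple-regression OLS returning (intercept, slope)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def fit_center_range(lo_x, hi_x, lo_y, hi_y):
    """Center-and-range method: regress interval centers on lagged centers
    and interval ranges on lagged ranges with two separate OLS fits."""
    cx, rx = (lo_x + hi_x) / 2, hi_x - lo_x
    cy, ry = (lo_y + hi_y) / 2, hi_y - lo_y
    return fit_ols(cx, cy), fit_ols(rx, ry)

def predict_interval(beta_c, beta_r, lo_x, hi_x):
    c = beta_c[0] + beta_c[1] * (lo_x + hi_x) / 2
    r = max(beta_r[0] + beta_r[1] * (hi_x - lo_x), 0.0)  # a range cannot be negative
    return c - r / 2, c + r / 2

# Toy daily return intervals: today's interval explained by yesterday's.
rng = np.random.default_rng(0)
c = rng.normal(0.0, 1.0, 200)                  # centers
r = np.abs(rng.normal(1.0, 0.2, 200))          # widths
lo, hi = c - r / 2, c + r / 2
beta_c, beta_r = fit_center_range(lo[:-1], hi[:-1], lo[1:], hi[1:])
print(predict_interval(beta_c, beta_r, lo[-1], hi[-1]))
```

Forecasting the width and the location separately, as the abstract emphasizes, falls out naturally here: the range equation forecasts the width and the center equation forecasts the location.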

5.
We perform Bayesian model averaging across different regressions selected from a set of predictors that includes lags of realized volatility, financial and macroeconomic variables. In our model average, we entertain different channels of instability by either incorporating breaks in the regression coefficients of each individual model within our model average, breaks in the conditional error variance, or both. Changes in these parameters are driven by mixture distributions for state innovations (MIA) of linear Gaussian state‐space models. This framework allows us to compare models that assume small and frequent as well as models that assume large but rare changes in the conditional mean and variance parameters. Results using S&P 500 monthly and quarterly realized volatility data from 1960 to 2014 suggest that Bayesian model averaging in combination with breaks in the regression coefficients and the error variance through MIA dynamics generates statistically significantly more accurate forecasts than the benchmark autoregressive model. However, compared to a MIA autoregression with breaks in the regression coefficients and the error variance, we fail to provide any drastic improvements.

6.
This paper constructs a forecast method that obtains long‐horizon forecasts with improved performance through modification of the direct forecast approach. Direct forecasts are more robust to model misspecification compared to iterated forecasts, which makes them preferable in long horizons. However, direct forecast estimates tend to have jagged shapes across horizons. Our forecast method aims to “smooth out” erratic estimates across horizons while maintaining the robust aspect of direct forecasts through ridge regression, which is a restricted regression on the first differences of regression coefficients. The forecasts are compared to the conventional iterated and direct forecasts in two empirical applications: real oil prices and US macroeconomic series. In both applications, our method shows improvement over direct forecasts.
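The smoothing idea above can be sketched in a few lines: penalize the first differences of the horizon-by-horizon coefficients, so the penalized path solves (I + λDᵀD)b = b̂ where D is the first-difference operator. This is a minimal illustration under the assumption of a single scalar coefficient per horizon; the paper's actual estimator works on full regression systems.

```python
import numpy as np

def smooth_across_horizons(beta_hat, lam):
    """Shrink direct-forecast coefficients toward a smooth path across horizons
    by penalizing first differences: min ||b - beta_hat||^2 + lam * ||D b||^2.
    Closed form: (I + lam * D'D) b = beta_hat."""
    beta_hat = np.asarray(beta_hat, dtype=float)
    H = len(beta_hat)
    D = np.diff(np.eye(H), axis=0)            # (H-1) x H first-difference matrix
    A = np.eye(H) + lam * D.T @ D
    return np.linalg.solve(A, beta_hat)

# Jagged horizon-by-horizon estimates get smoothed: lam = 0 leaves them
# untouched, and larger lam flattens them toward their common mean.
beta_hat = np.array([0.9, 0.4, 0.8, 0.3, 0.7])
print(smooth_across_horizons(beta_hat, lam=0.0))   # unchanged
print(smooth_across_horizons(beta_hat, lam=5.0))   # visibly smoother
```

A useful property of this penalty is that it preserves the mean of the coefficients across horizons (the difference operator annihilates constants), so smoothing reshapes the path without shifting its overall level.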

7.
We observe that daily highs and lows of stock prices do not diverge over time and, hence, adopt the cointegration concept and the related vector error correction model (VECM) to model the daily high, the daily low, and the associated daily range data. The in‐sample results attest to the importance of incorporating high–low interactions in modeling the range variable. In evaluating the out‐of‐sample forecast performance using both mean‐squared forecast error and direction of change criteria, it is found that the VECM‐based low and high forecasts offer some advantages over alternative forecasts. The VECM‐based range forecasts, on the other hand, do not always dominate—the forecast rankings depend on the choice of evaluation criterion and the variables being forecast. Copyright © 2008 John Wiley & Sons, Ltd.
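The cointegration logic above can be sketched with a one-lag VECM in which the range (high − low) serves as the error-correction term: today's changes in the high and the low each respond to yesterday's range and yesterday's changes. This is a deliberately minimal version on simulated prices, not the paper's full specification.

```python
import numpy as np

def fit_vecm_high_low(high, low):
    """One-lag VECM sketch: the lagged range (high - low) is the
    error-correction term; each equation is estimated by OLS."""
    dh, dl = np.diff(high), np.diff(low)
    ect = (high - low)[:-1]                    # lagged range
    X = np.column_stack([np.ones(len(ect) - 1), ect[1:], dh[:-1], dl[:-1]])
    bh, *_ = np.linalg.lstsq(X, dh[1:], rcond=None)
    bl, *_ = np.linalg.lstsq(X, dl[1:], rcond=None)
    return bh, bl

def forecast_one_step(bh, bl, high, low):
    x = np.array([1.0, high[-1] - low[-1], high[-1] - high[-2], low[-1] - low[-2]])
    return high[-1] + bh @ x, low[-1] + bl @ x

# Simulate cointegrated highs and lows around a random-walk mid-price.
rng = np.random.default_rng(1)
mid = np.cumsum(rng.normal(0.0, 1.0, 500))
spread = 1.0 + np.abs(rng.normal(0.0, 0.2, 500))
high, low = mid + spread / 2, mid - spread / 2
bh, bl = fit_vecm_high_low(high, low)
h1, l1 = forecast_one_step(bh, bl, high, low)
print(h1, l1)
```

Because the error-correction term pulls the range back toward its mean, the implied range forecast (h1 − l1) stays positive even though the high and the low individually follow near-random-walk dynamics.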

8.
In this paper, we apply Bayesian inference to model and forecast intraday trading volume, using autoregressive conditional volume (ACV) models, and we evaluate the quality of volume point forecasts. In the empirical application, we focus on the analysis of both in‐ and out‐of‐sample performance of Bayesian ACV models estimated for 2‐minute trading volume data for stocks quoted on the Warsaw Stock Exchange in Poland. We calculate two types of point forecasts, using either expected values or medians of predictive distributions. We conclude that, in general, all considered models generate significantly biased forecasts. We also observe that the considered models significantly outperform such benchmarks as the naïve or rolling means forecasts. Moreover, in terms of root mean squared forecast errors, point predictions obtained within the ACV model with exponential distribution prove superior to those calculated in structures with more general innovation distributions, although in many cases this difference turns out to be statistically insignificant. On the other hand, when comparing mean absolute forecast errors, the median forecasts obtained within the ACV models with Burr and generalized gamma distribution are found to be statistically better than other forecasts.
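The ACV model is structurally analogous to a GARCH recursion: the conditional expected volume evolves as μₜ = ω + α·xₜ₋₁ + β·μₜ₋₁. The sketch below runs that filter on simulated exponential volumes with hypothetical parameter values; the paper estimates the parameters by Bayesian methods, which is omitted here.

```python
import numpy as np

def acv_filter(x, omega, alpha, beta):
    """Autoregressive conditional volume (ACV) recursion, analogous to GARCH:
    conditional expected volume mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}."""
    mu = np.empty(len(x))
    mu[0] = np.mean(x)                         # initialize at the sample mean
    for t in range(1, len(x)):
        mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    return mu

# The one-step-ahead point forecast is the next conditional mean; with
# exponential innovations the predictive mean equals the recursion's mu.
rng = np.random.default_rng(2)
x = rng.exponential(100.0, size=1000)          # toy 2-minute volumes
mu = acv_filter(x, omega=10.0, alpha=0.3, beta=0.6)
forecast = 10.0 + 0.3 * x[-1] + 0.6 * mu[-1]
print(forecast)
```

Note the chosen parameters imply an unconditional mean of ω/(1 − α − β) = 100, matching the simulated data, so the filter is internally consistent.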

9.
We investigate the accuracy of capital investment predictors from a national business survey of South African manufacturing. Based on data available to correspondents at the time of survey completion, we propose variables that might inform the confidence that can be attached to their predictions. Having calibrated the survey predictors' directional accuracy, we model the probability of a correct directional prediction using logistic regression with the proposed variables. For point forecasting, we compare the accuracy of rescaled survey forecasts with time series benchmarks and some survey/time series hybrid models. In addition, using the same set of variables, we model the magnitude of survey prediction errors. Directional forecast tests showed that three out of four survey predictors have value but are biased and inefficient. For shorter horizons we found that survey forecasts, enhanced by time series data, significantly improved point forecasting accuracy. For longer horizons the survey predictors were at least as accurate as alternatives. The usefulness of the more accurate of the predictors examined is enhanced by auxiliary information, namely the probability of directional accuracy and the estimated error magnitude.

10.
This paper gives a brief survey of forecasting with panel data. It begins with a simple error component regression model and surveys the best linear unbiased prediction under various assumptions of the disturbance term. This includes various ARMA models as well as spatial autoregressive models. The paper also surveys how these forecasts have been used in panel data applications, running horse races between heterogeneous and homogeneous panel data models using out‐of‐sample forecasts. Copyright © 2008 John Wiley & Sons, Ltd.

11.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
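The trade-off the article formalizes can be seen in a small Monte Carlo experiment: simulate an AR(1) with ρ = 1 + c/T, forecast h steps ahead with the estimated AR(1) and with the no-change random walk, and compare mean squared errors. This sketch only illustrates the phenomenon numerically; the article derives the thresholds analytically.

```python
import numpy as np

def long_horizon_mse(c, T=100, h=20, reps=2000, seed=0):
    """Monte Carlo comparison of h-step forecasts from an estimated AR(1)
    versus a random walk when rho = 1 + c/T (local to unity)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    se_ols = se_rw = 0.0
    for _ in range(reps):
        e = rng.normal(size=T + h)
        y = np.empty(T + h)
        y[0] = e[0]
        for t in range(1, T + h):
            y[t] = rho * y[t - 1] + e[t]
        # OLS slope without intercept, estimated on the first T observations.
        rho_hat = (y[:T - 1] @ y[1:T]) / (y[:T - 1] @ y[:T - 1])
        se_ols += (y[T - 1] * rho_hat ** h - y[T + h - 1]) ** 2
        se_rw += (y[T - 1] - y[T + h - 1]) ** 2     # RW: no-change forecast
    return se_ols / reps, se_rw / reps

# With strong mean reversion (c = -20) the estimated AR(1) tends to win;
# very close to the unit root (c = -1) the random walk is hard to beat.
print(long_horizon_mse(-20.0))
print(long_horizon_mse(-1.0))
```

The comparison flips as c moves toward zero, which is exactly the "forecasting threshold" behaviour the article characterizes as a function of drift and horizon.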

12.
This article discusses the use of Bayesian methods for inference and forecasting in dynamic term structure models through integrated nested Laplace approximations (INLA). This method of analytical approximation allows accurate inferences for latent factors, parameters and forecasts in dynamic models with reduced computational cost. In the estimation of dynamic term structure models it also avoids some simplifications in the inference procedures, such as the inefficient two‐step ordinary least squares (OLS) estimation. The results obtained in the estimation of the dynamic Nelson–Siegel model indicate that this method produces more accurate out‐of‐sample forecasts than two‐stage estimation by OLS and also than Bayesian estimation using Markov chain Monte Carlo (MCMC). These analytical approaches also allow efficient calculation of measures of model selection such as generalized cross‐validation and marginal likelihood, which may be computationally prohibitive in MCMC estimations. Copyright © 2014 John Wiley & Sons, Ltd.

13.
In this article, we propose a regression model for sparse high‐dimensional data from aggregated store‐level sales data. The modeling procedure includes two sub‐models of topic model and hierarchical factor regressions. These are applied in sequence to accommodate high dimensionality and sparseness and facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub‐sales for several topics by allocating each unit sale (“word” in text analysis) in a day (“document”) into a topic based on joint‐purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into smaller numbers of topics. Next, the market response regression model for the topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for original high‐dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic‐wise market responses according to the particular context, and (ii) it performs better than do conventional category regressions in both in‐sample and out‐of‐sample forecasts.

14.
This is a case study of a closely managed product. Its purpose is to determine whether time-series methods can be appropriate for business planning. By appropriate, we mean two things: whether these methods can model and estimate the special events or features that are often present in sales data; and whether they can forecast accurately enough one, two and four quarters ahead to be useful for business planning. We use two time-series methods, Box-Jenkins modeling and Holt-Winters adaptive forecasting, to obtain forecasts of shipments of a closely managed product. We show how Box-Jenkins transfer-function models can account for the special events in the data. We develop criteria for choosing a final model which differ from the usual methods and are specifically directed towards maximizing the accuracy of next-quarter, next-half-year and next-full-year forecasts. We find that the best Box-Jenkins models give forecasts which are clearly better than those obtained from Holt-Winters forecast functions, and are also better than the judgmental forecasts of IBM's own planners. In conclusion, we judge that Box-Jenkins models can be appropriate for business planning, in particular for determining at the end of the year baseline business-as-usual annual and monthly forecasts for the next year, and in mid-year for resetting the remaining monthly forecasts.
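For reference, the Holt-Winters benchmark used in the study is a set of simple recursions for level, trend and seasonal components. The sketch below implements the standard additive variant on hypothetical quarterly shipment data (the study's actual data and smoothing constants are not given in the abstract).

```python
def holt_winters_additive(y, m, alpha, beta, gamma, h):
    """Additive Holt-Winters: level, trend and m-period seasonal recursions,
    followed by an h-step-ahead forecast."""
    level = sum(y[:m]) / m                                   # first-cycle mean
    trend = (sum(y[m:2 * m]) - sum(y[:m])) / (m * m)         # per-period drift
    season = [y[i] - level for i in range(m)]                # initial seasonals
    for t in range(len(y)):
        s = season[t % m]
        last_level = level
        level = alpha * (y[t] - s) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % m] = gamma * (y[t] - level) + (1 - gamma) * s
    return [level + (k + 1) * trend + season[(len(y) + k) % m] for k in range(h)]

# Toy quarterly shipments with an upward trend and a strong seasonal pattern.
y = [10, 20, 30, 40, 14, 24, 34, 44, 18, 28, 38, 48]
fc = holt_winters_additive(y, m=4, alpha=0.3, beta=0.1, gamma=0.2, h=4)
print([round(f, 1) for f in fc])
```

The study's point is that a transfer-function Box-Jenkins model, which can encode special events as intervention regressors, beats exactly this kind of adaptive recursion on the managed-product data.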

15.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
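Pitman's test for comparing two forecasters' error variances has a neat form for paired errors: the variances differ if and only if the sums (e1 + e2) and differences (e1 − e2) are correlated. The sketch below implements that correlation test on simulated errors; the data are synthetic, not the lynx results.

```python
import math
import random

def pitman_test(e1, e2):
    """Pitman's test for equal error variances of two competing forecasts:
    correlate the sums and differences of paired errors. A nonzero
    correlation implies unequal variances. Returns (correlation, t statistic)."""
    n = len(e1)
    s = [a + b for a, b in zip(e1, e2)]
    d = [a - b for a, b in zip(e1, e2)]
    def corr(u, v):
        mu, mv = sum(u) / n, sum(v) / n
        cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
        return cov / math.sqrt(sum((a - mu) ** 2 for a in u)
                               * sum((b - mv) ** 2 for b in v))
    r = corr(s, d)
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return r, t

# Model 2's errors are twice as dispersed as model 1's, so the correlation
# should be clearly negative (var(e1) < var(e2)).
random.seed(3)
e1 = [random.gauss(0, 1) for _ in range(200)]
e2 = [random.gauss(0, 2) for _ in range(200)]
r, t = pitman_test(e1, e2)
print(r, t)
```

The t statistic is compared against a Student-t distribution with n − 2 degrees of freedom; a strongly negative value here says model 1 forecasts significantly better in the variance sense.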

16.
The purpose of this paper is twofold. Firstly, to assess the merit of estimating probability density functions rather than level or classification estimations on a one‐day‐ahead forecasting task of the EUR/USD time series. This is implemented using a Gaussian mixture model neural network, benchmarking the results against standard forecasting models, namely a naïve model, a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA), a logistic regression model (LOGIT) and a multi‐layer perceptron network (MLP). Secondly, to examine the possibilities of improving the trading performance of those models with confirmation filters and leverage. While the benchmark models perform best without confirmation filters and leverage, the Gaussian mixture model outperforms all of the benchmarks when taking advantage of the possibilities offered by a combination of more sophisticated trading strategies and leverage. This might be due to the ability of the Gaussian mixture model to successfully identify trades with a high Sharpe ratio. Copyright © 2004 John Wiley & Sons, Ltd.

17.
This paper proposes an algorithm that uses forecast encompassing tests for combining forecasts when there are a large number of forecasts that might enter the combination. The algorithm excludes a forecast from the combination if it is encompassed by another forecast. To assess the usefulness of this approach, an extensive empirical analysis is undertaken using a US macroeconomic dataset. The results are encouraging; the algorithm forecasts outperform benchmark model forecasts, in a mean square error (MSE) sense, in a majority of cases. The paper also compares the empirical performance of different approaches to forecast combination, and provides a rule‐of‐thumb cut‐off point for the thick‐modeling approach. Copyright © 2009 John Wiley & Sons, Ltd.
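A minimal version of such an algorithm can be sketched with the standard encompassing regression: forecast i encompasses forecast j if, regressing i's errors on the error difference (eᵢ − eⱼ), the slope is insignificant, meaning j carries no extra information. Encompassed forecasts are dropped and the survivors averaged. This is an illustrative simplification with synthetic errors, not the paper's exact algorithm; in the example, model 1's errors are model 0's plus pure noise, so model 1 is typically eliminated.

```python
import math
import random

def encompasses(e_i, e_j, t_crit=1.96):
    """Encompassing check: regress e_i on (e_i - e_j) without intercept;
    if the slope is insignificant, forecast i encompasses forecast j."""
    d = [a - b for a, b in zip(e_i, e_j)]
    sxx = sum(x * x for x in d)
    lam = sum(x * y for x, y in zip(d, e_i)) / sxx
    resid = [y - lam * x for x, y in zip(d, e_i)]
    s2 = sum(u * u for u in resid) / (len(e_i) - 1)
    return abs(lam / math.sqrt(s2 / sxx)) < t_crit

def prune_and_combine(forecasts, errors):
    """Drop every forecast encompassed by a surviving rival, average the rest."""
    alive = list(range(len(forecasts)))
    for i in range(len(forecasts)):
        for j in range(len(forecasts)):
            if i != j and i in alive and j in alive and encompasses(errors[i], errors[j]):
                alive.remove(j)
    return sum(forecasts[k] for k in alive) / len(alive), alive

random.seed(4)
n = 300
e0 = [random.gauss(0, 1) for _ in range(n)]
e1 = [a + random.gauss(0, 1) for a in e0]    # model 1: model 0's errors + noise
e2 = [random.gauss(0, 1) for _ in range(n)]  # model 2: independent errors
combined, alive = prune_and_combine([1.0, 1.2, 0.8], [e0, e1, e2])
print(alive)
```

Models 0 and 2 carry mutually non-redundant information, so both should survive, while the redundant model 1 is pruned before averaging, which is the essence of the paper's selection step.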

18.
Recent studies have shown that composite forecasting produces superior forecasts when compared to individual forecasts. This paper extends the existing literature by employing linear constraints and robust regression techniques in composite model building. Security analysts' forecasts may be improved when combined with time series forecasts, as shown for a diversified sample of 261 firms with a 1980–1982 post-sample estimation period. The mean square error of analyst forecasts may be reduced by combining analyst and univariate time series model forecasts in constrained and unconstrained ordinary least squares regression models. These reductions are very interesting when one finds that the univariate time series model forecasts do not substantially deviate from those produced by ARIMA (0,1,1) processes. Moreover, security analysts' forecast errors may be significantly reduced when constrained and unconstrained robust regression analyses are employed.

19.
This paper discusses the use of preliminary data in econometric forecasting. The standard practice is to ignore the distinction between preliminary and final data, the forecasts that do so here being termed naïve forecasts. It is shown that in dynamic models a multistep‐ahead naïve forecast can achieve a lower mean square error than a single‐step‐ahead one, as it is less affected by the measurement noise embedded in the preliminary observations. The minimum mean square error forecasts are obtained by optimally combining the information provided by the model and the new information contained in the preliminary data, which can be done within the state space framework as suggested in numerous papers. Here two simple, in general suboptimal, methods of combining the two sources of information are considered: modifying the forecast initial conditions by means of standard regressions and using intercept corrections. The issues are explored using Italian national accounts data and the Bank of Italy Quarterly Econometric Model. Copyright © 2006 John Wiley & Sons, Ltd.
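The simpler of the two combination devices mentioned, the intercept correction, amounts to shifting the whole forecast path by the model's most recent observed residual. The numbers below are purely illustrative.

```python
def intercept_corrected_forecasts(y, y_hat, f):
    """Intercept correction: add the latest observed model residual to every
    forecast, pulling the path back on track when the model has drifted off
    the most recent (possibly preliminary) data."""
    correction = y[-1] - y_hat[-1]            # last in-sample residual
    return [fk + correction for fk in f]

# The model under-predicts the latest quarter by 2, so all forecasts shift up by 2.
y = [100.0, 102.0, 105.0]           # observed (preliminary) data
y_hat = [100.5, 101.5, 103.0]       # model's fitted values
f = [106.0, 107.5]                  # model's raw forecasts
print(intercept_corrected_forecasts(y, y_hat, f))   # [108.0, 109.5]
```

The paper's point is that when the latest observation is itself a noisy preliminary release, this full-residual shift can overcorrect, which is why the optimal state-space combination weights the preliminary signal by its reliability instead.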

20.
This paper presents a comparative analysis of linear and mixed models for short‐term forecasting of a real data series with a high percentage of missing data. Data are the series of significant wave heights registered at regular periods of three hours by a buoy placed in the Bay of Biscay. The series is interpolated with a linear predictor which minimizes the forecast mean square error. The linear models are seasonal ARIMA models and the mixed models have a linear component and a non‐linear seasonal component. The non‐linear component is estimated by a non‐parametric regression of data versus time. Short‐term forecasts, no more than two days ahead, are of interest because they can be used by the port authorities to notify the fleet. Several models are fitted and compared by their forecasting behaviour. Copyright © 1999 John Wiley & Sons, Ltd.
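The mixed-model idea above can be sketched crudely: estimate a seasonal component nonparametrically (here, simple slot means over the daily cycle stand in for the paper's nonparametric regression) and fit an AR(1) to the deseasonalized residuals; the forecast is the sum of the two parts. The wave-height series below is simulated, with 3-hourly readings giving an 8-slot daily cycle.

```python
import random

def mixed_seasonal_ar_forecast(y, period, h):
    """Mixed-model sketch: a nonparametric seasonal component (slot means over
    the seasonal period) plus an AR(1) fitted to deseasonalized residuals."""
    slots = [[] for _ in range(period)]
    for t, v in enumerate(y):
        slots[t % period].append(v)
    season = [sum(s) / len(s) for s in slots]          # seasonal profile
    resid = [v - season[t % period] for t, v in enumerate(y)]
    phi = (sum(a * b for a, b in zip(resid[:-1], resid[1:]))
           / sum(a * a for a in resid[:-1]))           # AR(1) by OLS
    r, out = resid[-1], []
    for k in range(1, h + 1):
        r *= phi                                        # AR part decays
        out.append(season[(len(y) + k - 1) % period] + r)
    return out

# Simulated significant wave heights: a daily cycle plus measurement noise.
random.seed(5)
base = [1.0, 1.2, 1.5, 1.8, 1.6, 1.3, 1.1, 0.9]        # metres, 8 slots per day
y = [base[t % 8] + random.gauss(0, 0.05) for t in range(240)]
fc = mixed_seasonal_ar_forecast(y, period=8, h=16)      # two days ahead
print([round(v, 2) for v in fc])
```

Two days ahead the AR part has decayed to almost nothing, so the forecast is essentially the seasonal profile, which matches the abstract's observation that only short-term forecasts are of practical interest.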
