Similar documents
20 similar documents found (search time: 445 ms)
1.
We introduce a versatile and robust model that may help policymakers, bond portfolio managers and financial institutions to gain insight into the future shape of the yield curve. The Burg model produces 20‐day yield curve forecasts by fitting a pth‐order autoregressive (AR) model to the input signal, minimizing (in the least‐squares sense) the forward and backward prediction errors while constraining the autoregressive parameters to satisfy the Levinson–Durbin recursion; it then applies an infinite impulse response prediction error filter. Results are striking when the Burg model is compared to the Diebold and Li model: it not only significantly improves accuracy, but its forecast yield curves also stick to the shape of the observed yield curves, whether normal, humped, flat or inverted. Copyright © 2016 John Wiley & Sons, Ltd.
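The core of the method described here, Burg's algorithm, is compact enough to sketch. The following is a generic illustration on synthetic data (function names and the AR(2) test process are invented for the example), not the authors' 20-day yield-curve pipeline:

```python
import numpy as np

def burg_ar(x, order):
    """Fit an AR(order) model by Burg's method: each reflection coefficient
    minimizes the summed forward and backward prediction error power, and
    the AR coefficients are updated via the Levinson-Durbin recursion.
    Returns a such that x[t] is predicted by -sum(a[i] * x[t-1-i])."""
    x = np.asarray(x, dtype=float)
    f, b = x[1:].copy(), x[:-1].copy()        # forward / backward errors
    a = np.zeros(0)
    for _ in range(order):
        k = -2.0 * (f @ b) / (f @ f + b @ b)  # reflection coefficient
        a = np.concatenate([a + k * a[::-1], [k]])  # Levinson-Durbin update
        # both right-hand sides use the old f and b (tuple assignment)
        f, b = (f + k * b)[1:], (b + k * f)[:-1]
    return a

# recover a known AR(2): x[t] = 0.5 x[t-1] - 0.3 x[t-2] + e[t]
rng = np.random.default_rng(0)
n = 3000
x = np.zeros(n)
e = rng.standard_normal(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + e[t]
a = burg_ar(x, 2)                 # a should be close to [-0.5, 0.3]
pred = -np.dot(a, x[::-1][:2])    # one-step-ahead forecast
```

In the sign convention used here the model is x[t] + a[0]x[t-1] + a[1]x[t-2] = e[t], so the estimated coefficients carry the opposite sign of the usual AR parameters.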

2.
Motivated by the importance of coffee to Americans and the significance of the coffee subsector to the US economy, we pursue three notable innovations. First, we augment the traditional Phillips curve model with the coffee price as a predictor, and show that the resulting model outperforms the traditional variant in both in‐sample and out‐of‐sample predictability of US inflation. Second, we demonstrate the need to account for the inherent statistical features of predictors such as persistence, endogeneity, and conditional heteroskedasticity effects when dealing with US inflation. Consequently, we offer robust illustrations to show that the choice of estimator matters for improved US inflation forecasts. Third, the proposed augmented Phillips curve also outperforms time series models such as autoregressive integrated moving average and the fractionally integrated version for both in‐sample and out‐of‐sample forecasts. Our results show that augmenting the traditional Phillips curve with the urban coffee price will produce better forecast results for US inflation only when the statistical effects are captured in the estimation process. Our results are robust to alternative measures of inflation, different data frequencies, higher order moments, multiple data samples and multiple forecast horizons.
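The augmentation step can be illustrated with a plain OLS regression. All series and coefficients below are synthetic placeholders, and the paper's preferred estimators additionally correct for persistence, endogeneity and conditional heteroskedasticity, which simple OLS does not:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
infl_lag  = rng.standard_normal(n)   # stand-in for lagged inflation
unemp_gap = rng.standard_normal(n)   # stand-in for the slack measure
coffee    = rng.standard_normal(n)   # stand-in for coffee price changes

# synthetic "true" relation, used only to generate the data (no noise,
# so OLS recovers it exactly)
inflation = 2.0 + 0.6 * infl_lag - 0.4 * unemp_gap + 0.25 * coffee

# augmented Phillips curve: inflation on lagged inflation, slack, coffee
X = np.column_stack([np.ones(n), infl_lag, unemp_gap, coffee])
beta, *_ = np.linalg.lstsq(X, inflation, rcond=None)
# beta recovers [2.0, 0.6, -0.4, 0.25]
```

Dropping the `coffee` column reproduces the "traditional" variant the paper benchmarks against.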

3.
This study investigates whether human judgement can be of value to users of industrial learning curves, either alone or in conjunction with statistical models. In a laboratory setting, it compares the forecast accuracy of a statistical model and judgemental forecasts, contingent on three factors: the amount of data available prior to forecasting, the forecasting horizon, and the availability of a decision aid (projections from a fitted learning curve). The results indicate that human judgement was better than the curve forecasts overall. Despite their lack of field experience with learning curve use, 52 of the 79 subjects outperformed the curve on the set of 120 forecasts, based on mean absolute percentage error. Human performance was statistically superior to the model when few data points were available and when forecasting further into the future. These results indicate substantial potential for human judgement to improve predictive accuracy in the industrial learning‐curve context. Copyright © 1999 John Wiley & Sons, Ltd.

4.
In most electricity systems the residential sector is one of the main contributors to the system peak. This makes it important to know how different residential end uses, such as space heating or cooking, contribute to the system load curve at the time of system peak and also at other times of the day. In this paper we discuss the estimation of residential end-use load curves for the state of New South Wales in Australia. Half-hourly readings were taken for 15 months on the total load and a range of end-use loads of 250 households. Information was sought on 16 different end uses, while eight metering channels were available for each household. We describe the optimal design procedure used to determine which end uses to meter in each household. The econometric model used for estimating the end-use load curves integrates a conditional demand analysis (CDA) of the total load readings for the household with the readings on all the directly metered end uses. Our integrated approach achieves impressive gains in efficiency over the conventional approach to estimating end-use loads. The paper concludes with an illustration of how end-use load curves can be used to simulate a variety of policy options.

5.
We develop a semi‐structural model for forecasting inflation in the UK in which the New Keynesian Phillips curve (NKPC) is augmented with a time series model for marginal cost. By combining structural and time series elements we hope to reap the benefits of both approaches, namely the relatively better forecasting performance of time series models in the short run and a theory‐consistent economic interpretation of the forecast coming from the structural model. In our model we consider the hybrid version of the NKPC and use an open‐economy measure of marginal cost. The results suggest that our semi‐structural model performs better than a random‐walk forecast and most of the competing models (conventional time series models and strictly structural models) only in the short run (one quarter ahead) but it is outperformed by some of the competing models at medium and long forecast horizons (four and eight quarters ahead). In addition, the open‐economy specification of our semi‐structural model delivers more accurate forecasts than its closed‐economy alternative at all horizons. Copyright © 2014 John Wiley & Sons, Ltd.

6.
This paper shows how monthly data and forecasts can be used in a systematic way to improve the predictive accuracy of a quarterly macroeconometric model. The problem is formulated as a model pooling procedure (equivalent to non-recursive Kalman filtering) where a baseline quarterly model forecast is modified through ‘add-factors’ or ‘constant adjustments’. The procedure ‘automatically’ constructs these adjustments in a covariance-minimizing fashion to reflect the revised expectation of the quarterly model's forecast errors, conditional on the monthly information set. Results obtained using Federal Reserve Board models indicate the potential for significant reduction in forecast error variance through application of these procedures.

7.
As part of the Fed's daily operating procedure, the Federal Reserve Bank of New York, the Board of Governors and the Treasury each forecast that day's Treasury balance at the Fed. Errors in these forecasts can generate variation in reserve supply and, consequently, the federal funds rate. This paper evaluates the accuracy of these forecasts. The evidence suggests that each agency's forecast contributes to the optimal, i.e., minimum variance, forecast and that the Trading Desk of the Federal Reserve Bank of New York incorporates information from all three of the agency forecasts in conducting daily open market operations. Moreover, these forecasts encompass the forecast of an economic model. Copyright © 2004 John Wiley & Sons, Ltd.
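The idea that several forecasts each "contribute to the minimum-variance forecast" can be sketched as a least-squares combination of the competing forecasts. This is a stylized illustration on synthetic data, not the paper's actual encompassing test equations:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
truth = rng.standard_normal(n)
# three agency-style forecasts: truth plus independent errors of
# different sizes (entirely synthetic)
forecasts = [truth + s * rng.standard_normal(n) for s in (0.5, 1.0, 1.5)]

# least-squares weights = in-sample minimum-variance combination
X = np.column_stack(forecasts)
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
combo = X @ w

mse = lambda err: np.mean(err ** 2)
combo_mse = mse(truth - combo)
# in-sample, the combination cannot do worse than any single forecast,
# since each single forecast is one feasible weight vector
```

If one forecast received a weight near 1 and the rest near 0, it would "encompass" the others; here every forecast earns a nonzero weight because their errors are independent.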

8.
Daily electricity consumption data, available almost in real time, can be used in Italy to estimate the level of industrial production in any given month before the month is over. We present a number of procedures that do this using electricity consumption in the first 14 days of the month. (This is an extension of a previous model that used monthly electricity data.) We show that, with a number of adjustments, a model using half-monthly electricity data generates acceptable estimates of the monthly production index. More precisely, these estimates are more accurate than univariate forecasts but less accurate than estimates based on monthly electricity data. A further improvement can be obtained by combining ‘half-monthly’ electricity-based estimates with univariate forecasts. We also present quarterly estimates and discuss confidence intervals for various types of forecasts.
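One standard way to implement the combination step mentioned here is the Bates-Granger variance-minimizing weight estimated from past errors. The error series below are synthetic stand-ins; the paper does not specify this exact scheme:

```python
import numpy as np

def combination_weight(e1, e2):
    """Weight on forecast 1 minimizing the variance of the combined
    error w*e1 + (1-w)*e2 (Bates-Granger)."""
    v1, v2 = np.var(e1), np.var(e2)
    c = np.cov(e1, e2)[0, 1]
    return (v2 - c) / (v1 + v2 - 2 * c)

rng = np.random.default_rng(7)
e_elec = 0.5 * rng.standard_normal(1000)  # electricity-based estimate errors
e_univ = 1.0 * rng.standard_normal(1000)  # univariate forecast errors
w = combination_weight(e_elec, e_univ)
# the more accurate electricity-based estimate gets the larger weight,
# and the combined error variance falls below either input's
combined_var = np.var(w * e_elec + (1 - w) * e_univ)
```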

9.
This paper finds that the yield curve forecasts real gross domestic product growth in the USA well compared to professional forecasters and time series models. Past studies offer differing conclusions concerning growth lags, structural breaks, and ultimately the ability of the yield curve to forecast economic growth. This paper finds such results to depend on the estimation and forecasting techniques employed. By allowing various interest rates to act as explanatory variables and various window sizes for the out-of-sample forecasts, significant forecasts can be obtained from many window sizes. These seemingly good forecasts may still suffer from problems such as persistent forecasting errors, but statistical learning algorithms can mitigate such problems to some extent. The overall result suggests that, by carefully selecting the window sizes, interest rate data, and learning algorithms, many outperforming forecasts can be produced for all horizons from one quarter to three years, although some remain worse than others due to irreducible noise in the data.
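The rolling-window out-of-sample exercise described can be sketched as follows. The data-generating process, window length and horizon are all invented for the illustration:

```python
import numpy as np

def rolling_spread_forecasts(spread, growth, window=40, h=4):
    """At each origin T, regress growth[t+h] on spread[t] using the last
    `window` observed pairs, then forecast growth[T+h] from spread[T]."""
    preds, actuals = [], []
    for T in range(window + h, len(growth) - h):
        ts = np.arange(T - h - window + 1, T - h + 1)   # training origins
        X = np.column_stack([np.ones(window), spread[ts]])
        b, *_ = np.linalg.lstsq(X, growth[ts + h], rcond=None)
        preds.append(b[0] + b[1] * spread[T])
        actuals.append(growth[T + h])
    return np.array(preds), np.array(actuals)

# synthetic economy where the spread leads growth by h quarters
rng = np.random.default_rng(3)
n, h = 200, 4
spread = rng.standard_normal(n)
growth = 0.2 * rng.standard_normal(n)
growth[h:] += 2.0 * spread[:-h]

preds, actuals = rolling_spread_forecasts(spread, growth, window=40, h=h)
mse_model = np.mean((preds - actuals) ** 2)
mse_mean = np.mean((actuals - growth.mean()) ** 2)
# the spread-based rolling forecasts beat the unconditional-mean benchmark
```

Varying `window` across a grid, as the paper does, amounts to re-running `rolling_spread_forecasts` for each window size and comparing the resulting error measures.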

10.
The analysis and forecasting of electricity consumption and prices have received considerable attention over the past forty years. In the 1950s and 1960s most of these forecasts and analyses were generated by simultaneous equation econometric models. Beginning in the 1970s, there was a shift in the modeling of economic variables from the structural equations approach with strong identifying restrictions towards a joint time-series model with very few restrictions. One such model is the vector autoregression (VAR) model. It was soon discovered that unrestricted VAR models do not forecast well. The Bayesian vector autoregression (BVAR) approach, as well as the error correction model (ECM) and models based on the theory of cointegration, have been offered as alternatives to the simple VAR model. This paper argues that the BVAR, ECM, and cointegration models are simply VAR models with various restrictions placed on the coefficients. Based on this notion of a restricted VAR model, a four-step procedure for specifying VAR forecasting models is presented and then applied to monthly data on US electricity consumption and prices.
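The unrestricted VAR that serves as the paper's baseline reduces to equation-by-equation least squares. A minimal bivariate VAR(1) sketch on simulated data (the coefficient matrix is made up):

```python
import numpy as np

# true VAR(1): y[t] = A y[t-1] + noise  (eigenvalues 0.6, 0.3: stationary)
A = np.array([[0.5, 0.1],
              [0.2, 0.4]])
rng = np.random.default_rng(5)
n = 5000
y = np.zeros((n, 2))
for t in range(1, n):
    y[t] = A @ y[t - 1] + 0.1 * rng.standard_normal(2)

# unrestricted VAR(1) estimation = multivariate least squares
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
forecast = A_hat @ y[-1]          # one-step-ahead forecast
```

The BVAR, ECM and cointegration variants discussed in the abstract amount to shrinking or constraining the entries of `A_hat` rather than leaving them free.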

11.
Given a nonlinear model, a probabilistic forecast may be obtained by Monte Carlo simulations. At a given forecast horizon, Monte Carlo simulations yield sets of discrete forecasts, which can be converted to density forecasts. The resulting density forecasts will inevitably be downgraded by model misspecification. In order to enhance the quality of the density forecasts, one can mix them with the unconditional density. This paper examines the value of combining conditional density forecasts with the unconditional density. The findings have positive implications for issuing early warnings in different disciplines including economics and meteorology, but UK inflation forecasts are considered as an example. Copyright © 2012 John Wiley & Sons, Ltd.
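The mixing step is a convex combination of two densities. A minimal sketch, assuming normal shapes for both components (the paper's densities come from Monte Carlo simulation and need not be normal):

```python
import numpy as np
from scipy.stats import norm

def mixed_density(x, cond_mean, cond_sd, uncond_mean, uncond_sd, lam):
    """Mix a conditional (model-based) density forecast with the
    unconditional density; lam is the weight on the conditional part."""
    return (lam * norm.pdf(x, cond_mean, cond_sd)
            + (1 - lam) * norm.pdf(x, uncond_mean, uncond_sd))

grid = np.linspace(-10, 10, 2001)
pdf = mixed_density(grid, cond_mean=1.0, cond_sd=0.5,
                    uncond_mean=0.0, uncond_sd=2.0, lam=0.8)
# a convex mix of densities is itself a density: nonnegative and
# integrating to ~1 (simple Riemann sum over the grid)
area = pdf.sum() * (grid[1] - grid[0])
```

Mixing fattens the tails of an overconfident conditional forecast, which is exactly what helps when issuing early warnings about unlikely outcomes.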

12.
We test the extent to which political manoeuvrings can be the sources of measurement errors in forecasts. Our objective is to examine the forecast error based on a simple model in which we attempt to explain deviations between the March budget forecast and the November forecast, and deviations between the outcome and the March budget forecast in the UK. The analysis is based on forecasts made by the general government. We use the forecasts of the variables as alternatives to the outcomes. We also test for political spins in the GDP forecast updates and the GDP forecast errors. We find evidence of partisan and electoral effects in forecast updates and forecast errors. Copyright © 2005 John Wiley & Sons, Ltd.

13.
The paper considers the use of information by a panel of expert industry forecasters, focusing on their information-processing biases. The panel forecasts construction output by sector up to three years ahead. It is found that the biases observed in laboratory experiments, particularly ‘anchoring’, are observable. The expectations are formed by adjusting the previous forecast to take new information into account. By analysing forecast errors it is concluded that the panel overweight recently released information and do not understand the dynamics of the industry. However, their forecasts, both short and long term, are better than an alternative econometric model, and combining the two sources of forecasts leads to a deterioration in forecast accuracy. The expert forecasts can be ‘de-biased’, and this leads to the conclusion that it is better to optimally process information sources than to combine (optimally) alternative forecasts.

14.
Time-series data are often contaminated with outliers due to the influence of unusual and non-repetitive events. Forecast accuracy in such situations is reduced due to (1) a carry-over effect of the outlier on the point forecast and (2) a bias in the estimates of model parameters. Hillmer (1984) and Ledolter (1989) studied the effect of additive outliers on forecasts. It was found that forecast intervals are quite sensitive to additive outliers, but that point forecasts are largely unaffected unless the outlier occurs near the forecast origin. In such a situation the carry-over effect of the outlier can be quite substantial. In this study, we investigate the issues of forecasting when outliers occur near or at the forecast origin. We propose a strategy which first estimates the model parameters and outlier effects using the procedure of Chen and Liu (1993) to reduce the bias in the parameter estimates, and then uses a lower critical value to detect outliers near the forecast origin in the forecasting stage. One aspect of this study concerns the carry-over effects of outliers on forecasts. Four types of outliers are considered: innovational outlier, additive outlier, temporary change, and level shift. The effects due to a misidentification of an outlier type are examined. The performance of the outlier detection procedure is studied for cases where outliers are near the end of the series. In such cases, we demonstrate that statistical procedures may not be able to effectively determine the outlier types due to insufficient information. Some strategies are recommended to reduce potential difficulties caused by incorrectly detected outlier types. These findings may serve as a justification for forecasting in conjunction with judgment. Two real examples are employed to illustrate the issues discussed.
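The "lower critical value near the forecast origin" idea can be sketched for the additive-outlier case in an AR(1) setting. This is a toy residual-based detector, far simpler than the Chen-Liu procedure the study actually uses, and all thresholds are illustrative:

```python
import numpy as np

def flag_additive_outliers(x, phi, c_interior=3.5, c_origin=3.0, tail=3):
    """Flag points whose AR(1) one-step residual is large relative to a
    robust scale estimate. A lower critical value applies to the last
    `tail` points, near the forecast origin, where an undetected outlier
    damages forecasts the most."""
    resid = x[1:] - phi * x[:-1]
    scale = 1.4826 * np.median(np.abs(resid - np.median(resid)))  # robust sd
    z = np.abs(resid) / scale
    crit = np.full(len(z), c_interior)
    crit[-tail:] = c_origin
    return np.where(z > crit)[0] + 1   # indices into x

# AR(1) series with an additive outlier injected at the forecast origin
rng = np.random.default_rng(11)
n = 300
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
x[-1] += 8.0
flagged = flag_additive_outliers(x, phi=0.7)   # should include index n-1
```

Using a robust (median-based) scale matters here: the outlier itself would inflate an ordinary standard deviation and mask its own detection.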

15.
We develop a small model for forecasting inflation for the euro area using quarterly data over the period June 1973 to March 1999. The model is used to provide inflation forecasts from June 1999 to March 2002. We compare the forecasts from our model with those derived from six competing forecasting models, including autoregressions, vector autoregressions and Phillips‐curve based models. A considerable gain in forecasting performance is demonstrated using a relative root mean squared error criterion and the Diebold–Mariano test to make forecast comparisons. Copyright © 2006 John Wiley & Sons, Ltd.
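The Diebold-Mariano comparison used here reduces to a t-statistic on the loss differential. A minimal version under squared-error loss, omitting the autocorrelation correction a multi-step-horizon application would need, on synthetic errors:

```python
import numpy as np

def diebold_mariano(e1, e2):
    """DM statistic for equal predictive accuracy under squared-error
    loss; approximately standard normal under the null. No HAC variance
    correction in this minimal version."""
    d = e1 ** 2 - e2 ** 2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

rng = np.random.default_rng(2)
e_model = rng.standard_normal(400)   # errors of the preferred model
e_rival = e_model + 2.0              # rival with a constant bias
dm = diebold_mariano(e_model, e_rival)
# strongly negative: the first model's squared errors are systematically
# smaller, so its accuracy advantage is significant
```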

16.
This paper introduces the idea of adjusting forecasts from a linear time series model where the adjustment relies on the assumption that this linear model is an approximation of a nonlinear time series model. This way of creating forecasts could be convenient when inference for a nonlinear model is impossible, complicated or unreliable in small samples. The size of the forecast adjustment can be based on the estimation results for the linear model and on other data properties such as the first few moments or autocorrelations. An illustration is given for a first‐order diagonal bilinear time series model, which in certain properties can be approximated by a linear ARMA(1, 1) model. For this case, the forecast adjustment is easy to derive, which is convenient as the particular bilinear model is indeed cumbersome to analyze in practice. An application to a range of inflation series for low‐income countries shows that such adjustment can lead to some improved forecasts, although the gain is small for this particular bilinear time series model.

17.
Since growth curves are often used to produce medium- to long-term forecasts for planning purposes, it is obviously of value to be able to associate an interval with the forecast trend. The problems in producing prediction intervals are well described by Chatfield. The additional problems in this context are the intrinsic non-linearity of the estimation procedure and the requirement for a prediction region rather than a single interval. The approaches considered are a Taylor expansion of the variance of the forecast values, an examination of the joint density of the parameter estimates, and bootstrapping. The performance of the resultant intervals is examined using simulated data sets. Prediction intervals for real data are produced to demonstrate their practical value.
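Of the three approaches listed, the bootstrap is the easiest to sketch: fit the growth curve, resample residuals, refit, and read off percentiles of the re-forecasts. Everything below (logistic curve, sample sizes, bootstrap count) is illustrative, not the paper's setup:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    """Logistic growth curve with ceiling L, rate k, midpoint t0."""
    return L / (1.0 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(9)
t = np.arange(30, dtype=float)
y = logistic(t, L=100, k=0.3, t0=15) + rng.normal(0, 1.0, t.size)

# fit the curve, then bootstrap the residuals to get a prediction interval
popt, _ = curve_fit(logistic, t, y, p0=(90, 0.2, 12), maxfev=5000)
resid = y - logistic(t, *popt)
t_future = 40.0
boot = []
for _ in range(300):
    y_star = logistic(t, *popt) + rng.choice(resid, t.size, replace=True)
    p_star, _ = curve_fit(logistic, t, y_star, p0=popt, maxfev=5000)
    boot.append(logistic(t_future, *p_star))
lo, hi = np.percentile(boot, [5, 95])   # 90% interval for the trend at t=40
```

This captures parameter uncertainty in the fitted trend; a full prediction interval for a future observation would also add back a draw of the residual noise.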

18.
Standard statistical loss functions, such as mean‐squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user's decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results. Copyright © 2001 John Wiley & Sons, Ltd.
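The transformation from a volatility forecast to an event probability, and its evaluation with a scoring rule, can be sketched in a few lines. Here the event, threshold and Brier scoring rule are chosen for illustration, assuming zero-mean normal returns:

```python
import numpy as np
from scipy.stats import norm

def event_prob(sigma, c):
    """P(|return| > c) implied by a zero-mean normal with forecast sd sigma."""
    return 2.0 * (1.0 - norm.cdf(c / sigma))

def brier(p, y):
    """Brier score: mean squared distance between probability forecasts p
    and binary outcomes y (lower is better)."""
    return np.mean((np.asarray(p) - np.asarray(y, float)) ** 2)

# a volatility forecast of 1 implies roughly a 5% chance of |return| > 1.96
p = event_prob(sigma=1.0, c=1.96)

# scoring three days of event probabilities against realized exceedances
probs = np.array([0.05, 0.30, 0.80])
outcomes = np.array([0, 0, 1])
score = brier(probs, outcomes)
```

Different decision problems pick different thresholds `c` and different scoring rules, which is the tailoring the abstract describes.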

19.
A Bayesian vector autoregressive (BVAR) model is developed for the Connecticut economy to forecast the unemployment rate, nonagricultural employment, real personal income, and housing permits authorized. The model includes both national and state variables. The Bayesian prior is selected on the basis of the accuracy of the out-of-sample forecasts. We find that a loose prior generally produces more accurate forecasts. The out-of-sample accuracy of the BVAR forecasts is also compared with that of forecasts from an unrestricted VAR model and of benchmark forecasts generated from univariate ARIMA models. The BVAR model generally produces the most accurate short- and long-term out-of-sample forecasts for 1988 through 1992. It also correctly predicts the direction of change.

20.
This paper examines the relative importance of allowing for time‐varying volatility and country interactions in a forecast model of economic activity. Allowing for these issues is done by augmenting autoregressive models of growth with cross‐country weighted averages of growth and the generalized autoregressive conditional heteroskedasticity framework. The forecasts are evaluated using statistical criteria through point and density forecasts, and an economic criterion based on forecasting recessions. The results show that, compared to an autoregressive model, both components improve forecast ability in terms of point and density forecasts, especially one‐period‐ahead forecasts, but that the forecast ability is not stable over time. The random walk model, however, still dominates in terms of forecasting recessions.
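The GARCH component of such an augmented model rests on a simple variance recursion. A hand-checkable sketch with made-up parameters, showing only the conditional-variance filter rather than the paper's full multi-country model:

```python
import numpy as np

def garch_variance(returns, omega, alpha, beta, sigma2_0):
    """GARCH(1,1) conditional-variance recursion:
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = sigma2_0
    for t in range(1, len(returns) + 1):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# omega=0.1, alpha=0.2, beta=0.7, starting variance 1:
# sigma2[1] = 0.1 + 0.2*1 + 0.7*1 = 1.0
# sigma2[2] = 0.1 + 0.2*4 + 0.7*1 = 1.6
s2 = garch_variance(np.array([1.0, 2.0]), 0.1, 0.2, 0.7, sigma2_0=1.0)
```

Feeding `s2` into the density forecast is what lets such a model produce the time-varying predictive densities the abstract evaluates.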


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号