Similar Documents
 20 similar documents retrieved (search time: 109 ms)
1.
In this study we evaluate the forecast performance of model‐averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. Third, model averaging excels in predicting the direction of changes. Fourth, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold‐out sample size of 25% to minimize the RMSE (median model) and 75% to optimize direction of change forecasts (model averaging). We finally apply these recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.
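The two accuracy criteria used in this abstract are easy to state concretely. A minimal sketch (the function names and the sign-based hit-rate convention are ours, not the paper's; ties in direction count as misses):

```python
import math

def rmse(actual, forecast):
    """Root mean square forecast error."""
    return math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual))

def direction_accuracy(actual, forecast):
    """Share of periods in which the forecast change and the realized change
    have the same sign (one common direction-of-change convention)."""
    hits = 0
    for t in range(1, len(actual)):
        if (actual[t] - actual[t - 1]) * (forecast[t] - forecast[t - 1]) > 0:
            hits += 1
    return hits / (len(actual) - 1)
```

The paper's finding is that the model minimizing the first criterion (median probability model) is not the one maximizing the second (model averaging), so the choice of metric drives the choice of model.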

2.
In this article we model the log of the US inflation rate by means of fractionally integrated processes. We use the tests of Robinson (1994) for testing this type of hypothesis, which include, as particular cases, the I(0) and I(1) specifications, and which also, unusually, have standard null and local limit distributions. A model selection criterion is established to determine which may be the best model specification of the series, and the forecasting properties of the selected models are also examined. The results vary substantially depending on how we specify the disturbances. Thus, if they are white noise, the series is I(d) with d fluctuating around 0.25; however, imposing autoregressive disturbances, the log of the US inflation rate seems to be anti‐persistent, with an order of integration smaller than zero. Looking at the forecasting properties, those models based on autocorrelated disturbances (with d < 0) predict better over a short horizon, while those based on white noise disturbances (with d > 0) seem to predict better over longer periods of time. Copyright © 2005 John Wiley & Sons, Ltd.
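An I(d) series is one whose d-th fractional difference is stationary; the filter (1 − L)^d expands into an infinite moving average whose coefficients follow a simple recursion. A minimal sketch of that filter (the helper names and the expanding-window truncation are ours):

```python
def frac_diff_weights(d, n):
    """Coefficients pi_k of the fractional differencing filter (1 - L)^d,
    via the standard recursion pi_k = pi_{k-1} * (k - 1 - d) / k, pi_0 = 1."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

def frac_diff(series, d):
    """Apply (1 - L)^d to a series, truncating each sum at the sample start."""
    w = frac_diff_weights(d, len(series))
    return [sum(w[k] * series[t - k] for k in range(t + 1)) for t in range(len(series))]
```

For d = 1 the weights collapse to [1, −1, 0, …], i.e. ordinary first differencing, which is why I(d) nests the I(1) specification tested in the paper.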

3.
Prior studies use a linear adaptive expectations model to describe how analysts revise their forecasts of future earnings in response to current forecast errors. However, research shows that extreme forecast errors are less likely than small forecast errors to persist in future years. If analysts recognize this property, their marginal forecast revisions should decrease with the forecast error's magnitude. Therefore, a linear model is likely to be unsatisfactory at describing analysts' forecast revisions. We find that a non‐linear model better describes the relation between analysts' forecast revisions and their forecast errors, and provides a richer theoretical framework for explaining analysts' forecasting behaviour. Our results are consistent with analysts' recognizing the permanent and temporary nature of forecast errors of differing magnitudes. Copyright © 2000 John Wiley & Sons, Ltd.

4.
This paper presents a methodology for modelling and forecasting multivariate time series with linear restrictions using the constrained structural state‐space framework. The model has natural applications to forecasting time series of macroeconomic/financial identities and accounts. The explicit modelling of the constraints ensures that model parameters dynamically satisfy the restrictions among items of the series, leading to more accurate and internally consistent forecasts. It is shown that the constrained model offers superior forecasting efficiency. A testable identification condition for state space models is also obtained and applied to establish the identifiability of the constrained model. The proposed methods are illustrated on Germany's quarterly monetary accounts data. Results show significant improvement in the predictive efficiency of forecast estimators for the monetary account with an overall efficiency gain of 25% over unconstrained modelling. Copyright © 2002 John Wiley & Sons, Ltd.

5.
Financial data series are often described as exhibiting two non‐standard time series features. First, variance often changes over time, with alternating phases of high and low volatility. Such behaviour is well captured by ARCH models. Second, long memory may cause a slower decay of the autocorrelation function than would be implied by ARMA models. Fractionally integrated models have been offered as explanations. Recently, the ARFIMA–ARCH model class has been suggested as a way of coping with both phenomena simultaneously. For estimation we implement the bias correction of Cox and Reid (1987). For daily data on the Swiss 1‐month Euromarket interest rate during the period 1986–1989, the ARFIMA–ARCH (5,d,2/4) model with non‐integer d is selected by AIC. Model‐based out‐of‐sample forecasts for the mean are better than predictions based on conditionally homoscedastic white noise only for longer horizons (τ > 40). Regarding volatility forecasts, however, the selected ARFIMA–ARCH models dominate. Copyright © 2001 John Wiley & Sons, Ltd.

6.
We extend Ohlson's (1995) model and examine the relationship between returns and residual income that incorporates analysts' earnings forecasts and other non‐earnings information variables in the balance sheet, namely default probability and agency cost of a debt covenant contract. We further divide the sample based on bankruptcy (agency) costs, earnings components and growth opportunities of a firm to explore how these factors affect the returns–residual income link. We find that the relative predictive ability for contemporaneous stock price of models considering other earnings and non‐earnings information is better than that of models without non‐earnings information. If the bankruptcy (agency) cost of a firm is higher, its information role in the firm's equity valuation becomes more important and the accuracy of price prediction is therefore higher. As for non‐earnings information, if bankruptcy (agency) cost is lower, the information role becomes more relevant, and the earnings response coefficient is hence higher. Moreover, the decomposition of unexpected residual income into permanent and transitory components conveys more information than the unexpected residual income alone. The permanent component has a larger impact than the transitory component in explaining abnormal returns. The market and industry properties and growth opportunity also have incremental explanatory power in valuation. Copyright © 2008 John Wiley & Sons, Ltd.

7.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data, but sceptical practitioners and statisticians still raise the question of whether neural networks really are 'a major breakthrough or just a passing fad'. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess the model's significance and the possibility that any short‐term successes that are reported might be due to 'data mining'. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters ('parametric sampling'). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
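The 'parametric sampling' idea (draw parameter vectors from the estimated asymptotic distribution and push them through a statistic of interest, instead of refitting on resampled data) can be illustrated on a toy OLS model; the setup, seed and numbers below are ours, not the paper's:

```python
import random
import statistics

random.seed(0)
# Toy linear model y = a + b*x + e, fitted by OLS.
x = [i / 10 for i in range(100)]
y = [0.5 + 2.0 * xi + random.gauss(0, 0.3) for xi in x]

n = len(x)
xbar, ybar = statistics.mean(x), statistics.mean(y)
sxx = sum((xi - xbar) ** 2 for xi in x)
b_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a_hat = ybar - b_hat * xbar
resid = [yi - a_hat - b_hat * xi for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 2)
se_b = (s2 / sxx) ** 0.5  # asymptotic standard error of the slope

# Parametric sampling: draw slopes from the estimated asymptotic distribution
# of b_hat and propagate them through an arbitrary statistic (here, the
# predicted value at x = 10) to estimate that statistic's sampling variability.
draws = [random.gauss(b_hat, se_b) for _ in range(2000)]
stat_draws = [a_hat + b * 10 for b in draws]
stat_se = statistics.stdev(stat_draws)
```

For this linear statistic the sampled standard error should match the delta-method value 10·se_b; the scheme's payoff is that the same recipe works for arbitrarily complex statistics of a fitted neural model.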

8.
A large proportion of the world telecommunications market can be characterized as supply restricted. According to ITU (1999), official waiting lists numbered about 50 million worldwide, with an average waiting time of two years. More than 100 countries had not eliminated the waiting list for telephone connections, and hence a supply‐restricted market prevailed in all of these countries. Only about 25 countries have succeeded in eradicating their waiting lists for basic telephone service. In terms of the pattern of diffusion, the flow of subscribers from waiting applicants to adopters is controlled by supply restrictions, adding an important dimension that needs to be addressed when modeling and forecasting demand. An empirical analysis of the diffusion of main telephones in 46 supply‐restricted countries is presented to demonstrate the usefulness of a three‐stage Bass model that has been proposed to capture the dynamics of supply restrictions. We also compare the forecasting ability of different approaches to estimation when panel data are available. Copyright © 2001 John Wiley & Sons, Ltd.

9.
This article studies Man and Tiao's (2006) low‐order autoregressive fractionally integrated moving‐average (ARFIMA) approximation to Tsai and Chan's (2005b) limiting aggregate structure of the long‐memory process. In matching the autocorrelations, we demonstrate that the approximation works well, especially for larger d values. When computing autocorrelations over long lags for larger d values, one might encounter numerical problems using the exact formula. The ARFIMA(0, d, d − 1) model provides a useful alternative for computing the autocorrelations as a very close approximation. In forecasting future aggregates, we demonstrate that the ARFIMA(0, d, d − 1) model performs very close to the exact aggregate structure. In practice, this provides a justification for the use of a low‐order ARFIMA model in predicting future aggregates of a long‐memory process. Copyright © 2008 John Wiley & Sons, Ltd.
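The numerical problem mentioned here, overflow when evaluating the exact gamma-function formula for long-lag autocorrelations, can be sidestepped with a stable recursion. A sketch for the simplest ARFIMA(0, d, 0) case (the function name is ours; the paper's aggregate structure is not reproduced):

```python
import math

def arfima_acf(d, max_lag):
    """Autocorrelations of a fractionally integrated ARFIMA(0, d, 0) process.
    Exact form: rho(k) = Gamma(k+d) * Gamma(1-d) / (Gamma(k-d+1) * Gamma(d)).
    Evaluated via the stable recursion rho(k) = rho(k-1) * (k-1+d) / (k-d),
    which avoids the overflow a direct Gamma evaluation can hit at long lags."""
    rho = [1.0]
    for k in range(1, max_lag + 1):
        rho.append(rho[-1] * (k - 1 + d) / (k - d))
    return rho
```

The recursion agrees term by term with the gamma-function expression, and the resulting hyperbolic decay (rather than the geometric decay of an ARMA model) is the long-memory signature.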

10.
An approach is proposed for obtaining estimates of the basic (disaggregated) series, x_t, when only an aggregate series, y_t, of k-period non-overlapping sums of the x_t's is available. The approach is based on casting the problem in dynamic linear model form. Estimates of x_t can then be obtained by applying Kalman filtering techniques. An ad hoc procedure is introduced for deriving a model form for the unobserved basic series from the observed model of the aggregates. An application of this approach to a set of real data is given.

11.
Value‐at‐Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts. Hence, every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE‐50 index, at the 95% and 99% levels. We employ a two‐stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive rejection with the tests, we perform a second stage filtering of the surviving models using subjective loss functions. This two‐stage model selection procedure does prove to be useful in choosing a VaR model, while only incompletely addressing the problem. These case studies give us some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
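A first-stage statistical accuracy test in this spirit can be sketched with historical-simulation VaR and Kupiec's unconditional coverage test; these are our simplified stand-ins, not the paper's exact model set:

```python
import math

def historical_var(returns, level=0.95):
    """Historical-simulation VaR: an empirical loss quantile of past returns,
    returned as a positive number."""
    srt = sorted(returns)
    idx = int((1 - level) * len(srt))
    return -srt[idx]

def kupiec_lr(violations, n_obs, level=0.95):
    """Kupiec's unconditional coverage likelihood ratio: compares the observed
    violation rate with the nominal rate p = 1 - level. Under a correctly
    calibrated VaR model it is asymptotically chi-squared with 1 df
    (5% critical value ~ 3.84)."""
    p = 1 - level
    x = violations
    if x == 0:
        return -2 * n_obs * math.log(1 - p)
    phat = x / n_obs
    log_h0 = (n_obs - x) * math.log(1 - p) + x * math.log(p)
    log_h1 = (n_obs - x) * math.log(1 - phat) + x * math.log(phat)
    return -2 * (log_h0 - log_h1)
```

Models whose violation counts fail this test are discarded in stage one; the survivors are then ranked by a subjective loss function in stage two.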

12.
The analysis and forecasting of electricity consumption and prices has received considerable attention over the past forty years. In the 1950s and 1960s most of these forecasts and analyses were generated by simultaneous equation econometric models. Beginning in the 1970s, there was a shift in the modeling of economic variables from the structural equations approach with strong identifying restrictions towards a joint time-series model with very few restrictions. One such model is the vector autoregression (VAR) model. It was soon discovered that unrestricted VAR models do not forecast well. The Bayesian vector autoregression (BVAR) approach, as well as the error correction model (ECM) and models based on the theory of cointegration, have been offered as alternatives to the simple VAR model. This paper argues that the BVAR, ECM, and cointegration models are simply VAR models with various restrictions placed on the coefficients. Based on this notion of a restricted VAR model, a four-step procedure for specifying VAR forecasting models is presented and then applied to monthly data on US electricity consumption and prices.

13.
We extend the analysis of Christoffersen and Diebold (1998) on long‐run forecasting in cointegrated systems to multicointegrated systems. For the forecast evaluation we consider several loss functions, each of which has a particular interpretation in the context of stock‐flow models where multicointegration typically occurs. A loss function based on a standard mean square forecast error (MSFE) criterion focuses on the forecast errors of the flow variables alone. Likewise, a loss function based on the triangular representation of cointegrated systems (suggested by Christoffersen and Diebold) considers forecast errors associated with changes in both stock (modelled through the cointegrating restrictions) and flow variables. We suggest a new loss function based on the triangular representation of multicointegrated systems which further penalizes deviations from the long‐run relationship between the levels of stock and flow variables as well as changes in the flow variables. Among other things, we show that if one is concerned with all possible long‐run relations between stock and flow variables, this new loss function entails high and increasing forecasting gains compared to both the standard MSFE criterion and Christoffersen and Diebold's criterion. This paper demonstrates the importance of carefully selecting loss functions in forecast evaluation of models involving stock and flow variables. Copyright © 2004 John Wiley & Sons, Ltd.

14.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of a FFNN model with an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
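Pitman's test for comparing the accuracy of two forecasters on the same series reduces to correlating the sums and differences of the paired forecast errors: a nonzero correlation implies unequal error variances. A minimal sketch of the textbook form (the function name is ours):

```python
import math

def pitman_test(e1, e2):
    """Pitman's test for equality of forecast error variances of two models
    evaluated on the same series. Correlates s = e1 + e2 with d = e1 - e2;
    cov(s, d) equals var(e1) - var(e2), so r != 0 signals unequal accuracy.
    Returns (r, t_stat), where t_stat ~ t with n-2 df under the null."""
    n = len(e1)
    s = [a + b for a, b in zip(e1, e2)]
    d = [a - b for a, b in zip(e1, e2)]
    ms, md = sum(s) / n, sum(d) / n
    cov = sum((si - ms) * (di - md) for si, di in zip(s, d))
    vs = sum((si - ms) ** 2 for si in s)
    vd = sum((di - md) ** 2 for di in d)
    r = cov / math.sqrt(vs * vd)
    t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
    return r, t
```

The pairing matters: because both error series come from the same target series, an ordinary two-sample variance test would be invalid, which is why Pitman's correlated-pairs construction is used here.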

15.
It is proved that the formula for least squares extrapolation in a stationary non‐linear AR(1) process is also valid for non‐stationary non‐linear AR(1) processes. This formula depends on the distribution of the corresponding white noise. If the non‐linear function used in the model is non‐decreasing and concave, upper and lower bounds are derived for the least squares extrapolation such that the bounds depend only on the expectation of the white noise. It is shown in an example that the derived bounds in some cases give a good approximation to the least squares extrapolation. Copyright © 2001 John Wiley & Sons, Ltd.
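For a concave, non-decreasing f in x_{t+1} = f(x_t) + eps_t, Jensen's inequality already yields an upper bound for the two-step least squares predictor E[f(f(x) + eps)] that depends only on the noise mean, in the spirit of the bounds described. A Monte Carlo sketch (the choice f = sqrt, the noise parameters, and the clipping at zero are our illustrative assumptions, not the paper's):

```python
import math
import random

random.seed(0)
f = math.sqrt  # non-decreasing and concave on [0, inf)

def two_step_predictor(x, noise_mean=1.0, noise_sd=0.4, n=50_000):
    """Monte Carlo estimate of the two-step least squares predictor
    E[f(f(x) + eps)] in the nonlinear AR(1) model x_{t+1} = f(x_t) + eps.
    Draws are clipped at 0 so sqrt stays defined (negligible here)."""
    total = 0.0
    for _ in range(n):
        total += f(max(f(x) + random.gauss(noise_mean, noise_sd), 0.0))
    return total / n

x0 = 4.0
mc = two_step_predictor(x0)
upper = f(f(x0) + 1.0)  # Jensen: E[f(Z)] <= f(E[Z]) for concave f
```

The simulated predictor sits just below the bound, illustrating the paper's point that in some cases the noise-mean-only bounds approximate the exact extrapolation well.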

16.
This paper examines the forecasting ability of the nonlinear specifications of the market model. We propose a conditional two‐moment market model with a time‐varying systematic covariance (beta) risk in the form of a mean reverting process of the state‐space model via the Kalman filter algorithm. In addition, we account for the systematic component of co‐skewness and co‐kurtosis by considering higher moments. The analysis is implemented using data from the stock indices of several developed and emerging stock markets. The empirical findings favour the time‐varying market model approaches, which outperform linear model specifications both in terms of model fit and predictability. In particular, higher moments are necessary for datasets that involve structural changes and/or market inefficiencies, which are common in most emerging stock markets. Copyright © 2016 John Wiley & Sons, Ltd.

17.
An analytical model is developed in the present paper based on a square-root transformation of white Gaussian noise. The mathematical expectation and variance of the new asymmetric distribution generated by applying a square-root transformation to white Gaussian noise are deduced analytically from the first four terms of the Taylor expansion. The model was first evaluated against numerical experiments, and good agreement was obtained. The model was then used to predict time series of wind speeds and highway traffic flows. The simulation results from the new model indicate that the prediction accuracy could be improved by 0.1–1% by removing the mean errors. Further improvement could be obtained for non-stationary time series with large trends. Copyright © 2016 John Wiley & Sons, Ltd.
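The Taylor-expansion logic behind such a model can be sketched generically: expand sqrt around the Gaussian mean and use the normal's central moments. The coefficients below are the standard second- and fourth-order delta-method terms, valid when sigma is small relative to mu, and are our generic stand-in rather than the paper's exact expressions:

```python
import math
import random

def sqrt_gauss_moments(mu, sigma):
    """Approximate mean and variance of sqrt(X) for X ~ N(mu, sigma^2),
    from a Taylor expansion of sqrt around mu (negative draws assumed negligible):
      E[sqrt(X)]   ~ sqrt(mu) - sigma^2/(8 mu^1.5) - 15 sigma^4/(128 mu^3.5)
      Var[sqrt(X)] ~ sigma^2 / (4 mu)
    """
    mean = math.sqrt(mu) - sigma**2 / (8 * mu**1.5) - 15 * sigma**4 / (128 * mu**3.5)
    var = sigma**2 / (4 * mu)
    return mean, var

# Monte Carlo cross-check of the approximation at mu = 4, sigma = 0.5.
random.seed(1)
draws = [math.sqrt(random.gauss(4.0, 0.5)) for _ in range(100_000)]
mc_mean = sum(draws) / len(draws)
mc_var = sum((d - mc_mean) ** 2 for d in draws) / len(draws)
approx_mean, approx_var = sqrt_gauss_moments(4.0, 0.5)
```

The approximate mean sits slightly below sqrt(mu) because sqrt is concave, which is exactly the asymmetry the paper's transformed distribution captures.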

18.
This paper explains cross‐market variations in the degree of return predictability using the extreme bounds analysis (EBA). The EBA addresses model uncertainty in identifying robust determinant(s) of cross‐sectional return predictability. Additionally, the paper develops two profitable trading strategies based on the return predictability evidence. The results reveal that among the 13 determinants of the cross‐sectional variation of return predictability, only value of stock traded (a measure of liquidity) is found to have robust explanatory power by Leamer's (1985) EBA. However, Sala‐i‐Martin's (1997) EBA reports that value of stock traded, gross domestic product (GDP) per capita, level of information and communication technology (ICT) development, governance quality, and corruption perception are robust determinants. We further find that a strategy of buying (selling) aggregate market portfolios of the countries with the highest positive (negative) return predictability statistic in the past 24 months generates statistically significant positive returns in the subsequent 3 to 12 months. At the individual country level, a trading rule of buying (selling) the respective country's aggregate market portfolio when the return predictability statistic turns out positive (negative) outperforms the conventional buy‐and‐hold strategy for many countries.

19.
Predicting the direction of central banks' target interest rates is important for various market participants. This paper advances procedures for predicting the direction of the federal funds target rate using a dynamic extension of the multinomial logit model. I find that the 6‐month Treasury bill spread relative to the federal funds rate, the unemployment rate and the real GDP growth rate have superior predictive content for the direction of the target a week to several months ahead. When these variables are employed, lagged target changes do not provide additional predictive power. This suggests that the apparent positive serial dependence of the target changes is due to the Fed's systematic response to autocorrelated macroeconomic variables. Copyright © 2010 John Wiley & Sons, Ltd.

20.
Theil's method can be applied to judgemental forecasts to remove systematic errors. However, under conditions of change the method can reduce the accuracy of forecasts by correcting for biases that no longer apply. In these circumstances, it may be worth applying an adaptive correction model which attaches greater weight to more recent observations. This paper reports on the application of Theil's original method and a discounted weighted regression form of Theil's method (DWR) to the judgemental extrapolations made by 100 subjects in an experiment. Extrapolations were made for both stationary and non-stationary series, at low and high noise levels. The results suggest that DWR can lead to significant improvements in accuracy where the underlying time-series signal becomes more discernible over time or where the signal is subject to change. Theil's method appears to be most effective when a series has a high level of noise. However, while Theil's corrections seriously reduced the accuracy of judgemental extrapolations for some series, the DWR method performed well under a wide range of conditions and never significantly degraded the original forecasts. © 1997 by John Wiley & Sons, Ltd.
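Theil's correction regresses outcomes on judgemental forecasts and uses the fitted line to debias future forecasts; the discounted (DWR) variant simply downweights older observations so that a bias which has recently changed is tracked rather than averaged away. A minimal sketch (the exponential-discount parameterization is our illustrative choice):

```python
def discounted_theil_correction(actual, forecast, discount=0.9):
    """Theil-style bias correction with discounted weights: fit
    actual_t = a + b * forecast_t by weighted least squares, weighting
    observation t by discount**(T - 1 - t) so recent periods dominate,
    then return the correction function f -> a + b * f."""
    T = len(actual)
    w = [discount ** (T - 1 - t) for t in range(T)]
    sw = sum(w)
    mx = sum(wi * fi for wi, fi in zip(w, forecast)) / sw
    my = sum(wi * ai for wi, ai in zip(w, actual)) / sw
    sxx = sum(wi * (fi - mx) ** 2 for wi, fi in zip(w, forecast))
    sxy = sum(wi * (fi - mx) * (ai - my) for wi, fi, ai in zip(w, forecast, actual))
    b = sxy / sxx
    a = my - b * mx
    return lambda f: a + b * f
```

With discount = 1 this reduces to Theil's original (unweighted) correction; smaller values give the adaptive behaviour the paper finds safer under signal change.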


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号