Similar Literature
20 similar documents found (search time: 46 ms)
1.
This paper proposes a strategy to detect the presence of common serial correlation in large‐dimensional systems. We show that partial least squares can be used to consistently recover the common autocorrelation space. Moreover, a Monte Carlo study reveals that univariate autocorrelation tests on the factors obtained by partial least squares outperform traditional tests based on canonical correlation analysis. Some empirical applications are presented to illustrate concepts and methods. Copyright © 2010 John Wiley & Sons, Ltd.

2.
This paper is concerned with how canonical correlation can be used to identify the structure of a linear multivariate time series model. We describe briefly methods that use the canonical correlation technique and present simulation results in order to compare and evaluate the performance of these methods. The methods are also applied to a well‐known multivariate time series. Copyright © 2000 John Wiley & Sons, Ltd.

3.
In multivariate volatility prediction, identifying the optimal forecasting model is not always a feasible task. This is mainly due to the curse of dimensionality typically affecting multivariate volatility models. In practice only a subset of the potentially available models can be effectively estimated, after imposing severe constraints on the dynamic structure of the volatility process. It follows that in most applications the working forecasting model can be severely misspecified. This situation leaves scope for the application of forecast combination strategies as a tool for improving the predictive accuracy. The aim of the paper is to propose some alternative combination strategies and compare their performances in forecasting high‐dimensional multivariate conditional covariance matrices for a portfolio of US stock returns. In particular, we will consider the combination of volatility predictions generated by multivariate GARCH models, based on daily returns, and dynamic models for realized covariance matrices, built from intra‐daily returns. Copyright © 2015 John Wiley & Sons, Ltd.

4.
Long‐range persistence in volatility is widely modelled and forecast in terms of the so‐called fractional integrated models. These models are mostly applied in the univariate framework, since the extension to the multivariate context of assets portfolios, while relevant, is not straightforward. We discuss and apply a procedure which is able to forecast the multivariate volatility of a portfolio including assets with long memory. The main advantage of this model is that it is feasible enough to be applied on large‐scale portfolios, solving the problem of dealing with extremely complex likelihood functions which typically arises in this context. An application of this procedure to a portfolio of five daily exchange rate series shows that the out‐of‐sample forecasts for the multivariate volatility are improved under several loss functions when the long‐range dependence property of the portfolio assets is explicitly accounted for. Copyright © 2006 John Wiley & Sons, Ltd.

5.
Value‐at‐risk (VaR) forecasting generally relies on a parametric density function of portfolio returns that ignores higher moments or assumes them constant. In this paper, we propose a simple approach to forecasting portfolio VaR. We employ the Gram‐Charlier expansion (GCE) augmenting the standard normal distribution with the first four moments, which are allowed to vary over time. In an extensive empirical study, we compare the GCE approach to other models of VaR forecasting and conclude that it provides accurate and robust estimates of the realized VaR. In spite of its simplicity, on our dataset GCE outperforms other estimates that are generated by both constant and time‐varying higher‐moments models. Copyright © 2009 John Wiley & Sons, Ltd.
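A minimal sketch of the quantile step behind such a GCE-based VaR (the function names, the closed-form CDF route, and the bisection solver are illustrative choices, not taken from the paper; in the time-varying setup the per-period skewness and kurtosis would be supplied by the filtering model):

```python
import math

def gce_cdf(z, skew, kurt):
    """CDF of the Gram-Charlier expanded standard normal density
    f(z) = phi(z) * [1 + skew/6 * H3(z) + (kurt-3)/24 * H4(z)],
    integrated in closed form via d/dz[-phi(z)*H_{n-1}(z)] = phi(z)*H_n(z)."""
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    h2 = z * z - 1        # Hermite polynomial H2
    h3 = z ** 3 - 3 * z   # Hermite polynomial H3
    return Phi - phi * (skew / 6 * h2 + (kurt - 3) / 24 * h3)

def gce_quantile(alpha, skew, kurt, lo=-10.0, hi=10.0):
    """Solve gce_cdf(z) = alpha by bisection (valid when the expansion
    yields a proper, monotone CDF for the given skew/kurt values)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if gce_cdf(mid, skew, kurt) < alpha:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def var_forecast(mu, sigma, alpha, skew, kurt):
    """One-step VaR, reported as a positive loss, for return = mu + sigma*Z."""
    return -(mu + sigma * gce_quantile(alpha, skew, kurt))
```

With zero skewness and kurtosis 3 the expansion collapses to the Gaussian case, so the 5% quantile reduces to the familiar -1.645; negative skewness fattens the left tail and pushes the VaR quantile further out.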

6.
Artificial neural network modelling has recently attracted much attention as a new technique for estimation and forecasting in economics and finance. The chief advantages of this new approach are that such models can usually find a solution for very complex problems, and that they are free from the assumption of linearity that is often adopted to make the traditional methods tractable. In this paper we compare the performance of Back‐Propagation Artificial Neural Network (BPN) models with the traditional econometric approaches to forecasting the inflation rate. Of the traditional econometric models we use a structural reduced‐form model, an ARIMA model, a vector autoregressive model, and a Bayesian vector autoregression model. We compare each econometric model with a hybrid BPN model which uses the same set of variables. Dynamic forecasts are compared for three different horizons: one, three and twelve months ahead. Root mean squared errors and mean absolute errors are used to compare quality of forecasts. The results show the hybrid BPN models are able to forecast as well as all the traditional econometric methods, and to outperform them in some cases. Copyright © 2000 John Wiley & Sons, Ltd.

7.
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio‐economic sectors. Freeland and McCabe (2004) introduced a Gaussian type stationary correlation model‐based forecasting which appears to work well for the stationary time series of low counts. In practice, however, it may happen that the time series of counts will be non‐stationary and also the series may contain over‐dispersed counts. To develop the forecasting functions for this type of non‐stationary over‐dispersed data, the paper provides an extension of the stationary correlation models for Poisson counts to the non‐stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas the existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.

8.
Reliable correlation forecasts are of paramount importance in modern risk management systems. A plethora of correlation forecasting models have been proposed in the open literature, yet their impact on the accuracy of value‐at‐risk calculations has not been explicitly investigated. In this paper, traditional and modern correlation forecasting techniques are compared using standard statistical and risk management loss functions. Three portfolios consisting of stocks, bonds and currencies are considered. We find that GARCH models can better account for the correlation's dynamic structure in the stock and bond portfolios. On the other hand, simpler specifications such as the historical mean model or simple moving average models are better suited for the currency portfolio. Copyright © 2007 John Wiley & Sons, Ltd.

9.
We propose a new approach to the estimation of the portfolio Value‐at‐Risk. Based on the assumption that the same macroeconomic factors affect returns of all assets in a portfolio, this methodology allows the generation of the sequence of hypothetical future equilibrium portfolio returns given the historical values of the underlying macroeconomic factors and the asset betas with respect to these factors. Value‐at‐Risk is then found as an appropriate percentile of the corresponding hypothetical distribution of the portfolio profits and losses. The backtesting results for the six Fama–French benchmark portfolios and the S&P500 index show that this approach yields reasonably accurate estimates of the portfolio Value‐at‐Risk. Copyright © 2008 John Wiley & Sons, Ltd.
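The simulate-and-take-a-percentile step described here can be sketched as follows (the signature and the simple empirical-quantile convention are illustrative assumptions, not the paper's exact implementation):

```python
def factor_var(betas, factor_history, alpha=0.01):
    """Hypothetical equilibrium portfolio returns implied by historical
    factor realizations and the portfolio's factor betas; VaR is the
    corresponding empirical lower-tail percentile, reported as a loss."""
    hypothetical = [sum(b * f for b, f in zip(betas, factors))
                    for factors in factor_history]
    ordered = sorted(hypothetical)
    # index of the alpha-percentile observation (simple empirical quantile)
    k = max(0, int(alpha * len(ordered)) - 1)
    return -ordered[k]
```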

10.
This paper illustrates the importance of density forecasting and forecast evaluation in portfolio decision making. The decision‐making environment is fully described for an investor seeking to optimally allocate her portfolio between long and short Treasury bills, over investment horizons of up to 2 years. We examine the impact of parameter uncertainty and predictability in bond returns on the investor's allocation and we describe how the forecasts are computed and used in this context. Both statistical and decision‐based criteria are used to assess the predictability of returns. Our results show sensitivity to the evaluation criterion used and, in the context of investment decision making under an economic value criterion, we find some potential gain for the investor from assuming predictability. Copyright © 2015 John Wiley & Sons, Ltd.

11.
We propose an economically motivated forecast combination strategy in which model weights are related to portfolio returns obtained by a given forecast model. An empirical application based on an optimal mean–variance bond portfolio problem is used to highlight the advantages of the proposed approach with respect to combination methods based on statistical measures of forecast accuracy. We compute average net excess returns, standard deviation, and the Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return‐based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return‐based approaches that dynamically select only the model with highest weight each period and discard all other models delivered even better results, evidencing not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect best‐performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.
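One way such return-based weights might be formed is sketched below; the exponential tilting and the `eta` parameter are illustrative stand-ins, not the paper's exact scheme, while the trimmed variant keeps only the top model each period, as described:

```python
import math

def return_based_weights(past_returns, eta=10.0):
    """Combination weights that grow with each model's realized portfolio
    return: a softmax over past returns (illustrative choice of mapping)."""
    expd = [math.exp(eta * r) for r in past_returns]
    total = sum(expd)
    return [e / total for e in expd]

def trimmed_selection(past_returns):
    """'Trimmed' variant: put all weight on the best-performing model."""
    best = max(range(len(past_returns)), key=lambda i: past_returns[i])
    return [1.0 if i == best else 0.0 for i in range(len(past_returns))]
```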

12.
The variance of a portfolio can be forecast using a single index model or the covariance matrix of the portfolio. Using univariate and multivariate conditional volatility models, this paper evaluates the performance of the single index and portfolio models in forecasting value‐at‐risk (VaR) thresholds of a portfolio. Likelihood ratio tests of unconditional coverage, independence and conditional coverage of the VaR forecasts suggest that the single‐index model leads to excessive and often serially dependent violations, while the portfolio model leads to too few violations. The single‐index model also leads to lower daily Basel Accord capital charges. The univariate models which display correct conditional coverage lead to higher capital charges than models which lead to too many violations. Overall, the Basel Accord penalties appear to be too lenient and favour models which have too many violations. Copyright © 2008 John Wiley & Sons, Ltd.
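The unconditional-coverage component of these likelihood-ratio tests (Kupiec's test; the independence and conditional-coverage variants extend it against a first-order Markov alternative) can be sketched as follows, assuming `hits` is the 0/1 VaR-violation indicator series:

```python
import math

def lr_unconditional(hits, p):
    """Kupiec's unconditional-coverage LR statistic: tests whether the
    observed violation rate equals the nominal coverage p. Under the null
    it is asymptotically chi-squared with one degree of freedom."""
    n1 = sum(hits)           # number of violations
    n0 = len(hits) - n1      # number of non-violations
    pi = n1 / len(hits)      # observed violation rate
    def loglik(q):
        # Bernoulli log-likelihood; guard the 0*log(0) corner cases
        return ((n0 * math.log(1 - q) if n0 else 0.0)
                + (n1 * math.log(q) if n1 else 0.0))
    return -2.0 * (loglik(p) - loglik(pi))
```

If the empirical rate matches the nominal p the statistic is zero; values above the 3.84 chi-squared critical value reject correct unconditional coverage at the 5% level.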

13.
This paper discusses the asymptotic efficiency of estimators for optimal portfolios when returns are vector‐valued non‐Gaussian stationary processes. We give the asymptotic distribution of portfolio estimators ĝ for non‐Gaussian dependent return processes. Next we address the problem of asymptotic efficiency for the class of estimators ĝ. First, it is shown that there are some cases when the asymptotic variance of ĝ under non‐Gaussianity can be smaller than that under Gaussianity. The result shows that non‐Gaussianity of the returns does not always affect the efficiency badly. Second, we give a necessary and sufficient condition for ĝ to be asymptotically efficient when the return process is Gaussian, which shows that ĝ is not asymptotically efficient generally. From this point of view we propose to use maximum likelihood type estimators for g, which are asymptotically efficient. Furthermore, we investigate the problem of predicting the one‐step‐ahead optimal portfolio return by the estimated portfolio based on ĝ and examine the mean squared prediction error. Copyright © 2008 John Wiley & Sons, Ltd.

14.
We use an investment strategy based on firm‐level capital structures. Investing in low‐leverage firms yields abnormal returns of 4.43% per annum. If an investor holds a portfolio of low‐leverage and low‐market‐to‐book‐ratio firms, abnormal returns increase to 16.18% per annum. A portfolio of low leverage and low market risk yields abnormal returns of 6.67% and a portfolio of small firms with low leverage earns 5.37% per annum. We use the Fama–MacBeth (1973) methodology with modifications. We confirm that portfolios based on low leverage earn higher returns in longer investment horizons. Our results are robust to other risk factors and the risk class of the firm. Copyright © 2011 John Wiley & Sons, Ltd.

15.
In the last decade, neural networks have evolved from an esoteric instrument in academic research into a rather common tool assisting auditors, investors, portfolio managers and investment advisors in making critical financial decisions. It is apparent that a better understanding of the network's performance and limitations would help both researchers and practitioners in analysing real‐world problems. Unlike many existing studies which focus on a single type of network architecture, this study evaluates and compares the performance of models based on two competing neural network architectures, the multi‐layered feedforward neural network (MLFN) and general regression neural network (GRNN). Our empirical evaluation measures the network models' strength on the prediction of currency exchange correlation with respect to a variety of statistical tests including RMSE, MAE, U statistic, Theil's decomposition test, Henriksson–Merton market timing test and Fair–Shiller informational content test. Results of experiments suggest that the selection of proper architectural design may contribute directly to the success in neural network forecasting. In addition, market timing tests indicate that both MLFN and GRNN models have economically significant values in predicting the exchange rate correlation. On the other hand, informational content tests discover that the neural network models based on different architectures capture useful information not found in each other and the information sets captured by the two network designs are independent of one another. An auxiliary experiment is developed and confirms the possible synergetic effect from combining forecasts made by the two different network architectures and from incorporating information from an implied correlation model into the neural network forecasts. Implied correlation and random walk models are also included in our empirical experiment for benchmark comparison. Copyright © 2005 John Wiley & Sons, Ltd.

16.
Forecasting with many predictors provides the opportunity to exploit a much richer base of information. However, macroeconomic time series are typically rather short, raising problems for conventional econometric models. This paper explores the use of Bayesian additive regression trees (BART) from the machine learning literature to forecast macroeconomic time series in a predictor‐rich environment. The interest lies in forecasting nine key macroeconomic variables of interest for government budget planning, central bank policy making and business decisions. It turns out that BART is a valuable addition to existing methods for handling high dimensional data sets in a macroeconomic context.

17.
It has been widely accepted that many financial and economic variables are non‐linear, and neural networks can model flexible linear or non‐linear relationships among variables. The present paper deals with an important issue: Can the many studies in the finance literature evidencing predictability of stock returns by means of linear regression be improved by a neural network? We show that the predictive accuracy can be improved by a neural network, and the results largely hold out‐of‐sample. Both the neural network and linear forecasts show significant market timing ability. While the switching portfolio based on the linear forecasts outperforms the buy‐and‐hold market portfolio under all three transaction cost scenarios, the switching portfolio based on the neural network forecasts beats the market only if there is no transaction cost. Copyright © 1999 John Wiley & Sons, Ltd.

18.
This paper proposes the use of the bias‐corrected bootstrap for interval forecasting of an autoregressive time series with an arbitrary number of deterministic components. We use the bias‐corrected bootstrap based on two alternative bias‐correction methods: the bootstrap and an analytic formula based on asymptotic expansion. We also propose a new stationarity‐correction method, based on stable spectral factorization, as an alternative to Kilian's method exclusively used in past studies. A Monte Carlo experiment is conducted to compare small‐sample properties of prediction intervals. The results show that the bias‐corrected bootstrap prediction intervals proposed in this paper exhibit desirable small‐sample properties. It is also found that the bootstrap bias‐corrected prediction intervals based on stable spectral factorization are tighter and more stable than those based on Kilian's stationarity‐correction. The proposed methods are applied to interval forecasting for the number of tourist arrivals in Hong Kong. Copyright © 2010 John Wiley & Sons, Ltd.
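The analytic, asymptotic-expansion flavour of bias correction mentioned above can be illustrated for the AR(1) case using Kendall's first-order bias formula (the crude clipping below is only a stand-in for a stationarity correction; the paper's stable-spectral-factorization method is considerably more sophisticated and handles general AR orders):

```python
def bias_corrected_ar1(phi_hat, n):
    """Analytic bias correction for the least-squares estimate of an AR(1)
    coefficient: E[phi_hat] ~ phi - (1 + 3*phi)/n to first order, so the
    estimated bias is added back to phi_hat."""
    phi_c = phi_hat + (1.0 + 3.0 * phi_hat) / n
    # naive stationarity correction: pull the corrected estimate back
    # inside the unit interval if the adjustment overshot the boundary
    return max(min(phi_c, 0.999), -0.999)
```

In a bootstrap prediction-interval routine this correction would be applied both to the original estimate and to each bootstrap replication before simulating forecast paths.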

19.
The use of large datasets for macroeconomic forecasting has received a great deal of interest recently. Boosting is one possible method of using high‐dimensional data for this purpose. It is a stage‐wise additive modelling procedure, which, in a linear specification, becomes a variable selection device that iteratively adds the predictors with the largest contribution to the fit. Using data for the United States, the euro area and Germany, we assess the performance of boosting when forecasting a wide range of macroeconomic variables. Moreover, we analyse to what extent its forecasting accuracy depends on the method used for determining its key regularization parameter: the number of iterations. We find that boosting mostly outperforms the autoregressive benchmark, and that K‐fold cross‐validation works much better as stopping criterion than the commonly used information criteria. Copyright © 2014 John Wiley & Sons, Ltd.
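The stage-wise variable-selection mechanics described here correspond to componentwise L2 boosting, sketched below with a fixed iteration count for simplicity (the abstract's point is precisely that this stopping parameter is better chosen by K-fold cross-validation than by information criteria):

```python
def l2_boost(X, y, n_iter=100, nu=0.1):
    """Componentwise L2 boosting: at each stage, fit every predictor to the
    current residuals by least squares, keep only the predictor that reduces
    the sum of squares most, and add a shrunken (nu) copy of that fit.
    X is a list of predictor columns; returns per-predictor coefficients."""
    coefs = [0.0] * len(X)
    resid = list(y)
    for _ in range(n_iter):
        best_j, best_b, best_gain = 0, 0.0, -1.0
        for j, x in enumerate(X):
            sxx = sum(v * v for v in x)
            b = sum(v * r for v, r in zip(x, resid)) / sxx
            gain = b * b * sxx          # SSE reduction from this update
            if gain > best_gain:
                best_j, best_b, best_gain = j, b, gain
        coefs[best_j] += nu * best_b
        resid = [r - nu * best_b * v for r, v in zip(resid, X[best_j])]
    return coefs
```

Because only one predictor is updated per stage, irrelevant columns keep a zero coefficient, which is what makes the linear specification act as a variable selection device.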

20.
In this paper an investigation is made of the properties and use of two aggregate measures of forecast bias and accuracy. These are metrics used in business to calculate aggregate forecasting performance for a family (group) of products. We find that the aggregate measures are not particularly informative if some of the one‐step‐ahead forecasts are biased. This is likely to be the case in practice if frequently employed forecasting methods are used to generate a large number of individual forecasts. In the paper, examples are constructed to illustrate some potential problems in the use of the metrics. We propose a simple graphical display of forecast bias and accuracy, including boxplots of individual forecast performance measures, to supplement the information yielded by the aggregate accuracy measures. Though simple, this display can reveal forecast deterioration that may be masked by one or both of the aggregate metrics. The procedures are illustrated with data representing sales of food items. Copyright © 2005 John Wiley & Sons, Ltd.
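The masking effect described above is easy to reproduce: two products with equal and opposite biases yield a perfect-looking aggregate while every individual forecast is biased (the helper names below are illustrative; `item_biases` is the per-item quantity a boxplot would display):

```python
def aggregate_bias(errors):
    """Family-level bias: the sum of signed forecast errors across all
    items, as used in aggregate business forecasting metrics."""
    return sum(e for item in errors for e in item)

def item_biases(errors):
    """Per-item mean signed error, the quantity a boxplot would display."""
    return [sum(item) / len(item) for item in errors]
```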
