1.
To guarantee stable quantile estimation even for noisy data, a novel loss function and novel quantile estimators are developed by introducing the concept of orthogonal loss, which accounts for noise in both the response and the explanatory variables. In particular, the pinball loss used in classical quantile estimators is improved into a novel orthogonal pinball loss (OPL) by replacing the vertical loss with an orthogonal loss. Accordingly, linear quantile regression (QR) and support vector machine quantile regression (SVMQR) can be extended into novel OPL-based QR and OPL-based SVMQR models, respectively. An empirical study on 10 publicly available datasets statistically verifies the superiority of the two OPL-based models over their original forms in terms of prediction accuracy and quantile properties, especially for extreme quantiles. Furthermore, the novel OPL-based SVMQR model, which combines OPL with artificial intelligence (AI), outperforms all benchmark models and can serve as a promising quantile estimator, especially for noisy data.
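The orthogonal-loss idea can be sketched in a few lines: for a linear quantile a + b·x, the classical pinball loss penalizes the vertical residual, while the orthogonal variant penalizes the signed perpendicular distance to the fitted line, treating noise in x and y symmetrically. A minimal illustration (the function names and linear parametrization are mine, not the paper's):

```python
import numpy as np

def pinball_loss(y, y_hat, tau):
    """Classical pinball (check) loss at quantile level tau:
    penalizes the vertical distance between y and the fitted quantile."""
    u = y - y_hat
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

def orthogonal_pinball_loss(y, x, a, b, tau):
    """Illustrative orthogonal pinball loss for a linear quantile a + b*x:
    the vertical residual is replaced by the signed orthogonal distance
    to the fitted line, so noise in x is treated like noise in y."""
    u = (y - a - b * x) / np.sqrt(1.0 + b ** 2)  # signed orthogonal residual
    return np.mean(np.maximum(tau * u, (tau - 1) * u))
```

With b = 0 the two losses coincide, which gives a quick sanity check on the implementation.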
2.
Forecasts are pervasive in all areas of business and daily life, so evaluating the accuracy of a forecast matters to both the generators and the consumers of forecasts. There are two aspects to forecast evaluation: (a) measuring the accuracy of past forecasts using summary statistics, and (b) testing the optimality properties of the forecasts through diagnostic tests. On measuring accuracy, this paper illustrates that the summary statistic used should match the loss function that was used to generate the forecast. If there is strong evidence that an asymmetric loss function was used in generating a forecast, then a summary statistic corresponding to that asymmetric loss function should be used to assess its accuracy, instead of the popular root mean square error or mean absolute error. On testing optimality, it is demonstrated how quantile regressions set in the prediction-realization framework of Mincer and Zarnowitz (in J. Mincer (Ed.), Economic Forecasts and Expectations: Analysis of Forecasting Behavior and Performance (pp. 14-20), 1969) can be used to recover the unknown parameter that controls the potentially asymmetric loss function used in generating past forecasts. Finally, the prediction-realization framework is applied to the Federal Reserve's economic growth forecasts and to forecast sharing in a PC manufacturing supply chain. It is found that the Federal Reserve treats overprediction as approximately 1.5 times as costly as underprediction, and that the PC manufacturer weighs positive forecast errors (under-forecasts) about four times as heavily as negative forecast errors (over-forecasts).
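The link between asymmetric loss and quantiles can be checked numerically: under a lin-lin loss where underprediction costs c_under and overprediction costs c_over per unit of error, the forecast that minimizes expected loss is the tau-quantile with tau = c_under / (c_under + c_over). The sketch below uses the 1.5 cost ratio from the Fed example; all names and the simulated distribution are illustrative, not from the paper:

```python
import numpy as np

def asymmetric_linlin_loss(e, c_under=1.0, c_over=1.5):
    """Lin-lin loss: positive errors e = y - f (underprediction) cost
    c_under per unit; negative errors (overprediction) cost c_over."""
    return np.where(e >= 0, c_under * e, -c_over * e)

rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)

# Grid-search the forecast f minimizing expected lin-lin loss;
# theory says the minimizer is the tau-quantile, tau = 1/(1+1.5) = 0.4.
grid = np.linspace(-1.0, 1.0, 201)
losses = [asymmetric_linlin_loss(y - f).mean() for f in grid]
f_star = grid[int(np.argmin(losses))]
tau = 1.0 / (1.0 + 1.5)
print(f_star, np.quantile(y, tau))  # both near the 0.4-quantile of N(0,1)
```

The same logic run in reverse is what the quantile-regression recovery does: find the quantile level at which the forecasts look optimal, then back out the implied cost ratio.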
3.
4.
Chia Chun Lo, Konstantinos Skindilias, Andreas Karathanasopoulos 《Journal of Forecasting》2016, 35(1): 54-69
We propose a new methodology for filtering and forecasting the latent variance in a two-factor diffusion process with jumps from a continuous-time perspective. For this purpose we use a continuous-time Markov chain approximation with a finite state space; essentially, we extend Markov chain filters to processes of higher dimension. We assess the forecastability of the models under consideration by measuring the forecast error of the model-expected realized variance, trading in variance swap contracts, producing value-at-risk estimates, and examining sign forecastability. We provide empirical evidence using two sources: S&P 500 index values and the corresponding cumulative risk-neutral expected variance (namely, the VIX index). Joint estimation reveals the market prices of equity and variance risk implied by the two probability measures. A further simulation study shows that the proposed methodology can filter the variance of virtually any type of diffusion process (coupled with a jump process) with a non-analytical density function. Copyright © 2015 John Wiley & Sons, Ltd.
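On a finite state space, a Markov chain filter reduces, per observation, to a predict-correct-renormalize update. The sketch below shows one discrete-time step, a simplification of the continuous-time construction in the paper (names and shapes are mine):

```python
import numpy as np

def markov_chain_filter_step(p, P, likelihood):
    """One filtering step on a finite-state Markov chain (the finite
    state space approximates the latent variance process):
    predict with the transition matrix P, correct by the observation
    likelihood of each state, then renormalize to a distribution."""
    pred = p @ P                # prior over states after one transition
    post = pred * likelihood    # reweight by how well each state explains the data
    return post / post.sum()    # renormalize
```

Iterating this step over the observation sequence yields the filtered distribution of the latent variance at each date.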
5.
In this study, we verify the existence of predictability in the Brazilian equity market. Unlike other studies in the same vein, which evaluate the original series of each stock, we evaluate synthetic series created from linear models of stocks. Following the approach of Burgess (Computational Finance, 1999; 99, 297-312), we use stepwise regression to form a model for each stock. We then use the variance ratio profile together with a Monte Carlo simulation to select models with potential predictability, using data from 1 April 1999 to 30 December 2003. Unlike Burgess's approach, we carry out White's Reality Check (Econometrica, 2000; 68, 1097-1126) to verify the existence of positive returns in the out-of-sample period from 2 January 2004 to 28 August 2007. We use the strategies proposed by Sullivan, Timmermann and White (Journal of Finance, 1999; 54, 1647-1691) and Hsu and Kuan (Journal of Financial Econometrics, 2005; 3, 606-628), amounting to 26,410 simulated strategies. Finally, using the bootstrap methodology with 1000 simulations, we find strong evidence of predictability in the models, even after including transaction costs. Copyright © 2015 John Wiley & Sons, Ltd.
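The statistic behind the variance ratio profile is simple to compute: for a random walk the variance of q-period returns is q times the one-period variance, so the ratio stays near 1 at every horizon, and systematic deviations flag potential predictability. A minimal numpy version (the estimator has several variants in the literature; this overlapping-sum form is one plausible choice):

```python
import numpy as np

def variance_ratio(returns, q):
    """Variance ratio at horizon q: variance of overlapping q-period
    return sums divided by q times the one-period variance.
    Near 1 for a random walk; <1 suggests mean reversion, >1 momentum."""
    r = np.asarray(returns, dtype=float)
    r = r - r.mean()
    rq = np.convolve(r, np.ones(q), mode="valid")  # overlapping q-period sums
    return rq.var() / (q * r.var())
```

Computing this for q = 2, 3, ... gives the "profile" whose shape is compared against the Monte Carlo distribution under the random-walk null.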
6.
An econometric model for the exchange rate based on the behavior of dynamic international asset allocation is considered. A capital movement intensity index is constructed from the adjustment of a fully hedged international portfolio. Including this index as an additional explanatory variable helps to explain exchange rate fluctuations and yields better predictions than the competing random walk model. Supporting empirical evidence is found in the Germany-USA, Japan-USA, Singapore-USA and Taiwan-USA exchange markets. Copyright © 2006 John Wiley & Sons, Ltd.
7.
Value-at-Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts, so every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE-50 index, at the 95% and 99% levels. We employ a two-stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive these tests, we perform a second-stage filtering of the surviving models using subjective loss functions. This two-stage procedure proves useful in choosing a VaR model, although it addresses the problem only incompletely. These case studies offer some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR. Copyright © 2003 John Wiley & Sons, Ltd.
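A common choice for a first-stage statistical accuracy test is Kupiec's unconditional coverage likelihood ratio, offered here as a plausible example since the abstract does not name the specific tests used:

```python
import numpy as np

def kupiec_lr_statistic(violations, n, p):
    """Kupiec unconditional-coverage LR statistic for a VaR backtest:
    compares the observed violation rate against the nominal rate p.
    Asymptotically chi-squared with 1 df under correct coverage, so
    values above ~3.84 reject at the 5% level. Requires 0 < violations < n."""
    x = violations
    phat = x / n
    loglik_null = (n - x) * np.log(1 - p) + x * np.log(p)
    loglik_alt = (n - x) * np.log(1 - phat) + x * np.log(phat)
    return -2.0 * (loglik_null - loglik_alt)
```

For example, 10 violations in 200 days at the 95% level matches the nominal 5% rate exactly (statistic 0), while 25 violations yields a statistic far above the 3.84 critical value.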
8.
Bayesian methods for assessing the accuracy of dynamic financial value-at-risk (VaR) forecasts have not been considered in the literature. Such methods are proposed in this paper. Specifically, Bayes factor analogues of popular frequentist tests for independence of violations from, and for correct coverage of, a time series of dynamic quantile forecasts are developed. To evaluate the relevant marginal likelihoods, analytic integration methods are utilized when possible; otherwise multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for VaR in several financial market data series. Copyright © 2016 John Wiley & Sons, Ltd.
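The "usual Bayesian interval estimate for a proportion" can be sketched with a Beta-Binomial model for VaR violations: with a Beta(a, b) prior and x violations in n days, the posterior is Beta(a + x, b + n - x). The Monte Carlo posterior sampling below is a simple stand-in for analytic Beta quantiles, and the function name is mine:

```python
import numpy as np

def var_violation_credible_interval(violations, n, a=1.0, b=1.0,
                                    level=0.95, draws=200_000, seed=0):
    """Equal-tailed Bayesian credible interval for the violation
    proportion of a VaR forecast, via draws from the Beta posterior."""
    rng = np.random.default_rng(seed)
    post = rng.beta(a + violations, b + n - violations, size=draws)
    lo, hi = np.quantile(post, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

# 14 violations in 250 trading days for a 95% VaR (nominal rate 0.05):
lo, hi = var_violation_credible_interval(14, 250)
print(lo, hi)  # if 0.05 lies inside the interval, coverage looks adequate
```

The coverage question then becomes whether the nominal rate (0.05 here) falls inside the posterior interval.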
9.
Koen Berteloot, Wouter Verbeke, Gerd Castermans, Tony Van Gestel, David Martens, Bart Baesens 《Journal of Forecasting》2013, 32(7): 654-672
Modeling credit rating migrations conditional on macroeconomic conditions allows financial institutions to assess, analyze, and manage the risk of a credit portfolio. Existing methodologies for modeling credit rating migrations conditional on the business cycle suffer from poor accuracy, poor readability, or model inconsistencies. The modeling methodology proposed in this paper extends ordinal logistic regression to estimate the complete migration matrix, including default rates, as a function of rating dynamics and macroeconomic indicators. Derivations of the gradient and Hessian enable efficient optimization with the Levenberg-Marquardt algorithm. The proposed methodology is applied to corporate rating migrations using historical data from 1984 to 2011. It is shown that the resulting model captures the cyclical behavior of credit rating migrations and default rates, and approximates historical migration levels with good precision. The model therefore permits analysis of the impact of economic downturn conditions on a credit portfolio. Copyright © 2013 John Wiley & Sons, Ltd.
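The cumulative-logit structure underlying ordinal logistic regression maps a latent score to a full row of the migration matrix. A minimal sketch of that mapping (the score and thresholds below are placeholders for the paper's rating-dynamics and macroeconomic covariates, not its actual specification):

```python
import numpy as np

def ordinal_migration_probs(score, thresholds):
    """Cumulative-logit sketch: given a latent creditworthiness score
    (e.g. a linear function of rating dynamics and macro indicators)
    and ordered thresholds separating rating classes, return the
    probability of landing in each class. Differencing the cumulative
    probabilities P(class <= k) yields per-class probabilities."""
    z = 1.0 / (1.0 + np.exp(-(np.asarray(thresholds) - score)))  # P(class <= k)
    cum = np.concatenate(([0.0], z, [1.0]))
    return np.diff(cum)  # one migration-matrix row; sums to 1
```

Fitting the thresholds and the score's coefficients jointly over all rating classes, conditional on macro indicators, is what produces a migration matrix that moves with the business cycle.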