20 similar records found (search time: 0 ms)
1.
The P/E ratio is often used as a metric to compare individual stocks and the market as a whole relative to historical valuations. We examine the factors that affect changes in the inverse of the P/E ratio (E/P) over time in the broad market (S&P 500 Index). Our model includes variables that measure investor beliefs and changes in tax rates and shows that these variables are important factors affecting the P/E ratio. We extend prior work by correcting for the presence of a long‐run relation between variables included in the model. As frequently conjectured, changes in the P/E ratio have predictive power. Our model explains a large portion of the variation in E/P and accurately predicts the future direction of E/P, particularly when predicted changes in E/P are large or provide a consistent signal over more than one quarter. Copyright © 2008 John Wiley & Sons, Ltd.
2.
Haixiang Yao, Shenghao Xia, Hao Liu, Journal of Forecasting, 2024, 43(6): 1770–1794
This paper proposes a cross-section long short-term memory (CS-LSTM) factor model to explore the possibility of estimating expected returns in the Chinese stock market. In contrast to previous machine-learning-based asset pricing models that make predictions directly on equity returns, CS-LSTM estimates are based on predictions of slope terms from Fama–MacBeth cross-section regressions using 16 stock characteristics as factor loadings. In line with previous studies in the context of the Chinese market, we find illiquidity and short-term momentum to be the most important factors in describing asset returns. By using 274 value-weighted portfolios as test assets, we systematically compare the performances of CS-LSTM and three other candidate models. Our CS-LSTM model consistently delivers better performance than the candidate models and beats the market at all different levels of transaction costs. In addition, we observe that smaller-cap assets are favored by the model. By repeating the empirical analysis based on the top 70% of stocks, our CS-LSTM model remains robust and consistently provides significant market-beating performance. Our findings from the CS-LSTM model have practical implications for the future development of the Chinese stock market and other emerging markets.
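The cross-section regression step that CS-LSTM builds on can be illustrated directly. Below is a minimal sketch (not the authors' code) of period-by-period Fama–MacBeth slope estimation, including an intercept, on synthetic data; the function name and the data-generating process are illustrative assumptions.

```python
import numpy as np

def fama_macbeth_slopes(returns, characteristics):
    """Period-by-period cross-section regressions of returns on stock
    characteristics; returns the T x (K+1) matrix of intercept and slopes.

    returns:          T x N array (N stocks over T periods)
    characteristics:  T x N x K array of lagged factor loadings
    """
    T, N, K = characteristics.shape
    slopes = np.empty((T, K + 1))
    for t in range(T):
        X = np.column_stack([np.ones(N), characteristics[t]])  # intercept + K loadings
        beta, *_ = np.linalg.lstsq(X, returns[t], rcond=None)
        slopes[t] = beta
    return slopes

# Synthetic panel with true slopes (0.5, -0.2); the time-averaged
# cross-section slopes should recover them up to sampling noise.
rng = np.random.default_rng(0)
T, N, K = 120, 200, 2
chars = rng.normal(size=(T, N, K))
rets = 0.01 + chars @ np.array([0.5, -0.2]) + 0.05 * rng.normal(size=(T, N))
slopes = fama_macbeth_slopes(rets, chars)
```

A learning-based variant like CS-LSTM would then forecast the next period's slope vector from the history in `slopes` rather than using its sample mean.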
3.
Chang‐Cheng Changchien, Chu‐Hsiung Lin, Hsien‐Chueh Peter Yang, Journal of Forecasting, 2012, 31(8): 706–720
We propose a new approach to estimating value‐at‐risk and evaluate it on six international stock price indices and three hypothetical portfolios formed from these indices, observed daily from 1 January 1996 to 31 December 2006. Confirmed by the failure rates and backtesting developed by Kupiec (Technique for verifying the accuracy of risk measurement models. Journal of Derivatives 1995; 3: 73–84) and Christoffersen (Evaluating interval forecasts. International Economic Review 1998; 39: 841–862), the empirical results show that our method can considerably improve the estimation accuracy of value‐at‐risk. The study thus establishes an effective alternative model for risk prediction and hence also provides a reliable tool for portfolio management. Copyright © 2011 John Wiley & Sons, Ltd.
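The Kupiec failure-rate test cited above has a compact closed form. A sketch, assuming a binomial model for VaR exceptions (the function name is mine, not from the paper):

```python
from math import log

def kupiec_pof(n_obs, n_exceptions, coverage):
    """Kupiec (1995) proportion-of-failures likelihood-ratio statistic.
    Under correct VaR coverage, LR_uc is asymptotically chi2(1), so
    values above 3.84 reject the model at the 5% level."""
    n1 = n_exceptions
    n0 = n_obs - n1
    pi = n1 / n_obs                      # observed failure rate
    def loglik(q):                       # binomial log-likelihood
        return n0 * log(1 - q) + (n1 * log(q) if n1 else 0.0)
    return -2.0 * (loglik(coverage) - loglik(pi))

# 25 exceptions in 1000 days is too many for a 99% VaR model:
print(kupiec_pof(1000, 25, 0.01))        # well above the 3.84 cutoff
```

Christoffersen's interval test extends this by additionally testing whether exceptions cluster in time rather than arriving independently.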
4.
Changes in mortality rates have an impact on the life insurance industry, the financial sector (as a significant proportion of the financial markets is driven by pension funds), governmental agencies, and decision makers and policymakers. Thus the pricing of financial, pension and insurance products that are contingent upon survival or death, which depends on the accuracy of central mortality rates, is of key importance. Recently, a temperature‐related mortality (TRM) model was proposed by Seklecka et al. (Journal of Forecasting, 2017, 36(7), 824–841), and it has shown evidence of outperformance compared with the Lee and Carter (Journal of the American Statistical Association, 1992, 87, 659–671) model and several of its extensions when mortality‐experience data from the UK are used. There is a need for awareness, when fitting the TRM model, of model risk when assessing longevity‐related liabilities, especially when pricing long‐term annuities and pensions. In this paper, the impact of uncertainty on the various parameters involved in the model is examined. We demonstrate a number of ways to quantify model risk in the estimation of the temperature‐related parameters, the choice of the forecasting methodology, the structures of actuarial products chosen (e.g., annuity, endowment and life insurance), and the actuarial reserve. Finally, several tables and figures illustrate the main findings of this paper.
5.
Zinan Hu, Ruicheng Yang, Sumuya Borjigin, Journal of Forecasting, 2024, 43(7): 2607–2634
This study develops a multi-stage stochastic model to forecast the issuance of green bonds using the Filtered Historical Simulation (FHS) method to identify the most cost-effective conditions for issuing these bonds amid various risk factors. Drawing on historical yield data and financial metrics of corporate green bonds from December 2014 to June 2023, the model considers fluctuating elements such as risk probabilities, financial risks in worst-case scenarios, and liquidity risks at upcoming issuance moments. Our findings reveal the model's effectiveness in pinpointing the lowest possible costs of issuing new green bond portfolios in the future, while also addressing expected financial risk, risk occurrence probability, and liquidity issues. The results provide issuers with the insights needed to accurately time the market, tailor bond maturities according to a corporation's future risk profile, and enhance liquidity management. Notably, our model indicates that refining the estimated probability of future risk occurrences can lead to significant savings in green bond issuance costs. This approach allows for adaptable bond issuance strategies, addresses inherent debt, and enables detailed risk management, offering substantial benefits for green enterprises navigating the complexities of future financial landscapes.
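The core Filtered Historical Simulation step named in the abstract can be sketched in a few lines: standardize past returns by their model volatilities, bootstrap the standardized residuals, and rescale by the next-period volatility forecast. This is a generic one-day VaR illustration, not the authors' multi-stage model; the function name and parameters are assumptions.

```python
import numpy as np

def fhs_var(returns, sigma, sigma_next, alpha=0.05, n_sims=10_000, seed=0):
    """One-day VaR by Filtered Historical Simulation: filter returns into
    standardized residuals, bootstrap them, and rescale by the volatility
    forecast for the next period."""
    rng = np.random.default_rng(seed)
    z = returns / sigma                          # filtered residuals
    draws = rng.choice(z, size=n_sims) * sigma_next
    return -np.quantile(draws, alpha)            # VaR reported as a positive loss

# Illustration with i.i.d. normal returns and constant volatility:
rng = np.random.default_rng(1)
vol = 0.02
rets = rng.normal(scale=vol, size=2000)
var_1d = fhs_var(rets, np.full(2000, vol), sigma_next=vol)
```

In realistic use the `sigma` path would come from a fitted volatility model (e.g., GARCH), so the bootstrap preserves the empirical shape of the residuals while the forecast supplies current market conditions.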
6.
In recent years an impressive array of publications has appeared claiming considerable successes of neural networks in modelling financial data, but sceptical practitioners and statisticians are still raising the question of whether neural networks really are ‘a major breakthrough or just a passing fad’. A major reason for this is the lack of procedures for performing tests for misspecified models, and tests of statistical significance for the various parameters that have been estimated, which makes it difficult to assess the model's significance and the possibility that any short‐term successes that are reported might be due to ‘data mining’. In this paper we describe a methodology for neural model identification which facilitates hypothesis testing at two levels: model adequacy and variable significance. The methodology includes a model selection procedure to produce consistent estimators, a variable selection procedure based on statistical significance and a model adequacy procedure based on residuals analysis. We propose a novel, computationally efficient scheme for estimating sampling variability of arbitrarily complex statistics for neural models and apply it to variable selection. The approach is based on sampling from the asymptotic distribution of the neural model's parameters (‘parametric sampling’). Controlled simulations are used for the analysis and evaluation of our model identification methodology. A case study in tactical asset allocation is used to demonstrate how the methodology can be applied to real‐life problems in a way analogous to stepwise forward regression analysis. Neural models are contrasted to multiple linear regression. The results indicate the presence of non‐linear relationships in modelling the equity premium. Copyright © 1999 John Wiley & Sons, Ltd.
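The 'parametric sampling' idea described above, drawing from the asymptotic distribution of the fitted parameters rather than refitting the model, can be sketched generically (this is my illustration, not the authors' implementation):

```python
import numpy as np

def parametric_sampling(theta_hat, cov_hat, statistic, n_draws=2000, seed=0):
    """Estimate the sampling distribution of an arbitrary statistic of a
    fitted model by drawing parameter vectors from the asymptotic
    distribution N(theta_hat, cov_hat), avoiding any model refitting."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(theta_hat, cov_hat, size=n_draws)
    return np.array([statistic(th) for th in draws])

# Sampling variability of a nonlinear function of the parameters
# (here a ratio), from which a confidence interval follows directly:
dist = parametric_sampling(np.array([1.0, 2.0]), 0.001 * np.eye(2),
                           lambda th: th[0] / th[1])
ci = np.quantile(dist, [0.025, 0.975])
```

The appeal for neural models is that `statistic` can be arbitrarily complex (a relevance measure for an input variable, say) while each draw costs only one forward evaluation rather than one retraining run.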
7.
Building on an analysis of the risk characteristics of large-scale construction projects in China, this paper constructs a project risk identification framework. Factor analysis identifies six key risk factors for large construction projects: the natural environment, technology, organizational management, resource management, subcontractor management, and environmental protection. The paper further studies the mechanisms through which these key risk factors operate, builds a model of their interactions, and tests it with structural equation modeling.
8.
9.
Claudio Morana, Journal of Forecasting, 2017, 36(8): 919–935
The paper investigates the determinants of the US dollar/euro within the framework of the asset pricing theory of exchange rate determination, which posits that current exchange rate fluctuations are determined by the entire path of current and future revisions in expectations about fundamentals. In this perspective, we innovate by conditioning on Fama–French and Carhart risk factors, which directly measure changing market expectations about the economic outlook, as well as on new financial condition indexes and macroeconomic variables. The macro‐finance augmented econometric model has a remarkable in‐sample and out‐of‐sample predictive ability, largely outperforming a standard autoregressive specification. We also document a stable relationship between the US dollar/euro Carhart momentum conditional correlation (CCW) and the euro area business cycle. CCW signals a progressive weakening in economic conditions since June 2014, consistent with the scattered recovery from the sovereign debt crisis and the new Greek solvency crisis that erupted in late spring/early summer 2015. Copyright © 2016 John Wiley & Sons, Ltd.
10.
Many applications in science involve finding estimates of unobserved variables from observed data, by combining model predictions with observations. Sequential Monte Carlo (SMC) is a well‐established technique for estimating the distribution of unobserved variables that are conditional on current observations. While the SMC is very successful at estimating the first central moments, estimating the extreme quantiles of a distribution via the current SMC methods is computationally very expensive. The purpose of this paper is to develop a new framework using probability distortion. We use an SMC with distorted weights in order to make computationally efficient inferences about tail probabilities of future interest rates using the Cox–Ingersoll–Ross (CIR) model, as well as with an observed yield curve. We show that the proposed method yields acceptable estimates about tail quantiles at a fraction of the computational cost of the full Monte Carlo.
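A plain Monte Carlo baseline for the CIR tail quantiles discussed above can be sketched as follows; this is the brute-force simulation the distorted-weight SMC is designed to beat on computational cost, and the parameter values are illustrative assumptions.

```python
import numpy as np

def simulate_cir(r0, kappa, theta, sigma, dt, n_steps, n_paths, seed=0):
    """Full-truncation Euler scheme for the CIR short rate
    dr = kappa*(theta - r) dt + sigma*sqrt(r) dW."""
    rng = np.random.default_rng(seed)
    r = np.full(n_paths, float(r0))
    for _ in range(n_steps):
        rp = np.maximum(r, 0.0)                  # keep the square root well defined
        dw = rng.normal(scale=np.sqrt(dt), size=n_paths)
        r = r + kappa * (theta - rp) * dt + sigma * np.sqrt(rp) * dw
    return r

# One-year-ahead distribution of the short rate and a tail quantile:
rates = simulate_cir(r0=0.03, kappa=0.5, theta=0.04, sigma=0.10,
                     dt=1 / 252, n_steps=252, n_paths=50_000)
tail = np.quantile(rates, 0.99)                  # plain Monte Carlo tail estimate
```

The inefficiency the paper targets is visible here: only about 1% of the 50,000 paths inform the 99th percentile, which is why reweighting schemes that oversample the tail can achieve the same accuracy with far fewer paths.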
11.
Nicholas Apergis, Journal of Forecasting, 2017, 36(5): 557–565
This paper investigates the impact of both asset and macroeconomic forecast errors on inflation forecast errors in the USA by making use of a two‐regime model. The findings document a significant contribution of both types of forecast errors to the explanation of inflation forecast errors, with the pass‐through being stronger when these errors move within the high‐volatility regime. Copyright © 2016 John Wiley & Sons, Ltd.
12.
Recent research has suggested that forecast evaluation on the basis of standard statistical loss functions could prefer models which are sub‐optimal when used in a practical setting. This paper explores a number of statistical models for predicting the daily volatility of several key UK financial time series. The out‐of‐sample forecasting performance of various linear and GARCH‐type models of volatility are compared with forecasts derived from a multivariate approach. The forecasts are evaluated using traditional metrics, such as mean squared error, and also by how adequately they perform in a modern risk management setting. We find that the relative accuracies of the various methods are highly sensitive to the measure used to evaluate them. Such results have implications for any econometric time series forecasts which are subsequently employed in financial decision making. Copyright © 2002 John Wiley & Sons, Ltd.
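As a concrete instance of the statistical side of the evaluation described above, the sketch below generates a GARCH(1,1) one-step variance path and scores it by mean squared error against squared returns as the realized-volatility proxy. The parameters and the constant-variance benchmark are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """One-step-ahead conditional variance path of a GARCH(1,1):
    sigma2[t+1] = omega + alpha * r[t]**2 + beta * sigma2[t].
    Parameters are taken as given; in practice they come from MLE."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = returns.var()                    # initialize at the sample variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r ** 2 + beta * sigma2[t]
    return sigma2

def mse(forecast, realized):
    """Traditional statistical loss used to score volatility forecasts."""
    return np.mean((forecast - realized) ** 2)

# Simulate a GARCH(1,1) series and compare the conditional forecast
# against a constant-variance benchmark under MSE:
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.10, 0.85            # unconditional variance = 1
s2, r = 1.0, np.empty(5000)
for t in range(5000):
    r[t] = rng.normal(scale=np.sqrt(s2))
    s2 = omega + alpha * r[t] ** 2 + beta * s2
f = garch11_variance(r, omega, alpha, beta)[1:-1]
mse_garch = mse(f, r[1:] ** 2)
mse_const = mse(1.0, r[1:] ** 2)
```

The paper's point is that a ranking produced by `mse` need not match the ranking produced by a risk-management criterion such as VaR exception rates, so both should be reported.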
13.
Partha Sengupta, Christopher H. Wheeler, Journal of Forecasting, 2024, 43(7): 2448–2477
Models developed by banks to forecast losses in their credit card portfolios have generally performed poorly during the COVID-19 pandemic, particularly in 2020, when large forecast errors were observed at many banks. In this study, we attempt to understand the source of this error and explore ways to improve model fit. We use account-level monthly performance data from the largest credit card banks in the U.S. between 2008 and 2018 to build models that mimic the typical model design employed by large banks to forecast credit card losses. We then fit these models to data from 2019 to 2021. We find that COVID-period model errors can be reduced significantly through two simple modifications: (1) including measures of the macroeconomic environment beyond indicators of the labor market, which served as the primary macro drivers used in many pre-pandemic models and (2) adjusting macro drivers to capture persistent/sustained changes, as opposed to temporary volatility in these variables. These model improvements, we find, can be achieved without a significant reduction in model performance for the pre-COVID period, including the Great Recession. Moreover, in broadening the set of macro influences and capturing sustained changes, we believe models can be made more robust to future downturns, which may bear little resemblance to past recessions.
14.
Kyran Cupido, Petar Jevtić, Tim J. Boonen, Journal of Forecasting, 2024, 43(5): 1321–1337
Currently, most academic research involving the mortality modeling of multiple populations mainly focuses on factor-based approaches. Increasingly, these models are enriched with socio-economic determinants. Yet these emerging mortality models come with little attention to interpretable spatial model features. Such features could be highly valuable to demographers and old-age benefit providers in need of a comprehensive understanding of the impact of economic growth on mortality across space. To address this, we propose and investigate a family of models that extend the seminal Li–Lee factor-based stochastic mortality modeling framework to include both economic growth, as measured by real gross domestic product (GDP), and spatial patterns of mortality in the contiguous United States. Model selection performed on the introduced new class of spatial models shows that, based on the AIC criterion, the introduced spatial lag of GDP with GDP (SLGG) model had the best fit. The out-of-sample forecast performance of the SLGG model is shown to be more accurate than that of the well-known Li–Lee model. When it comes to model implications, a comparison of annuity pricing across space revealed that the SLGG model admits more regional pricing differences compared to the Li–Lee model.
15.
Accurate prediction of takeover completion is the major key to risk arbitrage returns. In emerging markets, data on takeover attempts are either unavailable or of poor quality. Therefore, this paper proposes an option‐based approach to improve the accuracy of prediction. Empirical research on Taiwan takeovers shows that with this approach the accuracy rate is 71.15%, considerably higher than the average of 54.81% using qualitative models. There exist, on average, three opportunities to close arbitrage positions before completion dates, when the target and acquiring stock prices converge. The annualized abnormal return is 42.19% greater than it would otherwise be. Copyright © 2012 John Wiley & Sons, Ltd.
16.
Under the complex conditions of mixed traffic, collisions between vehicles and crossing pedestrians occur easily. This study analyzes the risk level of pedestrian street crossing, focusing on the collision type in which a pedestrian suddenly emerges from in front of a slow-moving vehicle in the adjacent lane ahead and crosses the road into the path of the subject vehicle. It analyzes the relationships between the motion of the subject vehicle, the slow-moving vehicle, and the pedestrian and the driver's line of sight, and establishes a model for calculating a safe vehicle speed, thereby providing a reference for safe driving.
17.
We use state space methods to estimate a large dynamic factor model for the Norwegian economy involving 93 variables for 1978Q2–2005Q4. The model is used to obtain forecasts for 22 key variables that can be derived from the original variables by aggregation. To investigate the potential gain in using such a large information set, we compare the forecasting properties of the dynamic factor model with those of univariate benchmark models. We find that there is an overall gain in using the dynamic factor model, but that the gain is notable only for a few of the key variables. Copyright © 2009 John Wiley & Sons, Ltd.
18.
Kjell Vaage, Journal of Forecasting, 2000, 19(1): 23–37
A unified method to detect and handle innovational and additive outliers, and permanent and transient level changes has been presented by R. S. Tsay. N. S. Balke has found that the presence of level changes may lead to misidentification and loss of test‐power, and suggests augmenting Tsay's procedure by conducting an additional disturbance search based on a white‐noise model. While Tsay allows level changes to be either permanent or transient, Balke considers only the former type. Based on simulated series with transient level changes this paper investigates how Balke's white‐noise model performs both when transient change is omitted from the model specification and when it is included. Our findings indicate that the alleged misidentification of permanent level changes may be influenced by the restrictions imposed by Balke. But when these restrictions are removed, Balke's procedure outperforms Tsay's in detecting changes in the data‐generating process. Copyright © 2000 John Wiley & Sons, Ltd.
19.
In this paper, we forecast local currency debt of five major emerging market countries (Brazil, Indonesia, Mexico, South Africa, and Turkey) over the period January 2010 to January 2019 (with an in-sample period: March 2005 to December 2009). We exploit information from a large set of economic and financial time series to assess the importance not only of “own-country” factors (derived from principal component and partial least squares approaches), but also create “global” predictors by combining the country-specific variables across the five emerging economies. We find that, while information on own-country factors can outperform the historical average model, global factors tend to produce not only greater statistical and economic gains, but also enhance market timing ability of investors, especially when we use the target variable (bond premium) approach under the partial least squares method to extract our factors. Our results have important implications not only for fund managers but also for policymakers.
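The principal-component factor extraction underlying the "own-country" predictors can be sketched with an SVD of the standardized predictor panel; the panel below is synthetic and the function name is mine, not from the paper.

```python
import numpy as np

def pc_factors(X, k):
    """First k principal-component factors of a T x N predictor panel,
    via SVD of the standardized data (the usual diffusion-index step)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, S, _ = np.linalg.svd(Z, full_matrices=False)
    return U[:, :k] * S[:k]                      # T x k estimated factors

# A one-factor panel: the first PC should track the true factor closely.
rng = np.random.default_rng(0)
T, N = 200, 40
f_true = rng.normal(size=T)
X = np.outer(f_true, rng.normal(size=N)) + 0.3 * rng.normal(size=(T, N))
f_hat = pc_factors(X, 1)[:, 0]
corr = abs(np.corrcoef(f_hat, f_true)[0, 1])
```

Partial least squares differs in that the weights are chosen to maximize covariance with a target variable (the bond premium in the paper) rather than the variance of the panel itself.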
20.
The aim of this paper is to propose a new methodology that allows forecasting, through Vasicek and CIR models, of future expected interest rates based on rolling windows from observed financial market data. The novelty, apart from the use of those models not for pricing but for forecasting the expected rates at a given maturity, consists in an appropriate partitioning of the data sample. This allows capturing all the statistically significant time changes in volatility of interest rates, thus giving an account of jumps in market dynamics. The new approach is applied to different term structures and is tested for both models. It is shown how the proposed methodology overcomes both the usual challenges (e.g., simulating regime switching, volatility clustering, skewed tails) as well as the new ones added by the current market environment characterized by low to negative interest rates.
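A window-by-window Vasicek calibration of the kind described can be sketched via the model's AR(1) discretization; the parameter values and simulated path below are illustrative assumptions, not the paper's data or its partitioning scheme.

```python
import numpy as np

def fit_vasicek(rates, dt):
    """OLS calibration of dr = a*(b - r)*dt + sigma*dW from its AR(1)
    discretization r[t+1] = c + phi*r[t] + eps, as one would run on each
    rolling window of observed rates."""
    x, y = rates[:-1], rates[1:]
    phi, c = np.polyfit(x, y, 1)
    a = (1.0 - phi) / dt
    b = c / (1.0 - phi)
    sigma = (y - (c + phi * x)).std(ddof=2) / np.sqrt(dt)
    return a, b, sigma

def expected_rate(r_t, a, b, horizon):
    """Model-implied expected rate: E[r_{t+h}] = b + (r_t - b)*exp(-a*h)."""
    return b + (r_t - b) * np.exp(-a * horizon)

# Recover parameters from a simulated path (true a=2, b=0.03, sigma=0.01):
rng = np.random.default_rng(0)
dt, n = 1 / 252, 5000
r = np.empty(n); r[0] = 0.02
for t in range(n - 1):
    r[t + 1] = r[t] + 2.0 * (0.03 - r[t]) * dt + 0.01 * np.sqrt(dt) * rng.normal()
a_hat, b_hat, s_hat = fit_vasicek(r, dt)
r_forecast = expected_rate(r[-1], a_hat, b_hat, 1.0)   # one-year-ahead expectation
```

Unlike CIR, the Vasicek dynamics place no positivity constraint on the rate, which is one reason the paper notes it remains usable in the low-to-negative rate environment.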