20 similar documents found; search time: 0 ms
1.
The P/E ratio is often used as a metric to compare individual stocks and the market as a whole relative to historical valuations. We examine the factors that affect changes in the inverse of the P/E ratio (E/P) over time in the broad market (S&P 500 Index). Our model includes variables that measure investor beliefs and changes in tax rates and shows that these variables are important factors affecting the P/E ratio. We extend prior work by correcting for the presence of a long‐run relation between variables included in the model. As frequently conjectured, changes in the P/E ratio have predictive power. Our model explains a large portion of the variation in E/P and accurately predicts the future direction of E/P, particularly when predicted changes in E/P are large or provide a consistent signal over more than one quarter. Copyright © 2008 John Wiley & Sons, Ltd.
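One way to read the "long‐run relation" correction described above is as an error‐correction specification of roughly the following form (illustrative notation only; the paper's exact variables, estimator, and lag structure are not reproduced here):

\Delta (E/P)_t = \alpha + \beta'\,\Delta X_t + \gamma\left[(E/P)_{t-1} - \delta' X_{t-1}\right] + \varepsilon_t,

where X_t collects the investor‐belief and tax‐rate variables, the bracketed term is the long‐run (cointegrating) relation, and \gamma < 0 pulls E/P back toward that relation after short‐run deviations.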
2.
Prior studies use a linear adaptive expectations model to describe how analysts revise their forecasts of future earnings in response to current forecast errors. However, research shows that extreme forecast errors are less likely than small forecast errors to persist in future years. If analysts recognize this property, their marginal forecast revisions should decrease with the forecast error's magnitude. Therefore, a linear model is likely to be unsatisfactory at describing analysts' forecast revisions. We find that a non‐linear model better describes the relation between analysts' forecast revisions and their forecast errors, and provides a richer theoretical framework for explaining analysts' forecasting behaviour. Copyright © 2000 John Wiley & Sons, Ltd.
3.
This paper introduces a methodology for estimating the likelihood of private information usage amongst earnings analysts. This is achieved by assuming that one group of analysts generates forecasts based on the underlying dynamics of earnings, while all other analysts are assumed to issue forecasts based on the prevailing consensus forecast. Given this behavioural dichotomy, we are able to derive (and estimate) a structural econometric model of forecast behaviour, which has implications regarding the determinants of analysts' private information endowments and forecast accuracy over the forecast horizon. Copyright © 2010 John Wiley & Sons, Ltd.
4.
In this paper we propose and evaluate two new methods for the quantification of business surveys concerning the qualitative assessment of the state of the economy. The first is a nonparametric method that applies the spectral envelope, originally proposed by Stoffer, Tyler and McDougall (Spectral analysis for categorical time series: scaling and the spectral envelope, Biometrika 80: 611–622), to the multivariate time series of the counts in each response category. Second, we fit by maximum likelihood a cumulative logit unobserved components model featuring a common cycle. The conditional mean of the cycle, which can be evaluated by importance sampling, offers the required quantification. We assess the validity of the two methods by comparing the results with a standard quantification based on the balance of opinions and with a quantitative economic indicator. Copyright © 2010 John Wiley & Sons, Ltd.
5.
Most non‐linear techniques give good in‐sample fits to exchange rate data but are usually outperformed by random walks or random walks with drift when used for out‐of‐sample forecasting. In the case of regime‐switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide some analytical results for the case of a simple switching model, the segmented trend model. It requires only a small misclassification, when forecasting which regime the world will be in, to lose any advantage from knowing the correct model specification. To illustrate this we discuss some results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations to the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that the mean square error is not an appropriate way to evaluate forecast performance for non‐linear models. Copyright © 1999 John Wiley & Sons, Ltd.
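A minimal sketch of the forecast comparison discussed in this abstract, under illustrative notation (the two‐regime drift specification and one‐step horizon below are assumptions for exposition, not the paper's exact derivation): in a segmented trend model the log exchange rate s_t evolves as

\Delta s_t = \mu_{z_t} + \varepsilon_t, \qquad z_t \in \{1, 2\},

so the one‐step forecast from the true model is

\hat{s}_{t+1 \mid t} = s_t + \Pr(z_{t+1}=1 \mid \Omega_t)\,\mu_1 + \Pr(z_{t+1}=2 \mid \Omega_t)\,\mu_2,

while the random walk forecast is s_t and the random walk with drift uses the average drift \bar{\mu}. If the filtered regime probability favours the wrong regime, the model forecast adds a drift of the wrong sign, and its squared error can exceed that of the no‐change forecast even though the model itself is correctly specified.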
6.
On‐line monitoring of cyclical processes is studied. An important application is early prediction of the next turn in business cycles by an alarm for a turn in a leading index. Three likelihood‐based methods for detection of a turn are compared in detail. One of the methods is based on a hidden Markov model. The two others are based on the theory of statistical surveillance. One of these is free from parametric assumptions about the curve. Evaluations are made of the effect of different specifications of the curve and the transitions. The methods are made comparable by alarm limits, which give the same median time to the first false alarm, but other approaches to comparability are also discussed. Results are given on the expected delay time to a correct alarm, the probability of detection of a turning point within a specified time, and the predictive value of an alarm. Copyright © 2005 John Wiley & Sons, Ltd.
7.
M. A. Kaboudan, Journal of Forecasting, 1999, 18(5): 345-357
Based on the standard genetic programming (GP) paradigm, we introduce a new probability measure of time series' predictability. It is computed as a ratio of two fitness values (SSE) from GP runs. One value belongs to a subject series, while the other belongs to the same series after it is randomly shuffled. Theoretically, the measure is bounded between zero and 100, where zero characterizes stochastic processes and 100 typifies predictable ones. To evaluate its performance, we first apply it to experimental data. It is then applied to eight Dow Jones stock returns. This measure may reduce model search space and produce more reliable forecast models. Copyright © 1999 John Wiley & Sons, Ltd.
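A plausible formalization of the measure described above, written out for concreteness (the exact functional form and normalization used in the paper may differ):

\theta = 100\left(1 - \frac{\mathrm{SSE}_y}{\mathrm{SSE}_{\tilde{y}}}\right),

where \mathrm{SSE}_y is the best GP fitness (sum of squared errors) obtained on the original series and \mathrm{SSE}_{\tilde{y}} is the best fitness obtained on a random shuffle of the same series. For a purely stochastic series, shuffling changes little, the ratio is close to one, and \theta \approx 0; for a highly predictable series, \mathrm{SSE}_y \ll \mathrm{SSE}_{\tilde{y}} and \theta approaches 100.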
8.
Michael P. Clements, Philip Hans Franses, Jeremy Smith, Dick van Dijk, Journal of Forecasting, 2003, 22(5): 359-375
We compare linear autoregressive (AR) models and self‐exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two‐regime SETAR process is used as the data‐generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non‐linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
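To make the data‐generating process concrete, the following Python sketch simulates a two‐regime SETAR(1) of the kind used in such Monte Carlo designs (the coefficients, threshold, delay of one, and sample size are placeholders, not the values used in the paper):

import numpy as np

def simulate_setar(n, phi_low=0.9, phi_high=0.3, threshold=0.0, sigma=1.0, burn=200, seed=0):
    """Two-regime SETAR(1): the AR coefficient depends on whether the
    previous observation lies below or above the threshold."""
    rng = np.random.default_rng(seed)
    y = np.zeros(n + burn)
    eps = rng.normal(scale=sigma, size=n + burn)
    for t in range(1, n + burn):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + eps[t]
    return y[burn:]  # discard burn-in draws

y = simulate_setar(150)  # a sample size typical of quarterly macro data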
9.
In this paper we introduce a new testing procedure for evaluating the rationality of fixed‐event forecasts based on a pseudo‐maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss when they make large changes to their forecasts. The test is illustrated using monthly fixed‐event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, while use of the Gaussian‐based test implies that certain forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.
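A common way to formalize rationality for fixed‐event forecasts, included here for context (the paper's pseudo‐maximum likelihood test builds on the idea of unpredictable revisions, but its exact moment conditions and estimator are not reproduced here): if f_{t,h} denotes the forecast of a fixed outcome y_t made h periods before the event, weak rationality requires forecast revisions to be unpredictable,

E\left[f_{t,h-1} - f_{t,h} \mid I_{t,h}\right] = 0,

so, for example, a regression of the current revision on the previous revision should yield a zero intercept and slope. Non‐normal revision errors, such as those induced by the credibility‐loss behaviour modelled in the paper, are what motivate the robust version of such a test.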
10.
Using the generalized dynamic factor model, this study constructs three predictors of crude oil price volatility: a fundamental (physical) predictor, a financial predictor, and a macroeconomic uncertainty predictor. Moreover, an event‐triggered predictor is constructed using data extracted from Google Trends. We construct GARCH‐MIDAS (generalized autoregressive conditional heteroskedasticity–mixed‐data sampling) models combining realized volatility with the predictors to predict oil price volatility at different forecasting horizons. We then identify the predictive power of the realized volatility and the predictors with the model confidence set (MCS) test. The findings show that, among the four indexes, the financial predictor has the most predictive power for crude oil volatility, which provides strong evidence that financialization has been the key determinant of crude oil price behavior since the 2008 global financial crisis. In addition, the fundamental predictor, followed by the financial predictor, effectively forecasts crude oil price volatility at long forecasting horizons. Our findings indicate that the different predictors can provide distinct predictive information at different horizons given the specific market situation. These findings have useful implications for market traders in terms of managing crude oil price risk.
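For readers unfamiliar with the GARCH‐MIDAS structure, a standard formulation is sketched below (a generic textbook form with a single assumed low‐frequency predictor X; the paper's exact specification, predictors, and lag weights may differ):

r_{i,t} = \mu + \sqrt{\tau_t\, g_{i,t}}\,\varepsilon_{i,t}, \qquad \varepsilon_{i,t} \sim N(0,1),
g_{i,t} = (1 - \alpha - \beta) + \alpha\,\frac{(r_{i-1,t} - \mu)^2}{\tau_t} + \beta\, g_{i-1,t},
\log \tau_t = m + \theta \sum_{k=1}^{K} \varphi_k(\omega)\, X_{t-k},

where i indexes days within the low‐frequency period t, g_{i,t} is the short‐run GARCH component, \tau_t is the long‐run component driven by the predictor through Beta‐polynomial MIDAS weights \varphi_k(\omega), and total conditional variance is \tau_t\, g_{i,t}.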
11.
This paper explores the relationship between the Australian real estate and equity markets between 1980 and 1999. The results from this study show three specific outcomes that extend the current literature on real estate finance. First, it is shown that structural shifts in stock and property markets can lead to the emergence of an unstable linear relationship between these markets. That is, full‐sample results support bi‐directional Granger causality between equity and real estate returns, whereas when sub‐samples are chosen that account for structural shifts the results generally show that changes within stock market prices influence real estate market returns, but not vice versa. Second, the results also indicate that non‐linear causality tests show a strong unidirectional relationship running from the stock market to the real estate market. Finally, from this empirical evidence a trading strategy is developed which offers superior performance when compared to adopting a passive strategy for investing in Australian securitized property. These results appear to have important implications for managing property assets in the funds management industry and also for the pricing efficiency within the Australian property market. Copyright © 2002 John Wiley & Sons, Ltd.
12.
Using receiver operating characteristic (ROC) techniques, we evaluate the predictive content of the monthly main economic indicators (MEI) of the Organization for Economic Co‐operation and Development (OECD) for predicting both growth cycle and business cycle recessions at different horizons. From a sample that covers 123 indicators for 32 OECD countries as well as for Brazil, China, India, Indonesia, the Russian Federation, and South Africa, our results suggest that the OECD's MEI show a high overall performance in providing early signals of economic downturns worldwide, though they perform somewhat better at anticipating business cycles than growth cycles. Although the performance for OECD and non‐OECD members is similar in terms of timeliness, the indicators are more accurate at anticipating recessions for OECD members. Finally, we find that some single indicators, such as interest rates, spreads, and credit indicators, perform even better than the composite leading indicators.
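As a rough illustration of the ROC evaluation used above, the Python sketch below scores a single monthly indicator against a recession dummy (the toy data and the convention that larger indicator values signal higher recession risk are assumptions; the study uses the OECD MEI):

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# 1 if the economy is in recession h months ahead, 0 otherwise (toy data)
recession = np.array([0, 0, 0, 1, 1, 1, 0, 0, 1, 1])
# leading indicator observed today, oriented so larger = riskier (toy data)
indicator = np.array([0.1, 0.3, 0.2, 0.9, 0.8, 0.7, 0.4, 0.2, 0.6, 0.9])

auc = roc_auc_score(recession, indicator)        # area under the ROC curve
fpr, tpr, thresholds = roc_curve(recession, indicator)
print(f"AUC = {auc:.2f}")                        # 0.5 = no skill, 1.0 = perfect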
13.
John L. Turner, Journal of Forecasting, 2004, 23(7): 513-539
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
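The competing forecasts in the local to unity setting take a simple form; the notation below is a generic sketch rather than the article's exact derivation:

y_t = \rho_T\, y_{t-1} + \varepsilon_t, \qquad \rho_T = 1 + \frac{c}{T}, \quad c \le 0,
\hat{y}^{\mathrm{OLS}}_{T+h \mid T} = \hat{\rho}^{\,h}\, y_T, \qquad \hat{y}^{\mathrm{RW}}_{T+h \mid T} = y_T.

As the horizon h grows, the OLS forecast decays geometrically toward the mean while the random walk forecast stays at the last observation; which expected squared error is smaller depends on the local to unity drift c and on h, which is what the ‘forecasting thresholds’ characterize.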
14.
The conventional growth rate measures (such as month‐on‐month and year‐on‐year growth rates, and the 6‐month smoothed annualized rate adopted by the US Bureau of Labor Statistics and the Economic Cycle Research Institute) are popular and easily obtained by computing the growth rate of monthly data against a fixed comparison benchmark, although they do not make good use of the information underlying the economic series. Focusing on monthly data, this paper proposes the k‐month kernel‐weighted annualized rate (k‐MKAR), which includes most existing growth rate measures as special cases. The proposed k‐MKAR measure involves the selection of smoothing parameters that govern the accuracy and timeliness of detecting changes in business turning points; that is, the comparison base is flexible and is likely to vary across the series under consideration. A data‐driven procedure for choosing the smoothing parameters, based on the stepwise multiple reality check test, is also suggested. A simple numerical evaluation and a Monte Carlo experiment confirm that our measures (in particular the two‐parameter k‐MKAR) improve timeliness subject to a certain degree of accuracy. The business cycle signals issued by the Council for Economic Planning and Development in Taiwan over the period from 1998 to 2009 are taken as an example to illustrate the empirical application of our method. The empirical results show that the k‐MKAR‐based signal lights reflect turning points earlier than the conventional year‐on‐year measure without sacrificing accuracy. Copyright © 2016 John Wiley & Sons, Ltd.
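For context, the conventional measures that the k‐MKAR generalizes are easy to write down; the Python sketch below implements month‐on‐month annualized and year‐on‐year growth plus a generic kernel‐weighted comparison base (the Gaussian kernel, parameter values, and annualization rule are illustrative placeholders, not the paper's k‐MKAR definition):

import numpy as np

def mom_annualized(x):
    """Month-on-month growth, annualized (percent)."""
    return 100 * ((x[1:] / x[:-1]) ** 12 - 1)

def yoy(x):
    """Year-on-year growth for monthly data (percent)."""
    return 100 * (x[12:] / x[:-12] - 1)

def kernel_weighted_annualized(x, k=6, bandwidth=3.0):
    """Growth of the current value against a kernel-weighted average of the
    previous k months, annualized by the average lag, as a stand-in for the
    flexible comparison base behind the k-MKAR."""
    lags = np.arange(1, k + 1)
    w = np.exp(-0.5 * (lags / bandwidth) ** 2)
    w /= w.sum()
    out = []
    for t in range(k, len(x)):
        base = np.dot(w, x[t - lags])   # weighted comparison benchmark
        avg_lag = np.dot(w, lags)       # effective lag length in months
        out.append(100 * ((x[t] / base) ** (12 / avg_lag) - 1))
    return np.array(out)

x = np.cumprod(1 + np.random.default_rng(1).normal(0.002, 0.01, 60))  # synthetic monthly index
print(yoy(x)[:3])
print(kernel_weighted_annualized(x)[:3])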
15.
We propose a new class of limited information estimators built upon an explicit trade‐off between data fitting and a priori model specification. The estimators offer the researcher a continuum of estimators that range from an extreme emphasis on data fitting and robust reduced‐form estimation to the other extreme of exact model specification and efficient estimation. The approach used to generate the estimators illustrates why ULS often outperforms 2SLS‐PRRF even in the context of a correctly specified model, provides a new interpretation of 2SLS, and integrates Wonnacott and Wonnacott's (1970) least weighted variance estimators with other techniques. We apply the new class of estimators to Klein's Model I and generate forecasts. We find for this example that an emphasis on specification (as opposed to data fitting) produces better out‐of‐sample predictions. Copyright © 1999 John Wiley & Sons, Ltd.
16.
Brajendra C. Sutradhar, Journal of Forecasting, 2008, 27(2): 109-129
Forecasting for a time series of low counts, such as forecasting the number of patents to be awarded to an industry, is an important research topic in socio‐economic sectors. Freeland and McCabe (2004) introduced a Gaussian‐type stationary correlation model‐based forecasting approach which appears to work well for stationary time series of low counts. In practice, however, the time series of counts may be non‐stationary and may also contain over‐dispersed counts. To develop forecasting functions for this type of non‐stationary over‐dispersed data, the paper extends the stationary correlation models for Poisson counts to non‐stationary correlation models for negative binomial counts. The forecasting methodology appears to work well, for example, for a US time series of polio counts, whereas existing Bayesian methods of forecasting appear to encounter serious convergence problems. Further, a simulation study is conducted to examine the performance of the proposed forecasting functions, which appear to work well irrespective of whether the time series contains small or large counts. Copyright © 2008 John Wiley & Sons, Ltd.
17.
In this study, a non‐stationary Markov chain model and a vector autoregressive moving average with exogenous variables coupled with a logistic function (VARMAX‐L) are used to analyze and predict the stability of a retail mortgage portfolio within a stress‐testing framework. The method introduced in this paper can be used to forecast the transition probabilities of a retail mortgage portfolio over pre‐specified states, given a shock of a certain magnitude. Hence this method provides a dynamic picture of the portfolio transition process through which one can assess its behavior over time. While the paper concentrates on retail mortgages, the methodology of this study can also be adapted to analyze other credit products in banks. Copyright © 2015 John Wiley & Sons, Ltd.
18.
A non‐linear dynamic model is introduced for multiplicative seasonal time series; it follows and extends the X‐11 paradigm, in which the observed time series is a product of trend, seasonal and irregular factors. A selection of standard seasonal and trend component models used in additive dynamic time series models is adapted to the multiplicative framework, and a non‐linear filtering procedure is proposed. The results are illustrated and compared with X‐11 and log‐additive models using real data. In particular, it is shown that the new procedures do not suffer from the trend bias present in log‐additive models. Copyright © 2002 John Wiley & Sons, Ltd.
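The decomposition underlying the X‐11 paradigm referred to above can be stated compactly (a generic statement, not the paper's particular component models):

y_t = T_t \times S_t \times I_t \quad \text{(multiplicative)}, \qquad \log y_t = \log T_t + \log S_t + \log I_t \quad \text{(log‐additive)}.

One standard way to see how a trend bias can arise in the log‐additive route is Jensen's inequality: estimating the components on the log scale and exponentiating back gives E[\exp(\widehat{\log T_t})] \ge \exp(E[\widehat{\log T_t}]), so the back‐transformed trend is systematically distorted, whereas filtering on the original multiplicative scale sidesteps the back‐transformation.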
19.
Kerry Patterson, Journal of Forecasting, 2002, 21(4): 245-264
Much published data is subject to a process of revision due, for example, to additional source data, which generates multiple vintages of data on the same generic variable, a process termed the data measurement process or DMP. This article is concerned with several interrelated aspects of the DMP for UK Gross National Product. Relevant questions include the following. Is the DMP well behaved in the sense of providing a single stochastic trend in the vector time series of vintages? Is one of the vintages of data, for example the ‘final’, the sole vintage generating the long‐memory component? Does the multivariate framework proposed here add to the debate on the existence of a unit root in GNP? The likely implicit assumptions of users (that the DMP is well behaved and the final vintage is ‘best’) can be cast in terms of testable hypotheses; and we show that these ‘standard’ assumptions have not always been empirically founded. Copyright © 2002 John Wiley & Sons, Ltd.
20.
The primary goal of this study is to propose an algorithm that uses mathematical programming to detect earnings management practices. To evaluate the ability of the proposed algorithm, traditional statistical models are used as a benchmark vis‐à‐vis their time series counterparts. As emerging techniques in the area of mathematical programming yield better results, the application of suitable models is expected to produce better‐performing forecasts. The motivation behind this paper is to develop an algorithm that succeeds in detecting companies that resort to financial manipulation. The methodology is based on a cutting plane formulation using mathematical programming. A sample of 126 Turkish manufacturing firms, described by 10 financial ratios and indexes, is used to detect factors associated with false financial statements. The results indicate that the proposed three‐phase cutting plane algorithm outperforms the traditional statistical techniques widely used for false financial statement detection. Furthermore, the results indicate that the investigation of financial information can be helpful towards the identification of false financial statements and highlight the importance of financial ratios/indexes such as Days' Sales in Receivables Index (DSRI), Gross Margin Index (GMI), Working Capital Accruals to Total Assets (TATA) and Days to Inventory Index (DINV). Copyright © 2009 John Wiley & Sons, Ltd.