Similar articles
20 similar articles found (search time: 31 ms)
1.
This article addresses the problem of forecasting time series that are subject to level shifts. Processes with level shifts possess a nonlinear dependence structure. Using the stochastic permanent breaks (STOPBREAK) model, I model this nonlinearity in a direct and flexible way that avoids imposing a discrete regime structure. I apply this model to the rate of price inflation in the United States, which I show is subject to level shifts. These shifts significantly affect the accuracy of out‐of‐sample forecasts, causing models that assume covariance stationarity to be substantially biased. Models that do not assume covariance stationarity, such as the random walk, are unbiased but lack precision in periods without shifts. I show that the STOPBREAK model outperforms several alternative models in an out‐of‐sample inflation forecasting experiment. Copyright © 2005 John Wiley & Sons, Ltd.
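A minimal sketch of the STOPBREAK mechanism this abstract describes, assuming the transition function q(e) = e²/(γ + e²) commonly used for this model (after Engle and Smith); the function and parameter names are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_stopbreak(n, gamma=1.0, sigma=1.0):
    """Simulate y_t = m_t + eps_t, with m_t = m_{t-1} + q(eps_{t-1}) * eps_{t-1}
    and q(e) = e^2 / (gamma + e^2): large shocks shift the level almost
    permanently, small shocks are mostly transitory -- no discrete regimes."""
    eps = rng.normal(0.0, sigma, n)
    m = np.zeros(n)
    for t in range(1, n):
        q = eps[t - 1] ** 2 / (gamma + eps[t - 1] ** 2)
        m[t] = m[t - 1] + q * eps[t - 1]
    return m + eps, m, eps

y, m, eps = simulate_stopbreak(500)

# One-step forecast: the permanent component absorbs the break share of the
# last shock, so the forecast interpolates between "mean reversion" and "RW".
q_last = eps[-1] ** 2 / (1.0 + eps[-1] ** 2)
forecast = m[-1] + q_last * eps[-1]
```

Because q is near 0 for small shocks and near 1 for large ones, the model nests the covariance-stationary and random-walk behaviours the abstract contrasts.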

2.
In this paper, I use a large set of macroeconomic and financial predictors to forecast US recession periods. I adopt Bayesian methodology with shrinkage in the parameters of the probit model for the binary time series tracking the state of the economy. The in‐sample and out‐of‐sample results show that utilizing a large cross‐section of indicators yields superior US recession forecasts in comparison to a number of parsimonious benchmark models. Moreover, the data‐rich probit model gives similar accuracy to the factor‐based model for the 1‐month‐ahead forecasts, while it provides superior performance for 1‐year‐ahead predictions. Finally, in a pseudo‐real‐time application for the Great Recession, I find that the large probit model with shrinkage is able to pick up the recession signals in a timely fashion and does well in comparison to the more parsimonious specification and to nonparametric alternatives. Copyright © 2016 John Wiley & Sons, Ltd.
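A toy sketch of the shrinkage idea behind this abstract: a binary recession indicator regressed on many noisy predictors under a Gaussian prior (ridge penalty). A logit link and plain gradient ascent stand in for the paper's Bayesian probit machinery; all data and parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "data-rich" setting: many indicators, only a few of which matter.
n, p = 300, 50
X = rng.normal(size=(n, p))
true_beta = np.zeros(p)
true_beta[:3] = [1.5, -1.0, 0.8]
prob = 1.0 / (1.0 + np.exp(-(X @ true_beta)))
y = (rng.uniform(size=n) < prob).astype(float)  # 1 = recession month

def fit_shrunk(X, y, lam=5.0, steps=2000, lr=0.05):
    """MAP estimate under a Gaussian prior (ridge penalty) -- a logit
    stand-in for the paper's Bayesian probit with shrinkage."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (y - p_hat) - lam * beta  # log-likelihood + prior gradient
        beta += lr * grad / len(y)
    return beta

beta_hat = fit_shrunk(X, y)
```

The penalty pulls the many irrelevant coefficients toward zero, which is what lets a large cross-section of indicators be used without overfitting.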

3.
In this paper, I investigate the effects of cross‐border capital flows induced by the rate of risk‐adjusted excess returns (Sharpe ratio) on the transitional dynamics of the nominal exchange rate's deviation from its fundamental value. For this purpose, a two‐state time‐varying transition probability Markov regime‐switching process is added to the sticky price exchange rate model with shares. I estimate this model using quarterly data on the four most active floating rate currencies for the years 1973–2009: the Australian dollar, Canadian dollar, Japanese yen and the British pound. The results provide evidence that the Sharpe ratios of debt and equity investments influence the evolution of transitional dynamics of the currencies' deviation from their fundamental values. In addition, I find that the relationship between economic fundamentals and the nominal exchange rates varies depending on the overvaluation or undervaluation of the currencies. Copyright © 2011 John Wiley & Sons, Ltd.
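The core object in a time-varying transition probability (TVTP) Markov-switching model is a transition probability driven by a conditioning variable, here the Sharpe ratio. A minimal sketch, using a logit link and made-up coefficients purely for illustration:

```python
import numpy as np

def tvtp(z, a=0.5, b=2.0):
    """Time-varying probability of staying in the current regime, as a
    logit function of a conditioning variable z (here: the Sharpe ratio).
    The coefficients a, b are illustrative, not estimates from the paper."""
    return 1.0 / (1.0 + np.exp(-(a + b * z)))

# The staying probability of a regime rises with the Sharpe ratio.
p_stay = tvtp(np.array([-1.0, 0.0, 1.0]))
```

In the full model these probabilities govern how long the exchange rate's deviation from fundamentals persists, which is why capital flows chasing risk-adjusted returns can alter the transitional dynamics.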

4.
We propose a new class of limited information estimators built upon an explicit trade‐off between data fitting and a priori model specification. The estimators offer the researcher a continuum of estimators that range from an extreme emphasis on data fitting and robust reduced‐form estimation to the other extreme of exact model specification and efficient estimation. The approach used to generate the estimators illustrates why ULS often outperforms 2SLS‐PRRF even in the context of a correctly specified model, provides a new interpretation of 2SLS, and integrates Wonnacott and Wonnacott's (1970) least weighted variance estimators with other techniques. We apply the new class of estimators to Klein's Model I and generate forecasts. We find for this example that an emphasis on specification (as opposed to data fitting) produces better out‐of‐sample predictions. Copyright © 1999 John Wiley & Sons, Ltd.

5.
Recent empirical work has considered the prediction of inflation by combining the information in a large number of time series. One such method that has been found to give consistently good results consists of simple equal‐weighted averaging of the forecasts from a large number of different models, each of which is a linear regression relating inflation to a single predictor and a lagged dependent variable. In this paper, I consider using Bayesian model averaging for pseudo out‐of‐sample prediction of US inflation, and find that it generally gives more accurate forecasts than simple equal‐weighted averaging. This superior performance is consistent across subsamples and a number of inflation measures. Copyright © 2008 John Wiley & Sons, Ltd.
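A common shortcut for the model-averaging weights this abstract compares against equal weights is to approximate posterior model probabilities with BIC weights, one single-predictor regression per candidate. A sketch with simulated data (the lagged dependent variable in the paper's regressions is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy target series driven by the first of several candidate predictors.
T = 200
X = rng.normal(size=(T, 5))
y = 0.8 * X[:, 0] + rng.normal(scale=0.5, size=T)

def bic_weights(X, y):
    """Approximate posterior model probabilities with BIC weights,
    fitting one bivariate regression (intercept + predictor) per column."""
    n = len(y)
    bics = []
    for j in range(X.shape[1]):
        Z = np.column_stack([np.ones(n), X[:, j]])
        resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
        sigma2 = resid @ resid / n
        bics.append(n * np.log(sigma2) + 2 * np.log(n))  # k = 2 parameters
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    return w / w.sum()

w = bic_weights(X, y)
bma_forecast = X[-1] @ (w * 0.8)  # illustrative weighted combination
```

Unlike equal weighting, the weights concentrate on models whose predictors actually fit, which is the mechanism behind the accuracy gain reported here.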

6.
This article introduces a novel framework for analysing long‐horizon forecasting of the near non‐stationary AR(1) model. Using the local to unity specification of the autoregressive parameter, I derive the asymptotic distributions of long‐horizon forecast errors both for the unrestricted AR(1), estimated using an ordinary least squares (OLS) regression, and for the random walk (RW). I then identify functions, relating local to unity ‘drift’ to forecast horizon, such that OLS and RW forecasts share the same expected square error. OLS forecasts are preferred on one side of these ‘forecasting thresholds’, while RW forecasts are preferred on the other. In addition to explaining the relative performance of forecasts from these two models, these thresholds prove useful in developing model selection criteria that help a forecaster reduce error. Copyright © 2004 John Wiley & Sons, Ltd.
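The OLS-versus-random-walk trade-off in the abstract can be seen in a small Monte Carlo, assuming the local-to-unity parameterization ρ = 1 − c/T; the drift c, sample size and horizon below are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def forecast_mses(c=5.0, T=100, h=20, reps=500):
    """Near-unit-root AR(1) with rho = 1 - c/T (local to unity).
    Compare h-step squared forecast errors of an estimated AR(1) (OLS,
    no intercept) against the random walk's no-change forecast."""
    rho = 1.0 - c / T
    err_ols, err_rw = [], []
    for _ in range(reps):
        e = rng.normal(size=T + h)
        y = np.zeros(T + h)
        for t in range(1, T + h):
            y[t] = rho * y[t - 1] + e[t]
        rho_hat = (y[:T - 1] @ y[1:T]) / (y[:T - 1] @ y[:T - 1])
        err_ols.append(y[T - 1 + h] - (rho_hat ** h) * y[T - 1])
        err_rw.append(y[T - 1 + h] - y[T - 1])  # RW forecast: no change
    return np.mean(np.square(err_ols)), np.mean(np.square(err_rw))

mse_ols, mse_rw = forecast_mses()
```

Sweeping c and h in this simulation traces out exactly the kind of 'forecasting threshold' the paper characterizes analytically: which model wins depends on where (c, h) falls relative to the threshold.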

7.
In this article we model the log of the US inflation rate by means of fractionally integrated processes. We use the tests of Robinson (1994) for testing this type of hypothesis, which include, as particular cases, the I(0) and I(1) specifications, and which also, unusually, have standard null and local limit distributions. A model selection criterion is established to determine which may be the best model specification of the series, and the forecasting properties of the selected models are also examined. The results vary substantially depending on how we specify the disturbances. Thus, if they are white noise, the series is I(d) with d fluctuating around 0.25; however, imposing autoregressive disturbances, the log of the US inflation rate seems to be anti‐persistent, with an order of integration smaller than zero. Looking at the forecasting properties, those models based on autocorrelated disturbances (with d < 0) predict better over a short horizon, while those based on white noise disturbances (with d > 0) seem to predict better over longer periods of time. Copyright © 2005 John Wiley & Sons, Ltd.
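The I(d) processes in this abstract rest on the fractional-difference operator (1 − L)^d, whose binomial expansion gives a simple recursive filter. A sketch (the Robinson (1994) test itself is not reproduced here):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Binomial weights of (1 - L)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

def frac_diff(x, d):
    """Apply the truncated fractional-difference filter to a series."""
    w = frac_diff_weights(d, len(x))
    return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

w = frac_diff_weights(0.25, 5)  # d near the paper's white-noise estimate
```

For d = 1 the weights collapse to ordinary first differencing, and for d = 0 to the identity, which is why the I(0) and I(1) specifications arise as particular cases of the I(d) family.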

8.
9.
Using factors in forecasting exercises reduces the dimensionality of the covariates set and, therefore, allows the forecaster to explore possible nonlinearities in the model. For an American macroeconomic dataset, I present evidence that the employment of nonlinear estimation methods can improve the out‐of‐sample forecasting accuracy for some macroeconomic variables, such as industrial production, employment, and the federal funds rate. Copyright © 2011 John Wiley & Sons, Ltd.

10.
In this paper we develop a semi‐parametric approach to model nonlinear relationships in serially correlated data. To illustrate the usefulness of this approach, we apply it to a set of hourly electricity load data. This approach takes into consideration the effect of temperature combined with those of time‐of‐day and type‐of‐day via nonparametric estimation. In addition, an ARIMA model is used to model the serial correlation in the data. An iterative backfitting algorithm is used to estimate the model. Post‐sample forecasting performance is evaluated and comparative results are presented. Copyright © 2006 John Wiley & Sons, Ltd.

11.
Auditors must assess their clients' ability to function as a going concern for at least the year following the financial statement date. The audit profession has been severely criticized for failure to ‘blow the whistle’ in numerous highly visible bankruptcies that occurred shortly after unmodified audit opinions were issued. Financial distress indicators examined in this study are one mechanism for making such assessments. This study measures and compares the predictive accuracy of an easily implemented two‐variable bankruptcy model originally developed using recursive partitioning on an equally proportioned data set of 202 firms. In this study, we test the predictive accuracy of this model, as well as previously developed logit and neural network models, using a realistically proportioned set of 14,212 firms' financial data covering the period 1981–1990. The previously developed recursive partitioning model had an overall accuracy for all firms ranging from 95 to 97% which outperformed both the logit model at 93 to 94% and the neural network model at 86 to 91%. The recursive partitioning model predicted the bankrupt firms with 33–58% accuracy. A sensitivity analysis of recursive partitioning cutting points indicated that a newly specified model could achieve an all firm and a bankrupt firm predictive accuracy of approximately 85%. Auditors will be interested in the Type I and Type II error tradeoffs revealed in a detailed sensitivity table for this easily implemented model. Copyright © 2000 John Wiley & Sons, Ltd.
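A two-variable recursive-partitioning model is just two nested cutpoints, and moving those cutpoints trades Type I errors (missed bankruptcies) against Type II errors (false alarms). A sketch on simulated data; the ratio names and cutpoints are hypothetical, not the study's actual variables:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical firm ratios; the study's actual two variables differ.
n = 2000
cash_to_debt = rng.normal(0.4, 0.3, n)
roa = rng.normal(0.05, 0.1, n)
bankrupt = ((cash_to_debt < 0.1) & (roa < 0.0)) | (rng.uniform(size=n) < 0.01)

def classify(c_cut, r_cut):
    """Two-split partition: flag a firm when both ratios fall below the cuts."""
    return (cash_to_debt < c_cut) & (roa < r_cut)

def error_rates(pred, truth):
    type1 = np.mean(~pred[truth])   # missed bankruptcies
    type2 = np.mean(pred[~truth])   # false alarms among healthy firms
    return type1, type2

t1, t2 = error_rates(classify(0.1, 0.0), bankrupt)
# Looser cuts flag more firms: Type I can only fall, Type II can only rise.
t1_loose, t2_loose = error_rates(classify(0.3, 0.05), bankrupt)
```

This monotone trade-off is what the study's sensitivity table of cutting points maps out for auditors.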

12.
In this paper we investigate the impact of data revisions on forecasting and model selection procedures. A linear ARMA model and nonlinear SETAR model are considered in this study. Two Canadian macroeconomic time series have been analyzed: the real‐time monetary aggregate M3 (1977–2000) and residential mortgage credit (1975–1998). The forecasting method we use is multi‐step‐ahead non‐adaptive forecasting. Copyright © 2008 John Wiley & Sons, Ltd.

13.
A long‐standing puzzle to financial economists is the difficulty of outperforming the benchmark random walk model in out‐of‐sample contests. Using data from the USA over the period of 1872–2007, this paper re‐examines the out‐of‐sample predictability of real stock prices based on price–dividend (PD) ratios. The current research focuses on the significance of the time‐varying mean and nonlinear dynamics of PD ratios in the empirical analysis. Empirical results support the proposed nonlinear model of the PD ratio and the stationarity of the trend‐adjusted PD ratio. Furthermore, this paper rejects the non‐predictability hypothesis of stock prices statistically based on in‐ and out‐of‐sample tests and economically based on the criteria of expected real return per unit of risk. Copyright © 2011 John Wiley & Sons, Ltd.

14.
Transfer function or distributed lag models are commonly used in forecasting. The stability of a constant‐coefficient transfer function model, however, may become an issue for many economic variables due in part to the recent advance in technology and improvement in efficiency in data collection and processing. In this paper, we propose a simple functional‐coefficient transfer function model that can accommodate the changing environment. A likelihood ratio statistic is used to test the stability of a traditional transfer function model. We investigate the performance of the test statistic in the finite sample case via simulation. Using some well‐known examples, we demonstrate clearly that the proposed functional‐coefficient model can substantially improve the accuracy of out‐of‐sample forecasts. In particular, our simple modification results in a 25% reduction in the mean squared errors of out‐of‐sample one‐step‐ahead forecasts for the gas‐furnace data of Box and Jenkins. Copyright © 2003 John Wiley & Sons, Ltd.

15.
To forecast realized volatility, this paper introduces a multiplicative error model that incorporates heterogeneous components: weekly and monthly realized volatility measures. While the model captures the long‐memory property, estimation simply proceeds using quasi‐maximum likelihood estimation. This paper investigates its forecasting ability using the realized kernels of 34 different assets provided by the Oxford‐Man Institute's Realized Library. The model outperforms benchmark models such as ARFIMA, HAR, Log‐HAR and HEAVY‐RM in within‐sample fitting and out‐of‐sample (1‐, 10‐ and 22‐step) forecasts. It performs best in both pointwise and cumulative comparisons of multi‐step‐ahead forecasts, regardless of loss function (QLIKE or MSE). Copyright © 2015 John Wiley & Sons, Ltd.
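The heterogeneous daily/weekly/monthly structure is easiest to see in the HAR benchmark this abstract compares against: tomorrow's realized volatility regressed on yesterday's RV and its 5-day and 22-day averages. A sketch on simulated data (the paper's own multiplicative error model, estimated by QMLE, is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy persistent realized-volatility series.
T = 600
rv = np.empty(T)
rv[0] = 1.0
for t in range(1, T):
    rv[t] = 0.1 + 0.9 * rv[t - 1] + 0.05 * rng.normal()
rv = np.maximum(rv, 0.01)

def har_design(rv):
    """HAR regressors: constant, yesterday's RV, and the 5-day (weekly)
    and 22-day (monthly) trailing averages."""
    rows, targets = [], []
    for t in range(22, len(rv)):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        targets.append(rv[t])
    return np.array(rows), np.array(targets)

X, y = har_design(rv)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
one_step = np.array([1.0, rv[-1], rv[-5:].mean(), rv[-22:].mean()]) @ beta
```

Mixing the three horizons is what lets such simple specifications mimic long memory without fractional integration.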

16.
Model‐based SKU‐level forecasts are often adjusted by experts. In this paper we propose a statistical methodology to test whether these expert forecasts improve on model forecasts. Application of the methodology to a very large database concerning experts in 35 countries who adjust SKU‐level forecasts for pharmaceutical products in seven distinct categories leads to the general conclusion that expert forecasts are equally good at best, but are more often worse than model‐based forecasts. We explore whether this is due to experts putting too much weight on their contribution, and this indeed turns out to be the case. Copyright © 2009 John Wiley & Sons, Ltd.
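A standard way to test whether adjusted forecasts beat model forecasts is a Diebold-Mariano-type test on the loss differential; whether this matches the paper's exact methodology is an assumption. A sketch with simulated one-step errors (so no autocorrelation correction is applied):

```python
import numpy as np

rng = np.random.default_rng(6)

def dm_statistic(e_model, e_expert):
    """Diebold-Mariano statistic on squared-error loss differentials.
    Negative values favour the model; positive values favour the expert."""
    d = e_model ** 2 - e_expert ** 2
    return d.mean() / np.sqrt(d.var(ddof=1) / len(d))

# Toy SKU: expert adjustments add noise on top of the model's errors,
# mimicking experts who put too much weight on their own contribution.
n = 200
e_model = rng.normal(0.0, 1.0, n)
e_expert = e_model + rng.normal(0.0, 0.7, n)

dm = dm_statistic(e_model, e_expert)
```

Here the adjustment inflates error variance, so the statistic comes out negative: the pattern the paper reports across its database.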

17.
In this paper we propose Granger (non‐)causality tests based on a VAR model allowing for time‐varying coefficients. The functional form of the time‐varying coefficients is a logistic smooth transition autoregressive (LSTAR) model using time as the transition variable. The model allows for testing Granger non‐causality when the VAR is subject to a smooth break in the coefficients of the Granger causal variables. The proposed test is then applied to the money–output relationship using quarterly US data for the period 1952:2–2002:4. We find that causality from money to output becomes stronger after 1978:4 and the model is shown to have a good out‐of‐sample forecasting performance for output relative to a linear VAR model. Copyright © 2008 John Wiley & Sons, Ltd.
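The LSTAR-in-time device in this abstract makes a VAR coefficient drift smoothly between a pre-break and post-break value via a logistic weight. A minimal sketch with illustrative parameter values (γ controls the smoothness of the break, c its location in normalized time):

```python
import numpy as np

def lstar_weight(t, gamma=2.0, c=0.5):
    """Logistic smooth-transition weight with calendar time as the
    transition variable: G ~ 0 early in the sample, G ~ 1 late."""
    return 1.0 / (1.0 + np.exp(-gamma * (t - c)))

t = np.linspace(0.0, 1.0, 201)  # sample dates normalized to [0, 1]
G = lstar_weight(t)

# Time-varying coefficient: smooth blend of pre- and post-break values
# (0.2 and 0.6 are made-up coefficients, not the paper's estimates).
phi_t = 0.2 * (1 - G) + 0.6 * G
```

Granger non-causality is then rejected if either regime's coefficients are nonzero, which is how the test accommodates a smooth break rather than a discrete one.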

18.
Do long‐run equilibrium relations suggested by economic theory help to improve the forecasting performance of a cointegrated vector error correction model (VECM)? In this paper we try to answer this question in the context of a two‐country model developed for the Canadian and US economies. We compare the forecasting performance of the exactly identified cointegrated VECMs to the performance of the over‐identified VECMs with the long‐run theory restrictions imposed. We allow for model uncertainty and conduct this comparison for every possible combination of the cointegration ranks of the Canadian and US models. We show that the over‐identified structural cointegrated models generally outperform the exactly identified models in forecasting Canadian macroeconomic variables. We also show that the pooled forecasts generated from the over‐identified models beat most of the individual exactly identified and over‐identified models as well as the VARs in levels and in differences. Copyright © 2011 John Wiley & Sons, Ltd.

19.
In this paper, we put dynamic stochastic general equilibrium (DSGE) forecasts in competition with factor forecasts. We focus on these two models since they represent nicely the two opposing forecasting philosophies. The DSGE model on the one hand has a strong theoretical economic background; the factor model on the other hand is mainly data‐driven. We show that incorporating a large information set using factor analysis can indeed improve the short‐horizon predictive ability, as claimed by many researchers. The micro‐founded DSGE model can provide reasonable forecasts for US inflation, especially with growing forecast horizons. To a certain extent, our results are consistent with the prevailing view that simple time series models should be used in short‐horizon forecasting and structural models should be used in long‐horizon forecasting. Our paper compares both state‐of‐the‐art data‐driven and theory‐based modelling in a rigorous manner. Copyright © 2008 John Wiley & Sons, Ltd.

20.
In this paper we compare the in‐sample fit and out‐of‐sample forecasting performance of no‐arbitrage quadratic, essentially affine and dynamic Nelson–Siegel term structure models. In total, 11 model variants are evaluated, comprising five quadratic, four affine and two Nelson–Siegel models. Recursive re‐estimation and out‐of‐sample 1‐, 6‐ and 12‐month‐ahead forecasts are generated and evaluated using monthly US data for yields observed at maturities of 1, 6, 12, 24, 60 and 120 months. Our results indicate that quadratic models provide the best in‐sample fit, while the best out‐of‐sample performance is generated by three‐factor affine models and the dynamic Nelson–Siegel model variants. Statistical tests fail to identify one single best forecasting model class. Copyright © 2011 John Wiley & Sons, Ltd.
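The dynamic Nelson–Siegel variants in this comparison reduce the yield curve to three factors (level, slope, curvature) via fixed factor loadings. A cross-sectional fitting sketch on a made-up curve, using the paper's maturity grid and the monthly decay value λ = 0.0609 popularized by Diebold and Li (an assumption, not necessarily this paper's choice):

```python
import numpy as np

def ns_loadings(tau, lam=0.0609):
    """Nelson-Siegel loadings for level, slope and curvature at
    maturities tau (in months)."""
    x = lam * tau
    slope = (1.0 - np.exp(-x)) / x
    return np.column_stack([np.ones_like(tau), slope, slope - np.exp(-x)])

# Maturity grid from the paper; the yields themselves are made up.
tau = np.array([1.0, 6.0, 12.0, 24.0, 60.0, 120.0])
y_obs = np.array([4.2, 4.5, 4.7, 5.0, 5.4, 5.6])

B = ns_loadings(tau)
beta = np.linalg.lstsq(B, y_obs, rcond=None)[0]  # level, slope, curvature
fitted = B @ beta
```

In the dynamic version, these three factors are re-estimated each month and forecast with simple time-series models, which is what makes the class competitive out of sample despite its simplicity.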
