Similar Documents
20 similar documents retrieved.
1.
A procedure for estimating state space models for multivariate distributed lag processes is described. It involves singular value decomposition techniques and yields an internally balanced state space representation which has attractive properties. Following the specifications of a forecasting competition, the approach is applied to generate ex-post forecasts for US real GNP growth rates. The forecasts of the estimated state space model are compared to those of twelve econometric models and an ARIMA model.
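A minimal sketch of how an internally balanced state-space realization can be obtained from a singular value decomposition of a Hankel matrix of impulse-response (Markov) parameters, in the spirit of the approach described above. The univariate setup, the function name `balanced_realization`, the Hankel dimensions and the toy impulse response are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def balanced_realization(markov, n_states, rows=10, cols=10):
    """Kung-style balanced state-space realization from scalar Markov parameters.

    markov[k-1] approximates the impulse response h_k, k = 1, 2, ...
    Returns (A, B, C) of an internally balanced realization of order n_states.
    """
    # Hankel matrix H[i, j] = h_{i+j+1} and its one-step shifted version
    H = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H_shift = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])

    U, s, Vt = np.linalg.svd(H)
    U, s, Vt = U[:, :n_states], s[:n_states], Vt[:n_states, :]
    S_half = np.diag(np.sqrt(s))
    S_half_inv = np.diag(1.0 / np.sqrt(s))

    # Balanced factors: observability O = U S^{1/2}, controllability C = S^{1/2} V'
    A = S_half_inv @ U.T @ H_shift @ Vt.T @ S_half_inv
    B = (S_half @ Vt)[:, [0]]        # first column of the controllability factor
    C = (U @ S_half)[[0], :]         # first row of the observability factor
    return A, B, C

# Toy example: Markov parameters of an AR(1)-type impulse response h_k = 0.6**k
h = [0.6 ** k for k in range(1, 25)]
A, B, C = balanced_realization(h, n_states=1)
print(A, B, C)   # recovers a one-state system with pole near 0.6
```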

2.
Most non‐linear techniques give good in‐sample fits to exchange rate data but are usually outperformed by random walks or random walks with drift when used for out‐of‐sample forecasting. In the case of regime‐switching models it is possible to understand why forecasts based on the true model can have higher mean squared error than those of a random walk or random walk with drift. In this paper we provide some analytical results for the case of a simple switching model, the segmented trend model. It requires only a small misclassification, when forecasting which regime the world will be in, to lose any advantage from knowing the correct model specification. To illustrate this we discuss some results for the DM/dollar exchange rate. We conjecture that the forecasting result is more general and describes limitations to the use of switching models for forecasting. This result has two implications. First, it questions the leading role of the random walk hypothesis for the spot exchange rate. Second, it suggests that the mean square error is not an appropriate way to evaluate forecast performance for non‐linear models. Copyright © 1999 John Wiley & Sons, Ltd.
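A hedged simulation of the misclassification point made above: a forecaster who knows the true segmented-trend model but sometimes misclassifies the regime is compared with a driftless random walk. The slopes, persistence and misclassification rate below are toy values chosen for illustration, not taken from the paper; with equal and opposite slopes, the switching forecast loses its edge once the misclassification rate exceeds roughly 25%.

```python
import numpy as np

rng = np.random.default_rng(0)
T, sigma = 5000, 1.0
slopes = {0: 0.5, 1: -0.5}          # per-regime trend slopes (assumed values)
p_stay, p_misclass = 0.98, 0.30     # regime persistence and misclassification rate

# Simulate a segmented (regime-switching) trend plus noise
regime = np.zeros(T, dtype=int)
for t in range(1, T):
    regime[t] = regime[t - 1] if rng.random() < p_stay else 1 - regime[t - 1]
y = np.cumsum([slopes[r] for r in regime]) + rng.normal(0, sigma, T)

# One-step forecasts: random walk vs. true model with occasional regime misclassification
rw_err, sw_err = [], []
for t in range(1, T):
    rw_err.append(y[t] - y[t - 1])                              # random walk: no change
    believed = regime[t] if rng.random() > p_misclass else 1 - regime[t]
    sw_err.append(y[t] - (y[t - 1] + slopes[believed]))         # true model, possibly wrong regime

print("random walk MSE:", np.mean(np.square(rw_err)))
print("switching MSE:  ", np.mean(np.square(sw_err)))   # exceeds the RW MSE at this misclassification rate
```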

3.
Focusing on the adaptive behaviour of NPD (new product development) projects, which continually change the states of their own elements in order to fit the market environment, this study draws on fitness landscape theory to reveal the operating mechanism of NPD projects. After introducing fitness landscape theory, it explains what a fitness landscape means for the operation of an NPD project. The fitness landscape is then modelled with the NK model and studied through numerical simulation. The study finds that the operation of a successful NPD project proceeds in three stages: a random walk, an adaptive walk and adaptive jumps. In the random-walk stage, fitness fluctuates markedly, and the fluctuation becomes more pronounced as the number of relationships among elements increases. In the adaptive-walk stage, the adaptive capability of the NPD project improves incrementally, but the project easily falls into local-optimum traps. In the adaptive-jump stage, successful NPD projects combine short jumps with long jumps, which both ensures continual progress toward higher fitness levels and avoids being confined to local peaks, so that the project keeps achieving success and development.
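A minimal NK-landscape sketch of the three stages described above (random walk, adaptive walk, and adaptive jump mixing short and long jumps). The values of N and K, the step counts and the acceptance rules are illustrative assumptions, not the paper's exact simulation design.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 12, 3   # number of elements and number of interactions per element (assumed)

# Random fitness-contribution tables: each element's contribution depends on
# its own state and the states of K other elements (2**(K+1) possible inputs).
tables = rng.random((N, 2 ** (K + 1)))
neighbors = [rng.choice([j for j in range(N) if j != i], K, replace=False) for i in range(N)]

def fitness(config):
    total = 0.0
    for i in range(N):
        bits = [config[i]] + [config[j] for j in neighbors[i]]
        total += tables[i, int("".join(map(str, bits)), 2)]
    return total / N

config = rng.integers(0, 2, N)

# Stage 1: random walk -- accept any one-bit change, so fitness fluctuates
for _ in range(50):
    config[rng.integers(N)] ^= 1

# Stage 2: adaptive walk -- accept one-bit changes only if fitness improves (local search)
for _ in range(200):
    candidate = config.copy()
    candidate[rng.integers(N)] ^= 1
    if fitness(candidate) > fitness(config):
        config = candidate

# Stage 3: adaptive jump -- mix short (one-bit) jumps with occasional long (random) jumps
for _ in range(200):
    if rng.random() < 0.1:                       # long jump: escape local peaks
        candidate = rng.integers(0, 2, N)
    else:                                        # short jump
        candidate = config.copy()
        candidate[rng.integers(N)] ^= 1
    if fitness(candidate) > fitness(config):
        config = candidate

print("final fitness:", fitness(config))
```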

4.
This paper presents an autoregressive fractionally integrated moving‐average (ARFIMA) model of nominal exchange rates and compares its forecasting capability with the monetary structural models and the random walk model. Monthly observations are used for Canada, France, Germany, Italy, Japan and the United Kingdom for the period of April 1973 through December 1998. The estimation method is Sowell's (1992) exact maximum likelihood estimation. The forecasting accuracy of the long‐memory model is formally compared to the random walk and the monetary models, using the recently developed Harvey, Leybourne and Newbold (1997) test statistics. The results show that the long‐memory model is more efficient than the random walk model in steps‐ahead forecasts beyond 1 month for most currencies and more efficient than the monetary models in multi‐step‐ahead forecasts. This new finding strongly suggests that the long‐memory model of nominal exchange rates be studied as a viable alternative to the conventional models. Copyright © 2006 John Wiley & Sons, Ltd.
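A hedged two-step sketch of ARFIMA forecasting (fractional differencing followed by an ARMA fit), not Sowell's exact maximum likelihood estimator used in the paper; the value of d, the ARMA order and the simulated series are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def frac_diff(series, d):
    """Apply the fractional differencing filter (1 - L)^d via its binomial expansion."""
    weights = [1.0]
    for k in range(1, len(series)):
        weights.append(weights[-1] * (k - 1 - d) / k)
    weights = np.array(weights)
    return np.array([np.dot(weights[: t + 1][::-1], series[: t + 1]) for t in range(len(series))])

# Toy monthly log exchange-rate series (a random walk used only as placeholder data)
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 0.03, 300))

d = 0.4                          # assumed long-memory parameter
x = frac_diff(y, d)[20:]         # drop a burn-in period where the truncated filter is short

# Fit a low-order ARMA to the fractionally differenced series and forecast 12 steps ahead
res = ARIMA(x, order=(1, 0, 1)).fit()
print(res.forecast(steps=12))
```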

5.
Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series, vector autoregressions and random walks are used as the comparison criteria against structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than those simulations associated with univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive. In some cases, VAR benchmarks prove more reliable than RSEM income forecasts. In other cases, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.

6.
This paper employs a non‐parametric method to forecast the high‐frequency Canadian/US dollar exchange rate. The introduction of a microstructure variable, order flow, substantially improves the predictive power of both linear and non‐linear models. The non‐linear models outperform the random walk and linear models based on a number of recursive out‐of‐sample forecasts. Two main criteria that are applied to evaluate model performance are root mean squared error (RMSE) and the ability to predict the direction of exchange rate moves. The artificial neural network (ANN) model consistently achieves lower RMSE than the random walk and linear models for the various out‐of‐sample set sizes. Moreover, the ANN performs better than the other models in terms of the percentage of correctly predicted exchange rate changes. The empirical results suggest that an optimal ANN architecture is superior to the random walk and any competing linear model for high‐frequency exchange rate forecasting. Copyright © 2006 John Wiley & Sons, Ltd.
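A minimal sketch of the comparison described above, built with scikit-learn on simulated data; the data-generating process, the lag structure and the network size are illustrative assumptions, not the paper's specification or data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
T = 2000
order_flow = rng.normal(size=T)

# Toy returns with a mildly non-linear dependence on the previous period's order flow
returns = np.zeros(T)
for t in range(1, T):
    returns[t] = 0.3 * order_flow[t - 1] + 0.3 * np.tanh(2 * order_flow[t - 1]) + rng.normal(0, 0.5)

X = np.column_stack([order_flow[:-1], returns[:-1]])   # predictors: lagged order flow, lagged return
y = returns[1:]
split = int(0.8 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

rmse = lambda p, a: np.sqrt(np.mean((p - a) ** 2))

rw_pred = np.zeros_like(y_te)                          # random walk: zero expected change
lin_pred = LinearRegression().fit(X_tr, y_tr).predict(X_te)
ann_pred = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr).predict(X_te)

for name, pred in [("random walk", rw_pred), ("linear", lin_pred), ("ANN", ann_pred)]:
    hit = np.mean(np.sign(pred) == np.sign(y_te))
    print(f"{name:12s}  RMSE = {rmse(pred, y_te):.4f}  direction hit rate = {hit:.2f}")
```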

7.
This paper analyses the long-run relationship between gold and silver prices. The three main questions addressed are: the influence of a large bubble from 1979:9 to 1980:3 on the cointegration relationship, the extent to which including error-correction terms in a non-linear way allows us to beat the random walk model out-of-sample, and the existence of a strong simultaneous relationship between the rates of return of gold and silver. Different efficient single-equation estimation techniques are required for each of the three questions, and this is explained within a simple bivariate cointegrating system. With monthly data from 1971 to 1990, it is found that cointegration could have occurred during some periods, especially during the bubble and post-bubble periods. However, dummy variables for the intercept of the long-run relationships are needed over the full sample. For the price of gold the non-linear models perform better than the random walk both in-sample and out-of-sample. In-sample non-linear models for the price of silver perform better than the random walk, but this predictive capacity is lost out-of-sample, mainly due to the structural change (a reduction) that occurs in the variance over the out-of-sample period. The in-sample and out-of-sample predictive capacity of the non-linear models is reduced when the variables are in logs. Clear and strong evidence is found for a simultaneous relationship between the rates of return of gold and silver. In the three types of relationship that we have analysed between the prices of gold and silver, the dependence is weaker out-of-sample, possibly meaning that the two markets are becoming separated. © 1998 John Wiley & Sons, Ltd.
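A hedged single-equation sketch of the cointegration and error-correction ideas discussed above, using statsmodels; the simulated gold and silver prices and the simple specification are illustrative assumptions, not the paper's data or estimation techniques.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
T = 500
common = np.cumsum(rng.normal(0, 1, T))               # shared stochastic trend
gold = 2.0 + 1.0 * common + rng.normal(0, 0.5, T)
silver = 0.5 + 0.8 * common + rng.normal(0, 0.5, T)

# Engle-Granger-style test of cointegration between the two price series
tstat, pvalue, _ = coint(gold, silver)
print("cointegration test p-value:", pvalue)

# Long-run relation and its residual (the error-correction term)
long_run = sm.OLS(gold, sm.add_constant(silver)).fit()
ect = long_run.resid

# Short-run equation for gold price changes including the lagged error-correction term
d_gold, d_silver = np.diff(gold), np.diff(silver)
X = sm.add_constant(np.column_stack([d_silver, ect[:-1]]))
ecm = sm.OLS(d_gold, X).fit()
print(ecm.params)   # the last coefficient is the speed of adjustment toward the long run
```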

8.
Measurement errors can have a dramatic impact on the outcome of empirical analysis. In this article we quantify the effects that they can have on predictions generated from ARMA processes. Lower and upper bounds are derived for differences in minimum mean squared prediction errors (MMSE) for forecasts generated from data with and without errors. The impact that measurement errors have on MMSE and other relative measures of forecast accuracy is presented for a variety of model structures and parameterizations. Based on these results, the need to set up the models in state space form to extract the signal component appears to depend upon whether the processes are nearly non‐invertible or non‐stationary, or whether the noise‐to‐signal ratio is very high. Copyright © 1999 John Wiley & Sons, Ltd.
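A simulation sketch of how measurement error inflates the one-step forecast MSE of a simple AR(1) with known parameters; the parameter values and noise-to-signal ratios below are illustrative, and the forecasts deliberately ignore the error rather than extracting the signal in state-space form.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_e = 0.9, 1.0          # AR(1) coefficient and innovation s.d. (assumed known)
T = 100_000

# True process x_t and its error-ridden measurement y_t = x_t + u_t
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0, sigma_e)

for noise_to_signal in [0.0, 0.5, 1.0, 2.0]:
    sigma_u = np.sqrt(noise_to_signal) * x.std()
    y = x + rng.normal(0, sigma_u, T)
    # One-step forecast built naively from the contaminated data
    forecast = phi * y[:-1]
    mse = np.mean((y[1:] - forecast) ** 2)
    print(f"noise/signal = {noise_to_signal:.1f}  one-step MSE = {mse:.3f}  "
          f"(error-free MMSE = {sigma_e**2:.3f})")
```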

9.
This study empirically examines the role of macroeconomic and stock market variables in the dynamic Nelson–Siegel framework with the purpose of fitting and forecasting the term structure of interest rate on the Japanese government bond market. The Nelson–Siegel type models in state‐space framework considerably outperform the benchmark simple time series forecast models such as an AR(1) and a random walk. The yields‐macro model incorporating macroeconomic factors leads to a better in‐sample fit of the term structure than the yields‐only model. The out‐of‐sample predictability of the former for short‐horizon forecasts is superior to the latter for all maturities examined in this study, and for longer horizons the former is still comparable to the latter. Inclusion of macroeconomic factors can dramatically reduce the autocorrelation of forecast errors, which has been a common phenomenon of statistical analysis in previous term structure models. Copyright © 2013 John Wiley & Sons, Ltd.
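A minimal static Nelson–Siegel cross-sectional fit (level, slope and curvature extracted by least squares for a fixed decay parameter); the maturities, yields and λ below are illustrative assumptions, and the sketch omits the dynamic state-space and macro factors used in the paper.

```python
import numpy as np

def ns_loadings(tau, lam):
    """Nelson-Siegel factor loadings for maturities tau and decay parameter lam."""
    slope = (1 - np.exp(-lam * tau)) / (lam * tau)
    curvature = slope - np.exp(-lam * tau)
    return np.column_stack([np.ones_like(tau), slope, curvature])

# Illustrative maturities (years) and observed yields (percent) for one date
tau = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20])
yields = np.array([0.10, 0.12, 0.15, 0.22, 0.30, 0.48, 0.65, 0.85, 1.30])
lam = 0.6   # assumed decay parameter

X = ns_loadings(tau, lam)
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)   # level, slope, curvature factors
fitted = X @ beta
print("factors (level, slope, curvature):", beta)
print("max absolute fitting error:", np.max(np.abs(fitted - yields)))
```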

10.
This paper shows that a constrained autoregressive model that assigns linearly decreasing weights to past observations of a stationary time series has important links to the variance ratio methodology and trend stationary model. It is demonstrated that the proposed autoregressive model is asymptotically related to the variance ratio through the weighting schedules that these two tools use. It is also demonstrated that under a trend stationary time series process the proposed autoregressive model approaches a trend stationary model when the memory of the autoregressive model is increased. These links create a theoretical foundation for tests that confront the random walk model simultaneously against a trend stationary and a variety of short‐ and long‐memory autoregressive alternatives. Copyright © 2009 John Wiley & Sons, Ltd.
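A small numerical sketch of the two quantities linked above, the variance ratio VR(q) and an autoregression whose q lag weights decline linearly, computed on a simulated return series. The weighting scheme shown is a plain illustration of linearly decreasing weights, not the paper's exact constrained estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0, 1, 5000)              # i.i.d. returns, i.e. a random walk in prices

def variance_ratio(r, q):
    """VR(q) = Var(q-period returns) / (q * Var(1-period returns))."""
    r = r - r.mean()
    q_period = np.convolve(r, np.ones(q), mode="valid")   # rolling q-period sums
    return q_period.var() / (q * r.var())

def linear_weight_prediction(r, q):
    """One-step predictions from an AR whose q lag weights decline linearly."""
    w = np.arange(q, 0, -1, dtype=float)
    w /= w.sum()                                           # weights q, q-1, ..., 1 (normalized)
    return np.array([w @ r[t - q:t][::-1] for t in range(q, len(r))])

q = 8
print("VR(q):", variance_ratio(returns, q))                # close to 1 under a random walk
pred = linear_weight_prediction(returns, q)
print("corr(prediction, next return):", np.corrcoef(pred, returns[q:])[0, 1])  # near zero here
```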

11.
In examining stochastic models for commodity prices, central questions often revolve around time‐varying trend, stochastic convenience yield and volatility, and mean reversion. This paper seeks to assess and compare alternative approaches to modelling these effects, with a focus on forecast performance. Three specifications are considered: (i) random‐walk models with GARCH and normal or Student‐t innovations; (ii) Poisson‐based jump‐diffusion models with GARCH and normal or Student‐t innovations; and (iii) mean‐reverting models that allow for uncertainty in equilibrium price. Our empirical application makes use of aluminium spot and futures price series at daily and weekly frequencies. Results show: (i) models with stochastic convenience yield outperform all other competing models, and for all forecast horizons; (ii) the use of futures prices does not always yield lower forecast error values compared to the use of spot prices; and (iii) within the class of (G)ARCH random‐walk models, no model uniformly dominates the others. Copyright © 2008 John Wiley & Sons, Ltd.
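A minimal sketch of the first model class above (a random walk in the log price with GARCH(1,1) variance and normal or Student-t innovations), using the third-party `arch` package; the simulated price series and the settings are illustrative assumptions, and the jump-diffusion and convenience-yield specifications are not shown.

```python
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
# Placeholder daily log prices (a plain random walk); the paper uses aluminium spot/futures data
log_price = np.cumsum(rng.normal(0, 0.01, 1500))
returns = 100 * np.diff(log_price)            # percent returns, the scale arch expects

for dist in ("normal", "t"):
    # A random walk in the price implies a zero-mean return equation; GARCH(1,1) variance
    model = arch_model(returns, mean="Zero", vol="GARCH", p=1, q=1, dist=dist)
    res = model.fit(disp="off")
    forecast = res.forecast(horizon=5)
    print(dist, "5-day-ahead variance forecasts:", forecast.variance.iloc[-1].values)
```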

12.
At what forecast horizon is one time series more predictable than another? This paper applies the Diebold–Kilian conditional predictability measure to assess the out‐of‐sample performance of three alternative models of daily GBP/USD and DEM/USD exchange rate returns. Predictability is defined as a non‐linear statistic of a model's relative expected losses at short and long forecast horizons, allowing flexible choice of both the estimation procedure and loss function. The long horizon is set to 2 weeks and one month ahead and forecasts evaluated according to MSE loss. Bootstrap methodology is used to estimate the data's conditional predictability using GARCH models. This is then compared to predictability under a random walk and a model using the prediction bias in uncovered interest parity (UIP). We find that both exchange rates are less predictable using GARCH than using a random walk, but they are more predictable using UIP than a random walk. Predictability using GARCH is relatively higher for the 2‐week than for the 1‐month forecast horizon. Comparing the results using a random walk to that using UIP reveals ‘pockets’ of predictability, that is, particular short horizons for which predictability using the random walk exceeds that using UIP, or vice versa. Overall, GBP/USD returns appear more predictable than DEM/USD returns at short horizons. Copyright © 2002 John Wiley & Sons, Ltd.
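A hedged sketch of the predictability measure itself under MSE loss, P(j, k) = 1 - E[L(e_{t+j})] / E[L(e_{t+k})], which compares expected losses at a short horizon j and a long horizon k. The closed-form AR(1) example and parameter values are illustrative, not the paper's bootstrap GARCH/UIP setup.

```python
import numpy as np

def ar1_predictability(phi, sigma2, j, k):
    """Conditional predictability P(j, k) = 1 - MSE(j)/MSE(k) for a known AR(1).

    For an AR(1), the h-step forecast error variance is sigma2 * sum_{i<h} phi**(2*i).
    """
    mse = lambda h: sigma2 * sum(phi ** (2 * i) for i in range(h))
    return 1 - mse(j) / mse(k)

phi, sigma2 = 0.95, 1.0           # assumed persistence and innovation variance
short, long_ = 1, 20              # e.g. one day versus roughly one month of daily data
print("persistent AR(1):", ar1_predictability(phi, sigma2, short, long_))
print("white noise:     ", ar1_predictability(0.0, sigma2, short, long_))  # unpredictable returns give P = 0
```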

13.
Several authors (King and Rebelo, 1993; Cogley and Nason, 1995) have questioned the use of exponentially weighted moving average filters such as the Hodrick–Prescott filter in decomposing a series into a trend and cycle, claiming that they lead to the observation of spurious or induced cycles and to misinterpretation of stylized facts. However, little has been done to propose different methods of estimation or other ways of defining trend extraction. This paper has two main contributions. First, we suggest that the decomposition between the trend and cycle has not been done in an appropriate way. Second, we argue for a general-to-specific approach based on a more general filter, the stochastic trend model, which allows us to estimate all the parameters of the model rather than fixing them arbitrarily, as is done with most of the commonly used filters. We illustrate the properties of the proposed technique relative to the conventional ones by employing a Monte Carlo study. Copyright © 1999 John Wiley & Sons, Ltd.
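A minimal sketch of the general-to-specific idea: estimate the parameters of a stochastic trend (local linear trend plus cycle) model by maximum likelihood with statsmodels, instead of fixing the smoothing constant as the HP filter does. The simulated series and the conventional HP λ = 1600 are illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 200
# Toy quarterly series: smooth trend plus a cyclical component plus noise
t = np.arange(T)
y = 0.02 * t + 0.5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.2, T)

# HP filter with the conventional (fixed) smoothing parameter
cycle_hp, trend_hp = sm.tsa.filters.hpfilter(y, lamb=1600)

# Stochastic trend model: all variance parameters estimated rather than imposed
mod = sm.tsa.UnobservedComponents(y, level="local linear trend",
                                  cycle=True, stochastic_cycle=True)
res = mod.fit(disp=False)
trend_uc = res.level.smoothed

print("estimated parameters:", dict(zip(res.param_names, res.params)))
print("max |HP trend - stochastic trend|:", np.max(np.abs(trend_hp - trend_uc)))
```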

14.
This paper applies the Kalman filtering procedure to estimate persistent and transitory noise components of accounting earnings. Designating the transitory noise component separately (under a label such as extraordinary items) in financial reports should help users predict future earnings. If a firm has no foreknowledge of future earnings, managers can apply a filter to a firm's accounting earnings more efficiently than an interested user. If management has foreknowledge of earnings, application of a filtering algorithm can result in smoothed variables that convey information otherwise not available to users. Application of a filtering algorithm to a sample of firms revealed that a substantial number of firms exhibited a significant transitory noise component of earnings. Also, for those firms whose earnings exhibited a significant departure from the random walk process, the paper shows that filtering can be fruitfully applied to improve predictive ability.
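A hand-rolled Kalman-filter sketch of the persistent/transitory decomposition: earnings are modelled as a random-walk (persistent) component observed with transitory noise, and the filter separates the two. The variance values and simulated earnings are illustrative assumptions, not the paper's firm data or estimation.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 60
q, r = 0.5, 2.0                       # persistent (state) and transitory (noise) variances (assumed)

# Simulated annual earnings: random-walk signal plus transitory noise
signal = np.cumsum(rng.normal(0, np.sqrt(q), T)) + 10
earnings = signal + rng.normal(0, np.sqrt(r), T)

# Kalman filter for the local level model: x_t = x_{t-1} + w_t, y_t = x_t + v_t
x_hat, P = earnings[0], r
persistent = np.empty(T)
for t in range(T):
    # Predict: the random-walk state carries over with added state variance
    P = P + q
    # Update with this period's reported earnings
    K = P / (P + r)                   # Kalman gain
    x_hat = x_hat + K * (earnings[t] - x_hat)
    P = (1 - K) * P
    persistent[t] = x_hat

transitory = earnings - persistent    # estimated transitory noise component
print("std of estimated transitory component:", transitory.std())
print("corr(filtered persistent component, true signal):", np.corrcoef(persistent, signal)[0, 1])
```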

15.
A long‐standing puzzle to financial economists is the difficulty of outperforming the benchmark random walk model in out‐of‐sample contests. Using data from the USA over the period of 1872–2007, this paper re‐examines the out‐of‐sample predictability of real stock prices based on price–dividend (PD) ratios. The current research focuses on the significance of the time‐varying mean and nonlinear dynamics of PD ratios in the empirical analysis. Empirical results support the proposed nonlinear model of the PD ratio and the stationarity of the trend‐adjusted PD ratio. Furthermore, this paper rejects the non‐predictability hypothesis of stock prices statistically based on in‐ and out‐of‐sample tests and economically based on the criteria of expected real return per unit of risk. Copyright © 2011 John Wiley & Sons, Ltd.

16.
We compare the accuracy of vector autoregressive (VAR), restricted vector autoregressive (RVAR), Bayesian vector autoregressive (BVAR), vector error correction (VEC) and Bayesian error correction (BVEC) models in forecasting the exchange rates of five Central and Eastern European currencies (Czech Koruna, Hungarian Forint, Slovak Koruna, Slovenian Tolar and Polish Zloty) against the US Dollar and the Euro. Although these models tend to outperform the random walk model for long‐term predictions (6 months ahead and beyond), even the best models in terms of average prediction error fail to reject the test of equality of forecasting accuracy against the random walk model in short‐term predictions. Copyright © 2005 John Wiley & Sons, Ltd.
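A hedged sketch of the test of equal forecasting accuracy against the random walk, using a simple Diebold–Mariano statistic with squared-error loss; the simulated exchange-rate series and the AR(1)-in-differences stand-in for the VAR-type models are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(0, 1, 600))          # toy (log) exchange-rate series, a random walk

# One-step out-of-sample forecasts over the last 100 observations
errors_rw, errors_ar = [], []
for t in range(500, 600):
    errors_rw.append(y[t] - y[t - 1])                         # random walk: no change
    history = np.diff(y[:t])
    phi = np.dot(history[1:], history[:-1]) / np.dot(history[:-1], history[:-1])
    errors_ar.append(y[t] - (y[t - 1] + phi * history[-1]))   # AR(1)-in-differences forecast

def diebold_mariano(e1, e2):
    """DM statistic for equal squared-error accuracy of two one-step forecast sequences."""
    d = np.square(e1) - np.square(e2)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / len(d))
    return dm, 2 * (1 - stats.norm.cdf(abs(dm)))

dm, pval = diebold_mariano(np.array(errors_ar), np.array(errors_rw))
print(f"DM statistic = {dm:.2f}, p-value = {pval:.2f}")   # typically fails to reject equality here
```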

17.
This paper reports the results of studies concerning the accuracy and efficiency of time-series extrapolation decisions made with the assistance of an interactive graphical tool called GRAFFECT. The tool facilitates the decomposition of the extrapolation task by permitting the serial decomposition of the cue data as the task proceeds. GRAFFECT uses an interactive graphical interface controlled substantially with the use of a mouse. The extrapolation task is divided into the following: (1) trend modelling and extrapolation, (2) seasonal pattern modelling, and (3) extrapolation from the noise residual series. As each component is modelled its effect is stored and the information is washed out of the cue series. The ultimate forecast is produced by automatic recomposition of the judgementally determined components. The results show a significant improvement in forecast accuracy over unaided judgment, resulting in a subjective extrapolation that betters deseasonalized single exponential smoothing.

18.
Building on recent and growing evidence that geographic location influences information diffusion, this paper examines the relation between a firm's location and the predictability of stock returns. We hypothesize that returns on a portfolio composed of firms located in central areas are more likely to follow a random walk than returns on a portfolio composed of firms located in remote areas. Using a battery of variance ratio tests, we find strong and robust support for our prediction. In particular, we show that the returns on a portfolio composed of the 500 largest urban firms follow a random walk; however, all variance ratio tests reject the random walk hypothesis for a portfolio that includes the 500 largest rural firms. Our results are robust to alternative definitions of a firm's location and portfolio formation.

19.
Given a structural time-series model specified at a basic time interval, this paper deals with the problems of forecasting efficiency and estimation accuracy generated when the data are collected at a timing interval which is a multiple of the time unit chosen to build the basic model. Results are presented for the simplest structural models, the trend plus error models, under the assumption that the parameters of the model are known. It is shown that the gains in forecasting efficiency and estimation accuracy for having data at finer intervals are considerable for both stock and flow variables with only one exception. No gain in forecasting efficiency is achieved in the case of a stock series that follows a random walk.

20.
We use dynamic factors and neural network models to identify current and past states (instead of future) of the US business cycle. In the first step, we reduce noise in the data by using a moving average filter. Dynamic factors are then extracted from a large-scale data set consisting of more than 100 variables. In the last step, these dynamic factors are fed into the neural network model for predicting business cycle regimes. We show that our proposed method tracks US business cycle regimes quite accurately in-sample and out-of-sample without taking account of historical data availability. Our results also indicate that noise reduction is an important step for business cycle prediction. Furthermore, using pseudo real-time and vintage data, we show that our neural network model identifies turning points quite accurately and very quickly in real time.
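A minimal sketch of the pipeline described above (moving-average noise reduction, factor extraction, and a neural network regime classifier), built with scikit-learn on simulated data; the number of variables, the number of factors, the use of static PCA factors and the network size are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
T, n_vars = 600, 100

# Simulated business-cycle regime (0 = expansion, 1 = recession) driving a common factor
regime = (np.sin(np.arange(T) / 25) < -0.6).astype(int)
factor = np.where(regime == 1, -1.0, 1.0) + rng.normal(0, 0.5, T)
panel = np.outer(factor, rng.normal(0, 1, n_vars)) + rng.normal(0, 1, (T, n_vars))

# Step 1: noise reduction with a simple moving-average filter
window = 3
smoothed = np.array([np.convolve(panel[:, j], np.ones(window) / window, mode="same")
                     for j in range(n_vars)]).T

# Step 2: extract a few factors from the large panel (static PCA here, as a stand-in)
factors = PCA(n_components=5).fit_transform(smoothed)

# Step 3: feed the factors into a neural network that classifies the regime
X_tr, X_te, y_tr, y_te = train_test_split(factors, regime, test_size=0.3, shuffle=False)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("out-of-sample regime classification accuracy:", clf.score(X_te, y_te))
```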
