Similar articles
20 similar articles found (search time: 46 ms)
1.
We introduce a parameter-driven, state-space model for binary time series data. The model is based on a state process with binomial-beta dynamics, which has a Markov, endogenous switching-regime representation. The model allows for recursive prediction and filtering formulas with extremely low computational cost, and hence avoids the use of computationally intensive simulation-based filtering algorithms. Case studies illustrate the advantage of our model over popular intensity-based observation-driven models, in terms of both fit and out-of-sample forecasting.
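The appeal of closed-form recursive filtering for binary data can be illustrated with a minimal sketch. This is not the authors' exact model: it assumes a conjugate beta prior on the success probability and a discount (forgetting) factor `delta` standing in for the state dynamics, which keeps both prediction and filtering in closed form with no simulation.

```python
import numpy as np

def beta_binomial_filter(y, delta=0.95, a0=1.0, b0=1.0):
    """Illustrative closed-form filter for binary observations: the Beta
    state prior is discounted (forgetting factor delta, an assumption
    here) before each conjugate Bernoulli update, so each step costs O(1)
    and no simulation-based filtering is needed."""
    a, b = a0, b0
    preds = []
    for yt in y:
        a, b = delta * a, delta * b      # discount: stand-in for state dynamics
        preds.append(a / (a + b))        # one-step-ahead P(y_t = 1)
        a += yt                          # conjugate Bernoulli update
        b += 1 - yt
    return np.array(preds)
```

With a symmetric prior the first prediction is 0.5, and a long run of ones pushes the predicted probability toward 1 while the discounting keeps the filter responsive to regime changes.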

2.
Recently, the support vector machine (SVM), a machine learning technique related to the artificial neural network (ANN), has been successfully used for financial forecasting. This paper deals with the application of SVM to volatility forecasting under the GARCH framework; its performance is compared with the simple moving average, standard GARCH, nonlinear EGARCH and traditional ANN‐GARCH models using two evaluation measures and robust Diebold–Mariano tests. The real data used in this study are daily GBP exchange rates and the NYSE composite index. Empirical results from both simulation and real data reveal that, under a recursive forecasting scheme, SVM‐GARCH models significantly outperform the competing models in most situations of one‐period‐ahead volatility forecasting, which confirms the theoretical advantage of SVM. The standard GARCH model also performs well in the case of normality and large sample sizes, while the EGARCH model is good at forecasting volatility under highly skewed distributions. The sensitivity analysis used to choose the SVM parameters and the cross‐validation used to determine the stopping point of the recurrent SVM procedure are also examined in this study. Copyright © 2009 John Wiley & Sons, Ltd.
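The standard GARCH(1,1) benchmark in comparisons like this one is a simple deterministic recursion, sketched below. The SVM-GARCH idea replaces this parametric variance map with an SVM regression fitted on lagged squared returns; the parameter values here are illustrative, not estimates from the paper.

```python
import numpy as np

def garch11_variance(r, omega=0.05, alpha=0.1, beta=0.85):
    """Conditional variance recursion of the GARCH(1,1) benchmark:
    sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}.
    Parameter values are illustrative; in practice they are estimated
    by maximum likelihood."""
    sigma2 = np.empty(len(r))
    sigma2[0] = np.var(r)                 # initialise at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```

For a zero return series the recursion converges to the fixed point omega / (1 - beta), which makes the long-run behaviour easy to check.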

3.
This paper examines the relationship between stock prices and commodity prices and whether this can be used to forecast stock returns. As both prices are linked to expected future economic performance they should exhibit a long‐run relationship. Moreover, changes in sentiment towards commodity investing may affect the nature of the response to disequilibrium. Results support cointegration between stock and commodity prices, while Bai–Perron tests identify breaks in the forecast regression. Forecasts are computed using a standard fixed (static) in‐sample/out‐of‐sample approach and by both recursive and rolling regressions, which incorporate the effects of changing forecast parameter values. A range of model specifications and forecast metrics are used. The historical mean model outperforms the forecast models in both the static and recursive approaches. However, in the rolling forecasts, those models that incorporate information from the long‐run stock price/commodity price relationship outperform both the historical mean and other forecast models. Of note, the historical mean still performs relatively well compared to standard forecast models that include the dividend yield and short‐term interest rates but not the stock/commodity price ratio. Copyright © 2014 John Wiley & Sons, Ltd.
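The recursive and rolling forecasting schemes mentioned above differ only in the estimation window. A minimal sketch with plain OLS (the paper's actual specifications include error-correction terms; `window` and the predictor set here are illustrative):

```python
import numpy as np

def forecasts(y, X, window=60, scheme="recursive"):
    """One-step-ahead OLS forecasts under the two schemes in the text:
    'recursive' re-estimates on all data up to t (expanding window),
    while 'rolling' uses only the last `window` observations, so the
    parameter estimates can adapt after a structural break."""
    preds = []
    for t in range(window, len(y)):
        lo = 0 if scheme == "recursive" else t - window
        beta, *_ = np.linalg.lstsq(X[lo:t], y[lo:t], rcond=None)
        preds.append(X[t] @ beta)
    return np.array(preds)
```

When the relationship is stable the two schemes coincide in the limit; it is precisely under parameter change, as with the Bai–Perron breaks found here, that rolling forecasts can pull ahead.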

4.
The increasing amount of attention paid to longevity risk and funding for old age has created the need for precise mortality models and accurate future mortality forecasts. Orthogonal polynomials have been widely used in technical fields and there have also been applications in mortality modeling. In this paper we adopt a flexible functional form approach using two‐dimensional Legendre orthogonal polynomials to fit and forecast mortality rates. Unlike some of the existing mortality models in the literature, the model we propose does not impose any restrictions on the age, time or cohort structure of the data and thus allows for different model designs for different countries' mortality experience. We conduct an empirical study using male mortality data from a range of developed countries and explore the possibility of using age–time effects to capture cohort effects in the underlying mortality data. It is found that, for some countries, cohort dummies still need to be incorporated into the model. Moreover, when comparing the proposed model with well‐known mortality models in the literature, we find that our model provides comparable fitting but with a much smaller number of parameters. Based on 5‐year‐ahead mortality forecasts, it can be concluded that the proposed model improves the overall accuracy of the future mortality projection. Copyright © 2016 John Wiley & Sons, Ltd.
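A two-dimensional Legendre fit of a mortality surface can be sketched with numpy's polynomial module. This is a simplified illustration, not the paper's model: the degrees are arbitrary, and both coordinates are rescaled to [-1, 1], the natural domain of the Legendre polynomials.

```python
import numpy as np
from numpy.polynomial import legendre as L

def fit_mortality_surface(age, year, log_m, deg=(3, 2)):
    """Regress log mortality rates on a 2D Legendre basis in (age, year).
    Degrees are illustrative; age and year are mapped onto [-1, 1]
    before building the basis."""
    x = 2 * (age - age.min()) / (age.max() - age.min()) - 1
    t = 2 * (year - year.min()) / (year.max() - year.min()) - 1
    V = L.legvander2d(x, t, deg)               # design matrix of basis terms
    coef, *_ = np.linalg.lstsq(V, log_m, rcond=None)
    return coef.reshape(deg[0] + 1, deg[1] + 1), (x, t)
```

Fitted values are recovered with `L.legval2d(x, t, coef)`; any surface that is polynomial of degree at most `deg` in the rescaled coordinates is reproduced exactly.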

5.
Auditors must assess their clients' ability to function as a going concern for at least the year following the financial statement date. The audit profession has been severely criticized for failure to ‘blow the whistle’ in numerous highly visible bankruptcies that occurred shortly after unmodified audit opinions were issued. Financial distress indicators examined in this study are one mechanism for making such assessments. This study measures and compares the predictive accuracy of an easily implemented two‐variable bankruptcy model originally developed using recursive partitioning on an equally proportioned data set of 202 firms. In this study, we test the predictive accuracy of this model, as well as previously developed logit and neural network models, using a realistically proportioned set of 14,212 firms' financial data covering the period 1981–1990. The previously developed recursive partitioning model had an overall accuracy for all firms ranging from 95 to 97% which outperformed both the logit model at 93 to 94% and the neural network model at 86 to 91%. The recursive partitioning model predicted the bankrupt firms with 33–58% accuracy. A sensitivity analysis of recursive partitioning cutting points indicated that a newly specified model could achieve an all firm and a bankrupt firm predictive accuracy of approximately 85%. Auditors will be interested in the Type I and Type II error tradeoffs revealed in a detailed sensitivity table for this easily implemented model. Copyright © 2000 John Wiley & Sons, Ltd.
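A two-variable recursive-partitioning model is just a pair of nested threshold rules, which is why it is so easy to implement in an audit setting. The variables and cutting points below are hypothetical, purely to show the shape of such a rule; they are not those estimated in the study.

```python
def bankruptcy_flag(cash_flow_to_debt, roa, c1=0.13, c2=0.0):
    """Illustrative two-variable partitioning rule (variables and cutoffs
    c1, c2 are hypothetical): flag a firm as likely to fail only when it
    falls on the distressed side of both splits."""
    if cash_flow_to_debt >= c1:
        return 0                      # healthy branch: first split passes
    return 1 if roa < c2 else 0       # second split on profitability
```

Moving `c1` and `c2` trades Type I errors (missed bankruptcies) against Type II errors (healthy firms flagged), which is exactly the sensitivity analysis of cutting points the abstract describes.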

6.
We utilize mixed‐frequency factor‐MIDAS models for the purpose of carrying out backcasting, nowcasting, and forecasting experiments using real‐time data. We also introduce a new real‐time Korean GDP dataset, which is the focus of our experiments. The methodology that we utilize involves first estimating common latent factors (i.e., diffusion indices) from 190 monthly macroeconomic and financial series using various estimation strategies. These factors are then included, along with standard variables measured at multiple different frequencies, in various factor‐MIDAS prediction models. Our key empirical findings are as follows. (i) When using real‐time data, factor‐MIDAS prediction models outperform various linear benchmark models. Interestingly, the “MSFE‐best” MIDAS models contain no autoregressive (AR) lag terms when backcasting and nowcasting. AR terms only begin to play a role in “true” forecasting contexts. (ii) Models that utilize only one or two factors are “MSFE‐best” at all forecasting horizons, but not at any backcasting and nowcasting horizons. In these latter contexts, much more heavily parametrized models with many factors are preferred. (iii) Real‐time data are crucial for forecasting Korean gross domestic product, and the use of “first available” versus “most recent” data “strongly” affects model selection and performance. (iv) Recursively estimated models are almost always “MSFE‐best,” and models estimated using autoregressive interpolation dominate those estimated using other interpolation methods. (v) Factors estimated using recursive principal component estimation methods have more predictive content than those estimated using a variety of other (more sophisticated) approaches. This result is particularly prevalent for our “MSFE‐best” factor‐MIDAS models, across virtually all forecast horizons, estimation schemes, and data vintages that are analyzed.
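The MIDAS part of factor-MIDAS handles the frequency mismatch (monthly factors, quarterly GDP) by compressing many high-frequency lags into a low-dimensional weighting scheme. The standard device, sketched here, is the exponential Almon polynomial, which parameterizes an arbitrary number of lag weights with just two coefficients.

```python
import numpy as np

def exp_almon_weights(theta1, theta2, n_lags):
    """Exponential Almon lag weights, the standard MIDAS device: weights
    over n_lags high-frequency lags are driven by two parameters and
    normalized to sum to one."""
    k = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * k + theta2 * k ** 2)
    return w / w.sum()
```

Setting both parameters to zero gives equal weights (a simple time average of the high-frequency regressor); a negative quadratic coefficient concentrates weight on recent lags, which is the typical nowcasting configuration.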

7.
The three basic modelling approaches used to explain forest fire behaviour are theoretically, laboratory or empirically based. Results of all three approaches are reviewed, but it is noted that only the laboratory- and empirically based models have led to forecasting techniques that are in widespread use. These are the Rothermel model and the McArthur meters, respectively. Field tests designed to test the performance of these operational models were carried out in tropical grasslands. A preliminary analysis indicated that the Rothermel model overpredicted spread rates while the McArthur model underpredicted. To improve the forecast of bushfire rate of spread available to operational firefighting crews it is suggested that a time-variable parameter (TVP) recursive least squares algorithm can be used to assign weights to the respective models, with the weights recursively updated as information on fire-front location becomes available. Results of this methodology when applied to US Grasslands fire experiment data indicate that the quality of the input combined with a priori knowledge of the performance of the candidate models plays an important role in the performance of the TVP algorithm. With high-quality input data, the Rothermel model on its own outperformed the TVP algorithm, but with slightly inferior data both approaches were comparable. Though the use of all available data in a multiple linear regression produces a lower sum of squared errors than the recursive, time-variable weighting approach, or that of any single model, the uncertainties of data input and consequent changes in weighting coefficients during operational conditions suggest the use of the TVP algorithm approach.
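One standard way to obtain time-variable parameters, sketched below, is recursive least squares with a forgetting factor: the weights on the two candidate spread-rate models are revised each time a new fire-front observation arrives, with older observations discounted. The forgetting factor `lam` and the equal initial weights are illustrative assumptions, not values from the study.

```python
import numpy as np

def tvp_weights(obs, f_rothermel, f_mcarthur, lam=0.95):
    """Forgetting-factor recursive least squares over two candidate
    model forecasts: yields the updated weight vector after each new
    observation of fire spread (lam and the initialisation are
    illustrative)."""
    theta = np.array([0.5, 0.5])          # start from equal weights
    P = np.eye(2) * 100.0                 # large (diffuse) prior covariance
    for y, x in zip(obs, np.column_stack([f_rothermel, f_mcarthur])):
        P /= lam                          # forgetting: inflate covariance
        k = P @ x / (1.0 + x @ P @ x)     # Kalman-style gain
        theta = theta + k * (y - x @ theta)
        P = P - np.outer(k, x @ P)
        yield theta.copy()
```

If the observed spread rates track one model exactly, the weights converge toward (1, 0); with noisier inputs the weights drift, which matches the abstract's finding that input quality drives the TVP algorithm's performance.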

8.
An underlying assumption in Multivariate Singular Spectrum Analysis (MSSA) is that the time series are governed by a linear recurrent continuation. However, in the presence of a structural break, multiple series can be transferred from one homogeneous state to another over a comparatively short time, breaking this assumption. As a consequence, forecasting performance can degrade significantly. In this paper, we propose a state-dependent model to incorporate the movement of states in the linear recurrent formula, called a State-Dependent Multivariate SSA (SD-MSSA) model. The proposed model is examined for its reliability in the presence of a structural break by conducting an empirical analysis covering both synthetic and real data. Comparison with standard MSSA, BVAR, VAR and VECM models shows that the proposed model significantly outperforms all of these models.

9.
10.
We use real‐time macroeconomic variables and combination forecasts with both time‐varying weights and equal weights to forecast inflation in the USA. The combination forecasts compare three sets of commonly used time‐varying coefficient autoregressive models: Gaussian distributed errors, errors with stochastic volatility, and errors with moving average stochastic volatility. Both point forecasts and density forecasts suggest that models combined by equal weights do not produce worse forecasts than those with time‐varying weights. We also find that variable selection, the allowance of time‐varying lag length choice, and the stochastic volatility specification significantly improve forecast performance over standard benchmarks. Finally, when compared with the Survey of Professional Forecasters, the results of the best combination model are found to be highly competitive during the 2007/08 financial crisis.
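The equal-weight combination that this abstract finds hard to beat is the simplest possible pooling rule, sketched below; time-varying weights would replace the uniform vector with one re-estimated each period.

```python
import numpy as np

def combine(forecasts, weights=None):
    """Pool competing forecasts of the same target. With weights=None
    this is the equal-weight combination; passing a weight vector gives
    the time-varying-weight alternative for a single period."""
    F = np.asarray(forecasts, dtype=float)        # shape (n_models, horizon)
    if weights is None:
        weights = np.full(F.shape[0], 1.0 / F.shape[0])
    return weights @ F
```

The robustness of equal weights (the "forecast combination puzzle") comes from avoiding the estimation error that noisy optimal weights introduce, which is consistent with the finding reported here.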

11.
This paper deals with the nonlinear modeling and forecasting of the dollar–sterling and franc–sterling real exchange rates using long spans of data. Our contribution is threefold. First, we provide significant evidence of smooth transition dynamics in the series by employing a battery of recently developed in‐sample statistical tests. Second, we investigate the small‐sample properties of several evaluation measures for comparing recursive forecasts when one of the competing models is nonlinear. Finally, we run a forecasting race for the post‐Bretton Woods era between the nonlinear real exchange rate model, the random walk, and the linear autoregressive model. The nonlinear model outperforms all rival models in the dollar–sterling case but cannot beat the linear autoregressive model in the franc–sterling case. Copyright © 2011 John Wiley & Sons, Ltd.

12.
Compared with point forecasting, interval forecasting is believed to be more effective and helpful in decision making, as it provides more information about the data generation process. Based on the well-established “linear and nonlinear” modeling framework, a hybrid model is proposed by coupling the vector error correction model (VECM) with artificial intelligence models which consider the cointegration relationship between the lower and upper bounds (Coin-AIs). VECM is first employed to fit the original time series with the residual error series modeled by Coin-AIs. Using pork price as a research sample, the empirical results statistically confirm the superiority of the proposed VECM-CoinAIs over other competing models, which include six single models and six hybrid models. This result suggests that considering the cointegration relationship is a workable direction for improving the forecast performance of the interval-valued time series. Moreover, with a reasonable data transformation process, interval forecasting is proven to be more accurate than point forecasting.

13.
The forecasting capabilities of feed‐forward neural network (FFNN) models are compared to those of other competing time series models by carrying out forecasting experiments. As demonstrated by the detailed forecasting results for the Canadian lynx data set, FFNN models perform very well, especially when the series contains nonlinear and non‐Gaussian characteristics. To compare the forecasting accuracy of an FFNN model with that of an alternative model, Pitman's test is employed to ascertain if one model forecasts significantly better than another when generating one‐step‐ahead forecasts. Moreover, the residual‐fit spread plot is utilized in a novel fashion in this paper to compare visually out‐of‐sample forecasts of two alternative forecasting models. Finally, forecasting findings on the lynx data are used to explain under what conditions one would expect FFNN models to furnish reliable and accurate forecasts. Copyright © 2005 John Wiley & Sons, Ltd.
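Pitman's test for comparing two sets of one-step-ahead forecast errors rests on a simple identity: the errors have equal variance exactly when their sum and difference are uncorrelated. A minimal sketch (the t-approximation for the correlation is a standard assumption, not a detail given in the abstract):

```python
import numpy as np

def pitman_statistic(e1, e2):
    """Pitman's test for equal forecast accuracy: under equal error
    variances the series s = e1 + e2 and d = e1 - e2 are uncorrelated,
    so a large |correlation| indicates one model forecasts better."""
    s, d = e1 + e2, e1 - e2
    r = np.corrcoef(s, d)[0, 1]             # cov(s, d) = var(e1) - var(e2)
    n = len(e1)
    t = r * np.sqrt((n - 2) / (1 - r ** 2))  # t-statistic with n - 2 df
    return r, t
```

A significantly negative correlation means the second model's errors have the larger variance, i.e. the first model forecasts better.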

14.
Structural symmetry is observed in the majority of fundamental protein folds and gene duplication and fusion evolutionary processes are postulated to be responsible. However, convergent evolution leading to structural symmetry has also been proposed; additionally, there is debate regarding the extent to which exact primary structure symmetry is compatible with efficient protein folding. Issues of symmetry in protein evolution directly impact strategies for de novo protein design as symmetry can substantially simplify the design process. Additionally, when considering gene duplication and fusion in protein evolution, there are two competing models: “emergent architecture” and “conserved architecture”. Recent experimental work has shed light on both the evolutionary process leading to symmetric protein folds as well as the ability of symmetric primary structure to efficiently fold. Such studies largely support a “conserved architecture” evolutionary model, suggesting that complex protein architecture was an early evolutionary achievement involving oligomerization of smaller polypeptides.

15.
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be adjusted to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation.

16.
The power of the Chow, linear, predictive failure and CUSUM of squares tests to detect structural change is compared in a two-variable random walk model and a once-and-for-all parameter shift model. In each case the linear test has greatest power, followed by the Chow test. It is suggested that the linear test be used as the basic general test for structural change in time series data, and that tests of forecasting performance be confined to the last few observations. Analysis of recursive residuals and recursive parameter estimates should be regarded as a form of exploratory data analysis and a tool for understanding discrepancies with previous results, rather than a basis for formal tests of structural change.
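The classical Chow test compared above is straightforward to compute: fit the regression on the pooled sample and on each sub-sample around a known break date, then compare residual sums of squares. A minimal sketch (the break point is assumed known, as in the textbook version of the test):

```python
import numpy as np

def chow_test(y, X, split):
    """Chow F-statistic for a parameter shift at a known break point:
    F = ((RSS_pooled - RSS_1 - RSS_2) / k) / ((RSS_1 + RSS_2) / (n - 2k)),
    where k is the number of regressors."""
    def rss(yy, XX):
        b, *_ = np.linalg.lstsq(XX, yy, rcond=None)
        r = yy - XX @ b
        return r @ r
    k, n = X.shape[1], len(y)
    num = (rss(y, X) - rss(y[:split], X[:split]) - rss(y[split:], X[split:])) / k
    den = (rss(y[:split], X[:split]) + rss(y[split:], X[split:])) / (n - 2 * k)
    return num / den
```

Under stability the statistic follows an F(k, n - 2k) distribution; a slope shift at the break date inflates it dramatically.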

17.
In time-series analysis, a model is rarely pre-specified but rather is typically formulated in an iterative, interactive way using the given time-series data. Unfortunately, the properties of the fitted model, and the forecasts from it, are generally calculated as if the model were known in the first place. This is theoretically incorrect, as least squares theory, for example, does not apply when the same data are used to formulate and fit a model. Ignoring prior model selection leads to biases, not only in estimates of model parameters but also in the subsequent construction of prediction intervals. The latter are typically too narrow, partly because they do not allow for model uncertainty. Empirical results also suggest that more complicated models tend to give a better fit but poorer ex-ante forecasts. The reasons behind these phenomena are reviewed. When comparing different forecasting models, the BIC is preferred to the AIC for identifying a model on the basis of within-sample fit, but out-of-sample forecasting accuracy provides the real test. Alternative approaches to forecasting, which avoid conditioning on a single model, include Bayesian model averaging and using a forecasting method which is not model-based but which is designed to be adaptable and robust.
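The BIC-versus-AIC point above can be made concrete with AR order selection. The sketch below fits AR(p) models on a common estimation sample (conditioning on the first `max_lag` observations, an assumption made here so the criteria are comparable across p); because BIC's log(n) penalty exceeds AIC's constant 2, BIC never selects a larger model than AIC.

```python
import numpy as np

def best_ar_order(y, max_lag=6):
    """Select an AR(p) order by AIC and BIC on a common sample. The
    Gaussian-likelihood criteria m*log(sigma2) + penalty*p are used;
    BIC's heavier log(m) penalty favours the more parsimonious model."""
    n = len(y)
    yy = y[max_lag:]                         # common sample across all p
    m = len(yy)
    best = {"aic": (np.inf, 0), "bic": (np.inf, 0)}
    for p in range(1, max_lag + 1):
        # lag matrix: column j holds y_{t-1-j} for t = max_lag .. n-1
        X = np.column_stack([y[max_lag - 1 - j: n - 1 - j] for j in range(p)])
        b, *_ = np.linalg.lstsq(X, yy, rcond=None)
        sigma2 = np.mean((yy - X @ b) ** 2)
        for name, pen in (("aic", 2.0), ("bic", np.log(m))):
            ic = m * np.log(sigma2) + pen * p
            if ic < best[name][0]:
                best[name] = (ic, p)
    return {name: order for name, (ic, order) in best.items()}
```

As the text notes, the BIC choice is only the within-sample verdict; a genuine out-of-sample comparison remains the real test.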

18.
When evaluating the launch of a new product or service, forecasts of the diffusion path and the effects of the marketing mix are critically important. Currently no unified framework exists to provide guidelines on the inclusion and specification of marketing mix variables into models of innovation diffusion. The objective of this research is to examine empirically the role of prices in diffusion models, in order to establish whether price can be incorporated effectively into the simpler time-series models. Unlike existing empirical research which examines the models' fit to historical data, we examine the predictive validity of alternative models. Only if the incorporation of prices improves the predictive performance of diffusion models can it be argued that these models have validity. A series of diffusion models which include prices are compared against a number of well-accepted diffusion models, including the Bass (1969) model, and more recently developed ‘flexible’ diffusion models. For short data series and long-lead time forecasting, the situation typical of practical situations, price rarely added to the forecasting capability of simpler time-series models. Copyright © 1998 John Wiley & Sons, Ltd.
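The Bass (1969) model used as a benchmark above has a simple discrete-time form: adoptions in each period come from innovators (coefficient p) and imitators (coefficient q) drawing down the remaining market potential m. A minimal sketch, with illustrative parameter values in the test:

```python
import numpy as np

def bass_adopters(p, q, m, periods):
    """Discrete-time Bass diffusion: new adopters per period are
    n_t = (p + q * N/m) * (m - N), where N is cumulative adopters so
    far, p the innovation coefficient, q the imitation coefficient and
    m the market potential."""
    N = 0.0
    out = []
    for _ in range(periods):
        n_t = (p + q * N / m) * (m - N)
        N += n_t
        out.append(n_t)
    return np.array(out)
```

With q > p the adoption curve is the familiar bell shape peaking mid-diffusion; price-dependent extensions typically make p and q (or m) functions of price, which is the specification question the abstract investigates.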

19.
We look at the problem of forecasting time series which are not normally distributed. An overall approach is suggested which works both on simulated data and on real data sets. The idea is intuitively attractive and has the considerable advantage that it can readily be understood by non-specialists. Our approach is based on ARMA methodology and our models are estimated via a likelihood procedure which takes into account the non-normality of the data. We examine in some detail the circumstances in which taking explicit account of the non-normality improves the forecasting process in a significant way. Results from several simulated and real series are included.

20.
In the present study we examine the predictive power of disagreement amongst forecasters. In our empirical work, we find that in some situations this variable can signal upcoming structural and temporal changes in an economic process and in the predictive power of the survey forecasts. We examine a variety of macroeconomic variables, and we use different measurements for the degree of disagreement, together with measures for location of the survey data and autoregressive components. Forecasts from simple linear models and forecasts from Markov regime‐switching models with constant and with time‐varying transition probabilities are constructed in real time and compared on forecast accuracy. Copyright © 2015 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号