Similar Articles
20 similar articles found (search time: 31 ms)
1.
We propose a wavelet neural network (neuro-wavelet) model for the short-term forecast of stock returns from high-frequency financial data. The proposed hybrid model combines the capability of wavelets and neural networks to capture non-stationary nonlinear attributes embedded in financial time series. A comparison study was performed on the predictive power of two econometric models and four recurrent neural network topologies. Several statistical measures were applied to the predictions and standard errors to evaluate the performance of all models. A Jordan net that used as input the coefficients resulting from a non-decimated wavelet-based multi-resolution decomposition of an exogenous signal showed consistently superior forecasting performance. Reasonable forecasting accuracy for the one-, three- and five-step-ahead horizons was achieved by the proposed model. The procedure used to build the neuro-wavelet model is reusable and can be applied to any high-frequency financial series to specify the model characteristics associated with that particular series. Copyright © 2013 John Wiley & Sons, Ltd.
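The non-decimated multiresolution decomposition used as the network input can be illustrated with a minimal numpy sketch of the à trous Haar transform; the wavelet family, level count, and test signal here are illustrative assumptions, not details from the paper:

```python
import numpy as np

def atrous_haar(x, levels=3):
    """Non-decimated (a trous) Haar multiresolution decomposition.

    Returns one detail series per level plus the final smooth; the
    decomposition is additive, so they sum back to the original signal.
    """
    x = np.asarray(x, dtype=float)
    smooth = x.copy()
    details = []
    for j in range(levels):
        step = 2 ** j
        # Smooth by averaging each point with its neighbour `step`
        # samples back ("holes" widen with the level); clip at the edge.
        idx = np.maximum(np.arange(len(x)) - step, 0)
        new_smooth = 0.5 * (smooth + smooth[idx])
        details.append(smooth - new_smooth)  # detail = what smoothing removed
        smooth = new_smooth
    return details, smooth

x = np.sin(np.linspace(0, 6, 64)) + 0.1 * np.random.default_rng(0).normal(size=64)
details, smooth = atrous_haar(x, levels=3)
recon = smooth + sum(details)  # exact reconstruction
```

The additive property (details plus final smooth reconstruct the signal exactly) is what allows each resolution level to be fed to the network as a separate input.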

2.
In this paper we introduce a new testing procedure for evaluating the rationality of fixed-event forecasts based on a pseudo-maximum likelihood estimator. The procedure is designed to be robust to departures from the normality assumption. A model is introduced to show that such departures are likely when forecasters experience a credibility loss when they make large changes to their forecasts. The test is illustrated using monthly fixed-event forecasts produced by four UK institutions. Use of the robust test leads to the conclusion that certain forecasts are rational, while use of the Gaussian-based test implies that those same forecasts are irrational. The difference in the results is due to the nature of the underlying data. Copyright © 2001 John Wiley & Sons, Ltd.

3.
A non-linear dynamic model is introduced for multiplicative seasonal time series that follows and extends the X-11 paradigm, in which the observed time series is a product of trend, seasonal and irregular factors. A selection of standard seasonal and trend component models used in additive dynamic time series models are adapted for the multiplicative framework, and a non-linear filtering procedure is proposed. The results are illustrated and compared to X-11 and log-additive models using real data. In particular, it is shown that the new procedures do not suffer from the trend bias present in log-additive models. Copyright © 2002 John Wiley & Sons, Ltd.

4.
In this article, we propose a regression model for sparse high-dimensional data from aggregated store-level sales data. The modeling procedure includes two sub-models, a topic model and hierarchical factor regressions, applied in sequence to accommodate high dimensionality and sparseness and to facilitate managerial interpretation. First, the topic model is applied to aggregated data to decompose the daily aggregated sales volume of a product into sub-sales for several topics by allocating each unit sale ("word" in text analysis) in a day ("document") into a topic based on joint-purchase information. This stage reduces the dimensionality of data inside topics because the topic distribution is nonuniform and product sales are mostly allocated into a small number of topics. Next, the market response regression model for each topic is estimated from information about items in the same topic. The hierarchical factor regression model we introduce, based on canonical correlation analysis for the original high-dimensional sample spaces, further reduces the dimensionality within topics. Feature selection is then performed on the basis of the credible interval of the parameters' posterior density. Empirical results show that (i) our model allows managerial implications from topic-wise market responses according to the particular context, and (ii) it performs better than conventional category regressions in both in-sample and out-of-sample forecasts.

5.
This article proposes intraday high-frequency risk (HFR) measures for market risk in the case of irregularly spaced high-frequency data. In this context, we distinguish three concepts of value-at-risk (VaR): the total VaR, the marginal (or per-time-unit) VaR and the instantaneous VaR. Since market risk is obviously related to the duration between two consecutive trades, these measures are complemented with a duration risk measure, the time-at-risk (TaR). We propose a forecasting procedure for VaR and TaR for each trade or other market microstructure event. Subsequently, we perform a backtesting procedure specifically designed to assess the validity of the VaR and TaR forecasts on irregularly spaced data. The performance of the HFR measures is illustrated in an empirical application for two stocks (Bank of America and Microsoft) and an exchange-traded fund based on the Standard & Poor's 500 index. We show that the intraday HFR forecasts accurately capture the volatility and duration dynamics for these three assets. Copyright © 2015 John Wiley & Sons, Ltd.
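As a rough illustration of the three risk concepts, the sketch below computes naive empirical counterparts of the total VaR, the marginal VaR and the TaR from simulated irregularly spaced trades; the paper's actual HFR measures are model-based forecasts, so these plain sample quantiles are only a simplification:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical irregularly spaced trades: tick returns and the
# durations (in seconds) separating consecutive trades.
returns = rng.normal(0.0, 0.0005, size=5000)
durations = rng.exponential(2.0, size=5000)

alpha = 0.05
# Total VaR: the loss exceeded by only a fraction alpha of per-trade returns.
total_var = -np.quantile(returns, alpha)
# Marginal (per-time-unit) VaR: scale each return by its duration first.
marginal_var = -np.quantile(returns / durations, alpha)
# Time-at-risk: the waiting time exceeded with probability alpha.
tar = np.quantile(durations, 1 - alpha)
```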

6.
Travel time is a good operational measure of the effectiveness of transportation systems. The ability to accurately predict motorway and arterial travel times is a critical component of many intelligent transportation systems (ITS) applications. Advanced traffic data collection systems using inductive loop detectors and video cameras have been installed, particularly on motorway networks. An inductive loop can provide traffic flow at its location. Video cameras with image-processing software, e.g. Automatic Number Plate Recognition (ANPR) software, are able to provide the travel time of a road section. This research developed a dynamic linear model (DLM) to forecast short-term travel time using both loop and ANPR data. The DLM approach was tested on three motorway sections in southern England. Overall, the model produced good prediction results, although large prediction errors occurred under congested traffic conditions due to the dynamic nature of traffic. This result indicated the advantage of using both data sources. Copyright © 2008 John Wiley & Sons, Ltd.
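The DLM used for travel-time forecasting is not specified in the abstract; a common minimal choice is the local-level model filtered with the Kalman recursions, sketched below with illustrative noise variances and hypothetical travel times:

```python
import numpy as np

def local_level_filter(y, q=1.0, r=4.0):
    """Kalman filter for a local-level DLM; returns one-step-ahead
    forecasts of the level. q and r are the state and observation noise
    variances -- illustrative values, not calibrated to real loop/ANPR data."""
    m, p = y[0], 1.0            # state mean and variance, seeded at y[0]
    forecasts = []
    for obs in y:
        forecasts.append(m)     # forecast issued before seeing obs
        p += q                  # predict: the level may drift
        k = p / (p + r)         # Kalman gain
        m += k * (obs - m)      # update with the new observation
        p *= 1 - k
    return np.array(forecasts)

travel_times = np.array([10.0, 10.5, 9.8, 12.0, 11.5, 11.8, 12.2])  # minutes
f = local_level_filter(travel_times)
```

The gain k governs how quickly the forecast adapts to sudden changes such as the onset of congestion, which is where the abstract reports the largest errors.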

7.
Case-based reasoning (CBR) is considered a vital methodology in the current business forecasting area because of its simplicity, competitive performance with modern methods, and ease of pattern maintenance. Business failure prediction (BFP) is an effective tool that helps business people and entrepreneurs make more precise decisions in the current crisis. Using CBR as a basis for BFP can improve the tool's utility because CBR has a potential advantage over other methods in making predictions as well as suggestions. Recent studies indicate that an ensemble of various techniques has the potential to improve the performance of a predictive model. This research is an early investigation into predicting business failure using a CBR ensemble (CBRE) forecasting method constructed from random similarity functions (RSF), dubbed RSF-based CBRE. Four issues are discussed: (i) the reasons for using RSF as the basis of the CBRE forecasting method for BFP; (ii) the means of constructing the RSF-based CBRE forecasting method for BFP; (iii) an empirical test of the sensitivity of the RSF-based CBRE to the number of member CBR predictors; and (iv) performance assessment of the ensemble forecasting method. Results of the RSF-based CBRE forecasting method were statistically validated by comparing them with those of multivariate discriminant analysis, logistic regression, single CBR, and a linear support vector machine. The results from Chinese hotel BFP indicate that the RSF-based CBRE forecasting method could significantly improve CBR's upper limit of predictive capability. Copyright © 2011 John Wiley & Sons, Ltd.
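A loose sketch of the random-similarity-function idea: each ensemble member retrieves the nearest past cases under a randomly weighted distance, and members vote. All data, parameters and the weighting scheme below are hypothetical; the paper's actual construction may differ:

```python
import numpy as np

def rsf_ensemble_predict(X_train, y_train, x_new, n_members=10, k=5, seed=0):
    """CBR ensemble with random similarity functions (loose sketch).

    Each member retrieves the k nearest past cases under a randomly
    weighted Euclidean distance; members vote on the predicted class.
    """
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_members):
        w = rng.random(X_train.shape[1])  # random feature weights
        d = np.sqrt((((X_train - x_new) ** 2) * w).sum(axis=1))
        neighbours = np.argsort(d)[:k]
        votes.append(1 if y_train[neighbours].mean() >= 0.5 else 0)
    return int(np.median(votes))

# Hypothetical data: two well-separated groups of "firms".
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(5, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
pred = rsf_ensemble_predict(X, y, x_new=np.full(4, 5.0))
```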

8.
A physically based model for ground-level ozone forecasting is evaluated for Santiago, Chile. The model predicts the daily peak ozone concentration, with the daily rise of air temperature as input variable; weekends and rainy days appear as interventions. This model was used to analyse historical data, using the Linear Transfer Function/Finite Impulse Response (LTF/FIR) formalism; the Simultaneous Transfer Function (STF) method was used to analyse several monitoring stations together. Model evaluation showed good forecasting performance across stations, for both low and high ozone impacts, with power of detection (POD) values between 70% and 100%, Heidke's Skill Scores between 40% and 70%, and low false alarm rates (FAR). The model consistently outperforms a pure persistence forecast. Model performance was not sensitive to different implementation options. The model performance degrades for two- and three-day-ahead forecasts, but is still acceptable for the purpose of developing an environmental warning system for Santiago. Copyright © 2002 John Wiley & Sons, Ltd.
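The verification scores quoted above (POD, FAR, Heidke's Skill Score) are standard functions of a 2x2 contingency table; a sketch with made-up counts:

```python
def verification_scores(hits, misses, false_alarms, correct_negatives):
    """POD, FAR and Heidke skill score from a 2x2 contingency table."""
    n = hits + misses + false_alarms + correct_negatives
    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    # Heidke skill score: correct forecasts relative to random chance.
    expected = ((hits + misses) * (hits + false_alarms)
                + (correct_negatives + misses)
                * (correct_negatives + false_alarms)) / n
    hss = (hits + correct_negatives - expected) / (n - expected)
    return pod, far, hss

# Made-up counts for a daily "high ozone" warning.
pod, far, hss = verification_scores(hits=80, misses=20,
                                    false_alarms=10, correct_negatives=90)
```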

9.
Value-at-risk (VaR) is a standard measure of market risk in financial markets. This paper proposes a novel, adaptive and efficient method to forecast both volatility and VaR. Extending existing exponential smoothing and GARCH formulations, the method is motivated by an asymmetric Laplace distribution, which takes into account skewness and heavy tails in return distributions, and their potentially time-varying nature. The proposed volatility equation also involves novel time-varying dynamics. Back-testing results illustrate that the proposed method offers a viable and more accurate, though conservative, improvement in forecasting VaR compared to a range of popular alternatives. Copyright © 2013 John Wiley & Sons, Ltd.
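The paper's asymmetric-Laplace model is considerably more elaborate, but the core idea of adaptively tracking a return quantile can be sketched with a generic stochastic-gradient quantile recursion; the learning rate, warm-start window and simulated returns are all illustrative assumptions:

```python
import numpy as np

def adaptive_var(returns, alpha=0.05, lr=0.001):
    """Track the alpha-quantile of returns online; report VaR as -quantile.

    Stochastic-gradient recursion: the estimate moves down after an
    exceedance and up otherwise, balancing at the alpha-quantile.
    """
    q = np.quantile(returns[:250], alpha)  # warm start on an initial window
    path = []
    for r in returns:
        q += lr * (alpha - (r < q))
        path.append(-q)
    return np.array(path)

rng = np.random.default_rng(2)
rets = rng.standard_t(df=4, size=4000) * 0.01  # heavy-tailed returns
var_path = adaptive_var(rets)
exceed_rate = np.mean(rets < -var_path)  # should sit near alpha
```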

10.
This paper proposes a new mixed-frequency approach to predict stock return volatilities out-of-sample. Based on the strategy of momentum of predictability (MoP), our mixed-frequency approach has a model switching mechanism that switches between generalized autoregressive conditional heteroskedasticity (GARCH)-class models, which only use low-frequency data, and heterogeneous autoregressive models of realized volatility (HAR-RV) type, which only use high-frequency data. The MoP model simply selects the forecast with the relatively better past performance between the GARCH-class and HAR-RV-type forecasts. The model confidence set (MCS) test shows that our MoP strategy significantly outperforms the competing models, a result that is robust to various settings. The MoP test shows that a relatively good recent past forecasting performance of the GARCH-class or HAR-RV-type model is significantly associated with a relatively good current performance, supporting the success of the MoP model.
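A minimal sketch of the MoP switching rule: at each date, use whichever model had the smaller recent squared forecast error. The window length and the synthetic forecasts are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def mop_forecast(f_low, f_high, realized, window=20):
    """Momentum-of-predictability switching (sketch).

    At each date pick the model (low-frequency GARCH-style vs.
    high-frequency HAR-RV-style) with the smaller recent squared error;
    default to the low-frequency model until a track record exists.
    """
    e_low = (f_low - realized) ** 2
    e_high = (f_high - realized) ** 2
    chosen = np.empty_like(realized, dtype=float)
    for t in range(len(realized)):
        lo = e_low[max(0, t - window):t]
        hi = e_high[max(0, t - window):t]
        pick_high = lo.size > 0 and hi.sum() < lo.sum()
        chosen[t] = f_high[t] if pick_high else f_low[t]
    return chosen

# Synthetic check: the high-frequency model is persistently better.
realized = np.ones(100)
f_low = np.full(100, 1.5)
f_high = np.full(100, 1.01)
chosen = mop_forecast(f_low, f_high, realized)
```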

11.
More and more ensemble models are used to forecast business failure. It is generally known that the performance of an ensemble relies heavily on the diversity among its base classifiers. To achieve diversity, this study uses kernel-based fuzzy c-means (KFCM) to organize firm samples and designs a hierarchical selective ensemble model for business failure prediction (BFP). First, three KFCM methods, Gaussian KFCM (GFCM), polynomial KFCM (PFCM), and hyper-tangent KFCM (HFCM), are employed to partition the financial data set into three data sets. A neural network (NN) is then adopted as the base classifier for BFP, and the three data sets derived from the three KFCM methods are used to build three classifier pools. Next, classifiers are fused by the two-layer hierarchical selective ensemble method. In the first layer, classifiers are ranked based on their prediction accuracy, and the stepwise forward selection method is employed to selectively integrate classifiers according to their accuracy. In the second layer, the three selective ensembles from the first layer are integrated again to produce the final verdict. This study employs financial data from Chinese listed companies to conduct empirical research, and makes a comparative analysis with other ensemble models and all component models. The conclusion is that the two-layer hierarchical selective ensemble is good at forecasting business failure.
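The first-layer stepwise forward selection can be sketched as a greedy accuracy-based loop over ranked classifiers; the toy probabilities and acceptance rule below are hypothetical simplifications of the paper's procedure:

```python
import numpy as np

def forward_selective_ensemble(probs, y, ranked):
    """Greedy forward selection of classifiers by ensemble accuracy.

    probs: (n_classifiers, n_samples) class-1 probabilities;
    ranked: classifier indices sorted best-first by individual accuracy.
    A classifier is kept only if adding it does not hurt accuracy.
    """
    def acc(members):
        vote = probs[members].mean(axis=0) >= 0.5
        return np.mean(vote == y)

    selected = [ranked[0]]
    for idx in ranked[1:]:
        if acc(selected + [idx]) >= acc(selected):
            selected.append(idx)
    return selected

# Toy pool: one accurate, one mediocre, one consistently wrong classifier.
probs = np.array([[0.9, 0.8, 0.1, 0.2],
                  [0.6, 0.4, 0.4, 0.6],
                  [0.1, 0.1, 0.9, 0.9]])
y = np.array([1, 1, 0, 0])
selected = forward_selective_ensemble(probs, y, ranked=[0, 1, 2])
```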

12.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson-multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. For the over-dispersion problem, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of the density function of compound distributions, we propose a data augmentation approach for more efficient posterior computations in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series in a store to compare the proposed models, which accommodate over-dispersion, with alternative models without over-dispersion, using several model selection criteria: in-sample fit, out-of-sample forecasting errors, and information criteria. The empirical results show that the proposed over-dispersed models based on compound Poisson variables work well and provide improved results compared with models that do not account for over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
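The role of the gamma compound Poisson construction in capturing over-dispersion can be seen in a small simulation: mixing the Poisson rate over a gamma distribution pushes the variance above the mean. The parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, mean = 100_000, 5.0

# Plain Poisson counts: variance equals the mean.
plain = rng.poisson(mean, size=n)

# Gamma compound Poisson (negative binomial): drawing the rate from a
# gamma distribution inflates the variance above the mean, matching the
# over-dispersion typical of sales counts.
shape = 2.0  # smaller shape => stronger over-dispersion
rates = rng.gamma(shape, mean / shape, size=n)
compound = rng.poisson(rates)
```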

13.
This paper concentrates on comparing the estimation and forecasting ability of quasi-maximum likelihood (QML) and support vector machines (SVM) for financial data. The financial series are fitted to a family of asymmetric power ARCH (APARCH) models. As skewness and kurtosis are common characteristics of financial series, a skew-t distributed innovation is assumed to model the fat tails and asymmetry. Prior research indicates that the QML estimator for the APARCH model is inefficient when the data distribution departs from normality, so the current paper utilizes the semi-parametric SVM method and shows that it is more efficient than QML under skewed Student's t distributed errors. As the SVM is a kernel-based technique, we further investigate its performance by applying separately a Gaussian kernel and a wavelet kernel. The results suggest that the SVM-based method generally performs better than QML for both in-sample and out-of-sample data. The outcomes also highlight the fact that the wavelet kernel outperforms the Gaussian kernel, with lower forecasting error, better generalization capability and greater computational efficiency. Copyright © 2014 John Wiley & Sons, Ltd.

14.
Exploring the Granger-causation relationship is an important and interesting topic in the field of econometrics. In the traditional model we usually apply the short-memory style to exhibit the relationship, but in practice there could be other influence patterns. Besides the short-memory relationship, Chen (2006) demonstrates a long-memory relationship, providing a useful approach for estimation where the time series are not necessarily fractionally co-integrated. In that paper two different relationships (short-memory and long-memory) are considered, whereby the influence flow decays geometrically, is cut off, or decays harmonically. However, that work limits the model to stationary relationships. This paper extends the influence flow to a non-stationary relationship, where the restriction is −0.5 ≤ d ≤ 1.0, and the model can be used to detect whether the influence decays away (−0.5 ≤ d < 0.5) or is permanent (0.5 ≤ d ≤ 1.0). Copyright © 2008 John Wiley & Sons, Ltd.
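The long-memory behaviour governed by d can be illustrated with the standard fractional-differencing weights of (1 − L)^d, computed by the usual recursion; this is generic long-memory machinery, not code from the paper:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n coefficients of the fractional difference (1 - L)**d,
    via the recursion w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w_zero = frac_diff_weights(0.0, 5)  # d = 0: series left untouched
w_one = frac_diff_weights(1.0, 5)   # d = 1: ordinary first difference
w_long = frac_diff_weights(0.4, 5)  # 0 < d < 0.5: slowly decaying memory
```

Between the two integer cases the weights decay hyperbolically rather than cutting off, which is the "long-memory" influence flow the abstract refers to.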

15.
In this paper we propose Granger (non-)causality tests based on a VAR model allowing for time-varying coefficients. The functional form of the time-varying coefficients is a logistic smooth transition autoregressive (LSTAR) model using time as the transition variable. The model allows for testing Granger non-causality when the VAR is subject to a smooth break in the coefficients of the Granger-causal variables. The proposed test is then applied to the money-output relationship using quarterly US data for the period 1952:2–2002:4. We find that causality from money to output becomes stronger after 1978:4, and the model is shown to have good out-of-sample forecasting performance for output relative to a linear VAR model. Copyright © 2008 John Wiley & Sons, Ltd.
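The logistic transition function at the heart of the LSTAR specification, with time as the transition variable, is easy to sketch; the slope gamma and break date c below are illustrative:

```python
import numpy as np

def lstar_transition(t, gamma, c):
    """Logistic transition G(t) = 1 / (1 + exp(-gamma * (t - c))).

    G moves smoothly from 0 to 1 around break date c; larger gamma makes
    the change more abrupt, approaching a discrete structural break.
    """
    return 1.0 / (1.0 + np.exp(-gamma * (np.asarray(t, dtype=float) - c)))

# A coefficient subject to a smooth break: beta_t = beta0 + beta1 * G(t).
t = np.arange(200)
G = lstar_transition(t, gamma=0.1, c=100)
```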

16.
A modeling approach to real-time forecasting that allows for data revisions is presented. In this approach, an observed time series is decomposed into stochastic trend, data revision, and observation noise in real time. The stochastic trend is defined such that its first difference is specified as an AR model, and the data revision, obtained only for the latest part of the time series, is also specified as an AR model. The proposed method is applicable to data sets with one vintage. Empirical applications to real-time forecasting of the quarterly time series of US real GDP and its eight components illustrate the usefulness of the proposed approach. Copyright © 2007 John Wiley & Sons, Ltd.

17.
We compare linear autoregressive (AR) models and self-exciting threshold autoregressive (SETAR) models in terms of their point forecast performance, and their ability to characterize the uncertainty surrounding those forecasts, i.e. interval or density forecasts. A two-regime SETAR process is used as the data-generating process in an extensive set of Monte Carlo simulations, and we consider the discriminatory power of recently developed methods of forecast evaluation for different degrees of non-linearity. We find that the interval and density evaluation methods are unlikely to show the linear model to be deficient on samples of the size typical for macroeconomic data. Copyright © 2003 John Wiley & Sons, Ltd.
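A two-regime SETAR(1) data-generating process of the kind used in such Monte Carlo studies can be simulated in a few lines; the regime coefficients, threshold and noise scale are illustrative, not those of the paper:

```python
import numpy as np

def simulate_setar(n, phi_low=0.9, phi_high=0.2, threshold=0.0,
                   sigma=1.0, seed=0):
    """Simulate a two-regime SETAR(1) process:
    y_t = phi_low  * y_{t-1} + e_t  if y_{t-1} <= threshold,
    y_t = phi_high * y_{t-1} + e_t  otherwise.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + sigma * rng.normal()
    return y

y = simulate_setar(500)
```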

18.
We present a mixed-frequency model for daily forecasts of euro area inflation. The model combines a monthly index of core inflation with daily data from financial markets; estimates are carried out with the MIDAS regression approach. The forecasting ability of the model in real time is compared with that of standard VARs and of daily quotes of economic derivatives on euro area inflation. We find that the inclusion of daily variables helps to reduce forecast errors with respect to models that consider only monthly variables. The mixed-frequency model also displays superior predictive performance with respect to forecasts solely based on economic derivatives. Copyright © 2012 John Wiley & Sons, Ltd.
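MIDAS regressions aggregate a high-frequency regressor with a parsimonious lag polynomial; the exponential Almon weighting is a common choice and can be sketched as follows (the parameter values and daily indicator are illustrative assumptions):

```python
import numpy as np

def exp_almon_weights(k, theta1=0.01, theta2=-0.05):
    """Exponential Almon lag weights for MIDAS aggregation:
    w_j proportional to exp(theta1*j + theta2*j**2), summing to one."""
    j = np.arange(1, k + 1, dtype=float)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

# Aggregate ~22 daily financial observations into one monthly regressor.
w = exp_almon_weights(22)
daily = np.linspace(1.0, 2.0, 22)  # hypothetical daily indicator
monthly_regressor = float(w @ daily)
```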

19.
This paper examines small sample properties of alternative bias-corrected bootstrap prediction regions for the vector autoregressive (VAR) model. Bias-corrected bootstrap prediction regions are constructed by combining bias-correction of VAR parameter estimators with the bootstrap procedure. The backward VAR model is used to bootstrap VAR forecasts conditionally on past observations. Bootstrap prediction regions based on asymptotic bias-correction are compared with those based on bootstrap bias-correction. Monte Carlo simulation results indicate that bootstrap prediction regions based on asymptotic bias-correction show better small sample properties than those based on bootstrap bias-correction for nearly all cases considered. The former provide accurate coverage in most cases, while the latter overestimate future uncertainty. Overall, the percentile-t bootstrap prediction region based on asymptotic bias-correction is found to provide highly desirable small sample properties, outperforming its alternatives in nearly all cases. Copyright © 2004 John Wiley & Sons, Ltd.
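Omitting the bias-correction step that is the paper's focus, the basic residual-bootstrap percentile prediction interval can be sketched for a univariate AR(1); all settings below are illustrative:

```python
import numpy as np

def ar1_bootstrap_interval(y, h=1, reps=999, level=0.9, seed=0):
    """Percentile bootstrap prediction interval for an AR(1) forecast.

    Fit AR(1) by least squares, resample residuals and propagate them
    h steps ahead from the last observation. The bias-correction step
    studied in the paper is deliberately omitted here.
    """
    rng = np.random.default_rng(seed)
    x, z = y[:-1], y[1:]
    phi = np.dot(x, z) / np.dot(x, x)  # LS slope (no intercept)
    resid = z - phi * x
    sims = np.empty(reps)
    for i in range(reps):
        f = y[-1]
        for _ in range(h):
            f = phi * f + rng.choice(resid)
        sims[i] = f
    tail = (1 - level) / 2
    lo, hi = np.quantile(sims, [tail, 1 - tail])
    return lo, hi

rng = np.random.default_rng(3)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.5 * y[t - 1] + rng.normal()
lo, hi = ar1_bootstrap_interval(y, h=1)
```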

20.
The estimation of hurricane intensity evolution in some tropical and subtropical areas is a challenging problem. Indeed, the prevention and quantification of possible damage caused by destructive hurricanes are directly linked to this kind of forecast. For this purpose, hurricane derivatives have recently been issued by the Chicago Mercantile Exchange, based on the so-called Carvill hurricane index. In our paper, we adopt a parametric homogeneous semi-Markov approach. This model assumes that the lifespan of a hurricane can be described as a semi-Markov process, and it also allows the more realistic assumption of time-event dependence to be taken into consideration. The elapsed time between two consecutive events (waiting time distributions) is modeled through a best-fitting procedure on empirical data. We then determine the transition probabilities and the so-called crossing-state probabilities. We conclude with a Monte Carlo simulation, and the model is validated against a large database of real data from HURDAT. Copyright © 2012 John Wiley & Sons, Ltd.
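The embedded transition probabilities of a semi-Markov chain can be estimated from an observed event sequence by simple counting; the intensity-category sequence below is hypothetical, not HURDAT data:

```python
import numpy as np

def estimate_transitions(states, n_states):
    """Empirical transition matrix of the embedded (event-to-event) chain."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical hurricane-intensity categories at successive events.
seq = [0, 1, 2, 1, 0, 1, 2, 2, 1, 0]
P = estimate_transitions(seq, 3)
```

In the full semi-Markov model these transition probabilities would be paired with fitted waiting-time distributions between events, which the counting step above does not cover.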
