Similar Documents
20 similar documents found (search time: 31 ms)
1.
Foreign exchange market prediction is attractive and challenging. According to the efficient market and random walk hypotheses, market prices should follow a random walk pattern and thus should not be predictable with more than about 50% accuracy. In this article, we investigate the predictability of foreign exchange spot rates of the US dollar against the British pound to show that not all periods are equally random. We used the Hurst exponent to select a period of high predictability. Parameters for generating training patterns were determined heuristically by auto-mutual information and false nearest-neighbor methods. Several inductive machine-learning classifiers—artificial neural network, decision tree, k-nearest neighbor, and naïve Bayesian classifier—were then trained on these generated patterns. Through an appropriate combination of these models, we achieved a prediction accuracy of up to 67%. Copyright © 2009 John Wiley & Sons, Ltd.
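The Hurst-exponent screening step can be sketched in a few lines. Below is a minimal rescaled-range (R/S) estimator in Python; the window sizes, chunking scheme and the reading of H ≈ 0.5 as "random walk" are common conventions, not the authors' exact procedure.

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series by rescaled-range (R/S)
    analysis. H ~ 0.5 suggests a random walk; H > 0.5 suggests persistence
    (i.e., a more predictable period)."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes = np.unique(np.logspace(np.log10(min_chunk), np.log10(n // 2), 10).astype(int))
    rs_values = []
    for size in sizes:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()   # range of cumulative deviations
            s = chunk.std()             # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        rs_values.append(np.mean(rs))
    # Slope of log(R/S) against log(window size) estimates H
    h, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return h

# Example: returns of a pure random walk should give H near 0.5
rng = np.random.default_rng(0)
print(f"Hurst exponent: {hurst_rs(rng.normal(size=2000)):.3f}")
```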

2.
It has been 8 years since the concept of naïve and primed pluripotent stem cell states was first proposed. Both are states of pluripotency, but exhibit slightly different properties. The naïve state represents the cellular state of the preimplantation mouse blastocyst inner cell mass, while the primed state is representative of the post-implantation epiblast cells. These two cell types exhibit clearly distinct developmental potential, as evidenced by the fact that naïve cells are able to contribute to blastocyst chimeras, while primed cells cannot. However, the epigenetic differences that underlie the distinct developmental potential of these cell types remain unclear, which is rather surprising given the large amount of active investigation over the years. Elucidating such epigenetic differences should lead to a better understanding of the fundamental properties of these states of pluripotency and the means by which the naïve-to-primed transition occurs, which may provide insights into the essence of stem cell commitment.

3.
This study establishes a benchmark for short-term salmon price forecasting. The weekly spot price of Norwegian farmed Atlantic salmon is predicted 1–5 weeks ahead using data from 2007 to 2014. Sixteen alternative forecasting methods are considered, ranging from classical time series models to customized machine learning techniques to salmon futures prices. The best predictions are delivered by the k-nearest neighbors method for 1 week ahead; a vector error correction model estimated using elastic net regularization for 2 and 3 weeks ahead; and futures prices for 4 and 5 weeks ahead. While the nominal gains in forecast accuracy over a naïve benchmark are small, the economic value of the forecasts is considerable. Using a simple trading strategy for timing the sales based on price forecasts could increase the net profit of a salmon farmer by around 7%.
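As an illustration of the pattern-matching idea behind the winning 1-week-ahead method, here is a minimal k-nearest-neighbors forecaster in Python. The lag length, k, and Euclidean distance are illustrative choices; the paper's customized implementation is not specified in the abstract.

```python
import numpy as np

def knn_forecast(prices, k=5, lags=4):
    """Forecast the next value by averaging what followed the k historical
    lag-patterns most similar (in Euclidean distance) to the current one."""
    prices = np.asarray(prices, dtype=float)
    # Library of lag-patterns and the value that followed each of them
    patterns = np.array([prices[i:i + lags] for i in range(len(prices) - lags)])
    targets = prices[lags:]
    current = prices[-lags:]
    dists = np.linalg.norm(patterns - current, axis=1)
    nearest = np.argsort(dists)[:k]
    return targets[nearest].mean()

# Toy usage on a simulated price path
rng = np.random.default_rng(1)
prices = 100.0 + np.cumsum(rng.normal(size=400))
print(f"One-step-ahead forecast: {knn_forecast(prices):.2f}")
```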

4.
The paper summarizes results of a mail survey of the use of formal forecasting techniques in British manufacturing companies. It appraises the state of awareness of particular techniques and the extent to which they are used in various functional applications. The extent to which the forecasts generated by the techniques influence company action is assessed, and the reasons for the non-use of particular techniques are examined. The paper concludes that although an increasing number of companies appreciate the importance of forecasting, the methods used are predominantly naïve, and few companies are taking steps to improve the situation through using alternative techniques or through computerizing established techniques.

5.
For predicting forward default probabilities of firms, the discrete-time forward hazard model (DFHM) is proposed. We derive maximum likelihood estimates for the parameters in DFHM. To improve its predictive power in practice, we also consider an extension of DFHM by replacing its constant coefficients of firm-specific predictors with smooth functions of macroeconomic variables. The resulting model is called the discrete-time varying-coefficient forward hazard model (DVFHM). Through local maximum likelihood analysis, DVFHM is shown to be a reliable and flexible model for forward default prediction. We use real panel datasets to illustrate these two models. Using an expanding rolling window approach, our empirical results confirm that DVFHM has better and more robust out-of-sample performance on forward default prediction than DFHM, in the sense of yielding more accurate predicted numbers of defaults and predicted survival times. Thus DVFHM is a useful alternative for studying forward default losses in portfolios. Copyright © 2013 John Wiley & Sons, Ltd.
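The general shape of such a model can be written down even though the abstract does not give the exact link function. A common discrete-time hazard sketch with a logistic link is:

```latex
% Sketch of a discrete-time forward hazard with a logistic link
% (illustrative notation; the paper's exact specification may differ):
\lambda_\tau(t \mid x_i) \;=\;
  \Pr\bigl(\text{default at } t+\tau \,\bigm|\, \text{survival to } t+\tau,\; x_i(t)\bigr)
  \;=\; \frac{1}{1 + \exp\!\bigl[-\alpha_\tau - \beta_\tau^{\top} x_i(t)\bigr]}
```

The DVFHM extension then replaces the constant coefficients \(\beta_\tau\) with smooth functions \(\beta_\tau(z_t)\) of the macroeconomic variables \(z_t\), estimated by local maximum likelihood.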

6.
This paper discusses the use of preliminary data in econometric forecasting. The standard practice is to ignore the distinction between preliminary and final data; forecasts that do so are termed naïve forecasts here. It is shown that in dynamic models a multistep-ahead naïve forecast can achieve a lower mean square error than a single-step-ahead one, as it is less affected by the measurement noise embedded in the preliminary observations. The minimum mean square error forecasts are obtained by optimally combining the information provided by the model with the new information contained in the preliminary data, which can be done within the state space framework, as suggested in numerous papers. Here two simple, in general suboptimal, methods of combining the two sources of information are considered: modifying the forecast initial conditions by means of standard regressions, and using intercept corrections. The issues are explored using Italian national accounts data and the Bank of Italy Quarterly Econometric Model. Copyright © 2006 John Wiley & Sons, Ltd.

7.
The Ohlson model is evaluated using quarterly data from stocks in the Dow Jones Index. A hierarchical Bayesian approach is developed to simultaneously estimate the unknown coefficients in the time series regression model for each company by pooling information across firms. Both estimation and prediction are carried out by the Markov chain Monte Carlo (MCMC) method. Our empirical results show that our forecast based on the hierarchical Bayes method is generally adequate for future prediction, and improves upon the classical method. Copyright © 2005 John Wiley & Sons, Ltd.

8.
This research proposes a prediction model of multistage financial distress (MSFD) after considering contextual and methodological issues regarding sampling, feature and model selection criteria. Financial distress is defined as a three-stage process showing different nature and intensity of financial problems. It is argued that the applied definition of distress is independent of the legal framework, so its predictability provides more practical solutions. The final sample is selected after industry adjustments and oversampling the data. A wrapper subset data mining approach is applied to extract the most relevant features from financial statement and stock market indicators. An ensemble approach combining DTNB (a decision table/naïve Bayes hybrid), LMT (logistic model tree) and A2DE (averaged 2-dependence estimators) models is used to develop the final prediction model. The performance of all the models is evaluated using a 10-fold cross-validation method. Results showed that the proposed model predicted MSFD with 84.06% accuracy. This accuracy increased to 89.57% when a 33.33% cut-off value was considered. Hence the proposed model is accurate and reliable in identifying the true nature and intensity of financial problems regardless of the contextual legal framework.
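The three WEKA-style learners named above have no scikit-learn equivalents, so the Python sketch below only illustrates the general pattern of a soft-voting ensemble evaluated by 10-fold cross-validation, with a decision tree, logistic regression and Gaussian naïve Bayes as rough stand-ins and synthetic placeholder data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Placeholder data standing in for the financial-statement/market features
X, y = make_classification(n_samples=500, n_features=12, random_state=0)

# Rough stand-ins for DTNB / LMT / A2DE, combined by soft voting
ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("logit", LogisticRegression(max_iter=1000)),
        ("nb", GaussianNB()),
    ],
    voting="soft",  # average predicted probabilities, then threshold
)

scores = cross_val_score(ensemble, X, y, cv=10)  # 10-fold cross-validation
print(f"Mean CV accuracy: {scores.mean():.3f}")
```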

9.
The motivation for this paper was the introduction of novel short-term models to trade the FTSE 100 and DAX 30 exchange-traded fund (ETF) indices. The paper makes several contributions: the introduction of an input selection criterion for an expansive universe of inputs, a hybrid combination of a particle swarm optimizer (PSO) with radial basis function (RBF) neural networks, the application of a PSO algorithm to a traditional autoregressive moving average (ARMA) model, the application of a PSO algorithm to a higher-order neural network and, finally, the introduction of a multi-objective algorithm to optimize statistical and trading performance when trading an index. All the machine learning-based methodologies and the conventional models are adapted and optimized to model the index. A PSO algorithm is used to optimize the weights in a traditional RBF neural network, in a higher-order neural network (HONN), and the AR and MA terms of an ARMA model. To check the statistical and empirical accuracy of the novel models, we benchmark them against a traditional HONN, an ARMA model, a moving average convergence/divergence (MACD) model and a naïve strategy. More specifically, the trading and statistical performance of all models is investigated in a forecast simulation of the FTSE 100 and DAX 30 ETF time series over the period January 2004 to December 2015, using the last 3 years for out-of-sample testing. Finally, the empirical and statistical results indicate that the PSO-RBF model outperforms all other examined models in terms of trading accuracy and profitability, even with mixed inputs and with only autoregressive inputs. Copyright © 2016 John Wiley & Sons, Ltd.
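A minimal sketch of the core hybrid idea: using PSO to optimize the output weights of a Gaussian RBF network on toy data. The inertia and acceleration constants are common textbook defaults, and the fixed centers and width are simplifications; the paper's hybrid may optimize more of the network than this.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(x, centers, width):
    """Gaussian RBF design matrix for 1-D inputs."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def pso_fit_rbf(x, y, n_centers=8, n_particles=30, iters=200, width=0.5):
    """Fit RBF output weights with a minimal particle swarm; each particle
    is a candidate weight vector scored by in-sample MSE."""
    centers = np.linspace(x.min(), x.max(), n_centers)
    phi = rbf_features(x, centers, width)
    pos = rng.normal(size=(n_particles, n_centers))
    vel = np.zeros_like(pos)
    pbest, pbest_err = pos.copy(), np.full(n_particles, np.inf)
    gbest, gbest_err = None, np.inf
    for _ in range(iters):
        err = ((phi @ pos.T - y[:, None]) ** 2).mean(axis=0)  # MSE per particle
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        if err.min() < gbest_err:
            gbest, gbest_err = pos[err.argmin()].copy(), err.min()
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Inertia 0.7, cognitive/social constants 1.5: common defaults
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
    return centers, width, gbest

# Toy usage: learn a noisy sine wave
x = np.linspace(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
centers, width, w = pso_fit_rbf(x, y)
print("In-sample MSE:", ((rbf_features(x, centers, width) @ w - y) ** 2).mean())
```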

10.
The purpose of this paper is twofold. Firstly, to assess the merit of estimating probability density functions rather than level or classification estimations on a one-day-ahead forecasting task of the EUR/USD time series. This is implemented using a Gaussian mixture model neural network, benchmarking the results against standard forecasting models, namely a naïve model, a moving average convergence/divergence technical model (MACD), an autoregressive moving average model (ARMA), a logistic regression model (LOGIT) and a multi-layer perceptron network (MLP). Secondly, to examine the possibilities of improving the trading performance of those models with confirmation filters and leverage. While the benchmark models perform best without confirmation filters and leverage, the Gaussian mixture model outperforms all of the benchmarks when taking advantage of the possibilities offered by a combination of more sophisticated trading strategies and leverage. This might be due to the ability of the Gaussian mixture model to successfully identify trades with a high Sharpe ratio. Copyright © 2004 John Wiley & Sons, Ltd.
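To make the density-versus-point distinction concrete, the Python sketch below fits an unconditional Gaussian mixture to toy returns and reads a directional probability off the fitted density. Note the paper uses a conditional Gaussian mixture neural network; this stripped-down version only illustrates why a full predictive density carries more information than a level forecast.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Fat-tailed toy returns standing in for EUR/USD one-day changes
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=2000) * 0.006

# Fit a 3-component mixture to the return distribution
gmm = GaussianMixture(n_components=3, random_state=0).fit(returns.reshape(-1, 1))

# Approximate P(next return > 0) by integrating the fitted density on a grid
grid = np.linspace(returns.min(), returns.max(), 2001).reshape(-1, 1)
density = np.exp(gmm.score_samples(grid))   # score_samples returns log-density
dx = grid[1, 0] - grid[0, 0]
p_up = density[grid.ravel() > 0].sum() * dx
print(f"P(next return > 0) ~ {p_up:.3f}")
```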

11.
In this paper, we apply Bayesian inference to model and forecast intraday trading volume, using autoregressive conditional volume (ACV) models, and we evaluate the quality of volume point forecasts. In the empirical application, we focus on the analysis of both in- and out-of-sample performance of Bayesian ACV models estimated for 2-minute trading volume data for stocks quoted on the Warsaw Stock Exchange in Poland. We calculate two types of point forecasts, using either expected values or medians of predictive distributions. We conclude that, in general, all considered models generate significantly biased forecasts. We also observe that the considered models significantly outperform such benchmarks as the naïve or rolling means forecasts. Moreover, in terms of root mean squared forecast errors, point predictions obtained within the ACV model with exponential innovations are superior to those calculated under more general innovation distributions, although in many cases the difference turns out to be statistically insignificant. On the other hand, when comparing mean absolute forecast errors, the median forecasts obtained within the ACV models with Burr and generalized gamma distributions are found to be statistically better than other forecasts.
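For reference, the backbone of an ACV model can be sketched in ACD-style notation. The abstract does not spell out the exact specification, so this is the standard first-order form; the Bayesian treatment adds priors and the exponential, Burr or generalized gamma choices for the innovation distribution F:

```latex
% ACV(1,1) sketch: x_t is 2-minute trading volume, \mu_t its conditional mean
x_t = \mu_t \, \varepsilon_t, \qquad
\varepsilon_t \overset{iid}{\sim} F, \;\; \mathbb{E}[\varepsilon_t] = 1, \qquad
\mu_t = \omega + \alpha \, x_{t-1} + \beta \, \mu_{t-1}
```

Point forecasts are then the mean or the median of the implied predictive distribution of \(x_{t+1}\), which is why the choice of \(F\) matters for the two loss functions compared above.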

12.
We investigate the predictive performance of various classes of value-at-risk (VaR) models in several dimensions—unfiltered versus filtered VaR models, parametric versus nonparametric distributions, conventional versus extreme value distributions, and quantile regression versus inverting the conditional distribution function. By using the reality check test of White (2000), we compare the predictive power of alternative VaR models in terms of the empirical coverage probability and the predictive quantile loss for the stock markets of five Asian economies that suffered from the 1997–1998 financial crisis. The results based on these two criteria are largely compatible and indicate some empirical regularities of risk forecasts. The RiskMetrics model behaves reasonably well in tranquil periods, while some extreme value theory (EVT)-based models do better in the crisis period. Filtering often appears to be useful for some models, particularly for the EVT models, though it could be harmful for some other models. The CaViaR quantile regression models of Engle and Manganelli (2004) have shown some success in predicting the VaR risk measure for various periods, generally more stable than those that invert a distribution function. Overall, the forecasting performance of the VaR models considered varies over the three periods before, during and after the crisis. Copyright © 2006 John Wiley & Sons, Ltd.
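The first evaluation criterion, empirical coverage probability, is easy to state in code. The Python sketch below computes it for a rolling historical-simulation 1% VaR, one of the simplest unfiltered nonparametric models of the kind such comparisons include; the data, window length and sign conventions are illustrative.

```python
import numpy as np

def empirical_coverage(returns, var_forecasts):
    """Fraction of days the loss exceeded the VaR forecast. For a 1% VaR this
    should be close to 0.01 if the model is well calibrated."""
    returns = np.asarray(returns)
    var_forecasts = np.asarray(var_forecasts)  # VaR reported as a positive loss
    return np.mean(returns < -var_forecasts)

# Toy backtest: rolling 250-day historical-simulation 1% VaR
rng = np.random.default_rng(0)
r = rng.standard_t(df=5, size=1500) * 0.01
window = 250
var = np.array([-np.quantile(r[t - window:t], 0.01)
                for t in range(window, len(r))])
print(f"Empirical coverage: {empirical_coverage(r[window:], var):.4f} (target 0.01)")
```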

13.
In this paper we deal with the prediction theory of long-memory time series. The purpose is to derive a general theory of the convergence of moments of the nonlinear least squares estimator so as to evaluate the asymptotic prediction mean squared error (PMSE). The asymptotic PMSE of two predictors is evaluated. The first is defined by the estimator of the differencing parameter, while the second is defined by a fixed differencing parameter: in other words, a parametric predictor of the seasonal autoregressive integrated moving average model. The effects of misspecifying the differencing parameter in a long-memory model are clarified by the asymptotic results relating to the PMSE. The finite-sample behaviour of the predictors and model selection in terms of the PMSE of the two predictors are examined using simulation, and the source of any differences in behaviour is made clear in terms of asymptotic theory. Copyright © 2008 John Wiley & Sons, Ltd.
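The model class at issue can be fixed in notation. The following is a standard fractionally integrated sketch, where B is the backshift operator and u_t a short-memory process; the paper's seasonal variant adds seasonal differencing and ARMA structure not shown here:

```latex
% Fractionally integrated (long-memory) process; d is the differencing
% parameter whose misspecification drives the PMSE effects discussed above
(1 - B)^{d} X_t = u_t, \qquad
(1 - B)^{d} = \sum_{j=0}^{\infty} \binom{d}{j} (-B)^{j}, \qquad
0 < d < \tfrac{1}{2}
```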

14.
In this paper we assess opinion polls, prediction markets, expert opinion and statistical modelling over a large number of US elections in order to determine which perform better in terms of forecasting outcomes. In line with existing literature, we bias-correct opinion polls. We consider accuracy, bias and precision over different time horizons before an election, and we conclude that prediction markets appear to provide the most precise forecasts and are similar in terms of bias to opinion polls. We find that our statistical model struggles to provide competitive forecasts, while expert opinion appears to be of value. Finally we note that the forecast horizon matters; whereas prediction market forecasts tend to improve the nearer an election is, opinion polls appear to perform worse, while expert opinion performs consistently throughout. We thus contribute to the growing literature comparing election forecasts of polls and prediction markets. Copyright © 2015 John Wiley & Sons, Ltd.

15.
We investigate the optimal structure of dynamic regression models used in multivariate time series prediction and propose a scheme to form the lagged variable structure called Backward-in-Time Selection (BTS), which takes into account feedback and multicollinearity, often present in multivariate time series. We compare BTS to other known methods, also in conjunction with regularization techniques used for the estimation of model parameters, namely principal components, partial least squares and ridge regression estimation. The predictive efficiency of the different models is assessed by means of Monte Carlo simulations for different settings of feedback and multicollinearity. The results show that BTS has consistently good prediction performance, while other popular methods have varying and often inferior performance. The prediction performance of BTS was also found the best when tested on human electroencephalograms of an epileptic seizure, and for the prediction of returns of indices of world financial markets. Copyright © 2013 John Wiley & Sons, Ltd.

16.
Adaptive immunity contributes critically to the control of acute infection with enteropathogenic Yersinia pseudotuberculosis; however, the role of CD4+ T cell subsets in establishing infection and allowing pathogen persistence remains elusive. Here, we assessed the modulatory capacity of Y. pseudotuberculosis on CD4+ T cell differentiation. Using in vivo assays, we report that infection with Y. pseudotuberculosis resulted in enhanced priming of IL-17-producing T cells (Th17 cells), whereas induction of Foxp3+ regulatory T cells (Tregs) was severely disrupted in gut-draining mesenteric lymph nodes (mLNs), in line with altered frequencies of tolerogenic and proinflammatory dendritic cell (DC) subsets within mLNs. Additionally, by using a DC-free in vitro system, we could demonstrate that Y. pseudotuberculosis can directly modulate T cell receptor (TCR) downstream signaling within naïve CD4+ T cells and Tregs via injection of effector molecules through the type III secretion system, thereby affecting their functional properties. Importantly, modulation of naïve CD4+ T cells by Y. pseudotuberculosis resulted in enhanced Th17 differentiation and decreased induction of Foxp3+ Tregs in vitro. These findings shed light on the adjustment of the Th17-Treg axis in response to acute Y. pseudotuberculosis infection and highlight the direct modulation of CD4+ T cell subsets by altering their TCR downstream signaling.

17.
The existence of unitarily inequivalent representations in quantum field theory has been presented as a serious problem for structural realism. In this paper I explore two possible responses. The first involves adopting Wallace's 'naïve Lagrangian' interpretation of QFT and dismissing the generation of inequivalent representations as either a mathematical artefact or as non-pathological. The second takes up Ruetsche's 'Swiss Army Knife' approach and understands the relevant structure as spanning a range of possibilities. Both options present interesting implications for structural realism and I shall also consider related issues to do with underdetermination, the significance of spontaneous symmetry breaking and how we should understand superselection rules in the context of quantum statistics. Finally, I shall suggest a way in which these options might be combined.

18.
During the past two decades of research in T cell biology, an increasing number of distinct T cell subsets arising during the transition from naïve to antigen-experienced T cells have been identified. Recently, it has been appreciated that, in different experimental settings, distinct T cell subsets can be generated in parallel within the same immune response. While signals driving a single "lineage" path of T cell differentiation are becoming increasingly clear, it remains largely enigmatic how the phenotypic and functional diversification creating a multi-faceted T cell response is achieved. Here, we review current literature indicating that diversification is a stable trait of CD8+ T cell responses. We showcase novel technologies providing deeper insights into the process of diversification among the descendants of individual T cells, and introduce two models that emphasize either intrinsic noise or extrinsic signals as driving forces behind the diversification of single cell-derived T cell progeny populations in vivo.

19.
We consider the problem of online prediction when it is uncertain which prediction model is best to use. We develop a method called dynamic latent class model averaging, which combines a state-space model for the parameters of each of the candidate models of the system with a Markov chain model for the best model. We propose a polychotomous regression model for the transition weights, allowing the probability of a change over time to depend on the past through the values of the most recent time periods and on spatial correlation among the regions. The evolution of the parameters in each submodel is defined by exponential forgetting. This structure allows the 'correct' model to vary over both time and regions. In contrast to existing methods, the proposed model naturally incorporates clustering and prediction analysis in a single unified framework. We develop an efficient Gibbs algorithm for computation, and we demonstrate the value of our framework on simulated experiments and on a real-world problem: forecasting IBM's corporate revenue. Copyright © 2014 John Wiley & Sons, Ltd.
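The exponential-forgetting machinery is compact enough to sketch. The Python function below implements a generic dynamic-model-averaging weight update (flatten the previous posterior model probabilities with a forgetting factor, then reweight by each model's one-step predictive likelihood); it omits the paper's latent-class, spatial and polychotomous-regression extensions.

```python
import numpy as np

def dma_weights(log_liks, alpha=0.99):
    """Dynamic model averaging with exponential forgetting. log_liks is a
    (T, K) array of one-step predictive log-likelihoods for K models;
    alpha < 1 lets the 'correct' model drift over time."""
    T, K = log_liks.shape
    w = np.full(K, 1.0 / K)
    history = np.empty((T, K))
    for t in range(T):
        pred = w ** alpha
        pred /= pred.sum()                       # forgetting (flattening) step
        w = pred * np.exp(log_liks[t] - log_liks[t].max())
        w /= w.sum()                             # Bayes update by predictive likelihood
        history[t] = w
    return history

# Toy usage: model 1 fits better in the second half of the sample
rng = np.random.default_rng(0)
ll = rng.normal(size=(100, 2))
ll[50:, 1] += 1.0
print(dma_weights(ll)[-1])   # weight should have drifted toward model 1
```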

20.
We consider finite state-space non-homogeneous hidden Markov models for forecasting univariate time series. Given a set of predictors, the time series are modeled via predictive regressions with state-dependent coefficients and time-varying transition probabilities that depend on the predictors via a logistic/multinomial function. In a hidden Markov setting, inference for logistic regression coefficients becomes complicated, and in some cases impossible, due to convergence issues. In this paper, we aim to address this problem utilizing the recently proposed Pólya-Gamma latent variable scheme. Also, we allow for model uncertainty regarding the predictors that affect the series both linearly (in the mean) and non-linearly (in the transition matrix). Predictor selection and inference on the model parameters are based on an automatic Markov chain Monte Carlo scheme with reversible jump steps. Hence the proposed methodology can be used as a black box for predicting time series. Using simulation experiments, we illustrate the performance of our algorithm in various setups, in terms of mixing properties, model selection and predictive ability. An empirical study on realized volatility data shows that our methodology gives improved forecasts compared to benchmark models.
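The non-homogeneous ingredient is the predictor-driven transition function, which can be sketched as follows (notation is illustrative: \(\gamma_{ij}\) are state-pair coefficients and \(z_t\) the predictors). It is exactly this logistic/multinomial form that the Pólya-Gamma augmentation makes conditionally conjugate, restoring tractable Gibbs-style updates:

```latex
% Time-varying transition probabilities of the non-homogeneous HMM
\Pr(s_t = j \mid s_{t-1} = i, \, z_t) \;=\;
  \frac{\exp\!\bigl(\gamma_{ij}^{\top} z_t\bigr)}
       {\sum_{k} \exp\!\bigl(\gamma_{ik}^{\top} z_t\bigr)}
```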
