Similar Literature
20 similar documents retrieved
1.
In this paper we show that optimal trading results can be achieved if we can forecast a key summary statistic of future prices. Consider the following optimization problem. Let the return r_i (over time i = 1, 2, ..., n) for the ith day be given and the investor has to make an investment decision d_i on the ith day, with d_i = 1 representing a 'long' position and d_i = 0 a 'neutral' position. The investment return is given by r = Σ_{i=1}^{n} r_i d_i − c Σ_{i=1}^{n+1} |d_i − d_{i−1}|, where c is the transaction cost. The mathematical programming problem of choosing d_1, ..., d_n to maximize r under a given transaction cost c is shown to have an analytic solution, which is a function of a key summary statistic called the largest change before reversal. The largest change before reversal is recommended to be used as an output in a neural network for the generation of trading signals. When neural network forecasting is applied to a dataset of Hang Seng Index Futures Contract traded in Hong Kong, it is shown that forecasting the largest change before reversal outperforms the k-step-ahead forecast in achieving higher trading profits. Copyright © 2000 John Wiley & Sons, Ltd.
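A minimal sketch in Python (not the authors' code) of the investment-return objective defined above, assuming the investor starts and ends in a neutral position (d_0 = d_{n+1} = 0); the function name and example figures are illustrative only.

```python
def investment_return(returns, decisions, c):
    """Net trading return: sum_i r_i d_i minus c times the number of position changes."""
    d = [0] + list(decisions) + [0]                       # d_0 = d_{n+1} = 0 (assumed convention)
    gross = sum(r * di for r, di in zip(returns, decisions))
    switches = sum(abs(d[i] - d[i - 1]) for i in range(1, len(d)))
    return gross - c * switches

# Example: go long on days 2-4 only, with a 0.3% transaction cost per position change.
print(investment_return([0.010, 0.020, -0.005, 0.015, -0.010], [0, 1, 1, 1, 0], 0.003))
```

Under this convention, every round trip (entering and exiting a long position) is charged twice, which is exactly what the term c·Σ|d_i − d_{i−1}| in the formula above does.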

2.
A forecasting model for y_t based on its relationship to exogenous variables (e.g. x_t) must use x̂_t, the forecast of x_t. An example is given where commercially available x̂_t's are sufficiently inaccurate that a univariate model for y_t appears preferable. For a variety of types of models, inclusion of an exogenous variable x_t is shown to worsen the y_t forecasts whenever x_t must itself be forecast by x̂_t and MSE(x̂_t) > Var(x_t). Tests with forecasts from a variety of sources indicate that, with a few notable exceptions, MSE(x̂_t) > Var(x_t) is common for macroeconomic forecasts more than a quarter or two ahead. Thus, either:
  • (a) available medium-range forecasts for many macroeconomic variables (e.g. the GNP growth rate) are not an improvement over the sample mean (so that such variables are not useful explanatory variables in forecasting models), and/or
  • (b) the suboptimization involved in directly replacing x_t by x̂_t is a luxury that we cannot afford.
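A small Monte Carlo illustration (not from the paper) of the condition stated above, under assumed values: when the exogenous variable must itself be forecast and MSE(x̂_t) exceeds Var(x_t), plugging x̂_t into the y-equation gives a worse forecast than simply replacing x_t by its sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(0.0, 1.0, n)                     # exogenous variable, Var(x) = 1
y = 2.0 * x + rng.normal(0.0, 1.0, n)           # assumed relationship y_t = 2 x_t + noise
x_hat = x + rng.normal(0.0, 1.5, n)             # poor forecast of x: MSE(x_hat) = 2.25 > Var(x)

mse_with_xhat = np.mean((y - 2.0 * x_hat) ** 2)       # condition the y-forecast on the forecasted x
mse_univariate = np.mean((y - 2.0 * x.mean()) ** 2)   # ignore x: use its sample mean instead
print(mse_with_xhat, mse_univariate)            # here roughly 10 versus 5
```

Reversing the inequality (e.g. a forecast-error standard deviation of 0.5 instead of 1.5) makes the x̂_t-based forecast the better one, in line with the MSE(x̂_t) > Var(x_t) threshold.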

3.
Consider forecasting the economic variable Y_{t+h} with predictors X_t, where h is the forecast horizon. This paper introduces a semiparametric method that generates forecast intervals of Y_{t+h} | X_t from point forecast models. First, the point forecast model is estimated, thereby taking advantage of its predictive power. Then, nonparametric estimation of the conditional distribution function (CDF) of the forecast error conditional on X_t builds the rest of the forecast distribution around the point forecast, from which symmetric and minimum-length forecast intervals for Y_{t+h} | X_t can be constructed. Under mild regularity conditions, asymptotic analysis shows that (1) regardless of the quality of the point forecast model (i.e., it may be misspecified), forecast quantiles are consistent and asymptotically normal; (2) minimum-length forecast intervals are consistent. Proposals for bandwidth selection and dimension reduction are made. Three sets of simulations show that for reasonable point forecast models the method has significant advantages over two existing approaches to interval forecasting: one that requires the point forecast model to be correctly specified, and one that is based on a fully nonparametric CDF estimate of Y_{t+h} | X_t. An application to exchange rate forecasting is presented. Copyright © 2010 John Wiley & Sons, Ltd.
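A simplified sketch of the construction described above, under illustrative assumptions (scalar X_t, a Gaussian kernel with a hand-picked bandwidth, quantile inversion on the observed error grid); it is not the paper's estimator, only the general recipe of wrapping a kernel estimate of the conditional error CDF around a point forecast.

```python
import numpy as np

def conditional_error_quantile(errors, x_obs, x0, tau, h):
    """Kernel (Nadaraya-Watson) estimate of the tau-quantile of the forecast error given X_t = x0."""
    w = np.exp(-0.5 * ((x_obs - x0) / h) ** 2)
    w = w / w.sum()
    grid = np.sort(errors)
    cdf = np.array([np.sum(w * (errors <= e)) for e in grid])   # estimated conditional CDF
    return grid[min(np.searchsorted(cdf, tau), len(grid) - 1)]

def forecast_interval(y_hat, errors, x_obs, x0, h, alpha=0.10):
    """(1 - alpha) interval for Y_{t+h} built around the point forecast y_hat."""
    lo = conditional_error_quantile(errors, x_obs, x0, alpha / 2, h)
    hi = conditional_error_quantile(errors, x_obs, x0, 1 - alpha / 2, h)
    return y_hat + lo, y_hat + hi
```

Here `errors` are in-sample forecast errors of the point model and `x_obs` the corresponding predictor values; bandwidth selection and dimension reduction, which the paper addresses explicitly, are left out of this sketch.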

4.
Credibility models in actuarial science deal with multiple short time series where each series represents claim amounts of different insurance groups. Commonly used credibility models imply shrinkage of group-specific estimates towards their average. In this paper we model the claim size y_it in group i and at time t as the sum of three independent components: y_it = μ_t + δ_i + ε_it. The first component, μ_t = μ_{t−1} + m_t, represents time-varying levels that are common to all groups. The second component, δ_i, represents random group offsets that are the same in all periods, and the third component represents independent measurement errors. In this paper we show how to obtain forecasts from this model and we discuss the nature of the forecasts, with particular emphasis on shrinkage. We also assess the forecast improvements that can be expected from such a model. Finally, we discuss an extension of the above model which also allows the group offsets to change over time. We assume that the offsets for different groups follow independent random walks.
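A small simulation sketch (not the authors' estimation procedure) of the three-component model described above, y_it = μ_t + δ_i + ε_it with a random-walk common level μ_t = μ_{t−1} + m_t; the variances, the group and period counts, and the crude moment-based forecast at the end are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, n_periods = 5, 20
sigma_m, sigma_delta, sigma_eps = 0.3, 1.0, 0.5          # assumed standard deviations

mu = np.cumsum(rng.normal(0.0, sigma_m, n_periods))       # common time-varying level
delta = rng.normal(0.0, sigma_delta, n_groups)             # fixed group offsets
eps = rng.normal(0.0, sigma_eps, (n_groups, n_periods))    # independent measurement errors
y = mu[None, :] + delta[:, None] + eps                      # claim amounts, one row per group

# Crude one-step forecast per group: last estimated common level plus the group's mean offset.
mu_hat = y.mean(axis=0)
delta_hat = (y - mu_hat[None, :]).mean(axis=1)
print(mu_hat[-1] + delta_hat)                               # forecasts of y_{i,T+1}
```

A proper credibility treatment would shrink `delta_hat` towards zero according to the variance ratios, which is exactly the shrinkage behaviour the paper analyses.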

5.
Let {X_t} be a stationary process with spectral density g(λ). It often happens that the true structure g(λ) is not completely specified. This paper discusses the problem of misspecified prediction when a conjectured spectral density f_θ(λ), θ ∈ Θ, is fitted to g(λ). Then, constructing the best linear predictor based on f_θ(λ), we can evaluate the prediction error M(θ). Since θ is unknown we estimate it by a quasi-MLE θ̂. The second-order asymptotic approximation of M(θ̂) is given. This result is extended to the case when X_t contains some trend, i.e. a time series regression model. These results are very general. Furthermore, we evaluate the second-order asymptotic approximation of M(θ̂) for a time series regression model having a long-memory residual process with the true spectral density g(λ). Since the general formulae of the approximated prediction error are complicated, we provide some numerical examples. Then we illuminate unexpected effects from the misspecification of spectra. Copyright © 2001 John Wiley & Sons, Ltd.

6.
In examining stochastic models for commodity prices, central questions often revolve around time-varying trend, stochastic convenience yield and volatility, and mean reversion. This paper seeks to assess and compare alternative approaches to modelling these effects, with a focus on forecast performance. Three specifications are considered: (i) random-walk models with GARCH and normal or Student-t innovations; (ii) Poisson-based jump-diffusion models with GARCH and normal or Student-t innovations; and (iii) mean-reverting models that allow for uncertainty in equilibrium price. Our empirical application makes use of aluminium spot and futures price series at daily and weekly frequencies. Results show: (i) models with stochastic convenience yield outperform all other competing models, and for all forecast horizons; (ii) the use of futures prices does not always yield lower forecast error values compared to the use of spot prices; and (iii) within the class of (G)ARCH random-walk models, no model uniformly dominates the others. Copyright © 2008 John Wiley & Sons, Ltd.

7.
One important aspect concerning the analysis and forecasting of time series that is sometimes neglected is the relationship between a model and the sampling interval, in particular when the observation is cumulative over the sampling period. This paper studies temporal aggregation in Bayesian dynamic linear models (DLM). Suppose that a time series Y_t is observed at time units t and the observations of the process are aggregated over r units of time, defining a new time series Z_k = Σ_{i=1}^{r} Y_{rk+i}. The relevant factors explaining the variation of Z_k can, and in general will, be different, depending on how the sampling interval r is chosen. It is shown that if Y_t follows certain dynamic linear models, then the aggregated series can also be described by possibly different DLMs. In the examples, the industrial production of Brazil is analysed under various aggregation periods and the results are compared. © 1997 John Wiley & Sons, Ltd.
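A minimal sketch of the aggregation step Z_k = Σ_{i=1}^{r} Y_{rk+i} used above: non-overlapping sums of r consecutive observations. The helper name and the toy monthly-to-quarterly example are illustrative.

```python
import numpy as np

def aggregate(y, r):
    """Aggregate a series over non-overlapping blocks of r periods."""
    y = np.asarray(y)
    n = (len(y) // r) * r                   # drop an incomplete final block, if any
    return y[:n].reshape(-1, r).sum(axis=1)

print(aggregate(np.arange(1, 13), 3))       # a "monthly" series 1..12 -> quarterly totals [6 15 24 33]
```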

8.
Recent studies on bootstrap prediction intervals for autoregressive (AR) models provide simulation findings when the lag order is known. In practical applications, however, the AR lag order is unknown or can even be infinite. This paper is concerned with prediction intervals for AR models of unknown or infinite lag order. Akaike's information criterion is used to estimate (approximate) the unknown (infinite) AR lag order. Small-sample properties of bootstrap and asymptotic prediction intervals are compared under both normal and non-normal innovations. Bootstrap prediction intervals are constructed based on the percentile and percentile-t methods, using the standard bootstrap as well as the bootstrap-after-bootstrap. It is found that bootstrap-after-bootstrap prediction intervals show small-sample properties substantially better than other alternatives, especially when the sample size is small and the model has a unit root or near-unit root. Copyright © 2002 John Wiley & Sons, Ltd.
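A simplified residual-resampling sketch of a percentile prediction interval for an AR(p) fitted by OLS; the paper's full procedure (percentile-t intervals, refitting the AR on each bootstrap replicate, and the bootstrap-after-bootstrap bias correction) is omitted, and the lag order p is taken as already chosen, e.g. by AIC.

```python
import numpy as np

def fit_ar(y, p):
    """OLS fit of an AR(p) with intercept; returns coefficients and residuals."""
    Y = y[p:]
    X = np.column_stack([np.ones(len(Y))] + [y[p - j:-j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return beta, Y - X @ beta

def percentile_interval(y, p, h=1, B=999, alpha=0.05, seed=0):
    """Percentile interval for y_{T+h} by resampling centred residuals along simulated paths."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    beta, resid = fit_ar(y, p)
    resid = resid - resid.mean()
    sims = np.empty(B)
    for b in range(B):
        path = list(y[-p:])
        for _ in range(h):
            lags = path[-p:][::-1]                 # most recent value first, matching beta[1:]
            path.append(beta[0] + np.dot(beta[1:], lags) + rng.choice(resid))
        sims[b] = path[-1]
    return np.quantile(sims, [alpha / 2, 1 - alpha / 2])
```

Because the estimated coefficients are treated as fixed here, this sketch understates parameter uncertainty, which is precisely what the bootstrap-after-bootstrap refinement studied in the paper is designed to correct.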

9.
In this study we evaluate the forecast performance of model-averaged forecasts based on the predictive likelihood, carrying out a prior sensitivity analysis regarding Zellner's g prior. The main results are fourfold. First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Secondly, forecast accuracy as measured by the root mean square error (RMSE) is maximized for the median probability model. On the other hand, model averaging excels in predicting the direction of changes. Lastly, g should be set according to Laud and Ibrahim (1995: Predictive model selection. Journal of the Royal Statistical Society B 57: 247–262) with a hold-out sample size of 25% to minimize the RMSE (median model) and of 75% to optimize direction-of-change forecasts (model averaging). We finally apply the aforementioned recommendations to forecast the monthly industrial production output of six countries, beating the AR(1) benchmark model for almost all countries. Copyright © 2011 John Wiley & Sons, Ltd.

10.
This paper investigates the effects of imposing invalid cointegration restrictions or ignoring valid ones on the estimation, testing and forecasting properties of the bivariate, first-order vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λ_max, lies in the neighborhood of unity, while the other root, λ_min, is safely smaller than unity. In this context, we define the 'forecast cost of type I' to be the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reason, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases can arise when λ_max equals unity and λ_min is less than but near to unity. The effects of this type of misspecification on forecasting will be referred to as the 'forecast cost of type II'. By means of Monte Carlo simulations, we measure both types of forecast cost in actual situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples. On the other hand, the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd.

11.
This article studies Man and Tiao's (2006) low-order autoregressive fractionally integrated moving-average (ARFIMA) approximation to Tsai and Chan's (2005b) limiting aggregate structure of the long-memory process. In matching the autocorrelations, we demonstrate that the approximation works well, especially for larger d values. In computing autocorrelations over long lags for larger d values, using the exact formula one might encounter numerical problems. The use of the ARFIMA(0, d, d−1) model provides a useful alternative for computing the autocorrelations as a very close approximation. In forecasting future aggregates, we demonstrate the close performance of using the ARFIMA(0, d, d−1) model and the exact aggregate structure. In practice, this provides a justification for the use of a low-order ARFIMA model in predicting future aggregates of a long-memory process. Copyright © 2008 John Wiley & Sons, Ltd.

12.
An approach is proposed for obtaining estimates of the basic (disaggregated) series, x_i, when only an aggregate series, y_t, of k-period non-overlapping sums of the x_i's is available. The approach is based on casting the problem in a dynamic linear model form. Estimates of x_i can then be obtained by application of Kalman filtering techniques. An ad hoc procedure is introduced for deriving a model form for the unobserved basic series from the observed model of the aggregates. An application of this approach to a set of real data is given.
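A bare-bones sketch, under assumptions not taken from the paper (the basic series follows a random walk, the aggregation period is k = 3, and only filtering with a diffuse prior is shown, with no smoothing pass), of recovering a disaggregated series from non-overlapping k-period sums with a Kalman filter.

```python
import numpy as np

def disaggregate_filter(y_agg, k, q=1.0, h=1e-8):
    """Filtered estimates of a random-walk basic series observed only through k-period sums."""
    n = len(y_agg) * k
    a = np.zeros(2)                          # state: [current basic value x_i, running block sum c_i]
    P = np.eye(2) * 1e6                      # diffuse initialisation
    H = np.array([0.0, 1.0])                 # only the block sum is ever observed
    x_hat = np.zeros(n)
    for i in range(n):
        new_block = (i % k == 0)
        T = np.array([[1.0, 0.0], [1.0, 0.0 if new_block else 1.0]])   # reset the sum at block starts
        R = np.array([1.0, 1.0])             # the random-walk innovation enters x_i and c_i alike
        a = T @ a
        P = T @ P @ T.T + q * np.outer(R, R)
        if i % k == k - 1:                   # an aggregate becomes available at the block end
            K = (P @ H) / (H @ P @ H + h)
            a = a + K * (y_agg[i // k] - H @ a)
            P = P - np.outer(K, H @ P)
        x_hat[i] = a[0]
    return x_hat

print(disaggregate_filter(np.array([30.0, 33.0, 36.0]), k=3))
```

Adding a smoothing pass, or using the richer DLM forms the paper derives for the basic series, would distribute the information from each aggregate back over all periods of the block rather than only its final one.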

13.
In this paper we examine how causality inference and forecasting within a bivariate VAR, consisting of y(t) and x(t), are affected by the omission of a third variable, w(t), which causes (a) none, (b) one, and (c) both variables in the bivariate system. We also derive conditions under which causality inference and forecasting are invariant to the selection of a bivariate or a trivariate model. The most general condition for the invariance of both causality and forecasting to model selection is shown to require the omitted variable not to cause any of the variables in the bivariate system, although it allows the omitted variable to be caused by the other two. We also show that the conditions for one-way causality inference to be invariant to model selection are not sufficient to ensure that forecasting will also be invariant to the model selected. Finally, we present a numerical illustration of the potential losses, in terms of the variance of the forecast, as a function of the forecast horizon and for alternative parameter values; they can be rather large, as the omission of a variable can make the incomplete model unstable. © 1997 John Wiley & Sons, Ltd.

14.
In this paper we adopt a principal components analysis (PCA) to reduce the dimensionality of the term structure and employ autoregressive (AR) models to forecast principal components which, in turn, are used to forecast swap rates. Arguing in favour of structural variation, we propose data-driven, adaptive model selection strategies based on the PCA/AR model. To evaluate ex ante forecasting performance for particular rates, distinct forecast features, such as mean squared errors, directional accuracy and directional forecast value, are considered. It turns out that, relative to benchmark models, the adaptive approach offers additional forecast accuracy in terms of directional accuracy and directional forecast value. Copyright © 2009 John Wiley & Sons, Ltd.
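A compact sketch (numpy only, with centred PCA via the SVD and an AR(1) per component fitted by least squares; the adaptive model-selection layer of the paper is not included) of the PCA/AR forecasting scheme described above. The toy random-walk panel stands in for actual swap rates.

```python
import numpy as np

def pca_ar_forecast(rates, n_pc=3):
    """One-step-ahead forecast of a panel of rates: PCA, AR(1) per component, reconstruct."""
    mean = rates.mean(axis=0)
    X = rates - mean
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_pc].T                          # principal component time series
    next_scores = []
    for j in range(n_pc):                             # AR(1) without intercept on centred scores
        z = scores[:, j]
        phi = np.dot(z[1:], z[:-1]) / np.dot(z[:-1], z[:-1])
        next_scores.append(phi * z[-1])
    return mean + np.array(next_scores) @ Vt[:n_pc]   # forecast of next period's rate vector

rates = np.cumsum(np.random.default_rng(2).normal(size=(200, 6)), axis=0)   # toy "term structure"
print(pca_ar_forecast(rates))
```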

15.
Value-at-risk (VaR) forecasting via a computational Bayesian framework is considered. A range of parametric models is compared, including standard, threshold nonlinear and Markov switching generalized autoregressive conditional heteroskedasticity (GARCH) specifications, plus standard and nonlinear stochastic volatility models, most considering four error probability distributions: Gaussian, Student-t, skewed-t and generalized error distribution. Adaptive Markov chain Monte Carlo methods are employed in estimation and forecasting. A portfolio of four Asia–Pacific stock markets is considered. Two forecasting periods are evaluated in light of the recent global financial crisis. Results reveal that: (i) GARCH models outperformed stochastic volatility models in almost all cases; (ii) asymmetric volatility models were clearly favoured pre crisis, while at the 1% level during and post crisis, for a 1-day horizon, models with skewed-t errors ranked best, while integrated GARCH models were favoured at the 5% level; (iii) all models forecast VaR less accurately and anti-conservatively post crisis. Copyright © 2011 John Wiley & Sons, Ltd.
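A minimal illustration (deliberately far simpler than the paper's Bayesian MCMC treatment) of how a one-day-ahead VaR forecast falls out of a Gaussian GARCH(1,1); the parameter values, today's return and today's conditional variance are assumed for the example.

```python
from scipy.stats import norm

omega, alpha, beta = 1e-6, 0.08, 0.90            # assumed GARCH(1,1) parameters
r_today, sigma2_today = -0.012, 1.5e-4           # today's return and conditional variance
sigma2_next = omega + alpha * r_today ** 2 + beta * sigma2_today   # one-step variance forecast
var_1pct = -norm.ppf(0.01) * sigma2_next ** 0.5  # loss exceeded with 1% probability (zero-mean returns)
print(var_1pct)
```

Replacing `norm.ppf` with a Student-t or skewed-t quantile (scaled to unit variance) gives the fatter-tailed VaR forecasts that the paper finds are favoured at the 1% level during and after the crisis.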

16.
Summary The structure of Tl2Cl3 probably belongs to the space group D_{3d}^2. The hexagonal cell has the dimensions a = 14.3 kX, c = 25.1 kX and contains 32 units of Tl2Cl3.

17.
Summary The chemistry of the polymyxins, a class of basic polypeptide antibiotics, began in 1954, when a defined representative, polymyxin B1, could for the first time be isolated in pure form by counter-current distribution (L. C. Craig). Partial hydrolysis with mineral acids led to the conclusion that they are cyclohepta- or cyclooctapeptides with side chains which contain α,γ-diaminobutyric acid (Dab) (W. Hausmann). Amide-type linkage of the side chain with a fatty acid [(+)-6-methyloctanoic acid (MOA) or 6-methylheptanoic acid (IOA)] (S. Wilkinson) gives these antibiotics the character of invert soaps. They are particularly active against Gram-negative pathogens. Difficulties arose in elucidating the mode of linkage of the side chain (α- or γ-) and in answering the question of whether the molecule contains, in addition to D-phenylalanine in the ring sequence, a D-α,γ-diaminobutyric acid residue adjacent to the fatty acid in the side chain. Synthetic experiments with D-α,γ-diaminobutyric acid at this position led to highly active products which, however, were not identical with natural polymyxin B1. Decisive progress was achieved with the bacterial enzyme Nagarse (T. Suzuki), which degrades the side chain stepwise down to the ring peptide. This showed that the polymyxins have the general structure of a cycloheptapeptide with an α-linked side chain (Figure 4). Polymyxin B1, polymyxin E1 (colistin A) and circulin A differ from one another only by a variation in the same dipeptide sequence of the seven-membered ring. The dipeptide sequence D-Phe-L-Leu present in polymyxin B1 is replaced by D-Leu-L-Leu in polymyxin E1 (colistin A) and by D-Leu-L-Ile in circulin A. In polymyxin D1, in addition to the replacement of the corresponding sequence by L-Leu-L-Thr, an α,γ-diaminobutyric acid residue of the side chain is also exchanged for D-serine. The corresponding compounds with index 2 differ from those with index 1 by the exchange of (+)-6-methyloctanoic acid for 6-methylheptanoic acid. The structures of polymyxins B1 and E1 and of circulin A were confirmed by total synthesis (K. Vogler). Further progress in elucidating the nature of the still unknown representatives is to be expected shortly.

18.
Summary The Noetherian surface F_4^{(3)}, which is represented on a plane by a three-dimensional linear system of curves C_9(A_1^3 A_2^3 A_3^3 A_4^3 A_5^3 A_6^3 A_7^3 A_8^3 A_9^2 A_{10}), generally possesses only one linear pencil of elliptic cubics. If the A_i (i = 1, 2, ..., 9) are the base points of a Halphen pencil of C_9, then A_{10} is infinitely near to A_9, and in this case F_4^{(3)} is a nontrivial example of such a surface with two pencils of elliptic cubics.

19.
Summary A direct synthesis of L- and D-cycloserine from L- and D-serine is described. The bacteriostatic effects of the three forms of cycloserine (D-, L- and DL-) on E. coli in a synthetic medium are compared.

20.
Summary The hemerythrin-containing coelomic fluid of Priapulus caudatus shows a relatively low O2 affinity (half-saturation O2 tension P50 = 8 mm at 10 °C) and a low O2 capacity (near 1 vol.%). O2 affinity is independent of pH but shows a large temperature sensitivity. A major role as a continuous O2 transporter seems to be excluded. Acknowledgments: A major part of this work was carried out at the Kristineberg Marine Laboratory, Fiskebäckskil (Sweden) and in the Zoophysiology Department, Aarhus Universitet (Denmark).
