Similar Articles
20 similar articles found (search time: 15 ms)
1.
Online search data provide a new perspective for quantifying public concern about animal diseases, which can be regarded as a major external shock to price fluctuations. We propose a modeling framework for pork price forecasting that combines online search data with a support vector regression (SVR) model. The framework involves three main steps: formulation of animal disease composite indexes (ADCIs) based on online search data; forecasting with the original ADCIs; and forecast improvement with decomposed ADCIs. Because online search data contain noise, four decomposition techniques are introduced: wavelet decomposition, empirical mode decomposition, ensemble empirical mode decomposition, and singular spectrum analysis (SSA). The experimental study confirms the superiority of the proposed framework, which improves both level and directional prediction accuracy. With SSA, the noise within the online search data can be removed and the performance of the optimal model is further enhanced. Owing to the long-term effect of disease outbreaks on price volatility, these improvements are more pronounced at mid- and long-term forecast horizons.
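As an illustration of the decomposition step, a minimal singular spectrum analysis (SSA) denoiser can be sketched in a few lines of NumPy. The window length, rank, and the synthetic series below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ssa_denoise(x, window, rank):
    """Basic SSA: embed the series in a trajectory matrix, keep the leading
    singular components, and reconstruct by diagonal averaging."""
    n = len(x)
    k = n - window + 1
    # trajectory (Hankel) matrix: column j holds x[j : j + window]
    X = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # low-rank approximation
    out = np.zeros(n)
    cnt = np.zeros(n)
    for j in range(k):                # diagonal averaging back to a series
        out[j:j + window] += Xr[:, j]
        cnt[j:j + window] += 1
    return out / cnt

# toy composite index: a cycle plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
noisy = np.sin(t) + 0.3 * rng.standard_normal(200)
smooth = ssa_denoise(noisy, window=30, rank=2)
```

The denoised series would then replace the raw ADCI as an input to the SVR forecaster.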

2.
A unified method to detect and handle innovational and additive outliers, and permanent and transient level changes has been presented by R. S. Tsay. N. S. Balke has found that the presence of level changes may lead to misidentification and loss of test power, and suggests augmenting Tsay's procedure by conducting an additional disturbance search based on a white-noise model. While Tsay allows level changes to be either permanent or transient, Balke considers only the former type. Based on simulated series with transient level changes this paper investigates how Balke's white-noise model performs both when transient change is omitted from the model specification and when it is included. Our findings indicate that the alleged misidentification of permanent level changes may be influenced by the restrictions imposed by Balke. But when these restrictions are removed, Balke's procedure outperforms Tsay's in detecting changes in the data-generating process. Copyright © 2000 John Wiley & Sons, Ltd.

3.
TCR-mediated specific recognition of antigenic peptides in the context of classical MHC molecules is a cornerstone of adaptive immunity in jawed vertebrates. Ancillary to these interactions, the T cell repertoire also includes unconventional T cells that recognize endogenous and/or exogenous antigens in a classical MHC-unrestricted manner. Among these, the mammalian nonclassical MHC class I-restricted invariant T cell (iT) subsets, such as iNKT and MAIT cells, are now believed to be integral to immune response initiation as well as in orchestrating subsequent adaptive immunity. Until recently the evolutionary origins of these cells were unknown. Here we review our current understanding of a nonclassical MHC class I-restricted iT cell population in the amphibian Xenopus laevis. Parallels with the mammalian iNKT and MAIT cells underline the crucial biological roles of these evolutionarily ancient immune subsets.

4.
Forecasts from quarterly econometric models are typically revised on a monthly basis to reflect the information in current economic data. The revision process usually involves setting targets for the quarterly values of endogenous variables for which monthly observations are available and then altering the intercept terms in the quarterly forecasting model to achieve the target values. A formal statistical approach to the use of monthly data to update quarterly forecasts is described and the procedure is applied to the Michigan Quarterly Econometric Model of the US Economy. The procedure is evaluated in terms of both ex post and ex ante forecasting performance. The ex ante results for 1986 and 1987 indicate that the method is quite promising. With a few notable exceptions, the formal procedure produces forecasts of GNP growth that are very close to the published ex ante forecasts.

5.
Internet search data could be a useful source of information for policymakers when formulating decisions based on their understanding of the current economic environment. This paper builds on earlier literature via a structured value assessment of the data provided by Google Trends. This is done through two empirical exercises related to the forecasting of changes in UK unemployment. Firstly, economic intuition provides the basis for search term selection, with a resulting Google indicator tested alongside survey-based variables in a traditional forecasting environment. Secondly, this environment is expanded into a pseudo-time nowcasting framework which provides the backdrop for assessing the timing advantage that Google data have over surveys. The framework is underpinned by a MIDAS regression which allows, for the first time, the easy incorporation of Internet search data at its true sampling rate into a nowcast model for predicting unemployment. Copyright © 2016 John Wiley & Sons, Ltd.

6.
The primary aim of this paper is to select an appropriate power transformation when we use ARMA models for a given time series. We propose a Bayesian procedure for estimating the power transformation as well as other parameters in time series models. The posterior distributions of interest are obtained utilizing the Gibbs sampler, a Markov chain Monte Carlo (MCMC) method. The proposed methodology is illustrated with two real data sets. The performance of the proposed procedure is compared with other competing procedures. © 1997 John Wiley & Sons, Ltd.

7.
The use of large datasets for macroeconomic forecasting has received a great deal of interest recently. Boosting is one possible method of using high-dimensional data for this purpose. It is a stage-wise additive modelling procedure, which, in a linear specification, becomes a variable selection device that iteratively adds the predictors with the largest contribution to the fit. Using data for the United States, the euro area and Germany, we assess the performance of boosting when forecasting a wide range of macroeconomic variables. Moreover, we analyse to what extent its forecasting accuracy depends on the method used for determining its key regularization parameter: the number of iterations. We find that boosting mostly outperforms the autoregressive benchmark, and that K-fold cross-validation works much better as stopping criterion than the commonly used information criteria. Copyright © 2014 John Wiley & Sons, Ltd.
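The stage-wise selection idea can be sketched as componentwise L2-boosting in NumPy. The data, learning rate, and iteration count below are illustrative, and the K-fold cross-validated choice of the stopping iteration is omitted:

```python
import numpy as np

def l2_boost(X, y, steps, nu=0.1):
    """Componentwise L2 boosting: each iteration refits the residuals on the
    single predictor that reduces the squared error most, shrunk by nu."""
    intercept = y.mean()
    resid = y - intercept
    coef = np.zeros(X.shape[1])
    for _ in range(steps):
        b = X.T @ resid / (X ** 2).sum(axis=0)        # per-column LS slopes
        sse = ((resid[:, None] - X * b) ** 2).sum(axis=0)
        j = int(sse.argmin())                         # best single predictor
        coef[j] += nu * b[j]
        resid = resid - nu * b[j] * X[:, j]
    return intercept, coef

# toy data: only predictors 0 and 2 matter
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 10))
X -= X.mean(axis=0)                                   # centered predictors
y = 2 * X[:, 0] - X[:, 2] + 0.1 * rng.standard_normal(100)
intercept, coef = l2_boost(X, y, steps=300)
```

Because irrelevant predictors are rarely selected, the coefficient vector stays sparse, which is the variable-selection behaviour the abstract describes.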

8.
The set-enumeration tree is a data structure commonly used in algorithms for mining maximal frequent itemsets, where the mining process can be viewed as a search over the tree. To shrink the search space, this paper proposes a novel and efficient pruning method: the order of enumeration-tree nodes is dynamically rearranged according to the maximal frequent patterns already mined, so that pruning is applied as aggressively as possible. The algorithm uses a bitmap data format and a depth-first search strategy. Experimental results show that it effectively improves the efficiency of maximal frequent itemset mining and outperforms FPMax on the same test data.
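A stripped-down sketch of bitmap-based, depth-first maximal itemset mining is shown below. The dynamic node reordering that drives the paper's pruning is omitted; only the basic subsumption check against already-found maximal sets is kept:

```python
def maximal_frequent(transactions, minsup):
    """Depth-first search over the set-enumeration tree; each item's support
    set is a bitmap with one bit per transaction."""
    items = sorted({i for t in transactions for i in t})
    bit = {i: 0 for i in items}
    for row, t in enumerate(transactions):
        for i in t:
            bit[i] |= 1 << row
    maximal = []

    def dfs(prefix, mask, tail):
        extended = False
        for k, i in enumerate(tail):
            sub = mask & bit[i]                   # bitmap AND = intersection
            if bin(sub).count("1") >= minsup:     # popcount = support
                extended = True
                dfs(prefix | {i}, sub, tail[k + 1:])
        # a frequent set with no frequent extension is maximal, unless a
        # previously found maximal set already subsumes it
        if not extended and prefix and not any(prefix <= m for m in maximal):
            maximal.append(prefix)

    dfs(set(), (1 << len(transactions)) - 1, items)
    return maximal

found = maximal_frequent(
    [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b"}, {"c", "d"}], minsup=2)
```

With lexicographic depth-first order, any superset of a candidate is explored before the candidate can be emitted, so the subsumption check suffices for correctness.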

9.
10.
Standard statistical loss functions, such as mean-squared error, are commonly used for evaluating financial volatility forecasts. In this paper, an alternative evaluation framework, based on probability scoring rules that can be more closely tailored to a forecast user's decision problem, is proposed. According to the decision at hand, the user specifies the economic events to be forecast, the scoring rule with which to evaluate these probability forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the selected scoring rule and calibration tests. An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results. Copyright © 2001 John Wiley & Sons, Ltd.
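The transformation from a volatility forecast into a probability forecast, and its evaluation with a scoring rule, can be sketched with stdlib Python. The normality assumption for returns, the move threshold, and the Brier score as the chosen rule are illustrative assumptions, not the paper's exact specification:

```python
import math

def event_prob(sigma, c):
    """P(|r| > c) when r ~ N(0, sigma^2): a volatility forecast turned
    into a probability forecast of a 'large move' event."""
    phi = 0.5 * (1 + math.erf(c / (sigma * math.sqrt(2))))  # standard normal CDF
    return 2 * (1 - phi)

def brier(probs, outcomes):
    """Brier score: mean squared distance between probability forecasts
    and realized 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# two competing volatility forecasts evaluated on the same realized returns
sigmas_a, sigmas_b = [1.0, 2.0, 1.5], [1.2, 1.8, 1.4]
returns, c = [0.4, -2.5, 0.9], 1.0
outcomes = [abs(r) > c for r in returns]
score_a = brier([event_prob(s, c) for s in sigmas_a], outcomes)
score_b = brier([event_prob(s, c) for s in sigmas_b], outcomes)
```

Changing the threshold c or the scoring rule changes which forecaster wins, which is the paper's point about tailoring evaluation to the decision problem.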

11.
Google Trends data are increasingly employed in statistical investigations. However, care should be taken in handling this tool, especially for quantitative prediction purposes. Because they depend by design on Internet users' behavior, estimators based on Google Trends data embody many sources of uncertainty and instability. These relate, for example, to technical factors (e.g., cross-regional disparities in computer literacy, time dependency of Internet users), psychological factors (e.g., emotionally driven spikes and other forms of data perturbation), and linguistic factors (e.g., noise generated by double-meaning words). Despite the stimulating literature available today on using Google Trends data as a forecasting tool, to the best of the author's knowledge no articles specifically devoted to the prediction of these data have been published to date. In this paper, a novel forecasting method is presented, based on a wavelet-type denoiser employed in conjunction with a forecasting model of the SARIMA (seasonal autoregressive integrated moving average) class. The wavelet filter is iteratively calibrated by a bounded search algorithm until a minimum of a suitable loss function is reached. Finally, empirical evidence is presented to support the validity of the proposed method.

12.
A widely used approach to evaluating volatility forecasts uses a regression framework which measures the bias and variance of the forecast. We show that the associated test for bias is inappropriate before introducing a more suitable procedure which is based on the test for bias in a conditional mean forecast. Although volatility has been the most common measure of the variability in a financial time series, in many situations confidence interval forecasts are required. We consider the evaluation of interval forecasts and present a regression-based procedure which uses quantile regression to assess quantile estimator bias and variance. We use exchange rate data to illustrate the proposal by evaluating seven quantile estimators, one of which is a new non-parametric autoregressive conditional heteroscedasticity quantile estimator. The empirical analysis shows that the new evaluation procedure provides useful insight into the quality of quantile estimators. Copyright © 1999 John Wiley & Sons, Ltd.
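A minimal sketch of quantile-forecast evaluation, using simpler machinery than the paper's quantile-regression-based test: the pinball loss ranks competing tau-quantile estimators, and empirical coverage flags unconditional bias.

```python
def pinball(y, q, tau):
    """Pinball (quantile) loss: the scoring rule minimized in expectation
    by the true tau-quantile forecast."""
    return sum((tau - (yi < qi)) * (yi - qi) for yi, qi in zip(y, q)) / len(y)

def coverage(y, q):
    """Fraction of observations falling below the forecast quantile;
    an unbiased tau-quantile forecast should give roughly tau."""
    return sum(yi < qi for yi, qi in zip(y, q)) / len(y)
```

For a 5% VaR-style interval forecast, for instance, empirical coverage far from 0.05 signals a biased quantile estimator, which is the kind of diagnosis the paper's regression-based procedure formalizes.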

13.
This paper examines the underdetermination between the Ptolemaic, Copernican, and Tychonic theories of planetary motion and its attempted resolution by Kepler. I argue that past philosophical analyses of the problem of the planetary motions have not adequately grasped a method through which the underdetermination might have been resolved. This method involves a procedure of what I characterize as decomposition and identification. I show that this procedure is used by Kepler in the first half of the Astronomia Nova, where he ultimately claims to have refuted the Ptolemaic theory, thus partially overcoming the underdetermination. Finally, I compare this method with other views of scientific inference such as bootstrapping.

14.
Marine transport has grown rapidly as a result of globalization and sustained world growth. Shipping market risks and uncertainty have also grown and need to be mitigated with the development of a more reliable procedure to predict changes in freight rates. In this paper, we propose a new forecasting model and apply it to the Baltic Dry Index (BDI). The model compresses, in an optimal way, information from the past in order to predict freight rates. To develop the forecasting model, we deploy a basic set of predictors, add lags of the BDI and introduce additional variables, applying Bayesian compressed regression (BCR) with two important innovations. First, we include transition functions in the predictive set to capture both smooth and abrupt changes in the time path of the BDI; second, we do not estimate the parameters of the transition functions, but rather embed them in the random search procedure inherent in BCR. This allows all coefficients to evolve in a time-varying manner, while searching for the best predictors within the historical set of data. The new procedures predict the BDI with considerable success.
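The compression idea can be sketched as follows: random Gaussian projection matrices compress the predictor set, least squares is run in each compressed space, and forecasts are averaged across draws. The transition functions and the full Bayesian weighting of the paper are omitted, and the data and dimensions are illustrative:

```python
import numpy as np

def bcr_forecast(X, y, x_new, m=5, draws=50, seed=0):
    """Compressed-regression sketch: average forecasts over random
    m-dimensional projections of the predictor space."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(draws):
        P = rng.standard_normal((X.shape[1], m)) / np.sqrt(m)   # projection
        beta, *_ = np.linalg.lstsq(X @ P, y, rcond=None)        # LS in compressed space
        preds.append(float(x_new @ P @ beta))
    return float(np.mean(preds))

# toy predictor set standing in for the BDI lags and additional variables
rng = np.random.default_rng(2)
X = rng.standard_normal((200, 20))
w = np.zeros(20)
w[0], w[3] = 1.0, -2.0
y = X @ w + 0.1 * rng.standard_normal(200)
pred = bcr_forecast(X, y, x_new=np.ones(20))
```

Averaging over projections acts as a crude form of model averaging; the paper's random search over projections (and over transition-function parameters) refines this by weighting draws by fit.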

15.
In this paper, we use Google Trends data for exchange rate forecasting in the context of a broad literature review that ties exchange rate movements to macroeconomic fundamentals. The sample covers 11 OECD countries' exchange rates for the period from January 2004 to June 2014. In out-of-sample forecasting of monthly returns on exchange rates, our findings indicate that Google Trends search query data do a better job than the structural models in predicting the true direction of changes in nominal exchange rates. We also observe that Google Trends-based forecasts are better at picking up the direction of changes in monthly nominal exchange rates after the Great Recession era (2008–2009). Based on the Clark and West test of equal predictive accuracy, we find that the relative performance of Google Trends-based exchange rate predictions against the null of a random walk model is no worse than that of the purchasing power parity model. Moreover, although the monetary model fundamentals beat the random walk null in only one out of 11 currency pairs, with Google Trends predictors we find evidence of better performance for five currency pairs. We believe these findings warrant further research into the extra value one can obtain from Google search query data.

16.
This paper analyzes the working principle of the IEEE 802.1Q protocol and, on that basis, simulates the protocol on the Windows platform, implementing a virtual LAN (VLAN) module. The module consists of four parts: packet encapsulation, connection establishment, transmission of protocol packets, and unpacking at the destination to decide whether a packet should be accepted. The module can block data communication between different VLANs and provides a reference for network administrators analyzing VLAN faults.
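The tagging and filtering logic such a module implements can be sketched in Python with the standard struct module; the frame contents below are illustrative:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def add_vlan_tag(frame, vid, pcp=0):
    """Insert the 4-byte 802.1Q tag after the 12 bytes of destination
    and source MAC addresses."""
    tci = (pcp << 13) | (vid & 0x0FFF)        # priority bits + 12-bit VLAN ID
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def vlan_id(frame):
    """Return the VLAN ID of a tagged frame, or None if untagged."""
    if len(frame) >= 16 and struct.unpack("!H", frame[12:14])[0] == TPID:
        return struct.unpack("!H", frame[14:16])[0] & 0x0FFF
    return None

def accept(frame, port_vid):
    """Destination-side filtering: drop frames from a different VLAN."""
    return vlan_id(frame) == port_vid

# an untagged Ethernet frame: dst MAC, src MAC, EtherType 0x0800, payload
frame = b"\xff" * 6 + b"\x00" * 6 + b"\x08\x00" + b"payload"
tagged = add_vlan_tag(frame, vid=100)
```

Blocking communication between different VLANs then reduces to the `accept` check at the receiving side.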

17.
The spectrum of a chromophore may change as a result of perturbations in its environment. The spectral changes resulting from the perturbation are often followed by measurements at just one or two wavelengths but it is usually no more difficult to collect entire spectra. The problem comes in analysing the data from such a series of spectra. In this paper we will suggest a simple procedure in which the spectrum observed under any particular set of conditions may be considered to consist of the sum of two distinct spectral forms. The method, which is free of any assumptions regarding the quantitative relationship between the perturbation and the extent of spectral change, defines any given spectrum in terms of an apparent molar fraction of the contributing spectral forms. The variation of this apparent molar fraction provides information from which a quantitative relationship can be developed to describe the dependence of the spectral change on the perturbant. The method is illustrated using the model system of phenol red protonation and is applied to the characterization of the binding of azide ions to cobalt-substituted carbonic anhydrase.
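The two-form decomposition reduces to one least-squares parameter per spectrum. Assuming the limiting spectra A and B of the two pure forms are available (e.g., recorded at extreme perturbant levels), the apparent molar fraction f that best explains an observed spectrum as f·A + (1−f)·B has the closed form sketched below (toy spectra, not the phenol red data):

```python
def molar_fraction(obs, A, B):
    """Least-squares f minimizing sum((obs - (f*A + (1-f)*B))**2) over all
    wavelengths: f = <obs - B, A - B> / <A - B, A - B>."""
    num = sum((o - b) * (a - b) for o, a, b in zip(obs, A, B))
    den = sum((a - b) ** 2 for a, b in zip(A, B))
    return num / den

# toy two-point 'spectra' of the pure forms and a 30/70 mixture
A, B = [1.0, 0.0], [0.0, 1.0]
obs = [0.3 * a + 0.7 * b for a, b in zip(A, B)]
f = molar_fraction(obs, A, B)
```

Fitting f over the whole spectrum, rather than reading one or two wavelengths, is what makes the estimate robust to noise at any single wavelength.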

19.
We develop in this paper an efficient way to select the best subset threshold autoregressive model. The proposed method uses a stochastic search idea. Unlike most conventional approaches, our method does not require the delay or the threshold parameters to be fixed in advance. By adopting Markov chain Monte Carlo techniques, we can identify the best subset model from a very large number of possible models, and at the same time estimate the unknown parameters. A simulation experiment shows that the method is very effective. In its application to the US unemployment rate, the stochastic search method successfully selects lag one as the time delay and the five best models from more than 4000 candidates. Copyright © 2003 John Wiley & Sons, Ltd.

20.
This paper addresses the issues of maximum likelihood estimation and forecasting of a long-memory time series with missing values. A state-space representation of the underlying long-memory process is proposed. By incorporating this representation with the Kalman filter, the proposed method allows not only for an efficient estimation of an ARFIMA model but also for the estimation of future values under the presence of missing data. This procedure is illustrated through an analysis of a foreign exchange data set. An investment scheme is developed which demonstrates the usefulness of the proposed approach. © 1997 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)