Similar documents
 20 similar documents found (search time: 15 ms)
1.
In this paper we present guaranteed-content prediction intervals for time series data. These intervals are such that their content (or coverage) is guaranteed with a given high probability. They are thus more relevant for the observed time series at hand than classical prediction intervals, whose content is guaranteed merely on average over hypothetical repetitions of the prediction process. This type of prediction inference has, however, been ignored in the time series context because of a lack of results. This gap is filled by deriving asymptotic results for a general family of autoregressive models, thereby extending existing results in non-linear regression. The actual construction of guaranteed-content prediction intervals directly follows from this theory. Simulated and real data are used to illustrate the practical difference between classical and guaranteed-content prediction intervals for ARCH models. Copyright © 2001 John Wiley & Sons, Ltd.
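To make the distinction concrete, the sketch below contrasts a classical prediction interval with a crude guaranteed-content (tolerance-type) interval for a simulated Gaussian AR(1) series. The normal-theory tolerance factor, the AR(1) model, and all names in the code are illustrative assumptions; the paper's asymptotic construction for ARCH models is not reproduced.

```python
# Illustrative sketch only: contrasts a classical prediction interval with a
# crude "guaranteed-content" (tolerance-type) interval for a Gaussian AR(1).
# This is NOT the paper's asymptotic construction for ARCH models.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate an AR(1) series: y_t = 0.6 * y_{t-1} + e_t, e_t ~ N(0, 1)
n, phi = 200, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

# Fit AR(1) by least squares (no intercept)
X, Y = y[:-1], y[1:]
phi_hat = np.sum(X * Y) / np.sum(X * X)
resid = Y - phi_hat * X
df = len(Y) - 1
sigma_hat = np.sqrt(np.sum(resid ** 2) / df)

# One-step-ahead point forecast
y_hat = phi_hat * y[-1]

# Classical 90% prediction interval: content 0.90 only on average
p = 0.90
z = stats.norm.ppf(0.5 + p / 2)
classical = (y_hat - z * sigma_hat, y_hat + z * sigma_hat)

# Crude guaranteed-content interval: content >= 0.90 with probability ~0.95,
# obtained here by replacing sigma_hat with an upper confidence bound for sigma
gamma = 0.95
sigma_upper = sigma_hat * np.sqrt(df / stats.chi2.ppf(1 - gamma, df))
guaranteed = (y_hat - z * sigma_upper, y_hat + z * sigma_upper)

print("classical :", classical)
print("guaranteed:", guaranteed)  # wider than the classical interval, by construction
```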

2.
Every model leaves out or distorts some factors that are causally connected to its target phenomenon—the phenomenon that it seeks to predict or explain. If we want to make predictions, and we want to base decisions on those predictions, what is it safe to omit or to simplify, and what ought a causal model to describe fully and correctly? A schematic answer: the factors that matter are those that make a difference to the target phenomenon. There are several ways to understand difference-making. This paper advances a view as to which is the most relevant to the forecaster and the decision-maker. It turns out that the right notion of difference-making for thinking about idealization in prediction is also the right notion for thinking about idealization in explanation; this suggests a carefully circumscribed version of Hempel's famous thesis that there is a symmetry between explanation and prediction.

3.
The rapid development of wideband signals, high-dimensional signals, high-resolution signals, and multi-sensor networking technology has caused the growth rate of acquired signal data to exceed the growth rates of both data storage and signal-processing speed, bringing signal processing into the big-data era. This paper identifies the key problems of signal processing in the big-data setting. The massive signal data produced by multi-sensor networks are diverse and complex, and therefore require information fusion. The paper introduces the main models and methods for signal fusion and analyzes the development trends of information-fusion technology. Smart sensor network technology can reduce the demands on signal processing and communication capacity and effectively extract valuable data from big data. The paper presents the basic structure of smart sensors and describes their computational methods. In view of the requirement for high-speed real-time processing of high-volume signals, it reviews the current state of high-speed digital signal processing chips and high-performance hardware platforms, and looks ahead to the development of core high-speed signal-processing technologies.

4.
In this article, I analyze the coincidence between the Earth–Sun distance that Ptolemy derived in his Almagest and the one he derived, by a different method, in the Planetary Hypotheses. In both cases the values obtained for the Earth–Sun distance are very similar, so the great majority of historians have suspected that Ptolemy altered, or at least selected, the data in order to obtain this agreement. I provide a reconstruction of one way in which Ptolemy could have altered or selected the data, and then argue for its historical plausibility.

5.
Summary The influence of a series of constant temperatures on several annual plant species was studied under otherwise constant conditions. The species differ greatly in their behaviour, and even within a single species the properties studied can have different optimal temperatures. In some cases these differences are especially characteristic of fertility as opposed to vegetative growth. Plants of one species, for instance, can under certain conditions grow very tall and even bear numerous large flowers, yet at the same time be entirely sterile.

6.
In the case of US national accounts, the data are revised over the first few years and again every decade, which implies that we never really have the final data. In this paper we aim to predict the final data using the preliminary data and/or the revised data. The following predictors are introduced and derived in the context of a non-linear filtering or smoothing problem: (1) prediction of the final data of time t given the preliminary data up to time t-1; (2) prediction of the final data of time t given the preliminary data up to time t; (3) prediction of the final data of time t given the preliminary data up to time T; (4) prediction of the final data of time t given the revised data up to time t-1 and the preliminary data up to time t-1; and (5) prediction of the final data of time t given the revised data up to time t-1 and the preliminary data up to time t. It is shown that (5) is the best predictor but is not too different from (3). The prediction problem is illustrated using US per capita consumption data.
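The predictors above come from a filtering/smoothing formulation. As a rough illustration only, the sketch below uses a one-dimensional linear-Gaussian Kalman filter in which the unobserved "final" figure is the state and the "preliminary" figure is a noisy measurement of it; the revision model, its parameters, and the variable names are assumptions, not the paper's non-linear filter.

```python
# Minimal sketch, assuming a linear-Gaussian revision model (not the paper's
# non-linear filter): final_t = a * final_{t-1} + w_t, prelim_t = final_t + v_t.
# Filtering the preliminary data yields analogues of predictors (2)/(3) above.
import numpy as np

def kalman_filter(prelim, a=0.9, q=0.5, r=1.0, x0=0.0, p0=10.0):
    """One-dimensional Kalman filter: state = final data, observation = preliminary data."""
    x, p = x0, p0
    filtered = []
    for z in prelim:
        # Predict the final figure from the previous estimate (given data up to t-1)
        x_pred, p_pred = a * x, a * a * p + q
        # Update with the current preliminary figure (given data up to t)
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + k * (z - x_pred)
        p = (1.0 - k) * p_pred
        filtered.append(x)
    return np.array(filtered)

rng = np.random.default_rng(1)
final = np.cumsum(rng.normal(0.2, 0.5, size=40))   # hypothetical "final" series
prelim = final + rng.normal(0.0, 1.0, size=40)     # noisy preliminary releases
print(kalman_filter(prelim)[-5:])                  # filtered estimates of the final data
```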

7.
The results of recent replication studies suggest that false positive findings are a big problem in empirical finance. We contribute to this debate by reviewing a sample of articles dealing with the short-term directional forecasting of the prices of stocks, commodities, and currencies. Screening all relevant articles published in 2016 by one of the 96 journals covered by the Social Sciences Citation Index in the category “Business, Finance,” we select only those studies that use easily accessible data of daily or higher frequency. We examine each study in detail, from the selection of the dataset to the interpretation of the results. We also include empirical analyses to illustrate the shortcomings of certain approaches. There are three main findings from our review. First, the number of selected papers is very low, which is surprising even when the strict selection criteria are taken into account. Second, there are hardly any relevant studies that use high-frequency data—despite the hype about financial big data and machine learning. Third, the economic significance of the findings—for example, their usefulness for trading purposes—is questionable. In general, apparently good forecasting performance does not translate into profitability once realistic transaction costs and the effect of data snooping are taken into account. Other typical problems include unsuitable benchmarks, short evaluation periods, and nonoperational trading strategies.
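The review's point about economic significance can be illustrated directly: a directional signal with modest above-chance accuracy may still be unprofitable once transaction costs are charged. The sketch below uses simulated returns, an assumed 53% hit rate, and an assumed cost per unit of notional traded; none of it comes from the reviewed studies.

```python
# Generic illustration (not from the reviewed studies): gross vs. net performance
# of a daily directional strategy once proportional transaction costs are charged.
import numpy as np

rng = np.random.default_rng(2)
n_days = 2000
returns = rng.normal(0.0002, 0.01, size=n_days)      # simulated daily asset returns

# Hypothetical signal with modest skill: correct direction ~53% of the time
signal = np.where(rng.random(n_days) < 0.53, np.sign(returns), -np.sign(returns))

gross = signal * returns
turnover = np.abs(np.diff(signal, prepend=0.0))       # notional traded each day (2 on a flip)
cost_rate = 0.0005                                    # assumed 5 bp per unit of notional traded
net = gross - turnover * cost_rate

print("gross annualized return:", gross.mean() * 252)
print("net annualized return:  ", net.mean() * 252)   # often near zero or negative despite 53% accuracy
```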

8.
This paper examines small sample properties of alternative bias-corrected bootstrap prediction regions for the vector autoregressive (VAR) model. Bias-corrected bootstrap prediction regions are constructed by combining bias-correction of VAR parameter estimators with the bootstrap procedure. The backward VAR model is used to bootstrap VAR forecasts conditionally on past observations. Bootstrap prediction regions based on asymptotic bias-correction are compared with those based on bootstrap bias-correction. Monte Carlo simulation results indicate that bootstrap prediction regions based on asymptotic bias-correction show better small sample properties than those based on bootstrap bias-correction for nearly all cases considered. The former provide accurate coverage properties in most cases, while the latter over-estimate the future uncertainty. Overall, the percentile-t bootstrap prediction region based on asymptotic bias-correction is found to provide highly desirable small sample properties, outperforming its alternatives in nearly all cases. Copyright © 2004 John Wiley & Sons, Ltd.
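For intuition, a bias-corrected bootstrap prediction interval can be sketched in the simplest univariate case, an AR(1), using the standard first-order bias approximation E[phi_hat] ≈ phi − (1 + 3·phi)/T. The backward VAR, the percentile-t refinement, and the multivariate setting of the paper are omitted, so this is an assumption-laden illustration rather than the authors' procedure.

```python
# Illustrative AR(1) sketch of a bias-corrected bootstrap prediction interval.
# The paper works with VAR models, a backward VAR for conditional bootstrapping,
# and percentile-t intervals; none of that is reproduced here.
import numpy as np

rng = np.random.default_rng(3)

def fit_ar1(y):
    """Least-squares AR(1) fit on a demeaned series; returns (phi, centred residuals)."""
    yc = y - y.mean()
    x, z = yc[:-1], yc[1:]
    phi = np.sum(x * z) / np.sum(x * x)
    resid = z - phi * x
    return phi, resid - resid.mean()

def bias_correct(phi, T):
    """First-order correction for E[phi_hat] ~ phi - (1 + 3*phi)/T, kept inside the stationary region."""
    return min(phi + (1.0 + 3.0 * phi) / T, 0.97)

# Simulated data
T, phi_true = 100, 0.8
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()

phi_hat, resid = fit_ar1(y)
phi_bc = bias_correct(phi_hat, T)
mu, last = y.mean(), y[-1] - y.mean()

# Bootstrap the one-step-ahead forecast, re-estimating (and re-correcting) phi each time
B = 500
fc = np.empty(B)
for b in range(B):
    e = rng.choice(resid, size=T)
    yb = np.zeros(T)
    for t in range(1, T):
        yb[t] = phi_bc * yb[t - 1] + e[t]
    phi_b, _ = fit_ar1(yb)
    phi_b = bias_correct(phi_b, T)
    fc[b] = mu + phi_b * last + rng.choice(resid)

lo, hi = np.percentile(fc, [2.5, 97.5])
print(f"95% bias-corrected bootstrap prediction interval: [{lo:.2f}, {hi:.2f}]")
```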

9.
Bankruptcy prediction methods based on a semiparametric logit model are proposed for simple random (prospective) and case–control (choice-based; retrospective) data. The unknown parameters and prediction probabilities in the model are estimated by the local likelihood approach, and the resulting estimators are analyzed through their asymptotic biases and variances. The semiparametric bankruptcy prediction methods using these two types of data are shown to be essentially equivalent. Thus our proposed prediction model can be directly applied to data sampled from the two important designs. One real data example and simulations confirm that our prediction method is more powerful than alternatives, in the sense of yielding smaller out-of-sample error rates. Copyright © 2007 John Wiley & Sons, Ltd.
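A rough stand-in for local-likelihood estimation of a logit model is a kernel-weighted logistic regression refit at each evaluation point, as sketched below on simulated data; the kernel, bandwidth, and single-covariate setup are assumptions, and the paper's case-control (choice-based) sampling adjustments are not shown.

```python
# Rough illustration of local-likelihood logistic estimation (an assumption-laden
# stand-in for the paper's semiparametric estimator): at each evaluation point the
# logit model is refit with Gaussian kernel weights centred on that point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=(n, 1))                                    # a single simulated financial ratio
p_true = 1.0 / (1.0 + np.exp(-(np.sin(2 * x[:, 0]) - 0.5)))    # non-linear true logit
y = rng.binomial(1, p_true)                                    # 1 = bankrupt, 0 = healthy (simulated)

def local_logit_prob(x0, bandwidth=0.5):
    """Predicted bankruptcy probability at x0 from a kernel-weighted logit fit."""
    w = np.exp(-0.5 * ((x[:, 0] - x0) / bandwidth) ** 2)       # Gaussian kernel weights
    model = LogisticRegression().fit(x, y, sample_weight=w)
    return model.predict_proba([[x0]])[0, 1]

for x0 in (-1.0, 0.0, 1.0):
    print(f"P(bankrupt | ratio={x0:+.1f}) ~ {local_logit_prob(x0):.2f}")
```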

10.
Case-based reasoning (CBR) is a very effective and easily understandable method for solving real-world problems. Business failure prediction (BFP) is a forecasting tool that helps people make more precise decisions, and CBR-based BFP is a hot topic amid today's global financial crisis. Case representation is critical when forecasting business failure with CBR. This research describes a pioneering investigation of hybrid case representation that employs principal component analysis (PCA), a feature extraction method, along with stepwise multivariate discriminant analysis (MDA), a feature selection approach. In this process, sample cases are first represented with all available financial ratios, i.e., features. Next, stepwise MDA is used to select optimal features and produce a reduced case representation. Finally, PCA is employed to extract the final information representing the sample cases. All data produced by the hybrid case representation are recorded in a case library, and the k-nearest-neighbor algorithm is used to make the forecast. Thus we constructed a hybrid CBR (HCBR) by integrating hybrid case representation into the forecasting tool. We empirically tested the performance of HCBR with data collected for short-term BFP of Chinese listed companies. Empirical results indicated that HCBR can produce more promising prediction performance than MDA, logistic regression, classical CBR, and support vector machines. Copyright © 2009 John Wiley & Sons, Ltd.
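A minimal sketch of the hybrid case representation pipeline appears below, with sklearn's sequential forward selection around a linear discriminant classifier standing in for stepwise MDA, followed by PCA extraction and k-nearest-neighbour retrieval; the simulated "financial ratios", the number of selected features and components, and the train/test split are all assumptions.

```python
# Sketch of the hybrid case representation: forward stepwise selection with a
# discriminant-analysis scorer (standing in for stepwise MDA), then PCA feature
# extraction, then k-NN retrieval. Simulated financial ratios, not the authors' data.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Simulated "financial ratios" for healthy (0) vs. failing (1) firms
X, y = make_classification(n_samples=400, n_features=30, n_informative=8,
                           weights=[0.7, 0.3], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

hcbr = make_pipeline(
    StandardScaler(),
    SequentialFeatureSelector(LinearDiscriminantAnalysis(),          # feature selection
                              n_features_to_select=10, direction="forward"),
    PCA(n_components=5),                                             # feature extraction
    KNeighborsClassifier(n_neighbors=5),                             # case retrieval / vote
)
hcbr.fit(X_train, y_train)
print("out-of-sample accuracy:", hcbr.score(X_test, y_test))
```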

11.
In this research we analyze a new approach to demand prediction. In the market studied, that of the performing arts, observed demand is limited by the capacity of the house, so demand censoring must be accounted for to obtain unbiased estimates of the demand function parameters. The presence of consumer segments with different purposes for going to the theater and different willingness-to-pay for performance and ticket characteristics creates heterogeneity in theater demand. We propose an estimator for demand prediction that accounts for both demand censoring and preference heterogeneity. The estimator is based on classification and regression trees with bagging (bootstrap aggregation of predictions), extended to the prediction of censored data. Our algorithm produces and combines predictions for both the discrete and the continuous parts of the censored data. We show that our estimator achieves better prediction accuracy than estimators that account for either censoring or heterogeneity alone. The proposed approach is helpful for finding product segments and for optimal price setting.

12.
Econometric prediction accuracy for personal income forecasts is examined for a region of the United States. Previously published regional structural equation model (RSEM) forecasts exist ex ante for the state of New Mexico and its three largest metropolitan statistical areas: Albuquerque, Las Cruces and Santa Fe. Quarterly data between 1983 and 2000 are utilized at the state level. For Albuquerque, annual data from 1983 through 1999 are used. For Las Cruces and Santa Fe, annual data from 1990 through 1999 are employed. Univariate time series, vector autoregressions and random walks are used as the comparison criteria against structural equation simulations. Results indicate that ex ante RSEM forecasts achieved higher accuracy than those simulations associated with univariate ARIMA and random walk benchmarks for the state of New Mexico. The track records of the structural econometric models for Albuquerque, Las Cruces and Santa Fe are less impressive. In some cases, VAR benchmarks prove more reliable than RSEM income forecasts. In other cases, the RSEM forecasts are less accurate than random walk alternatives. Copyright © 2005 John Wiley & Sons, Ltd.
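As a small illustration of the benchmark side of such comparisons, the sketch below computes out-of-sample RMSE for a random-walk-with-drift forecast and a univariate ARIMA forecast on a simulated trending income series; the RSEM itself, the New Mexico data, and the chosen ARIMA order are not from the study.

```python
# Benchmark comparison sketch (simulated income series, not the New Mexico data):
# out-of-sample RMSE of a random-walk-with-drift forecast vs. a univariate ARIMA benchmark.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)
y = 100 + np.cumsum(rng.normal(0.5, 1.0, size=80))    # trending "personal income" series
train, test = y[:60], y[60:]

# Random-walk-with-drift benchmark
drift = np.mean(np.diff(train))
rw_fc = train[-1] + drift * np.arange(1, len(test) + 1)

# ARIMA(1,1,0) benchmark (illustrative order)
arima_fc = ARIMA(train, order=(1, 1, 0)).fit().forecast(steps=len(test))

rmse = lambda f: np.sqrt(np.mean((test - f) ** 2))
print("random walk with drift RMSE:", rmse(rw_fc))
print("ARIMA(1,1,0) RMSE:          ", rmse(arima_fc))
```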

13.
We present a methodology for estimation, prediction, and model assessment of vector autoregressive moving-average (VARMA) models in the Bayesian framework using Markov chain Monte Carlo algorithms. The sampling-based Bayesian framework for inference allows for the incorporation of parameter restrictions, such as stationarity restrictions or zero constraints, through appropriate prior specifications. It also facilitates extensive posterior and predictive analyses through the use of numerical summary statistics and graphical displays, such as box plots and density plots for estimated parameters. We present a method for computationally feasible evaluation of the joint posterior density of the model parameters using the exact likelihood function, and discuss the use of backcasting to approximate the exact likelihood function in certain cases. We also show how to incorporate indicator variables as additional parameters for use in coefficient selection. The sampling is facilitated through a Metropolis–Hastings algorithm. Graphical techniques based on predictive distributions are used for informal model assessment. The methods are illustrated using two data sets from business and economics. The first example consists of quarterly fixed investment, disposable income, and consumption rates for West Germany, which are known to have correlation and feedback relationships between series. The second example consists of monthly revenue data from seven different geographic areas of IBM. The revenue data exhibit seasonality, strong inter-regional dependence, and feedback relationships between certain regions. © 1997 John Wiley & Sons, Ltd.
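The flavour of the sampling-based approach can be conveyed with a minimal random-walk Metropolis sampler for the posterior of a single AR(1) coefficient, as below; the flat prior, known error variance, and conditional likelihood are simplifying assumptions, and the paper's VARMA models, exact likelihood, and indicator-variable coefficient selection are not reproduced.

```python
# Minimal random-walk Metropolis sketch for the posterior of an AR(1) coefficient
# (known unit noise variance, flat prior on the stationary region). Not the paper's
# VARMA treatment, exact likelihood, or indicator-variable selection.
import numpy as np

rng = np.random.default_rng(6)

# Simulated series
T, phi_true = 200, 0.5
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi_true * y[t - 1] + rng.normal()

def log_post(phi):
    """Conditional Gaussian log-likelihood plus a flat prior on (-1, 1)."""
    if abs(phi) >= 1.0:
        return -np.inf
    e = y[1:] - phi * y[:-1]
    return -0.5 * np.sum(e ** 2)

draws, phi, lp = [], 0.0, log_post(0.0)
for _ in range(5000):
    prop = phi + rng.normal(0.0, 0.1)            # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:      # Metropolis accept/reject step
        phi, lp = prop, lp_prop
    draws.append(phi)

draws = np.array(draws[1000:])                   # drop burn-in
print("posterior mean:", draws.mean(), "95% interval:", np.percentile(draws, [2.5, 97.5]))
```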

14.
Many stock investors make investment decisions based on stock-price-related chip indicators. However, in addition to quantified data, financial news often has a nonnegligible impact on stock price. Nowadays, as new reviews are posted daily on social media, there may be value in using web opinions to improve the performance of stock price prediction. To this end, we use logistic regression to screen the chip indicators and establish a basic stock price prediction model. Then, we employ text mining technology to quantify the unstructured data of social media opinions on stock-related news into sentiment scores, which are found to correlate significantly with the extent of stock price changes. Based on the finding that the higher the sentiment scores, the lower the prediction accuracy of the logistic regression model, we propose an improved prediction approach that integrates sentiment scores into the logistic regression model. Our results show that the proposed model can improve the prediction accuracy for stock prices, and can thus provide a new reference for investment strategies for stock investors.
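A toy version of the augmentation idea is sketched below: a logistic regression on simulated chip indicators is compared with one that also includes a simulated sentiment score. The feature names, the data-generating process, and the accuracy comparison are illustrative assumptions, not the authors' dataset or results.

```python
# Illustrative sketch (simulated data, hypothetical feature names): a logistic
# regression on chip indicators augmented with a news-sentiment score, predicting
# whether the stock closes up the next day. Not the authors' dataset or model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
chip = rng.normal(size=(n, 3))                       # e.g. institutional buy/sell, margin balance
sentiment = rng.normal(size=n)                       # quantified opinion score from news/posts
logit = 0.8 * chip[:, 0] - 0.5 * chip[:, 1] + 1.2 * sentiment
up_next_day = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([chip, sentiment])
X_tr, X_te, y_tr, y_te = train_test_split(X, up_next_day, random_state=0)

base = LogisticRegression().fit(X_tr[:, :3], y_tr)   # chip indicators only
full = LogisticRegression().fit(X_tr, y_tr)          # chip indicators + sentiment score
print("chip-only accuracy:       ", base.score(X_te[:, :3], y_te))
print("chip + sentiment accuracy:", full.score(X_te, y_te))
```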

15.
Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors because it is difficult to establish a cause–effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy. Recent research indicates that auditors only signal bankruptcy in about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a relatively new approach for dealing with apparent indiscernibility between objects in a set; in two recent studies it achieved reported bankruptcy prediction accuracies ranging from 76% to 88%. These accuracy levels appear to be superior to auditor signalling rates; however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non-current data. This study advances research in this area by comparing rough set prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991 to 1997 time period. Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors that both had significant theoretical support and were present in multiple studies. These factors were expressed as variables, and data for the 11 variables were then obtained for 146 bankrupt United States public companies during the years 1991–1997. This sample was then matched in terms of size and industry to 145 non-bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables drawn from the 11 possible predictive variables. The rough sets models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going-concern problems via opinion modifications for only 54% of the bankrupt companies, although the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going-concern issues were included. In contrast with prior rough sets research, which suggested that rough sets theory offered significant predictive improvements for auditors, the rough sets models developed in this research did not provide any significant advantage in prediction accuracy over the actual auditors' methodologies. The current results should be fairly robust, since this research employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real-world bankruptcy/non-bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
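The core rough-sets constructions (indiscernibility classes and lower/upper approximations) can be shown on a toy decision table, as below; the two discretized "ratios" and the example firms are hypothetical and unrelated to the study's 11 variables.

```python
# Toy rough-sets illustration (hypothetical discretized ratios, not the study's 11
# variables): indiscernibility classes over the condition attributes and the lower /
# upper approximations of the set of bankrupt firms.
from collections import defaultdict

# (firm, (liquidity, leverage), bankrupt?)
table = [
    ("A", ("low",  "high"), True),
    ("B", ("low",  "high"), True),
    ("C", ("low",  "high"), False),   # indiscernible from A and B, yet not bankrupt
    ("D", ("high", "low"),  False),
    ("E", ("high", "high"), True),
    ("F", ("high", "low"),  False),
]

# Indiscernibility classes: firms with identical condition-attribute values
classes = defaultdict(set)
for firm, cond, _ in table:
    classes[cond].add(firm)

bankrupt = {firm for firm, _, flag in table if flag}

lower = set().union(*(c for c in classes.values() if c <= bankrupt))    # certainly bankrupt
upper = set().union(*(c for c in classes.values() if c & bankrupt))     # possibly bankrupt

print("lower approximation:", sorted(lower))    # ['E']
print("upper approximation:", sorted(upper))    # ['A', 'B', 'C', 'E']
print("boundary region:    ", sorted(upper - lower))
```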

16.
An early example is von Neumann and Charney's Princeton Meteorological Project of 1946–53, which ended with daily numerical prediction produced in less than 2 hours. After this stage, the questions of long-range forecasting and of the general circulation of the atmosphere became more important. The late 1950s saw the emergence of an alternative: were atmospheric models to be used mainly for prediction or for understanding? This controversial debate surfaced in particular at an important colloquium in Tokyo in 1960, which gathered together J. Charney, E. Lorenz, A. Eliassen, and B. Saltzman, among others, and witnessed discussions on statistical methods for prediction and/or maximum simplification of the dynamic equations. This phase ended in 1963 with Lorenz's seminal paper "Deterministic Nonperiodic Flow." (Received February 11, 2000)

17.
By means of a novel time-dependent cumulated variation penalty function, a new class of real-time prediction methods is developed to improve the prediction accuracy of time series exhibiting irregular periodic patterns: in particular, breathing-motion data of patients during robotic radiation therapy. It is illustrated that, for both simulated and empirical data involving changes in mean, trend, and amplitude, the proposed methods outperform existing forecasting methods based on support vector machines and artificial neural networks in terms of prediction accuracy. Moreover, the proposed methods are designed so that real-time updates can be performed efficiently, with O(1) computational complexity upon the arrival of a new signal and without repeatedly scanning the old data.

18.
Since growth curves are often used to produce medium- to long-term forecasts for planning purposes, it is obviously of value to be able to associate an interval with the forecast trend. The problems in producing prediction intervals are well described by Chatfield. The additional problems in this context are the intrinsic non-linearity of the estimation procedure and the requirement for a prediction region rather than a single interval. The approaches considered are a Taylor expansion of the variance of the forecast values, an examination of the joint density of the parameter estimates, and bootstrapping. The performance of the resultant intervals is examined using simulated data sets. Prediction intervals for real data are produced to demonstrate their practical value.
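The bootstrapping approach can be sketched for a single logistic growth curve: fit the curve, resample residuals, refit, and take percentile bands for a future point, as below. The logistic functional form, the simulated data, and the residual-bootstrap scheme are assumptions, and the sketch gives only a single interval rather than the joint prediction region discussed in the paper.

```python
# Sketch of a bootstrap forecast interval for a growth curve: fit a logistic curve,
# resample residuals, refit, and read off percentile bands at a future time point.
# Simulated data; a simplified stand-in for the approaches compared in the paper.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(8)

def logistic(t, K, r, t0):
    """Logistic (S-shaped) growth curve."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 20, dtype=float)
y = logistic(t, K=100.0, r=0.5, t0=10.0) + rng.normal(0.0, 3.0, size=t.size)

p0 = (y.max(), 0.3, t.mean())                            # rough starting values
params, _ = curve_fit(logistic, t, y, p0=p0, maxfev=10000)
resid = y - logistic(t, *params)

t_future = 25.0
boot = []
for _ in range(1000):
    y_b = logistic(t, *params) + rng.choice(resid, size=t.size)    # residual bootstrap
    p_b, _ = curve_fit(logistic, t, y_b, p0=params, maxfev=10000)
    boot.append(logistic(t_future, *p_b) + rng.choice(resid))      # add future noise draw

print("point forecast:", logistic(t_future, *params))
print("95% bootstrap prediction interval:", np.percentile(boot, [2.5, 97.5]))
```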

19.
A methodology is presented for estimating high-frequency values of an unobserved multivariate time series from its low-frequency values and from related information. This is an optimal solution, in the multivariate setting, to the problem of ex post prediction, disaggregation, benchmarking or signal extraction of an unobservable stochastic process. The problem of extrapolation or ex ante prediction is also solved optimally and, in this context, statistical tests are developed for checking online for the occurrence of extreme values of the unobserved time series and for the consistency of future benchmarks with present and past observed information. The procedure is based on structural or unobserved-component models, whose assumptions and specification are validated with the data alone. Copyright © 2007 John Wiley & Sons, Ltd.

20.
A combination of VAR estimation and state space model reduction techniques is examined by Monte Carlo methods in order to find good, simple-to-use procedures for determining models with reasonable prediction properties. The presentation is largely graphical; this helps focus attention on the aspects of the model determination problem that are relatively important for forecasting. One surprising result is that, for prediction purposes, knowledge of the true structure of the model generating the data is not particularly useful unless the parameter values are also known. This is because the difficulty of estimating the parameters of the true model causes more prediction error than results from a more parsimonious approximate model.
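The parsimony result can be mimicked in a few lines of Monte Carlo, as sketched below: with a short sample and a weak second lag, an estimated AR(1) approximation often forecasts an AR(2) process about as well as the correctly specified AR(2) with estimated coefficients, while the AR(2) with known parameters does best. The process parameters, sample size, and univariate AR framing (rather than the paper's VAR/state-space reduction) are assumptions.

```python
# Monte Carlo sketch of the parsimony point (illustrative settings, not the paper's
# state-space experiments): compare one-step forecast MSE for the true AR(2) with
# known parameters, the estimated AR(2), and a parsimonious estimated AR(1).
import numpy as np

rng = np.random.default_rng(9)
phi1, phi2, T, reps = 0.5, 0.15, 30, 2000

def fit_ar(y, p):
    """Least-squares AR(p) fit (no intercept); returns the coefficient vector."""
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    return np.linalg.lstsq(X, y[p:], rcond=None)[0]

err = {"known AR(2)": [], "estimated AR(2)": [], "estimated AR(1)": []}
for _ in range(reps):
    e = rng.normal(size=T + 1)
    y = np.zeros(T + 1)
    for t in range(2, T + 1):
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + e[t]
    train, future = y[:T], y[T]
    c2 = fit_ar(train, 2)
    c1 = fit_ar(train, 1)
    err["known AR(2)"].append(future - (phi1 * train[-1] + phi2 * train[-2]))
    err["estimated AR(2)"].append(future - (c2[0] * train[-1] + c2[1] * train[-2]))
    err["estimated AR(1)"].append(future - c1[0] * train[-1])

for name, errors in err.items():
    print(f"{name:16s} forecast MSE: {np.mean(np.square(errors)):.3f}")
```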
