Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Conventional wisdom holds that restrictions on low-frequency dynamics among cointegrated variables should provide more accurate short- to medium-term forecasts than univariate techniques that contain no such information, even though, on standard accuracy measures, the information may not improve long-term forecasting. But inconclusive empirical evidence is complicated by confusion about an appropriate accuracy criterion and about the role of integration and cointegration in forecasting accuracy. We evaluate the short- and medium-term forecasting accuracy of univariate Box–Jenkins ARIMA techniques, which imply only integration, against multivariate cointegration models, which contain both integration and cointegration, for a system of five cointegrated Asian exchange rate time series. We use a rolling-window technique to make multiple out-of-sample forecasts from one to forty steps ahead. Relative forecasting accuracy for individual exchange rates appears to be sensitive to the behaviour of the exchange rate series and the forecast horizon length. Over short horizons, ARIMA model forecasts are more accurate for series with moving-average terms of order >1. ECMs perform better over medium-term horizons for series with no moving-average terms. The results suggest a need to distinguish between 'sequential' and 'synchronous' forecasting ability in such comparisons. Copyright © 2002 John Wiley & Sons, Ltd.
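The rolling-window (rolling-origin) evaluation described in this abstract can be sketched in a few lines. This is an illustrative sketch only: the function names and toy exchange-rate series are hypothetical, and a naive random-walk forecaster stands in for the fitted ARIMA and ECM models.

```python
import math

def rolling_window_eval(series, window, max_h, forecaster):
    """Rolling-origin evaluation: refit on each window, forecast 1..max_h
    steps ahead, and collect squared errors per horizon."""
    sq_err = {h: [] for h in range(1, max_h + 1)}
    for origin in range(window, len(series) - max_h + 1):
        history = series[origin - window:origin]
        fcst = forecaster(history, max_h)  # list of max_h forecasts
        for h in range(1, max_h + 1):
            sq_err[h].append((series[origin + h - 1] - fcst[h - 1]) ** 2)
    # root mean squared error per horizon
    return {h: math.sqrt(sum(e) / len(e)) for h, e in sq_err.items()}

def random_walk(history, max_h):
    # naive no-change forecast: last observed value at every horizon
    return [history[-1]] * max_h

# toy upward-drifting "exchange rate" series (hypothetical data)
rates = [1.00, 1.02, 1.01, 1.03, 1.05, 1.04, 1.06, 1.08, 1.07, 1.09,
         1.11, 1.10, 1.12, 1.14, 1.13, 1.15]
rmse = rolling_window_eval(rates, window=8, max_h=3, forecaster=random_walk)
```

For a drifting series, the random walk's RMSE grows with the horizon, which is why horizon length matters when comparing forecasters.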

2.
Issuing a going-concern opinion is a difficult and complex task for auditors, who must weigh several critical factors in order to make the right decision based on information obtained during the audit. This study adopts the "random forest" approach, an ensemble method, to assist auditors in making this decision. To evaluate the proposed approach, we conduct a series of experiments and a performance comparison. The results show that the random forest method outperforms the baseline methods in terms of accuracy rate, ROC area, kappa value, type II error, precision, and recall. The proposed approach proves more accurate and stable than previous methods.

3.
This study proposes Gaussian processes to forecast daily hotel occupancy at a city level. Unlike other studies in the tourism demand prediction literature, the hotel occupancy rate is predicted on a daily basis and 45 days ahead of time using online hotel room price data. A predictive framework is introduced that highlights feature extraction and selection of the independent variables. This approach shows that the dependence on internal hotel occupancy data can be removed by making use of a proxy measure for hotel occupancy rate at a city level. Six forecasting methods are investigated, including linear regression, autoregressive integrated moving average and recent machine learning methods. The results indicate that Gaussian processes offer the best tradeoff between accuracy and interpretation by providing prediction intervals in addition to point forecasts. It is shown how the proposed framework improves managerial decision making in tourism planning.

4.
This paper examines a strategy for structuring one type of domain knowledge for use in extrapolation. It does so by representing information about causality and using this domain knowledge to select and combine forecasts. We use five categories to express causal impacts upon trends: growth, decay, supporting, opposing, and regressing. An identification of causal forces aided in the determination of weights for combining extrapolation forecasts. These weights improved average ex ante forecast accuracy when tested on 104 annual economic and demographic time series. Gains in accuracy were greatest when (1) the causal forces were clearly specified and (2) stronger causal effects were expected, as in longer-range forecasts. One rule suggested by this analysis was: 'Do not extrapolate trends if they are contrary to causal forces.' We tested this rule by comparing forecasts from a method that implicitly assumes supporting trends (Holt's exponential smoothing) with forecasts from the random walk. Use of the rule improved accuracy for 20 series where the trends were contrary; the MdAPE (Median Absolute Percentage Error) was 18% less for the random walk on 20 one-year-ahead forecasts and 40% less for 20 six-year-ahead forecasts. We then applied the rule to four other data sets. Here, the MdAPE for the random walk forecasts was 17% less than Holt's error for 943 short-range forecasts and 43% less for 723 long-range forecasts. Our study suggests that the causal assumptions implicit in traditional extrapolation methods are inappropriate for many applications.
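The MdAPE criterion and the comparison above between Holt's exponential smoothing (which implicitly extrapolates the recent trend as "supporting") and the random walk can be illustrated as follows. The function names and the toy series are illustrative, not from the study.

```python
import statistics

def holt(history, alpha=0.5, beta=0.3):
    """Holt's linear exponential smoothing, one-step-ahead forecast.
    It implicitly assumes the recent trend will continue."""
    level, trend = history[0], history[1] - history[0]
    for y in history[1:]:
        prev = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + trend

def mdape(actuals, forecasts):
    """Median Absolute Percentage Error, the accuracy measure above."""
    return statistics.median(abs(a - f) / abs(a) * 100
                             for a, f in zip(actuals, forecasts))

# toy series whose causal forces say "decay" but which zigzags upward
# every other step, so Holt's extrapolated trend is often contrary
series = [100, 96, 99, 94, 97, 92, 95, 90, 93, 88, 91, 86]
actuals, holt_f, rw_f = [], [], []
for t in range(6, len(series)):
    actuals.append(series[t])
    holt_f.append(holt(series[:t]))   # trend-extrapolating forecast
    rw_f.append(series[t - 1])        # random-walk (no-change) forecast

holt_err, rw_err = mdape(actuals, holt_f), mdape(actuals, rw_f)
```

For series moving against clearly specified causal forces, the study found the random walk's MdAPE substantially lower; the toy data here only shows the mechanics of the comparison.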

5.
Administration of melatonin in the drinking water (200 µg/ml in 1% ethanol) decreased the time of re-entrainment of the circadian rhythm of the metabolic rate (measured as oxygen uptake) of domestic canaries (Serinus canaria) after 10-h delay phase shifts of the light-dark (LD) cycle by 1.3 days on average. Associated with faster re-entrainment, the amplitude of the metabolic rhythm was attenuated by 46% on average on the first day after the shift, as compared with about 25% in the controls. After re-entrainment, the amplitude of the metabolic rhythm during melatonin administration was about 23% lower than in the controls. The minimum resting metabolic rate increased by ca. 5% on average during treatment with melatonin. The results are consistent with the hypothesis that constant high plasma levels of melatonin act on higher levels of the circadian oscillatory system rather than directly affecting peripheral or central photoreceptors.

6.
Within a conditional predictive ability test framework, we investigate whether market factors influence the relative conditional predictive ability of realized measures (RMs) and implied volatility (IV). This framework makes it possible to examine the asynchronism in their forecasting accuracy, and further to analyze their unconditional performance in volatility forecasting. Our results show that the asynchronism is statistically significant and strongly related to certain market factors, and that the resulting comparison of the average forecast performance of RMs and IV is more informative than in previous studies. Finally, we use these factors to extend the empirical similarity (ES) approach for combining forecasts derived from RMs and IV.

7.
The conventional growth rate measures (such as month-on-month and year-on-year growth rates, and the 6-month smoothed annualized rate adopted by the US Bureau of Labor Statistics and the Economic Cycle Research Institute) are popular and can be easily obtained by computing the growth rate for monthly data against a fixed comparison benchmark, although they do not make good use of the information underlying the economic series. Focusing on monthly data, this paper proposes the k-month kernel-weighted annualized rate (k-MKAR), which includes most existing growth rate measures as special cases. The proposed k-MKAR measure involves the selection of smoothing parameters that are associated with the accuracy and timeliness of detecting changes in business turning points. That is, the comparison base is flexible and is likely to vary for different series under consideration. A data-driven procedure for choosing the smoothing parameters, based on the stepwise multiple reality check test, is also suggested. A simple numerical evaluation and a Monte Carlo experiment confirm that our measures (in particular the two-parameter k-MKAR) improve timeliness subject to a certain degree of accuracy. The business cycle signals issued by the Council for Economic Planning and Development in Taiwan over the period 1998 to 2009 are taken as an example to illustrate the empirical application of our method. The empirical results show that the k-MKAR-based score lights reflect turning points earlier than the conventional year-on-year measure without sacrificing accuracy. Copyright © 2016 John Wiley & Sons, Ltd.
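One way to picture a kernel-weighted comparison base is sketched below. This is a hypothetical illustration only: the paper's actual k-MKAR definition, kernel, and annualization may differ. The point of the sketch is that with k = 1 a kernel-weighted rate collapses to the familiar annualized month-on-month rate, while larger k smooths the comparison base over several preceding months.

```python
import math

def mkar(series, t, k, bandwidth=2.0):
    """Hypothetical k-month kernel-weighted annualized rate: compare month t
    with a Gaussian-kernel-weighted average of the k preceding months, then
    annualize using the kernel's mean lag (in months)."""
    weights = [math.exp(-((j + 1) / bandwidth) ** 2) for j in range(k)]
    wsum = sum(weights)
    base = sum(w * series[t - 1 - j] for j, w in enumerate(weights)) / wsum
    mean_lag = sum((j + 1) * w for j, w in enumerate(weights)) / wsum
    return ((series[t] / base) ** (12.0 / mean_lag) - 1.0) * 100.0

# toy monthly index growing a steady 1% per month (hypothetical data)
monthly = [100 * 1.01 ** i for i in range(24)]
r1 = mkar(monthly, t=12, k=1)  # reduces to the annualized month-on-month rate
r3 = mkar(monthly, t=12, k=3)  # smoothed base, similar rate for steady growth
```

For steady growth both settings give roughly the annualized 1%-per-month rate (about 12.7%); for noisy series, larger k trades timeliness for stability, which is the tension the paper's data-driven parameter choice addresses.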

8.
Based on the physicochemical properties of amino acids, seven feature sets are extracted from non-homologous protein sequences by combining amino acid composition with autocorrelation functions. A dynamic feature selection algorithm based on local accuracy (DFS_LA) is then used to combine multiple features for predicting protein structural classes, and the results are compared with those of the individual feature sets. The results show that the overall prediction accuracy of the DFS_LA algorithm improves on every individual feature set to varying degrees. Under the jackknife test, the overall prediction accuracy of DFS_LA is 82.80%, an improvement of 8.91% over the COMP feature set; under the independent test, it is 86.67%, an improvement of 11.67% over COMP. This indicates that the DFS_LA algorithm can effectively improve structural class prediction accuracy, and that combining multiple features captures more of the spatial structure information of proteins.

9.
Economists, like other forecasters, share knowledge, data and theories in common. Consequently, their forecast errors are likely to be highly dependent. This paper reports on an empirical study of 16 macroeconomic forecasters. Composite forecasts are computed using a sequential weighting scheme that takes dependence into account; these are compared to a simple average and median forecasts. A within-sample composite is also calculated. Both these methods perform significantly better than the average or median of the forecasts. This improvement in accuracy is apparently because the dependence between the forecasters' errors is so high that the optimal composite forecasts sometimes lie outside the range of the individual forecasts.
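The key mechanism in this abstract, that accounting for error dependence can push the optimal composite outside the range of the individual forecasts, is easy to see with the textbook minimum-variance weights for two forecasters (a generic sketch, not the paper's sequential scheme):

```python
def min_variance_weight(var1, var2, cov):
    """Weight on forecaster 1 that minimizes Var(w*e1 + (1-w)*e2),
    where e1, e2 are the two forecasters' errors."""
    return (var2 - cov) / (var1 + var2 - 2 * cov)

# independent errors: the optimal weight stays between 0 and 1,
# so the composite lies between the two forecasts
w_indep = min_variance_weight(1.0, 4.0, 0.0)

# highly dependent errors (correlation 0.9): the optimal weight
# exceeds 1, so the composite leaves the forecasts' range
w_dep = min_variance_weight(1.0, 4.0, 1.8)
f1, f2 = 2.0, 3.0
composite = w_dep * f1 + (1 - w_dep) * f2  # below both forecasts
```

With strongly correlated errors the combination effectively shorts the worse forecaster to cancel the shared error component, which is exactly why a dependence-aware composite can beat the simple average or median.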

10.
Ropalidia marginata is a primitively eusocial polistine wasp in which, although there is only one queen at any given time, frequent queen replacements lead to a system of serial polygyny. One of the most striking features of this system is the enormous variation in the success of different queens. Measuring queen success as queen tenure, total number of offspring produced, number of offspring produced per day of tenure, and proportion of eggs laid that develop into adults, we show here that each measure of queen success is correlated with worker-brood genetic relatedness and not correlated with the worker:brood ratio or the age of the queen at takeover. We interpret these results as meaning that queens are better able to obtain the cooperation of workers when worker-brood genetic relatedness is high.

11.
We propose an economically motivated forecast combination strategy in which model weights are related to portfolio returns obtained by a given forecast model. An empirical application based on an optimal mean–variance bond portfolio problem is used to highlight the advantages of the proposed approach with respect to combination methods based on statistical measures of forecast accuracy. We compute average net excess returns, standard deviation, and the Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return-based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return-based approaches that dynamically select only the model with highest weight each period and discard all other models delivered even better results, evidencing not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect best-performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.

12.
We investigate the accuracy of capital investment predictors from a national business survey of South African manufacturing. Based on data available to correspondents at the time of survey completion, we propose variables that might inform the confidence that can be attached to their predictions. Having calibrated the survey predictors' directional accuracy, we model the probability of a correct directional prediction using logistic regression with the proposed variables. For point forecasting, we compare the accuracy of rescaled survey forecasts with time series benchmarks and some survey/time series hybrid models. In addition, using the same set of variables, we model the magnitude of survey prediction errors. Directional forecast tests showed that three out of four survey predictors have value but are biased and inefficient. For shorter horizons we found that survey forecasts, enhanced by time series data, significantly improved point forecasting accuracy. For longer horizons the survey predictors were at least as accurate as alternatives. The usefulness of the more accurate of the predictors examined is enhanced by auxiliary information, namely the probability of directional accuracy and the estimated error magnitude.

13.
We propose a method for improving the predictive ability of standard forecasting models used in financial economics. Our approach is based on the functional partial least squares (FPLS) model, which is capable of avoiding multicollinearity in regression by efficiently extracting information from high-dimensional market data. This property allows us to incorporate auxiliary variables that improve predictive accuracy. We provide an empirical application of our proposed methodology in terms of its ability to predict the conditional average log return and the volatility of crude oil prices via exponential smoothing, Bayesian stochastic volatility, and GARCH (generalized autoregressive conditional heteroskedasticity) models, respectively. In particular, what we call functional data analysis (FDA) traces in this article are obtained via FPLS regression from both the crude oil returns and auxiliary variables, namely the exchange rates of major currencies. For forecast performance evaluation, we compare the out-of-sample forecasting accuracy of the standard models with FDA traces to the accuracy of the same forecasting models with the observed crude oil returns, principal component regression (PCR), and least absolute shrinkage and selection operator (LASSO) models. We find evidence that the standard models with FDA traces significantly outperform the competing models. Finally, the models are also compared using the test for superior predictive ability and the reality check for data snooping. Our empirical results show that our new methodology significantly improves the predictive ability of standard models in forecasting the latent average log return and the volatility of financial time series.

14.
This paper proposes a new approach to forecasting intermittent demand by considering the effects of external factors. We classify intermittent demand data into two parts, zero and nonzero values, and fit the nonzero values with a mixed zero-truncated Poisson model. All the parameters in this model are obtained by an EM algorithm, which regards external factors as independent variables of a logistic regression model and a log-linear regression model. We then calculate the probability of a zero value occurring at each period and predict demand occurrence by comparing it with a critical value. When demand occurs, we use the weighted average of the mixed zero-truncated Poisson model as the predicted nonzero demand, which is combined with the predicted demand occurrences to form the final forecast demand series. Two performance measures are developed to assess the forecasting methods. Through a case study of electric power material from the State Grid Shanghai Electric Power Company in China, we show that our approach forecasts more accurately than the Poisson model, the hurdle shifted Poisson model, the hurdle Poisson model, and Croston's method.
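A minimal sketch of the two-part idea above: fit a zero-truncated Poisson to the nonzero demands (here by simple fixed-point iteration on the truncated mean, not the paper's EM algorithm with covariates) and multiply the conditional mean by the estimated occurrence probability. The helper names and toy demand series are illustrative.

```python
import math

def fit_ztp(nonzero, iters=200):
    """Fit lambda of a zero-truncated Poisson by fixed-point iteration.
    The truncated mean is lam / (1 - exp(-lam)), so at the MLE
    lam = m * (1 - exp(-lam)), where m (> 1) is the sample mean."""
    m = sum(nonzero) / len(nonzero)
    lam = m
    for _ in range(iters):
        lam = m * (1.0 - math.exp(-lam))
    return lam

def forecast_intermittent(demands):
    """Two-part forecast: P(demand occurs) * E[demand | demand > 0]."""
    nonzero = [d for d in demands if d > 0]
    p = len(nonzero) / len(demands)              # occurrence probability
    lam = fit_ztp(nonzero)
    mean_nonzero = lam / (1.0 - math.exp(-lam))  # truncated Poisson mean
    return p * mean_nonzero

demands = [0, 0, 3, 0, 2, 0, 0, 4, 0, 1]  # toy intermittent series
fcst = forecast_intermittent(demands)
```

Separating "does demand occur?" from "how much, given it occurs?" is what lets the full model attach external factors to each part (logistic regression for occurrence, log-linear for size).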

15.
The reduction of ammonium nitrogen pollution in surface runoff by riparian vegetation buffer strips with different vegetation configurations was studied on the banks of the Hun and Pu rivers in the suburbs of Shenyang. The results show that the wider the riparian buffer strip and the higher the ammonium nitrogen concentration in the water, the more pronounced the strip's reduction of ammonium nitrogen in surface runoff; natural vegetation removed ammonium nitrogen pollutants better than planted vegetation; and when pollutant concentration rose, mixed tree-grass configurations showed better reduction rates. The average reduction rates of ammonium nitrogen by planted and natural tree-grass strips were 31% and 25.9%, respectively, with maximum reduction rates of 57.9% and 63.3%; both maxima occurred on the 7 m wide strips, while planted forest alone reduced ammonium nitrogen poorly. These results can guide the ecological protection of natural riparian zones and the construction of planted riparian vegetation strips.

16.
Using a newly developed microcalorimetric approach to assess the rate of energy expenditure for intracellular [Ca2+] homeostasis in isolated muscles at rest, we found this was lower in mdx than in control mouse muscles, by 62% and 29% in soleus and extensor digitorum longus, respectively. Differences in total and Ca2+-dependent rates of specific heat production between mdx and control were enhanced during sustained, KCl-induced stimulation of energy dissipation. These results suggest that the low sarcoplasmic energy status of dystrophic muscles is not due to any excessive energy expenditure for intracellular [Ca2+] homeostasis.

17.
In this paper a high-quality disaggregate database is utilized to examine whether individual forecasters produce efficient exchange rate predictions and also if the properties of the forecasts change when they are combined. The paper links a number of themes in the exchange rate literature and examines various methods of forecast combination. It is demonstrated, inter alia, that some forecasters are better than others, but that most are not as good as a naive no-change prediction. Combining forecasts adds to the accuracy of the predictions, but the gains mainly reflect the removal of systematic and unstable bias.

18.
Both international and US auditing standards require auditors to evaluate the risk of bankruptcy when planning an audit and to modify their audit report if the bankruptcy risk remains high at the conclusion of the audit. Bankruptcy prediction is a problematic issue for auditors, as it is difficult to establish a cause-effect relationship between attributes that may cause or be related to bankruptcy and the actual occurrence of bankruptcy. Recent research indicates that auditors only signal bankruptcy in about 50% of the cases where companies subsequently declare bankruptcy. Rough sets theory is a new approach for dealing with the problem of apparent indiscernibility between objects in a set, and it achieved reported bankruptcy prediction accuracies ranging from 76% to 88% in two recent studies. These accuracy levels appear superior to auditor signalling rates; however, the two prior rough sets studies made no direct comparisons to auditor signalling rates and either employed small sample sizes or non-current data. This study advances research in this area by comparing rough set prediction capability with actual auditor signalling rates for a large sample of United States companies from the 1991 to 1997 time period. Prior bankruptcy prediction research was carefully reviewed to identify 11 possible predictive factors which had both significant theoretical support and were present in multiple studies. These factors were expressed as variables, and data for the 11 variables were then obtained for 146 bankrupt United States public companies during the years 1991-1997. This sample was then matched in terms of size and industry to 145 non-bankrupt companies from the same time period. The overall sample of 291 companies was divided into development and validation subsamples. Rough sets theory was then used to develop two different bankruptcy prediction models, each containing four variables from the 11 possible predictive variables.
The rough sets theory based models achieved 61% and 68% classification accuracy on the validation sample using a progressive classification procedure involving three classification strategies. By comparison, auditors directly signalled going-concern problems via opinion modifications for only 54% of the bankrupt companies. However, the auditor signalling rate for bankrupt companies increased to 66% when other opinion modifications related to going-concern issues were included. In contrast with prior rough sets theory research, which suggested that rough sets theory offered significant bankruptcy predictive improvements for auditors, the rough sets models developed in this research did not provide any significant comparative advantage in prediction accuracy over the actual auditors' methodologies. The current research results should be fairly robust since this rough sets theory based research employed (1) a comparison of the rough sets model results to actual auditor decisions for the same companies, (2) recent data, (3) a relatively large sample size, (4) real world bankruptcy/non-bankruptcy frequencies to develop the variable classifications, and (5) a wide range of industries and company sizes. Copyright © 2003 John Wiley & Sons, Ltd.
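The core rough-sets notions used above, indiscernibility classes and lower/upper approximations, can be sketched in a few lines. The firm names and the two coarse attributes below are hypothetical, not the study's 11 variables.

```python
from collections import defaultdict

def indiscernibility_classes(objects, attrs):
    """Partition objects into classes with identical values on attrs;
    objects within a class are 'apparently indiscernible'."""
    classes = defaultdict(set)
    for name, values in objects.items():
        classes[tuple(values[a] for a in attrs)].add(name)
    return list(classes.values())

def approximations(classes, target):
    """Lower approximation: objects certainly in the concept.
    Upper approximation: objects possibly in it. Their difference is the
    boundary region the attributes cannot resolve."""
    lower, upper = set(), set()
    for c in classes:
        if c & target:
            upper |= c
            if c <= target:
                lower |= c
    return lower, upper

# hypothetical firms described by two coarse financial attributes
firms = {
    "A": {"leverage": "high", "roa": "neg"},
    "B": {"leverage": "high", "roa": "neg"},
    "C": {"leverage": "low",  "roa": "pos"},
    "D": {"leverage": "high", "roa": "pos"},
}
bankrupt = {"A", "D"}
classes = indiscernibility_classes(firms, ("leverage", "roa"))
lower, upper = approximations(classes, bankrupt)
# A and B share identical attribute values but different outcomes, so they
# land in the boundary region: the model can only say they MAY go bankrupt.
```

A rough-sets classifier predicts bankruptcy with certainty only inside the lower approximation, which is why a progressive procedure with fallback strategies is needed for boundary cases.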

19.
Earnings forecasts have received a great deal of attention, much of which has centered on the comparative accuracy of judgmental and objective forecasting methods. Recently, studies have focused on combinations of subjective and objective forecasts to improve forecast accuracy. This research extends that theme by subjectively modifying an objective forecast. Specifically, ARIMA forecasts are judgmentally adjusted by analysts using a structured approach based on Saaty's (1980) analytic hierarchy process. The results show that judgmental adjustment improves the accuracy of the unadjusted objective forecasts.

20.
The computational time complexity analysis of continuous evolutionary algorithms is an open problem in evolutionary computation theory, and few results are available. For the continuous (1+1) EA, this study proposes an average gain model and its analysis method based on the fitness difference function, and develops a theory for computing the expected running time, providing a basis for analyzing the algorithm's computational time complexity. On this basis, the sphere function, a benchmark of wide interest, is taken as the test problem: the average gains of the continuous (1+1) EA with mutation step sizes following the standard normal distribution and the uniform distribution are derived, and their expected running times are estimated. The theoretical analysis shows that (1) the computational time complexity of both algorithms is exponential, and (2) given the same precision and the same initial fitness difference, the algorithm with uniformly distributed mutation converges faster than the one with standard normal mutation. Numerical experiments confirm the theoretical results, demonstrating the effectiveness of the average gain model.
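A minimal continuous (1+1) EA on the sphere function, with a switch between standard normal and uniform mutation steps, illustrates the setting analyzed above. This sketch only runs the algorithm; it does not reproduce the average-gain derivation, and the parameter values are illustrative.

```python
import random

def one_plus_one_ea(dim=5, sigma=0.3, target=1e-2, max_evals=100000,
                    uniform=False, seed=1):
    """Continuous (1+1) EA minimizing the sphere function f(x) = sum(x_i^2):
    mutate every coordinate, accept the child iff it is no worse (elitism).
    Returns (fitness evaluations used, best fitness reached)."""
    rng = random.Random(seed)
    x = [1.0] * dim
    fx = sum(v * v for v in x)
    evals = 0
    while fx > target and evals < max_evals:
        if uniform:
            child = [v + rng.uniform(-sigma, sigma) for v in x]
        else:
            child = [v + rng.gauss(0.0, sigma) for v in x]
        fc = sum(v * v for v in child)
        evals += 1
        if fc <= fx:            # (1+1) elitist acceptance
            x, fx = child, fc
    return evals, fx

evals_n, f_n = one_plus_one_ea(max_evals=20000)                 # normal steps
evals_u, f_u = one_plus_one_ea(uniform=True, max_evals=20000)   # uniform steps
```

With a fixed step-size distribution, the probability of an improving mutation collapses as the optimum is approached, which is the intuition behind the exponential running-time result; the theoretical comparison of the two mutation distributions is the paper's contribution, not something a single run demonstrates.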

