Similar Documents
19 similar documents found (search time: 125 ms)
1.
Reentrancy vulnerabilities are pervasive in smart contracts and can cause enormous economic losses. Existing static analysis tools based on symbolic execution decide whether a vulnerability exists using predefined rules; incomplete rules, however, can produce false positives for reentrancy. To avoid false positives, this work approaches the problem from the dynamic-analysis angle of software test case generation. The scenario is abstracted as the problem of automatically generating path-coverage test cases for paths containing reentrant loops, and reentrancy vulnerabilities are detected by generating and executing test cases that cover these reentrant loop paths. Swarm intelligence algorithms, exemplified by pigeon-inspired optimization (PIO), are a common way to solve black-box optimization problems such as test case generation. PIO searches the neighborhood of the population's best solution within the whole decision space; the problem's optimum, however, may lie outside that neighborhood, leading to low path coverage. To raise PIO's path coverage, this paper improves the algorithm with a manifold-based heuristic operator so that it allocates more computation to subspaces relevant to the optimization objective, improving solution efficiency and covering the reentrant loop paths. Experimental results show that the improved manifold PIO generates test cases covering reentrant loop paths more efficiently and detects the reentrancy vulnerabilities of the contracts under test. Compared with the smart contract testing tools Oyente, Securify, and Smartcheck, the proposed method effectively avoids false positives for reentrancy; on the eight smart contracts tested, the detection accuracies were, respectively...
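The search scheme this abstract builds on can be illustrated with the standard PIO "map-and-compass" operator, in which pigeons drift toward the current population best. The sketch below is a minimal, generic PIO minimizer applied to a toy branch-distance objective; the names, parameter values, and objective are illustrative assumptions, not the paper's code, and the manifold heuristic operator itself is not reproduced.

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is repeatable

def pigeon_optimize(fitness, dim, bounds, n_pigeons=20, iters=100, r=0.2):
    """Map-and-compass phase of basic pigeon-inspired optimization.

    Minimizes `fitness`; this is the textbook operator that searches
    around the population best, as the abstract describes.
    """
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_pigeons)]
    V = [[0.0] * dim for _ in range(n_pigeons)]
    best = min(X, key=fitness)[:]
    for t in range(1, iters + 1):
        for i in range(n_pigeons):
            for d in range(dim):
                # velocity decays with exp(-r*t); pigeons drift toward
                # the current global best
                V[i][d] = V[i][d] * math.exp(-r * t) \
                          + random.random() * (best[d] - X[i][d])
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            if fitness(X[i]) < fitness(best):
                best = X[i][:]
    return best

# toy branch-distance objective: the target branch is taken when x == 7
branch_distance = lambda x: abs(x[0] - 7.0)
sol = pigeon_optimize(branch_distance, dim=1, bounds=(-100.0, 100.0))
```

Because every pigeon is pulled toward the same neighborhood, the search concentrates around the incumbent best — which is exactly the behavior the paper's manifold operator is meant to counteract when the optimum lies elsewhere.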

2.
As a fundamental and much-studied problem in software testing, automatic path-oriented test case generation is of particular importance. Path-oriented test case generation is essentially a constraint satisfaction problem solved via search algorithms. Aiming to improve search efficiency, this paper proposes a new intelligent algorithm that organically integrates branch and bound with hill climbing: branch and bound serves as the global search algorithm and hill climbing as the local search algorithm, each exploiting its own strengths to search the solution space of test cases.
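The hybrid described above can be sketched in a few lines: branch and bound partitions the input domain, a crude bound prunes unpromising regions, and hill climbing refines each surviving region. This is a minimal one-dimensional illustration under assumed names and a deliberately simple bounding rule, not the paper's algorithm.

```python
def hill_climb(f, x, lo, hi, step=1.0, iters=200):
    # local search: move to the better neighbor; shrink the step on failure
    for _ in range(iters):
        best = min((min(hi, x + step), max(lo, x - step)), key=f)
        if f(best) < f(x):
            x = best
        else:
            step /= 2.0
            if step < 1e-6:
                break
    return x

def branch_and_bound_hc(f, lo, hi, depth=6):
    # global search: split the interval recursively, prune regions whose
    # midpoint value cannot plausibly beat the incumbent, and refine each
    # surviving region with hill climbing
    incumbent = hill_climb(f, (lo + hi) / 2.0, lo, hi)
    stack = [(lo, hi, 0)]
    while stack:
        a, b, d = stack.pop()
        mid = (a + b) / 2.0
        x = hill_climb(f, mid, a, b)
        if f(x) < f(incumbent):
            incumbent = x
        # crude bound: only branch where the midpoint is within (b - a)
        # of the incumbent's value
        if d < depth and f(mid) < f(incumbent) + (b - a):
            stack.extend([(a, mid, d + 1), (mid, b, d + 1)])
    return incumbent

# toy objective standing in for a branch-coverage fitness function
x_best = branch_and_bound_hc(lambda x: abs(x - 13.0), 0.0, 100.0)
```

The division of labor is the point: the stack-driven splitting gives global coverage of the domain, while hill climbing supplies fast local convergence inside each region.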

3.
After a software system is developed, verifying that it implements all functions in the design specification and is consistent with the designed algorithms is an important part of software testing. Verifying implementation-design consistency by manually walking through the source code is complex and laborious, and requires testers with rich programming experience and strong algorithm-analysis skills. This paper proposes an automatic verification method for software implementations based on function call paths. Starting from both the design document and the source code, it analyzes their function call relations, extracts function call paths, and generates function-cluster models. On the document side, the design document is interpreted manually to determine the call relations, from which a reference function-cluster model is generated automatically; on the source side, static analysis automatically obtains the call relations, extracts the feature points of each function, uses these features to identify the concrete implementation algorithms, and automatically generates the software's actual function-cluster model. Comparing the two models verifies the consistency between implementation and design. Experimental results show that the algorithm accurately recovers the functional structure of the software system and the characteristics of its implemented algorithms, makes an effective judgment on implementation-design consistency, and offers a new approach to automated consistency testing.
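The source-side step — statically extracting call relations and enumerating call paths — can be sketched with Python's own `ast` module. This simplified version handles only top-level functions and direct `name(...)` calls; the toy source and all names are illustrative, not the paper's tooling.

```python
import ast

SRC = """
def load(path): return read(path)
def read(path): return path
def main(): cfg = load("a"); report(cfg)
def report(c): pass
"""

def call_graph(source):
    # caller -> callees map from direct, named calls (static analysis)
    tree = ast.parse(source)
    graph = {}
    for fn in (n for n in tree.body if isinstance(n, ast.FunctionDef)):
        graph[fn.name] = [c.func.id for c in ast.walk(fn)
                          if isinstance(c, ast.Call)
                          and isinstance(c.func, ast.Name)]
    return graph

def call_paths(graph, root):
    # depth-first enumeration of acyclic call paths from `root`
    paths, stack = [], [(root, [root])]
    while stack:
        node, path = stack.pop()
        nxt = [c for c in graph.get(node, []) if c in graph and c not in path]
        if not nxt:
            paths.append(path)
        for c in nxt:
            stack.append((c, path + [c]))
    return paths

g = call_graph(SRC)
paths = call_paths(g, "main")
```

The resulting paths (here `main→load→read` and `main→report`) are the raw material from which a function-cluster model would be built and compared against the model derived from the design document.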

4.
For five-axis NC machining with ball-end cutters, this paper analyzes the influence of tool-axis orientation on cutting force and, based on a metric of tool-axis orientation variation, proposes a model and algorithm for globally smoothing tool-axis orientations within the feasible space of the cutter-contact mesh. The method has two advantages: it simultaneously guarantees global smoothness of tool-axis orientations in both the feed direction and across adjacent passes, and it only requires computing the accessible orientation cones at the mesh points, improving computational efficiency. Simulations analyze the effect of globally smoothed tool-axis orientations on machining efficiency, feed-motion stability, and cutting force, and experiments validate the planned tool paths.

5.
Considering the real-time nature of embedded software and its tight coupling with hardware, this paper models the system with UML, introduces scenario techniques to describe the system's expected execution flows, and proposes a test case generation method based on a binary-tree scenario model. The method mitigates the defects of manually designed test cases, such as missing or redundant cases, heavy workload, and low efficiency, and shortens the software development cycle.

6.
This paper proposes an interval robust optimization method for structures with uncertainty. First, interval models quantify the uncertain variables and parameters in the objective and constraint functions. A robustness evaluation factor is then introduced to measure the robustness of the objective, and the interval possibility degree method (RPDI) handles the uncertain constraints, yielding an interval robust optimization model. To address the low efficiency of solving such models, an efficient method is proposed that decouples the nested two-level optimization into a sequence of interval analyses and deterministic optimizations. At each iteration, an equivalent deterministic optimization problem is constructed from the interval analysis results at the current design point, and solving it updates the design point. An iteration mechanism is also proposed to accelerate the convergence of the overall optimization. Finally, three numerical examples and one engineering example verify the effectiveness of the method.
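The decoupling idea — interval analysis at the current design point, then a deterministic update on the resulting worst-case objective — can be sketched in one dimension. Here the interval analysis is approximated by sampling (a stand-in for proper interval arithmetic), the "equivalent deterministic problem" is reduced to a coordinate descent step, and all names and parameters are assumptions for illustration only.

```python
def interval_upper(f, x, delta=0.5, samples=21):
    # crude interval analysis: worst case of f over [x - delta, x + delta],
    # estimated by sampling (a stand-in for interval arithmetic)
    return max(f(x - delta + 2.0 * delta * i / (samples - 1))
               for i in range(samples))

def decoupled_robust_minimize(f, x0, lo, hi, outer=40, step=0.1):
    """Sequential decoupling: one interval analysis per candidate design,
    then a deterministic descent step on the worst-case objective."""
    x = x0
    for _ in range(outer):
        moved = False
        for cand in (x - step, x + step):
            cand = min(hi, max(lo, cand))
            if interval_upper(f, cand) < interval_upper(f, x):
                x, moved = cand, True
        if not moved:
            step /= 2.0  # refine once no neighbor improves the worst case
    return x

# nominal objective (x - 2)^2 with +/-0.5 interval uncertainty on x:
# the robust optimum coincides with x = 2 here
x_star = decoupled_robust_minimize(lambda x: (x - 2.0) ** 2, 0.0, -5.0, 5.0)
```

The benefit mirrors the paper's motivation: the inner worst-case evaluation is called only at discrete design points instead of being nested inside every step of the outer optimizer.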

7.
The advent of automated software testing has greatly relieved the pressure on testers and markedly improved testing efficiency, but test case reuse in regression testing remains a difficult problem for test automation. This paper proposes a functional-testing baseline library method for financial business systems that effectively supports regression testing, aiming to solve problems such as accumulating analysis assets for large business systems and reusing test cases effectively. To a certain extent, the method reduces the burden of test case management, strengthens case reusability, and thereby effectively improves the efficiency of automated testing.

8.
Trusted computing has become a new wave in the international information security field, and trusted computing platform products are entering application. Users cannot confidently deploy information security products that have not been evaluated. Focusing on the evaluation of trusted computing platforms, this paper starts by building a formal model of the platform suited to testing, establishes an SPA-based mathematical model of the chain of trust, and gives an analysis and verification method for its composition properties; the analysis reveals that the remote attestation process contains potential threats to the security of a trusted system. For the trusted software stack, the paper studies automatic test case generation and proposes an improved random test case generation method that raises test case quality. Finally, a prototype testing system for trusted computing platforms and actual test data are presented. The results show that the method uncovered design-level flaws in the existing TCG trusted computing platform architecture as well as defects in several existing platform products, providing a basis for improving trusted computing platform technology and products.

9.
Evolution is an inherent property of computer software. Understanding its laws can improve the software evolution process, raise software quality, and reduce maintenance cost. This paper defines a class dependency graph to describe a software system as a software network, verifies the laws of software evolution through network metrics, and discusses the feasibility of modeling software evolution. For Lehman's eight laws of evolution, three groups of network metrics are designed: network scale, network quality, and structural control. Software networks are then built for four open-source systems and the metrics computed. The empirical study supports four of Lehman's laws and rejects three, namely increasing complexity, continuing growth, and declining quality. Finally, the E-R model and a module-attachment model are used to simulate the evolution of the software systems; comparing the generated random networks with the real software networks shows that software evolution follows objective laws and can be reproduced, and even optimized, through modeling.

10.
To address uncertain and random information in selecting emergency response plans, a risk decision method based on Bayesian theory and Monte Carlo simulation is proposed. Based on the actual on-site situation, Bayesian decision theory is applied to build a multivariate normal distribution model; then, given the randomness of the criteria weights, Monte Carlo simulation is performed. By computing the credibility of each plan attaining a particular rank, together with an overall ranking credibility factor, a ranking of the plans is obtained, reducing the influence of uncertain information on the emergency decision process. A nuclear power plant accident response example verifies the effectiveness and feasibility of the proposed method.
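The Monte Carlo step — resampling the criteria weights and counting how often each plan attains a given rank — is straightforward to sketch. In this minimal version the weights are drawn uniformly and normalized (a simple stand-in for whatever weight distribution the paper assumes), and the scores are illustrative numbers, not data from the nuclear power plant example.

```python
import random

def rank_credibility(scores, n_sim=5000, seed=0):
    """Fraction of simulations in which each alternative ranks first
    when the criteria weights are random.

    `scores[i][j]` is alternative i's score on criterion j.
    """
    rng = random.Random(seed)
    wins = [0] * len(scores)
    n_crit = len(scores[0])
    for _ in range(n_sim):
        w = [rng.random() for _ in range(n_crit)]
        s = sum(w)  # normalize so the weights sum to one
        totals = [sum(wj * row[j] for j, wj in enumerate(w)) / s
                  for row in scores]
        wins[totals.index(max(totals))] += 1
    return [c / n_sim for c in wins]

# three emergency plans scored on two criteria (illustrative numbers);
# plan 0 dominates, so its first-rank credibility is 1.0
cred = rank_credibility([[0.9, 0.8], [0.5, 0.6], [0.2, 0.3]])
```

A full implementation would tabulate the credibility of every rank position for every plan, not just first place, and aggregate those into the overall ranking credibility factor the abstract mentions.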

11.
This case study uses a three-stage choice model of Canadian household vehicle holdings and usage to generate short-run forecasts of changes in household vehicle usage and gasoline consumption in response to a range of energy-related policies. The objectives of this case study are to (1) demonstrate the application of disaggregate choice modelling methods to the generation of policy-relevant forecasts of travel behaviour; (2) draw implications from this forecasting exercise concerning the likely impacts of various energy-related policies; and (3) assess some of the strengths and weaknesses of the current state-of-the-art of forecasting with disaggregate choice models, using the presented study as a case in point.

12.
To overcome inherent limitations of data mining, an improved K-means algorithm for the base data of manufacturing enterprises is proposed. The algorithm applies the AHP method to weight the raw data according to their feature attributes as a preprocessing step, enabling a more precise segmentation of the base data, improving data preparation, and providing enterprises with a more intuitive and scientific basis for decision making. A worked example verifies the feasibility of the method.
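The combination can be sketched as two steps: derive attribute weights from an AHP pairwise comparison matrix (here via the common row-geometric-mean approximation of the priority vector), then run Lloyd's K-means with a weighted squared distance. The comparison matrix, data points, and all names are illustrative assumptions, not the paper's case data.

```python
import math
import random

def ahp_weights(pairwise):
    # AHP priority vector approximated by normalized row geometric means
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    s = sum(gm)
    return [g / s for g in gm]

def weighted_kmeans(points, k, weights, iters=50, seed=0):
    """Lloyd's algorithm with an AHP-weighted squared distance."""
    rng = random.Random(seed)
    centers = [p[:] for p in rng.sample(points, k)]
    dist = lambda p, c: sum(w * (a - b) ** 2 for w, a, b in zip(weights, p, c))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:  # assign each point to its nearest weighted center
            clusters[min(range(k), key=lambda i: dist(p, centers[i]))].append(p)
        for i, cl in enumerate(clusters):  # recompute centers
            if cl:
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    labels = [min(range(k), key=lambda i: dist(p, centers[i])) for p in points]
    return centers, labels

# attribute importance from an illustrative 2x2 AHP comparison matrix
w = ahp_weights([[1.0, 3.0], [1.0 / 3.0, 1.0]])
pts = [[0.0, 0.0], [0.2, 0.0], [10.0, 0.0], [10.2, 0.0]]
centers, labels = weighted_kmeans(pts, k=2, weights=w)
```

Weighting the distance by AHP-derived importances makes the clustering sensitive to the attributes the enterprise actually cares about, which is the "more precise segmentation" the abstract claims.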

13.
After a study of existing algorithms, a new algorithm for drawing ventilation network diagrams is proposed. It adopts the layered method as the main framework of the drawing algorithm and embeds the longest-path method and a genetic algorithm into it, achieving optimized drawing of ventilation network diagrams with fewer branch crossings. The algorithm was implemented and tested.

14.
We extract information on relative shopping interest from Google search volume and provide a genuine and economically meaningful approach to directly incorporate these data into a portfolio optimization technique. By generating a firm ranking based on a Google search volume metric, we can predict future sales and thus generate excess returns in a portfolio exercise. The higher the (shopping) search volume for a firm, the higher we rank the company in the optimization process. For a sample of firms in the fashion industry, our results demonstrate that shopping interest exhibits predictive content that can be exploited in a real‐time portfolio strategy yielding robust alphas around 5.5%.

15.
A deeper understanding of models is sought in considering what models do, rather than what they are. This distinction emphasizes how two different modeling strategies, as they pursue different purposes, invest in different options, in particular with regard to rigor and immediate empirical relevance. The analysis therefore focuses on the services expected from models by the scientists who construct them: models are sought for how they contribute to exploring and testing the context in which they operate. In a forthcoming Part II these general considerations will be anchored in the presentation of specific case studies.

16.
This paper shows how monthly data and forecasts can be used in a systematic way to improve the predictive accuracy of a quarterly macroeconometric model. The problem is formulated as a model pooling procedure (equivalent to non-recursive Kalman filtering) where a baseline quarterly model forecast is modified through ‘add-factors’ or ‘constant adjustments’. The procedure ‘automatically’ constructs these adjustments in a covariance-minimizing fashion to reflect the revised expectation of the quarterly model's forecast errors, conditional on the monthly information set. Results obtained using Federal Reserve Board models indicate the potential for significant reduction in forecast error variance through application of these procedures.

17.
A number of researchers have developed models that use test market data to generate forecasts of a new product's performance. However, most of these models have ignored the effects of marketing covariates. In this paper we examine what impact these covariates have on a model's forecasting performance and explore whether their presence enables us to reduce the length of the model calibration period (i.e. shorten the duration of the test market). We develop from first principles a set of models that enable us to systematically explore the impact of various model ‘components’ on forecasting performance. Furthermore, we also explore the impact of the length of the test market on forecasting performance. We find that it is critically important to capture consumer heterogeneity, and that the inclusion of covariate effects can improve forecast accuracy, especially for models calibrated on fewer than 20 weeks of data. Copyright © 2003 John Wiley & Sons, Ltd.

18.
Co-integration analysis is used in a study of the advertising and sales relationship using the Lydia Pinkham data set. The series are shown to have a valid long-run relationship while Granger-causality runs in both directions. The latter is found by using a causality test involving the co-integration restrictions which seem to constitute a crucial part of such tests in the case of co-integrated variables. A comparison with previous models shows that forecasting co-integrated series is more accurate with error-correction systems, especially in the case of multi-step forecasting.

19.
In the present study we report on the development and test results of a Cartesian ARIMA Search Algorithm, designed for automatic generation of univariate models for time series data within specified parameter intervals of the identification and estimation stages. Model retention is determined within a preselected set of statistics. By interpreting these statistics as dimensions of the constructed criterion space, we obtain a subset of non-dominated models according to the rule of maximum dispersion over the efficient set. The CARIMA algorithm allows free specification of the number of criteria used in the runs. The algorithm was tested with both simulated and real economic data. The results based on simulated data indicate that the precision of the CARIMA algorithm is lower for seasonal models and higher for non-seasonal ones, thus suggesting an inverse relationship between algorithm performance and model complexity.
