Similar Documents (20 results)
1.
Large high-dimensional data have posed great challenges to existing algorithms for frequent itemset mining. To address this, a hybrid method is proposed, consisting of a novel row enumeration algorithm and a column enumeration algorithm. The idea is to decompose the mining task into two subtasks and choose an appropriate algorithm for each. The novel row enumeration algorithm, Intertransaction, exploits the observation that long transactions share few common items. In addition, an optimization technique is adopted to speed up the intersection of bit-vectors. Experiments on synthetic data show that the method achieves high performance on large high-dimensional data.
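The bit-vector intersection the abstract mentions can be sketched minimally in Python. The representation below (one integer bit-vector per item, with transactions numbered by bit position) is an illustrative assumption, not the paper's implementation or its optimization.

```python
# Sketch: each item's transaction list becomes an integer bit-vector,
# so support counting of an itemset reduces to AND plus popcount.
# Bit i of an item's vector is set when transaction i contains the item.

def bitvector(transactions, item):
    """Build an integer bit-vector marking the transactions containing item."""
    vec = 0
    for i, t in enumerate(transactions):
        if item in t:
            vec |= 1 << i
    return vec

def support(transactions, itemset):
    """Support of an itemset = popcount of the AND of its item vectors."""
    vec = ~0  # all bits set; narrowed by each item's vector in turn
    for item in itemset:
        vec &= bitvector(transactions, item)
    vec &= (1 << len(transactions)) - 1  # mask to the transaction count
    return bin(vec).count("1")

transactions = [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}, {"b"}]
print(support(transactions, {"a", "b"}))  # 2
```

Row enumeration schemes like the one described benefit from this layout because a bitwise AND compares many transactions per machine word.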

2.
To address the difficulty of determining the number of clusters and identifying abnormal points with a clustering validity function, an effective clustering partition model based on a genetic algorithm is built in this paper. A solution is formed by combining the clustering partition with the encoded samples, and the fitness function is defined by the between-cluster and within-cluster distances. The number of clusters and the samples in each cluster are determined, and the abnormal points are distinguished, by applying a triple random crossover operator and mutation. On known sample data, the results of the novel method are compared with those of the clustering validity function. Numerical experiments show that the novel method is more effective.
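The fitness idea described (between-cluster distance versus within-cluster distance) can be illustrated minimally. The ratio below is an assumed form for illustration only, not the paper's exact fitness function, and uses 1-D points for brevity.

```python
# Illustrative fitness: larger between-cluster separation and smaller
# within-cluster scatter yield a higher fitness value.

def centroid(c):
    return sum(c) / len(c)

def fitness(clusters):
    """clusters: list of non-empty lists of 1-D points (illustrative)."""
    within = sum(abs(p - centroid(c)) for c in clusters for p in c) + 1e-9
    cents = [centroid(c) for c in clusters]
    between = sum(abs(a - b) for i, a in enumerate(cents) for b in cents[i + 1:])
    return between / within

good = [[1.0, 1.2], [9.0, 9.2]]   # tight, well-separated partition
bad = [[1.0, 9.0], [1.2, 9.2]]    # mixed, overlapping partition
print(fitness(good) > fitness(bad))  # True
```

A genetic algorithm would evolve candidate partitions and keep those with higher fitness, which is how a partition of the kind above is preferred automatically.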

3.
This paper studies incremental pattern mining from semi-structured data. When a new dataset is added to the original dataset, existing pattern mining algorithms have difficulty incrementally updating the mined results. To solve this problem, an incremental pattern mining algorithm based on the rightmost-expansion technique is proposed, which improves mining performance by reusing the original mining results and information obtained in the previous mining process. For efficiency, the algorithm adopts a pruning technique based on the frequent-pattern expansion forest obtained during mining. Comparative experiments with different volumes of initial and incremental datasets and different minimum support thresholds demonstrate that the algorithm greatly improves efficiency over a non-incremental pattern mining algorithm.

4.
MICkNN: Multi-Instance Covering kNN Algorithm
Mining from ambiguous data is very important in data mining. This paper discusses one such task, known as the multi-instance problem, in which each pattern is a labeled bag consisting of a number of unlabeled instances. A bag is negative if all instances in it are negative, and positive if it has at least one positive instance. Because the instances in a positive bag are not individually labeled, each positive bag is ambiguous. The mining aim is to classify unseen bags. The main idea of existing multi-instance algorithms is to find the true positive instances in positive bags, convert the multi-instance problem into a supervised problem, and label test bags by predicting the labels of their instances. In this paper, we approach multi-instance data from another direction: excluding the false positive instances in positive bags and predicting the label of an entire unknown bag. We propose an algorithm called Multi-Instance Covering kNN (MICkNN) for mining multi-instance data. Briefly, a constructive covering algorithm is first used to restructure the original multi-instance data; the kNN algorithm is then applied to discriminate the false positive instances. In the test stage, the tested bag is labeled directly according to the similarity between the unseen bag and the sphere neighbors obtained in the previous two steps. Experimental results demonstrate that the proposed algorithm is competitive with most state-of-the-art multi-instance methods in both classification accuracy and running time.
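The bag-labeling convention the abstract defines (negative if and only if every instance is negative) reduces to a one-line rule. This sketch expresses that multi-instance assumption only, not the MICkNN classifier itself.

```python
# The standard multi-instance assumption: a bag is positive iff at least
# one of its instances is positive, negative iff all instances are negative.

def bag_label(instance_labels):
    """instance_labels: iterable of 0/1 labels for the instances of one bag."""
    return 1 if any(instance_labels) else 0

print(bag_label([0, 0, 1, 0]))  # 1 (at least one positive instance)
print(bag_label([0, 0, 0]))     # 0 (all instances negative)
```

The ambiguity the abstract describes follows directly: a positive bag label does not identify which of its instances caused it.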

5.
Given the constantly increasing volume of data in large databases such as wire transfer databases, incremental clustering algorithms play an increasingly important role in Data Mining (DM). However, few traditional clustering algorithms can both handle categorical data and explain their output clearly. Based on the idea of dynamic clustering, an incremental conceptual clustering algorithm is proposed in this paper, which introduces the Semantic Core Tree (SCT) to handle large volumes of categorical wire transfer data for detecting money laundering. In addition, a rule generation algorithm is presented to express the clustering result in the form of knowledge. Applying this idea to financial data mining improves the efficiency of searching for the characteristics of money laundering data.

6.
This paper proposes, from the viewpoint of relation matrices, a new attribute reduction algorithm for decision systems. Two new and relatively reasonable indices are first defined to measure the significance of attributes in decision systems, and a heuristic attribute reduction algorithm is then formulated. Moreover, the time complexity of the algorithm is analyzed and the algorithm is proved to be complete. Numerical experiments are conducted to assess the performance of the presented algorithm, and the results demonstrate that it is both effective and efficient.

7.
We propose two models in this paper. An association model is put forward to capture the co-occurrence relationships among keywords in documents, and a hierarchical Hamming clustering model is used to reduce the dimensionality of the category feature vector space, addressing the extremely high dimensionality of the documents' feature space. Experimental results indicate that the association model captures keyword co-occurrence relations that effectively improve the recall of the classification system. The hierarchical Hamming clustering model reduces the dimensionality of the category feature vector efficiently; the reduced vector space is only about 10% of the original dimensionality.

8.
This paper aims to improve the speed of k-nearest-neighbor search and puts forward algorithms for tangent plane estimation based on existing methods. Starting from the point cloud, the algorithm partitions the data into many small cubes in space, with the cube size determined by the density of the point cloud. Considering the position of a point within its cube, the algorithm enlarges the search area around the given point step by step until the k nearest neighbors are found. A least-squares tangent plane is then estimated from the neighbors. To orient the planes, the k-nearest-neighbor structure is used when seeking the minimum spanning tree instead of searching the whole dataset; the propagation algorithm chooses the orientation among the k nearest neighbors of the current point. Experiments show that the proposed algorithms process data in a short time and with high precision, and the approach is useful for practical applications in reverse engineering and related areas, providing a solution to the k-nearest-neighbor problem, which remains time-consuming at present.

9.
This paper presents an effective clustering mode and a novel mode for evaluating clustering results. The clustering mode has two bounded integer parameters. The evaluating mode scores each clustering result; the higher the mark, the higher the quality. By organizing the two modes in different ways, we build two clustering algorithms: SECDU (Self-Expanded Clustering algorithm based on Density Units) and SECDUF (Self-Expanded Clustering algorithm based on Density Units with an Evaluation Feedback section). SECDU enumerates all value pairs of the two parameters of the clustering mode, processes the dataset repeatedly, evaluates every clustering result with the evaluating mode, and outputs the result with the highest mark. By applying a hill-climbing strategy, SECDUF improves clustering efficiency greatly. Both algorithms adapt well to datasets with different distribution features and output high-quality clustering results. SECDUF tunes the parameters of the clustering mode automatically, with no manual intervention throughout the process, and achieves high clustering performance.

10.
In traditional data clustering, the similarity of a cluster of objects is measured by the distance between objects. Such measures are not appropriate for categorical data. A new clustering criterion for determining the similarity between points with categorical attributes is presented, along with a new clustering algorithm for categorical attributes. A single scan of the dataset yields a good clustering, and additional passes can be used to further improve its quality.
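The abstract does not state its criterion explicitly. As a hedged illustration of why plain distance fails and what a categorical alternative looks like, a common choice is the overlap measure: the fraction of attributes on which two records agree.

```python
# Overlap similarity for categorical records (an illustrative stand-in,
# not the paper's criterion): count matching attribute positions.

def overlap_similarity(x, y):
    """x, y: equal-length tuples of categorical attribute values."""
    return sum(a == b for a, b in zip(x, y)) / len(x)

print(overlap_similarity(("red", "suv", "auto"),
                         ("red", "sedan", "auto")))  # agrees on 2 of 3 -> 2/3
```

Unlike Euclidean distance, this never asks "how far apart" two category labels are, only whether they match, which is the appropriate question for categorical attributes.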

11.
Clustering Methods in the Field of Data Mining
Clustering is a core technique in data mining, and extensive research on clustering has produced many different clustering algorithms suited to data mining. This paper discusses, from an algorithmic perspective, how to perform cluster analysis in data mining, and compares commonly used clustering methods proposed in recent years against eight criteria for evaluating clustering algorithms, so that readers can more easily and quickly find a clustering algorithm suited to a particular problem.

12.
Customer profiling has been a research focus of commercial banks in recent years, and selecting effective attributes from high-dimensional, complex customer data is a key problem in it. To address the difficulty of accurate profiling caused by the high dimensionality of bank customer data, this paper performs cluster analysis on the customer data and then, combining rough set theory and information entropy theory, reduces the profile attributes of commercial-bank investment customers and proposes an attribute reduction algorithm. The results show that the algorithm can...

13.
The unsupervised extreme learning machine preserves the sparse or neighborhood structure of the original high-dimensional space during projection, but samples in high-dimensional space contain redundant information, and the original data structure is not necessarily suited to the projected low-dimensional feature space. To address this, combining the unsupervised extreme learning machine with the self-representation learning of subspace clustering, a projected self-representation unsupervised extreme learning machine model is proposed. The model is a clustering-oriented feature extraction method: it learns the self-representation subspace structure during projection, so that the features extracted by the unsupervised extreme learning machine adapt to the clustering task. Experiments on the IRIS dataset, six gene-expression datasets, and two high-dimensional medical-image datasets show that the model and algorithm are effective.

14.
The high-dimensional, large-scale data produced by ever-advancing information collection technology poses great challenges to data mining. To address the low efficiency and high time cost of the K-nearest-neighbor classification algorithm on high-dimensional data, a high-dimensional classification algorithm based on a weight search tree, KNN-WST (K-nearest neighbor algorithm based on weight search tree), is proposed. The algorithm selects a subset of attributes, according to their feature weights, as nodes to build a search tree, which partitions the dataset into different matrix regions. An unknown sample searches the tree to find the most "similar" matrix region and measures distances only against the data in that region, reducing the data scale and hence the time complexity. The Minkowski distance best suited to high-dimensional data is also studied and discussed. Simulation experiments on six standard high-dimensional datasets show that, compared with the K-nearest-neighbor, decision tree, and support vector machine (SVM) algorithms, KNN-WST significantly reduces classification time while also achieving better classification accuracy, offering a useful reference for high-dimensional data problems.
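The Minkowski distance the abstract discusses has a direct definition: for order p, the distance is (Σ|xᵢ − yᵢ|ᵖ)^(1/p), with p = 1 giving Manhattan distance and p = 2 giving Euclidean distance.

```python
# Minkowski distance of order p between two equal-length vectors.

def minkowski(x, y, p):
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

print(minkowski([0, 0], [3, 4], 2))  # 5.0 (Euclidean)
print(minkowski([0, 0], [3, 4], 1))  # 7.0 (Manhattan)
```

Which p behaves best in high dimensions is exactly the question the abstract says the paper studies; the definition above is only the common starting point, not the paper's conclusion.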

15.
To address the slow mining efficiency and low accuracy of traditional clustering algorithms, a multi-level k-means clustering algorithm based on the minimum spanning tree is proposed and applied to data mining. The data types of the clustering samples are first analyzed, and a clustering criterion function is designed from the analysis. The sample data are then partitioned via a minimum spanning tree and initial cluster centers are selected: the data space is divided into rectangular cells, and within these cells the sample objects are computed, sorted in descending order, and selected, yielding effective initial cluster centers and reducing data mining time. Experimental results show that, compared with the traditional algorithm, the proposed algorithm mines data quickly and accurately, improving mining efficiency by about 50%.
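One standard way a minimum spanning tree can seed k-means, in the spirit of the partitioning described above, is to cut the k−1 longest MST edges and take each resulting component's mean as an initial center. The details below (Prim's algorithm, 1-D points, the name `mst_seeds`) are illustrative assumptions, not the paper's exact procedure.

```python
# MST-based seeding sketch: build an MST over the points, remove the
# k-1 longest edges, and return the mean of each remaining component.

def mst_seeds(points, k):
    n = len(points)
    # Prim's algorithm, collecting the n-1 MST edges (1-D distances)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = min((abs(points[i] - points[j]), i, j)
                   for i in in_tree for j in range(n) if j not in in_tree)
        edges.append(best)
        in_tree.add(best[2])
    # keep the n-k shortest edges, i.e. cut the k-1 longest ones
    edges.sort()
    keep = edges[: n - k]
    # union-find over the kept edges to recover the components
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _, i, j in keep:
        parent[find(i)] = find(j)
    comps = {}
    for i in range(n):
        comps.setdefault(find(i), []).append(points[i])
    return sorted(sum(c) / len(c) for c in comps.values())

print(mst_seeds([1.0, 1.2, 5.0, 5.4, 9.0], 3))  # three seeds near 1.1, 5.2, 9.0
```

Seeds chosen this way respect the gaps in the data, which is why MST-based initialization tends to avoid the poor random starts that slow plain k-means down.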

16.
The k-prototype algorithm often produces clusters of low purity when processing complex datasets, degrading clustering quality. To address this, a multi-level clustering improvement based on k-prototype is proposed, which uses automatic attribute selection to re-cluster the low-purity clusters and thereby improve clustering quality. Experiments on UCI standard test datasets show that the improved algorithm markedly improves the clustering quality of mixed-type datasets and also performs well in data reduction.

17.
The K-means clustering algorithm has achieved considerable success among the cluster analysis methods used in data mining. To further improve its application in data preprocessing and neural network structures, this paper studies its deficiencies. The main work is as follows: the idea and main flow of the K-means algorithm are analyzed; its advantages are identified, namely simplicity, speed, dense result clusters, and clear separation between clusters; and its drawbacks are identified, namely poor suitability for data with symbolic (categorical) attributes, the need to specify k (the desired number of clusters) in advance, strong sensitivity to noisy data and isolated points, and the need to repeatedly recompute the updated cluster centers. The experiments show that different initial values have little influence on the clustering result on the datasets tested; that when a dataset requires many iterations, changing the input order of the data is worth trying; and that changing the input order of the dataset directly affects the clustering result. These results offer a clear reference for improving the efficiency of the K-means algorithm and are of some significance for improving data mining techniques.
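The K-means loop whose strengths and weaknesses the abstract analyses can be sketched compactly. This is the textbook algorithm, not this paper's variant; 1-D points are used for brevity, and note that k must be given in advance, exactly the drawback the abstract highlights.

```python
import random

# Textbook K-means: assign each point to its nearest center, recompute
# each center as the mean of its cluster, repeat until assignments stabilize.

def kmeans(points, k, iters=100, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initial centers: k random points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[j].append(p)
        # recompute centers; keep the old center if a cluster goes empty
        new = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return sorted(centers)

print(kmeans([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], 2))  # two centers, near 1.0 and 10.0
```

The repeated center recomputation in the loop is the cost the abstract lists as a drawback, and the mean-based update is what makes the method unsuitable for symbolic attributes, where no mean exists.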

18.
Outlier detection is an important research direction in data mining, but most outlier mining algorithms are inefficient when applied to high-dimensional datasets. This paper presents LEAWCD, an outlier mining algorithm based on attribute entropy and weighted cosine similarity. The algorithm first analyzes each object's local outlier attributes within its k-neighborhood according to local attribute entropy, and automatically sets the attribute weight vector according to the deviation degree of each outlier attribute. It then measures each object's degree of outlierness within its k-neighborhood using weighted cosine similarity, which is effective for high-dimensional data, thereby achieving high-dimensional local outlier detection. Finally, experiments on celestial spectral data provided by the National Astronomical Observatories verify that LEAWCD has strong scalability and high detection accuracy.
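The weighted cosine similarity LEAWCD relies on can be written out directly. The weight vector below is an illustrative placeholder; in the paper it is derived automatically from per-attribute entropy and deviation, which is not reproduced here.

```python
import math

# Weighted cosine similarity: a per-attribute weight w_i scales each
# term of the dot product and both norms, so high-weight attributes
# dominate the comparison.

def weighted_cosine(x, y, w):
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    nx = math.sqrt(sum(wi * xi * xi for wi, xi in zip(w, x)))
    ny = math.sqrt(sum(wi * yi * yi for wi, yi in zip(w, y)))
    return num / (nx * ny)

print(weighted_cosine([1, 0, 2], [2, 0, 4], [0.5, 0.2, 0.3]))  # ~1.0 (y parallel to x)
```

Because cosine similarity compares directions rather than magnitudes, it degrades more gracefully in high dimensions than raw Euclidean distance, which is the property the abstract appeals to.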

19.
Clustering is an important topic in machine learning and data mining. In recent years, Deep Neural Networks (DNN) have received wide attention in various clustering tasks. Semi-supervised clustering in particular can significantly improve clustering performance by introducing only a small amount of prior information into a large amount of unsupervised data. However, these clustering methods ignore the fact that the defined clustering loss may corrupt the feature space, leading to non-representative, meaningless features. To address the insufficient preservation of local structure in the feature learning of existing semi-supervised deep clustering, this paper proposes an Improved Semi-supervised Deep Embedded Clustering (ISDEC) algorithm, which uses an undercomplete autoencoder to preserve the intrinsic local structure of the data while learning feature representations, and jointly optimizes cluster label assignment and feature representation by combining the clustering loss, pairwise constraint loss, and reconstruction loss. Experimental results on several high-dimensional datasets, including gene data, show that the method achieves better clustering performance than existing methods.

20.
Given the sparsity of high-dimensional data and the characteristics of categorical data, clustering methods tailored to high-dimensional categorical data are explored. The original dataset is first converted into frequent itemsets; then, by modifying the frequent-pattern tree and applying the proposed pruning strategy, the maximal frequent itemsets of the transactions are mined. Based on two properties of the maximal frequent itemset (MFI), objects with the same MFI are grouped into one class, yielding a clustering algorithm based on maximal frequent itemsets. Experiments on categorical datasets show that the algorithm is stable, robust, and effective.
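The grouping step alone (objects sharing an MFI fall into one class) can be sketched once the maximal frequent itemsets are known. Mining the MFIs themselves is not shown, and the tie-breaking rule here (pick the largest contained MFI) is an illustrative assumption rather than the paper's exact criterion.

```python
# Cluster transactions by the maximal frequent itemset (MFI) they contain;
# transactions containing no MFI fall into a residual cluster keyed by
# the empty frozenset.

def cluster_by_mfi(transactions, mfis):
    clusters = {}
    for t in transactions:
        # pick the largest MFI contained in the transaction, if any
        best = max((m for m in mfis if m <= t), key=len, default=frozenset())
        clusters.setdefault(best, []).append(t)
    return clusters

mfis = [frozenset({"a", "b"}), frozenset({"c", "d"})]
txs = [{"a", "b", "x"}, {"a", "b"}, {"c", "d", "y"}, {"z"}]
result = cluster_by_mfi(txs, mfis)
print(len(result))  # 3 clusters: one per MFI plus the residual cluster
```

Grouping by shared itemsets sidesteps distance computation entirely, which is why this style of clustering suits sparse categorical data.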
