Similar Documents (20 results)
1.
Divisive hierarchical clustering algorithms with the diameter criterion proceed by recursively selecting the cluster with the largest diameter and partitioning it into two clusters whose largest diameter is as small as possible. We provide two such algorithms, with complexities O(MN^2) and O(N^2 log N) respectively, where M denotes the maximum number of clusters in a partition and N the number of entities to be clustered. The former algorithm, an efficient implementation of an algorithm of Hubert, finds all partitions into at most M clusters and runs in O(N^2) time for fixed M. Moreover, if in each partitioning the size of the largest cluster is bounded by p times the number of entities in the set to be partitioned, with 1/2 ≤ p < 1, it provides a complete hierarchy of partitions in O(N^2 log N) time. The latter algorithm, a refinement of an algorithm of Rao, builds a complete hierarchy of partitions in O(N^2 log N) time without any restriction. Comparative computational experiments with both algorithms and with an agglomerative hierarchical algorithm of Benzécri are reported.
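
As a toy illustration of a single divisive step under the diameter criterion, the following Python sketch (function names and data are illustrative assumptions, not the paper's implementation) selects the cluster of largest diameter and bipartitions it by brute force so that the larger of the two resulting diameters is as small as possible; the enumeration is exponential in the cluster size, whereas the point of the paper is to perform this step efficiently.

    from itertools import combinations

    def diameter(d, cluster):
        """Largest pairwise dissimilarity within a cluster (0 for singletons)."""
        return max((d[i][j] for i, j in combinations(sorted(cluster), 2)), default=0.0)

    def split_largest_diameter_cluster(d, clusters):
        """One divisive step: pick the cluster with the largest diameter and
        bipartition it so that the larger of the two resulting diameters is as
        small as possible. Brute force over bipartitions, for illustration only;
        the paper gives polynomial-time algorithms for this step."""
        target = max(clusters, key=lambda c: diameter(d, c))
        best = None
        for r in range(1, len(target)):
            for part in combinations(sorted(target), r):
                a, b = set(part), set(target) - set(part)
                worst = max(diameter(d, a), diameter(d, b))
                if best is None or worst < best[0]:
                    best = (worst, a, b)
        rest = [c for c in clusters if c is not target]
        return rest + [best[1], best[2]]

    # Toy dissimilarity matrix on 4 entities (symmetric, zero diagonal).
    d = [[0, 1, 6, 7],
         [1, 0, 5, 8],
         [6, 5, 0, 2],
         [7, 8, 2, 0]]
    print(split_largest_diameter_cluster(d, [{0, 1, 2, 3}]))
    # [{0, 1}, {2, 3}]: the largest diameter drops from 8 to 2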

2.
We present an O(n^3)-time, O(n^2)-space algorithm to test whether a dissimilarity d on an n-object set X is Robinsonian, i.e., X admits an ordering such that i ≤ j ≤ k implies d(x_i, x_k) ≥ max{d(x_i, x_j), d(x_j, x_k)}.
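
For illustration of the definition only (not of the paper's recognition algorithm, which has to find a suitable ordering), a brute-force Python check of the Robinsonian condition for a given candidate ordering might look as follows; the function name and toy data are assumptions.

    import numpy as np

    def is_robinsonian_order(d, order):
        """Check whether the permutation `order` certifies that the symmetric
        dissimilarity matrix d is Robinsonian: for every i <= j <= k in the
        ordering, d[x_i, x_k] >= max(d[x_i, x_j], d[x_j, x_k]). O(n^3) brute force."""
        n = len(order)
        for a in range(n):
            for b in range(a, n):
                for c in range(b, n):
                    i, j, k = order[a], order[b], order[c]
                    if d[i, k] < max(d[i, j], d[j, k]):
                        return False
        return True

    # Toy example: points on a line with d = absolute difference,
    # which is Robinsonian under the natural order 0, 1, 2, 3.
    pts = np.array([0.0, 1.0, 2.5, 4.0])
    d = np.abs(pts[:, None] - pts[None, :])
    print(is_robinsonian_order(d, [0, 1, 2, 3]))   # True
    print(is_robinsonian_order(d, [2, 0, 3, 1]))   # False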

3.
Minimum sum of diameters clustering
The problem of determining a partition of a given set of N entities into M clusters such that the sum of the diameters of these clusters is minimum has been studied by Brucker (1978). He proved that it is NP-complete for M ≥ 3 and mentioned that its complexity was unknown for M = 2. We provide an O(N^3 log N) algorithm for this latter case. Moreover, we show that determining a partition into two clusters which minimizes any given function of the diameters can be done in O(N^5) time. Acknowledgments: This research was supported by the Air Force Office of Scientific Research Grant AFOSR 0271 to Rutgers University. We are grateful to Yves Crama for several insightful remarks and to an anonymous referee for detailed comments.
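
A minimal Python sketch of the objective being minimized (the sum of the two cluster diameters) is given below; it enumerates all bipartitions, which is exponential in N and serves only to make the criterion concrete, in contrast to the O(N^3 log N) algorithm of the paper. Names and data are illustrative assumptions.

    from itertools import combinations

    def diameter(d, cluster):
        """Largest pairwise dissimilarity within a cluster (0 for singletons)."""
        return max((d[i][j] for i, j in combinations(cluster, 2)), default=0.0)

    def min_sum_of_diameters_2clusters(d):
        """Brute-force minimum sum of diameters over all bipartitions.
        Exponential in N, only for tiny toy instances; the paper gives an
        O(N^3 log N) algorithm for the same objective."""
        n = len(d)
        items = list(range(n))
        best = float("inf")
        for r in range(1, n // 2 + 1):
            for group in combinations(items, r):
                c1 = set(group)
                c2 = set(items) - c1
                best = min(best, diameter(d, sorted(c1)) + diameter(d, sorted(c2)))
        return best

    # Toy dissimilarity matrix on 4 entities (symmetric, zero diagonal).
    d = [[0, 1, 6, 7],
         [1, 0, 5, 8],
         [6, 5, 0, 2],
         [7, 8, 2, 0]]
    print(min_sum_of_diameters_2clusters(d))   # 3.0: clusters {0,1} and {2,3}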

4.
Proportional link linkage (PLL) clustering methods are a parametric family of monotone invariant agglomerative hierarchical clustering methods. This family includes the single, minimedian, and complete linkage clustering methods as special cases; its members are used in psychological and ecological applications. Since the literature on clustering space distortion is oriented to quantitative input data, we adapt its basic concepts to input data with only ordinal significance and analyze the space-distortion properties of PLL methods. To enable PLL methods to be used when the number n of objects being clustered is large, we describe an efficient PLL algorithm that operates in O(n^2 log n) time and O(n^2) space. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung.

5.
A dissimilarity D on a finite set S is said to be Robinsonian if S can be totally ordered in such a way that, for every i < j < k, D(i, j) ≤ D(i, k) and D(j, k) ≤ D(i, k). Intuitively, D is Robinsonian if S can be represented by points on a line. Recognizing Robinsonian dissimilarities has many applications in seriation and classification. In this paper, we present an optimal O(n^2) algorithm to recognize Robinsonian dissimilarities, where n is the cardinality of S. Our result improves on the previously known algorithms.

6.
O(n^4), where n is the number of objects. We describe the application of the MVR method to two data models: the weighted least-squares (WLS) model (V is diagonal), where the MVR method can be reduced to an O(n^3) time complexity; and a model arising from the study of biological sequences, which involves a complex non-diagonal V matrix that is estimated from the dissimilarity matrix Δ. For both models, we provide simulation results that show a significant error reduction in the reconstruction of T, relative to classical agglomerative algorithms.

7.
Optimal algorithms for comparing trees with labeled leaves
Let R_n denote the set of rooted trees with n leaves in which the leaves are labeled by the integers in {1, ..., n} and, among interior vertices, only the root may have degree two. Associated with each interior vertex v in such a tree is the subset, or cluster, of leaf labels in the subtree rooted at v. Cluster {1, ..., n} is called trivial. Clusters are used in quantitative measures of similarity, dissimilarity and consensus among trees. For any k trees in R_n, the strict consensus tree C(T_1, ..., T_k) is that tree in R_n containing exactly those clusters common to every one of the k trees. Similarity between trees T_1 and T_2 in R_n is measured by the number S(T_1, T_2) of nontrivial clusters in both T_1 and T_2; dissimilarity, by the number D(T_1, T_2) of clusters in T_1 or T_2 but not in both. Algorithms are known to compute C(T_1, ..., T_k) in O(kn^2) time, and S(T_1, T_2) and D(T_1, T_2) in O(n^2) time. I propose a special representation of the clusters of any tree T in R_n, one that permits testing in constant time whether a given cluster exists in T. I describe algorithms that exploit this representation to compute C(T_1, ..., T_k) in O(kn) time, and S(T_1, T_2) and D(T_1, T_2) in O(n) time. These algorithms are optimal in a technical sense. They enable well-known indices of consensus between two trees to be computed in O(n) time. All these results apply as well to comparable problems involving unrooted trees with labeled leaves. The Natural Sciences and Engineering Research Council of Canada partially supported this work with grant A-4142.
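
To make the cluster-based measures concrete, here is a naive Python sketch that represents each tree by its set of clusters (frozensets of leaf labels) and computes S and D by plain set operations; this hash-based approach is not the constant-time cluster representation or the O(n) algorithms proposed in the paper, and the example trees are assumptions.

    def nontrivial_clusters(tree_clusters, n):
        """Drop the trivial cluster {1, ..., n}; clusters are frozensets of leaf labels."""
        return {c for c in tree_clusters if c != frozenset(range(1, n + 1))}

    def similarity_S(t1, t2, n):
        """Number of nontrivial clusters shared by both trees."""
        return len(nontrivial_clusters(t1, n) & nontrivial_clusters(t2, n))

    def dissimilarity_D(t1, t2):
        """Number of clusters in exactly one of the two trees (symmetric difference)."""
        return len(t1 ^ t2)

    # Two rooted trees on leaves {1,...,4}, given by their cluster sets:
    # T1 = ((1,2),(3,4)),  T2 = (((1,2),3),4)
    T1 = {frozenset({1, 2}), frozenset({3, 4}), frozenset({1, 2, 3, 4})}
    T2 = {frozenset({1, 2}), frozenset({1, 2, 3}), frozenset({1, 2, 3, 4})}
    print(similarity_S(T1, T2, 4))    # 1  (shared cluster {1,2})
    print(dissimilarity_D(T1, T2))    # 2  (clusters {3,4} and {1,2,3})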

8.
Given k rooted binary trees A_1, A_2, ..., A_k, with labeled leaves, we generate C, a unique system of lineage constraints on common ancestors. We then present an algorithm for constructing the set of rooted binary trees B compatible with all of A_1, A_2, ..., A_k. The running time to obtain one such supertree is O(k^2 n^2), where n is the number of distinct leaves in all of the trees A_1, A_2, ..., A_k.

9.
Efficient algorithms for agglomerative hierarchical clustering methods
Whenever n objects are characterized by a matrix of pairwise dissimilarities, they may be clustered by any of a number of sequential, agglomerative, hierarchical, nonoverlapping (SAHN) clustering methods. These SAHN clustering methods are defined by a paradigmatic algorithm that usually requires O(n^3) time, in the worst case, to cluster the objects. An improved algorithm (Anderberg 1973), while still requiring O(n^3) worst-case time, can reasonably be expected to exhibit O(n^2) expected behavior. By contrast, we describe a SAHN clustering algorithm that requires O(n^2 log n) time in the worst case. When SAHN clustering methods exhibit reasonable space distortion properties, further improvements are possible. We adapt a SAHN clustering algorithm, based on the efficient construction of nearest neighbor chains, to obtain a reasonably general SAHN clustering algorithm that requires in the worst case O(n^2) time and space.

Whenever n objects are characterized by k-tuples of real numbers, they may be clustered by any of a family of centroid SAHN clustering methods. These methods are based on a geometric model in which clusters are represented by points in k-dimensional real space and points being agglomerated are replaced by a single (centroid) point. For this model, we have solved a class of special packing problems involving point-symmetric convex objects and have exploited it to design an efficient centroid clustering algorithm. Specifically, we describe a centroid SAHN clustering algorithm that requires O(n^2) time, in the worst case, for fixed k and for a family of dissimilarity measures including the Manhattan, Euclidean, Chebychev and all other Minkowski metrics. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung.
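
A minimal sketch of the paradigmatic O(n^3) SAHN algorithm mentioned above, specialized to complete linkage, is shown below; it repeatedly scans the dissimilarity matrix for the closest pair of clusters and merges them, and is not the paper's O(n^2 log n) or nearest-neighbor-chain algorithms. The function name and data are illustrative assumptions.

    import numpy as np

    def naive_complete_linkage(d):
        """Naive O(n^3) SAHN paradigm: repeatedly merge the closest pair of
        clusters and update inter-cluster dissimilarities (complete linkage:
        new dissimilarity = max of the two old ones). Returns the merge list."""
        d = d.astype(float)
        n = d.shape[0]
        active = list(range(n))
        merges = []
        while len(active) > 1:
            # find the closest pair among active clusters
            best = (np.inf, None, None)
            for ai in range(len(active)):
                for aj in range(ai + 1, len(active)):
                    i, j = active[ai], active[aj]
                    if d[i, j] < best[0]:
                        best = (d[i, j], i, j)
            dist, i, j = best
            merges.append((i, j, float(dist)))
            # complete-linkage update: cluster i absorbs cluster j
            for k in active:
                if k not in (i, j):
                    d[i, k] = d[k, i] = max(d[i, k], d[j, k])
            active.remove(j)
        return merges

    d = np.array([[0, 2, 6, 10],
                  [2, 0, 5, 9],
                  [6, 5, 0, 4],
                  [10, 9, 4, 0]])
    print(naive_complete_linkage(d))
    # [(0, 1, 2.0), (2, 3, 4.0), (0, 2, 10.0)]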

10.
Measurements of p variables for n samples are collected into an n×p matrix X, where the samples belong to one of k groups. The group means are separated by Mahalanobis distances. Canonical variate analysis (CVA) optimally represents the group means of X in an r-dimensional space. This can be done by maximizing a ratio criterion (basically one-dimensional) or, more flexibly, by minimizing a rank-constrained least-squares fitting criterion (which is not confined to being one-dimensional but depends on defining an appropriate Mahalanobis metric). In modern n < p problems, where the within-groups dispersion matrix W is not of full rank, the ratio criterion is shown not to be coherent, but the fit criterion, with attention to the associated metrics, readily generalizes. In this context we give a unified generalization of CVA, introducing two metrics, one in the range space of W and the other in the null space of W, that have links with Mahalanobis distance. This generalization is computationally efficient, since it requires only the spectral decomposition of an n×n matrix.
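
For orientation, the classical computation that CVA starts from, Mahalanobis distances between group means under a pooled within-groups covariance, can be sketched as follows; this assumes n > p (so the pooled covariance is invertible), which is exactly the assumption the paper's generalization removes. Reading W as the pooled within-groups covariance, as well as the helper name and simulated data, are assumptions of this sketch.

    import numpy as np

    def mahalanobis_group_distances(X, groups):
        """Pairwise Mahalanobis distances between group means, using the
        pooled within-groups covariance as the metric (requires n > p)."""
        labels = np.unique(groups)
        means = np.array([X[groups == g].mean(axis=0) for g in labels])
        centered = np.vstack([X[groups == g] - X[groups == g].mean(axis=0) for g in labels])
        W = centered.T @ centered / (len(X) - len(labels))   # pooled within-groups covariance
        W_inv = np.linalg.inv(W)
        k = len(labels)
        D = np.zeros((k, k))
        for a in range(k):
            for b in range(k):
                diff = means[a] - means[b]
                D[a, b] = np.sqrt(diff @ W_inv @ diff)
        return labels, D

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (20, 3)), rng.normal(2, 1, (20, 3))])
    groups = np.array([0] * 20 + [1] * 20)
    print(mahalanobis_group_distances(X, groups)[1])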

11.
Let 𝒢 = (G, w) be a weighted simple finite connected graph, that is, let G be a simple finite connected graph endowed with a function w from the set of the edges of G to the set of real numbers. For any subgraph G′ of G, we define w(G′) to be the sum of the weights of the edges of G′. For any two vertices i, j of G, we define D_{i,j}(𝒢) to be the minimum of the weights of the simple paths of G joining i and j. The D_{i,j}(𝒢) are called the 2-weights of 𝒢. Weighted graphs and their reconstruction from 2-weights have applications in several disciplines, such as biology and psychology. Let {m_I} and {M_I}, indexed by the 2-subsets I of {1, ..., n}, be two families of positive real numbers with m_I ≤ M_I for every I; we study when there exist a positive-weighted graph 𝒢 and an n-subset {1, ..., n} of the set of its vertices such that D_I(𝒢) ∈ [m_I, M_I] for every 2-subset I of {1, ..., n}. Then we study the analogous problem for trees, both in the case of positive weights and in the case of general weights.
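
Under the additional assumption that all weights are positive, the 2-weights coincide with shortest-path distances, so they can be computed with a standard Floyd-Warshall pass, as in the Python sketch below; with general (possibly negative) weights, which the abstract also considers for trees, this shortcut does not apply. The function name and example graph are assumptions.

    import math

    def two_weights(n, edges):
        """Compute all 2-weights D_{i,j} of a positive-weighted graph by
        Floyd-Warshall: with positive weights, the minimum weight of a simple
        path between i and j equals the shortest-path distance."""
        D = [[math.inf] * n for _ in range(n)]
        for i in range(n):
            D[i][i] = 0.0
        for (i, j, w) in edges:
            D[i][j] = D[j][i] = min(D[i][j], w)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    if D[i][k] + D[k][j] < D[i][j]:
                        D[i][j] = D[i][k] + D[k][j]
        return D

    # Weighted 4-cycle with a chord (vertices 0..3).
    edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.5), (3, 0, 4.0), (0, 2, 2.5)]
    D = two_weights(4, edges)
    print(D[0][3])   # 4.0: the direct edge ties the path 0-2-3 (2.5 + 1.5)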

12.
We present a new distance-based quartet method for phylogenetic tree reconstruction, called Minimum Tree Cost Quartet Puzzling. Starting from a distance matrix computed from natural data, the algorithm incrementally constructs a tree by adding one taxon at a time to the intermediary tree, using a cost function based on the relaxed 4-point condition for weighting quartets. Different input orders of taxa lead to trees having distinct topologies, which can be evaluated using a maximum likelihood or weighted least squares optimality criterion. Using reduced sets of quartets and a simple heuristic tree search strategy, we obtain an overall complexity of O(n^5 log^2 n) for the algorithm. We evaluate the performance of the method through comparative tests and show that our method outperforms NJ when a weighted least squares optimality criterion is employed. We also discuss the theoretical boundaries of the algorithm.
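
The weighting of quartets is based on a relaxed four-point condition; as a simplified stand-in (not the paper's exact cost function), the Python sketch below uses the classical four-point condition to pick the quartet topology best supported by a distance matrix. The function name and data are assumptions.

    def quartet_topology(d, i, j, k, l):
        """Pick the quartet topology supported by the classical four-point
        condition: among the three ways to pair up the four taxa, the pairing
        whose two within-pair distances have the smallest sum is preferred
        (ij|kl means i,j on one side of the internal edge, k,l on the other)."""
        sums = {
            "ij|kl": d[i][j] + d[k][l],
            "ik|jl": d[i][k] + d[j][l],
            "il|jk": d[i][l] + d[j][k],
        }
        return min(sums, key=sums.get)

    # Additive (tree-like) distances on 4 taxa where 0,1 and 2,3 are siblings.
    d = [[0, 2, 7, 8],
         [2, 0, 7, 8],
         [7, 7, 0, 3],
         [8, 8, 3, 0]]
    print(quartet_topology(d, 0, 1, 2, 3))   # ij|kl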

13.
Clustering with a criterion which minimizes the sum of squared distances to cluster centroids is usually done in a heuristic way. An exact polynomial algorithm, with a complexity in O(N^(p+1) log N), is proposed for minimum sum of squares hierarchical divisive clustering of points in a p-dimensional space with small p. Empirical complexity is one order of magnitude lower. Data sets with N = 20000 for p = 2, N = 1000 for p = 3, and N = 200 for p = 4 are clustered in a reasonable computing time.
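
The criterion itself, the sum over clusters of squared Euclidean distances to the cluster centroid, can be evaluated with a few lines of Python, as sketched below; this only evaluates the objective and is not the paper's exact O(N^(p+1) log N) divisive algorithm. Names and data are assumptions.

    import numpy as np

    def sum_of_squares_criterion(X, labels):
        """Sum over clusters of squared Euclidean distances of points to their
        cluster centroid, i.e., the objective the divisive algorithm minimizes."""
        total = 0.0
        for c in np.unique(labels):
            pts = X[labels == c]
            centroid = pts.mean(axis=0)
            total += ((pts - centroid) ** 2).sum()
        return total

    X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]])
    print(sum_of_squares_criterion(X, np.array([0, 0, 1, 1])))   # 1.0
    print(sum_of_squares_criterion(X, np.array([0, 1, 0, 1])))   # 51.0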

14.
Consider N entities to be classified (e.g., geographical areas), a matrix of dissimilarities between pairs of entities, and a graph H with vertices associated with these entities such that the edges join the vertices corresponding to contiguous entities. The split of a cluster is the smallest dissimilarity between an entity of this cluster and an entity outside of it. The single-linkage algorithm (ignoring contiguity between entities) provides partitions into M clusters for which the smallest split of the clusters, called the split of the partition, is maximum. We study here the partitioning of the set of entities into M connected clusters for all M between N - 1 and 2 (i.e., clusters such that the subgraphs of H induced by their corresponding sets of entities are connected) with maximum split subject to that condition. We first provide an exact algorithm with O(N^2) complexity for the particular case in which H is a tree. This algorithm suggests in turn a first heuristic algorithm for the general problem. Several variants of this heuristic are also explored. We then present an exact algorithm for the general case based on iterative determination of cocycles of subtrees and on the solution of auxiliary set covering problems. As solution of the latter problems is time-consuming for large instances, we provide another heuristic in which the auxiliary set covering problems are solved approximately. Computational results obtained with the exact and heuristic algorithms are presented on test problems from the literature.
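
As a small illustration of the criterion (not of the exact or heuristic algorithms of the paper), the split of a given partition, the smallest dissimilarity between two entities lying in different clusters, can be computed as in the following sketch; names and data are assumptions.

    import numpy as np

    def split_of_partition(d, labels):
        """Split of a partition: the smallest dissimilarity between two
        entities lying in different clusters (equivalently, the minimum over
        clusters of their splits)."""
        n = len(labels)
        best = np.inf
        for i in range(n):
            for j in range(i + 1, n):
                if labels[i] != labels[j]:
                    best = min(best, d[i, j])
        return best

    d = np.array([[0, 1, 4, 5],
                  [1, 0, 3, 6],
                  [4, 3, 0, 2],
                  [5, 6, 2, 0]], dtype=float)
    print(split_of_partition(d, np.array([0, 0, 1, 1])))   # 3.0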

15.
The formalism of abstracted quantum mechanics is applied in a model of the generalized Liar Paradox. Here, the Liar Paradox, a consistently testable configuration of logical truth properties, is considered a dynamic conceptual entity in the cognitive sphere (Aerts, Broekaert, & Smets, Foundations of Science, 1999, 4, 115–132; International Journal of Theoretical Physics, 2000, 38, 3231–3239; Aerts and colleagues, Dialogue in Psychology, 1999, 10; Proceedings of Fundamental Approaches to Consciousness, Tokyo ’99; Mind in Interaction). Basically, the intrinsic contextuality of the truth value of the Liar Paradox is appropriately covered by the abstracted quantum mechanical approach. The formal details of the model are made explicit here for the generalized case. We prove the possibility of constructing a quantum model of the m-sentence generalizations of the Liar Paradox. This includes: (i) the truth–falsehood state of the m-Liar Paradox can be represented by an embedded 2m-dimensional quantum vector in a (2m)^m-dimensional complex Hilbert space, with cognitive interactions corresponding to projections; (ii) the construction of a continuous ‘time’ dynamics is possible: typical truth and falsehood value oscillations are described by Schrödinger evolution; (iii) the Birkhoff and von Neumann axioms are satisfied by the introduction of ‘truth-value by inference’ projectors; (iv) time invariance of the unmeasured state.

16.
Aggregation of equivalence relations
Each of n attributes partitions a set of items into equivalence classes. A consistent aggregator of the n partitions is defined as an aggregate partition that satisfies an independence condition and a unanimity condition. It is shown that the class of consistent aggregators is precisely the class of conjunctive aggregators. That is, for each consistent aggregator there is a nonempty subset N of the attributes such that two items are equivalent in the aggregate partition if and only if they are equivalent with respect to each attribute in N.
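
A conjunctive aggregator is easy to state in code: two items are placed in the same aggregate class exactly when they agree on every attribute in the chosen subset N. The Python sketch below is a minimal illustration with assumed attribute labelings, not an implementation from the paper.

    def conjunctive_aggregate(partitions, N):
        """Conjunctive aggregation: given per-attribute labelings (item -> class),
        two items fall in the same aggregate class iff they agree on every
        attribute in the subset N; the aggregate class of an item is therefore
        the tuple of its classes on N."""
        items = partitions[0].keys()
        return {x: tuple(partitions[a][x] for a in N) for x in items}

    # Three attributes partitioning items a-d into classes.
    partitions = [
        {"a": 1, "b": 1, "c": 2, "d": 2},   # attribute 0
        {"a": 1, "b": 2, "c": 2, "d": 2},   # attribute 1
        {"a": 1, "b": 1, "c": 1, "d": 2},   # attribute 2
    ]
    print(conjunctive_aggregate(partitions, N=[0, 2]))
    # {'a': (1, 1), 'b': (1, 1), 'c': (2, 1), 'd': (2, 2)}  ->  classes {a,b}, {c}, {d}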

17.
Founding our analysis on the Geneva-Brussels approach to the foundations of physics, we provide a clarification and classification of the key concept of observation. An entity can be observed with or without a scope. In the latter case (without a scope), the observation is a purely non-invasive discovery process; in the former case (with a scope), it is a purely invasive process, which can involve either creation or destruction aspects. An entity can also be observed with or without full control over the observational process. In the latter case, the observation can be described by a symmetry-breaking mechanism, through which a specific deterministic observational process is selected among a number of potential ones, as explained in Aerts’ hidden measurement approach. This is what is called a product test, or product observation, whose consequence is that outcomes can only be predicted in probabilistic terms, as is the case in typical quantum measurements. We also show that observations can be about intrinsic (stable) properties of the observed entity, or about relational (ephemeral) properties between the observer and observed entities; also, they can be about intermediate properties, neither purely classical nor purely quantum. Our analysis allows us to propose a general conceptual characterization of quantum measurements as observational processes involving three aspects: (1) product observations, (2) pure creation aspects, and (3) ephemeral relational properties. We also discuss the important concept of non-spatiality and emphasize some of the differences and similarities between quantum and classical/relativistic observations.

18.
X is the automatic hierarchical classification of one mode (units, variables, or occasions) of X on the basis of the other two. In this paper the case of OMC of units according to variables and occasions is discussed. OMC is the synthesis of a set of hierarchical classifications Δ obtained from X; e.g., the OMC of units is the consensus (synthesis) among the set of dendrograms individually defined by clustering units on the basis of variables, separately for each given occasion of X. However, because Δ is often formed by a large number of classifications, it may be unrealistic that a single synthesis is representative of the entire set. In this case, subsets of similar (homogeneous) dendrograms may be found in Δ so that a consensus representative of each subset may be identified. This paper proposes PARtition and Least Squares Consensus cLassifications Analysis (PARLSCLA) of a set of r hierarchical classifications Δ. PARLSCLA identifies the best least-squares partition of Δ into m (1 ≤ m ≤ r) subsets of homogeneous dendrograms and simultaneously detects the closest consensus classification (a median classification called the Least Squares Consensus Dendrogram, LSCD) for each subset. PARLSCLA is a generalization of the problem of finding a least-squares consensus dendrogram for Δ. PARLSCLA is formalized as a mixed-integer programming problem and solved with an iterative, two-step algorithm. The method proposed is applied to an empirical data set.

19.
Dendrograms used in data analysis are ultrametric spaces, hence objects of nonarchimedean geometry. It is known that there exist p-adic representations of dendrograms. Completed by a point at infinity, they can be viewed as subtrees of the Bruhat-Tits tree associated to the p-adic projective line. The implications are that certain moduli spaces known in algebraic geometry are in fact p-adic parameter spaces of dendrograms, and stochastic classification can also be handled within this framework. At the end, we calculate the topology of the hidden part of a dendrogram.

20.
The theory of the tight span, a cell complex that can be associated to every metric D, offers a unifying view on existing approaches for analyzing distance data, in particular for decomposing a metric D into a sum of simpler metrics as well as for representing it by certain specific edge-weighted graphs, often referred to as realizations of D. Many of these approaches involve the explicit or implicit computation of the so-called cutpoints of (the tight span of) D, such as the algorithm for computing the “building blocks” of optimal realizations of D recently presented by A. Hertz and S. Varone. The main result of this paper is an algorithm for computing the set of these cutpoints for a metric D on a finite set with n elements in O(n^3) time. As a direct consequence, this improves the run time of the aforementioned O(n^6) algorithm by Hertz and Varone by “three orders of magnitude”.

