Similar Literature
20 similar documents found (search time: 756 ms)
1.
Graphical displays which show inter-sample distances are important for the interpretation and presentation of multivariate data. Except when the displays are two-dimensional, however, they are often difficult to visualize as a whole. A device, based on multidimensional unfolding, is described for presenting some intrinsically high-dimensional displays in fewer, usually two, dimensions. This goal is achieved by representing each sample by a pair of points, say R_i and r_i, so that a theoretical distance between the i-th and j-th samples is represented twice, once by the distance between R_i and r_j and once by the distance between R_j and r_i. Self-distances between R_i and r_i need not be zero. The mathematical conditions for unfolding to exhibit symmetry are established. Algorithms for finding approximate fits, not constrained to be symmetric, are discussed and some examples are given.
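As a rough illustration of the pairing idea (a hypothetical sketch, not the authors' fitting procedure; the helper names fit_unfolding and unfolding_stress are invented here), each sample i receives two configuration points R[i] and r[i], and a least-squares configuration can be sought by plain gradient descent on the squared residuals:

```python
# Hypothetical sketch (invented helper names, not the authors' algorithm): each sample i
# is represented twice, by R[i] and r[i], and the target distance delta[i, j] is matched
# both by ||R[i] - r[j]|| and by ||R[j] - r[i]||.  Self-distances are left unconstrained.
import numpy as np

def unfolding_stress(R, r, delta):
    """Sum of squared errors between fitted and target distances, ignoring self-distances."""
    dist = np.linalg.norm(R[:, None, :] - r[None, :, :], axis=2)
    mask = ~np.eye(len(delta), dtype=bool)
    return float(np.sum((dist - delta)[mask] ** 2))

def fit_unfolding(delta, dim=2, steps=500, lr=0.005, seed=0):
    """Crude gradient descent on the stress; real unfolding software uses SMACOF-style updates."""
    rng = np.random.default_rng(seed)
    n = len(delta)
    R = rng.normal(size=(n, dim))
    r = rng.normal(size=(n, dim))
    mask = ~np.eye(n, dtype=bool)
    for _ in range(steps):
        diff = R[:, None, :] - r[None, :, :]           # diff[i, j] = R[i] - r[j]
        dist = np.linalg.norm(diff, axis=2) + 1e-12
        resid = np.where(mask, dist - delta, 0.0)
        g = (resid / dist)[:, :, None] * diff          # pieces of the stress gradient
        R -= 2 * lr * g.sum(axis=1)                    # gradient with respect to R[i]
        r += 2 * lr * g.sum(axis=0)                    # gradient with respect to r[j] (opposite sign)
    return R, r

# Example: fit a doubled 2-D representation of five planar points from their distance matrix.
pts = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1]], dtype=float)
delta = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
R, r = fit_unfolding(delta)
print(unfolding_stress(R, r, delta))                   # residual stress of the fitted pair of configurations
```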

2.
The median procedure for n-trees (total citations: 2, self-citations: 2, citations by others: 0)
Let (X, d) be a metric space. The function M: X^k → 2^X defined by M(x_1, ..., x_k) = {x ∈ X : Σ_i d(x, x_i) is minimum} is called the median procedure and has been found useful in various applications involving the notion of consensus. Here we present axioms that characterize M when X is a certain class of trees (hierarchical classifications) and d is the symmetric difference metric. We would like to thank the referees and Editor for helpful comments.
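To make the definition concrete, a brute-force sketch for a finite metric space given by a distance matrix (hypothetical code; the helper median_procedure is not from the paper) returns every point minimizing the summed distance to the profile:

```python
# Hypothetical illustration (not from the paper): the median procedure on a finite metric
# space, given by a distance matrix, returns every point minimizing the total distance to the profile.
import numpy as np

def median_procedure(dist, profile):
    """dist: n x n symmetric distance matrix; profile: indices x_1, ..., x_k (repeats allowed)."""
    totals = dist[:, profile].sum(axis=1)      # total distance from each candidate to the profile
    return set(map(int, np.flatnonzero(totals == totals.min())))

# Four points on a line at coordinates 0, 1, 2, 10; the median of the profile (0, 1, 10) is the point 1.
pts = np.array([0.0, 1.0, 2.0, 10.0])
dist = np.abs(pts[:, None] - pts[None, :])
print(median_procedure(dist, [0, 1, 3]))       # {1}
```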

3.
In this paper, we establish that the following fitting problem is NP-hard: given a finite set X and a dissimilarity measure d on X (d is a symmetric function from X^2 to the nonnegative real numbers, vanishing on the diagonal), we wish to find a Robinsonian dissimilarity d_R on X minimizing the l_∞-error ||d − d_R||_∞ = max_{x,y ∈ X} |d(x, y) − d_R(x, y)| between d and d_R. Recall that a dissimilarity on X is called monotone (or Robinsonian) if there exists a total order ≺ on X such that x ≺ z ≺ y implies that d(x, y) ≥ max{d(x, z), d(z, y)}. Robinsonian dissimilarities appear in seriation and clustering problems, in sparse matrix ordering, and in DNA sequencing.
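The two ingredients of this fitting problem can be sketched in a few lines (hypothetical helper names; this is not the NP-hardness construction itself): checking the Robinson condition along a given order, and measuring the l_∞ error between two dissimilarity matrices.

```python
import numpy as np

def is_robinsonian_for_order(d, order):
    """Check d(x, y) >= max(d(x, z), d(z, y)) for every x < z < y along the given total order."""
    n = len(order)
    for a in range(n):
        for c in range(a + 2, n):
            for b in range(a + 1, c):
                x, z, y = order[a], order[b], order[c]
                if d[x, y] < max(d[x, z], d[z, y]):
                    return False
    return True

def linf_error(d, d_R):
    """l_infinity distance between two dissimilarity matrices."""
    return float(np.max(np.abs(d - d_R)))
```

Checking a fixed order this way takes O(n^3) time; the paper's point is that finding the best-fitting d_R under the l_∞ error is NP-hard.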

4.
Given k rooted binary trees A_1, A_2, ..., A_k with labeled leaves, we generate C, a unique system of lineage constraints on common ancestors. We then present an algorithm for constructing the set of rooted binary trees B compatible with all of A_1, A_2, ..., A_k. The running time to obtain one such supertree is O(k^2 n^2), where n is the number of distinct leaves in all of the trees A_1, A_2, ..., A_k.

5.
Optimal algorithms for comparing trees with labeled leaves (total citations: 2, self-citations: 1, citations by others: 1)
Let R_n denote the set of rooted trees with n leaves in which: the leaves are labeled by the integers in {1, ..., n}; and among interior vertices only the root may have degree two. Associated with each interior vertex v in such a tree is the subset, or cluster, of leaf labels in the subtree rooted at v. Cluster {1, ..., n} is called trivial. Clusters are used in quantitative measures of similarity, dissimilarity and consensus among trees. For any k trees in R_n, the strict consensus tree C(T_1, ..., T_k) is that tree in R_n containing exactly those clusters common to every one of the k trees. Similarity between trees T_1 and T_2 in R_n is measured by the number S(T_1, T_2) of nontrivial clusters in both T_1 and T_2; dissimilarity, by the number D(T_1, T_2) of clusters in T_1 or T_2 but not in both. Algorithms are known to compute C(T_1, ..., T_k) in O(kn^2) time, and S(T_1, T_2) and D(T_1, T_2) in O(n^2) time. I propose a special representation of the clusters of any tree T ∈ R_n, one that permits testing in constant time whether a given cluster exists in T. I describe algorithms that exploit this representation to compute C(T_1, ..., T_k) in O(kn) time, and S(T_1, T_2) and D(T_1, T_2) in O(n) time. These algorithms are optimal in a technical sense. They enable well-known indices of consensus between two trees to be computed in O(n) time. All these results apply as well to comparable problems involving unrooted trees with labeled leaves. The Natural Sciences and Engineering Research Council of Canada partially supported this work with grant A-4142.
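With a naive set representation of clusters (hypothetical code; the paper's contribution is precisely a representation that avoids this kind of set handling and reaches O(kn) and O(n)), the strict consensus and the counts S and D reduce to set intersection and symmetric difference:

```python
# Hypothetical code using a naive set representation of nontrivial clusters.
def strict_consensus_clusters(trees):
    """trees: list of cluster sets, each a set of frozensets of leaf labels (nontrivial clusters only)."""
    return set.intersection(*trees)

def similarity(t1, t2):
    return len(t1 & t2)        # S(T1, T2): nontrivial clusters present in both trees

def dissimilarity(t1, t2):
    return len(t1 ^ t2)        # D(T1, T2): clusters present in exactly one of the trees

# Two rooted trees on leaves 1..4, each given by its set of nontrivial clusters.
t1 = {frozenset({1, 2}), frozenset({1, 2, 3})}
t2 = {frozenset({1, 2}), frozenset({3, 4})}
print(strict_consensus_clusters([t1, t2]), similarity(t1, t2), dissimilarity(t1, t2))
```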

6.
Efficient algorithms for agglomerative hierarchical clustering methods (total citations: 11, self-citations: 4, citations by others: 7)
Whenever n objects are characterized by a matrix of pairwise dissimilarities, they may be clustered by any of a number of sequential, agglomerative, hierarchical, nonoverlapping (SAHN) clustering methods. These SAHN clustering methods are defined by a paradigmatic algorithm that usually requires O(n^3) time, in the worst case, to cluster the objects. An improved algorithm (Anderberg 1973), while still requiring O(n^3) worst-case time, can reasonably be expected to exhibit O(n^2) expected behavior. By contrast, we describe a SAHN clustering algorithm that requires O(n^2 log n) time in the worst case. When SAHN clustering methods exhibit reasonable space distortion properties, further improvements are possible. We adapt a SAHN clustering algorithm, based on the efficient construction of nearest neighbor chains, to obtain a reasonably general SAHN clustering algorithm that requires, in the worst case, O(n^2) time and space. Whenever n objects are characterized by k-tuples of real numbers, they may be clustered by any of a family of centroid SAHN clustering methods. These methods are based on a geometric model in which clusters are represented by points in k-dimensional real space and points being agglomerated are replaced by a single (centroid) point. For this model, we have solved a class of special packing problems involving point-symmetric convex objects and have exploited it to design an efficient centroid clustering algorithm. Specifically, we describe a centroid SAHN clustering algorithm that requires O(n^2) time, in the worst case, for fixed k and for a family of dissimilarity measures including the Manhattan, Euclidean, Chebychev and all other Minkowski metrics. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung.
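A compact sketch of the nearest-neighbour-chain idea (hypothetical code, shown for average linkage, one of the reducible methods; it scans for nearest neighbours naively and assumes consistent tie-breaking, unlike the dedicated structures used in the paper):

```python
# Hypothetical sketch of the nearest-neighbour-chain idea behind O(n^2) SAHN clustering.
import numpy as np

def nn_chain_average(D):
    """D: n x n symmetric dissimilarity matrix. Returns a merge list [(a, b, height), ...]."""
    D = np.asarray(D, dtype=float).copy()
    np.fill_diagonal(D, np.inf)
    size = {i: 1 for i in range(len(D))}        # active cluster indices and their sizes
    merges, chain = [], []
    while len(size) > 1:
        if not chain:
            chain.append(next(iter(size)))      # start a chain from any active cluster
        while True:                             # grow the chain by nearest neighbours
            a = chain[-1]
            b = min((c for c in size if c != a), key=lambda c: D[a, c])
            if len(chain) >= 2 and b == chain[-2]:
                break                           # a and b are reciprocal nearest neighbours
            chain.append(b)
        chain.pop(); chain.pop()                # drop the reciprocal pair from the chain
        merges.append((a, b, float(D[a, b])))
        # Lance-Williams update for average linkage; the merged cluster keeps index a.
        for c in size:
            if c not in (a, b):
                D[a, c] = D[c, a] = (size[a] * D[a, c] + size[b] * D[b, c]) / (size[a] + size[b])
        size[a] += size.pop(b)
    return merges

# Example: four points on a line.
pts = np.array([[0.0], [1.0], [1.1], [5.0]])
print(nn_chain_average(np.abs(pts - pts.T)))
```

The naive neighbour search keeps this sketch around O(n^2) overall because the chain grows only O(n) times in total, which is the behaviour the reducibility argument exploits.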

7.
Clustering with a criterion which minimizes the sum of squared distances to cluster centroids is usually done in a heuristic way. An exact polynomial algorithm, with a complexity in O(N^(p+1) log N), is proposed for minimum sum-of-squares hierarchical divisive clustering of points in a p-dimensional space with small p. Empirical complexity is one order of magnitude lower. Data sets with N = 20000 for p = 2, N = 1000 for p = 3, and N = 200 for p = 4 are clustered in a reasonable computing time.
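For reference, the criterion being minimized is the within-cluster sum of squared distances to centroids; a minimal helper (hypothetical name, not the paper's exact algorithm) computes it for a given partition:

```python
# Minimal helper (hypothetical name) for the sum-of-squares criterion referred to above.
import numpy as np

def within_cluster_ss(points, labels):
    """Sum, over clusters, of squared Euclidean distances from points to their cluster centroid."""
    total = 0.0
    for lab in np.unique(labels):
        cluster = points[labels == lab]
        centroid = cluster.mean(axis=0)
        total += ((cluster - centroid) ** 2).sum()
    return total

pts = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
print(within_cluster_ss(pts, np.array([0, 0, 1, 1])))   # 0.5 + 0.5 = 1.0
```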

8.
Given a three-way data set X, OMC is the automatic hierarchical classification of one mode (units, variables, or occasions) of X on the basis of the other two. In this paper the case of OMC of units according to variables and occasions is discussed. OMC is the synthesis of a set of hierarchical classifications Δ obtained from X; e.g., the OMC of units is the consensus (synthesis) among the set of dendrograms individually defined by clustering units on the basis of the variables, separately for each given occasion of X. However, because Δ is often formed by a large number of classifications, it may be unrealistic that a single synthesis is representative of the entire set. In this case, subsets of similar (homogeneous) dendrograms may be found in Δ so that a consensus representative of each subset may be identified. This paper proposes PARtition and Least Squares Consensus cLassifications Analysis (PARLSCLA) of a set of r hierarchical classifications Δ. PARLSCLA identifies the best least-squares partition of Δ into m (1 ≤ m ≤ r) subsets of homogeneous dendrograms and simultaneously detects the closest consensus classification (a median classification called the Least Squares Consensus Dendrogram (LSCD)) for each subset. PARLSCLA is a generalization of the problem of finding a least-squares consensus dendrogram for Δ. PARLSCLA is formalized as a mixed-integer programming problem and solved with an iterative, two-step algorithm. The method proposed is applied to an empirical data set.

9.
NP-hard Approximation Problems in Overlapping Clustering (total citations: 1, self-citations: 1, citations by others: 0)
L_p-norm (p < ∞). These problems also correspond to the approximation by a strongly Robinson dissimilarity or by a dissimilarity fulfilling the four-point inequality (Bandelt 1992; Diatta and Fichet 1994). The results are extended to circular strongly Robinson dissimilarities, indexed k-hierarchies (Jardine and Sibson 1971, pp. 65-71), and to proper dissimilarities satisfying the Bertrand and Janowitz (k + 2)-point inequality (Bertrand and Janowitz 1999). Unidimensional scaling (linear or circular) is reinterpreted as a clustering problem and its hardness is established, but only for the L_1 norm.

10.
n-Way Metrics     
We study a family of n-way metrics that generalize the usual two-way metric. The n-way metrics are totally symmetric maps from E^n into the nonnegative real numbers. The three-way metrics introduced by Joly and Le Calvé (1995) and Heiser and Bennani (1997) and the n-way metrics studied in Deza and Rosenberg (2000) belong to this family. It is shown how n-way metrics and n-way distance measures are related to (n − 1)-way metrics and (n − 1)-way distance measures, respectively.
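One simple, commonly cited example of a totally symmetric three-way dissimilarity (a hypothetical illustration, not the paper's construction, and not claimed here to satisfy any particular axiom system from the paper) is the perimeter-type function obtained by summing the three pairwise Euclidean distances:

```python
# Hypothetical illustration: the "perimeter" three-way dissimilarity of three Euclidean points.
# It is totally symmetric and vanishes when the three arguments coincide.
import numpy as np

def perimeter_distance(x, y, z):
    x, y, z = map(np.asarray, (x, y, z))
    return float(np.linalg.norm(x - y) + np.linalg.norm(y - z) + np.linalg.norm(x - z))

print(perimeter_distance([0, 0], [3, 0], [0, 4]))   # 3 + 5 + 4 = 12.0
```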

11.
Suppose y, a d-dimensional (d ≥ 1) vector, is drawn from a mixture of k (k ≥ 2) populations Π_1, Π_2, ..., Π_k. We wish to identify the population that is the most likely source of the point y. To solve this classification problem, many classification rules have been proposed in the literature. In this study, a new nonparametric classifier based on the transvariation probabilities of data depth is proposed. We compare the performance of the newly proposed nonparametric classifier with classical and maximum-depth classifiers using some benchmark and simulated data sets. The authors thank the editor and referees for comments that led to an improvement of this paper. This work is partially supported by the National Science Foundation under Grant No. DMS-0604726.
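A simplified maximum-depth classifier based on Mahalanobis depth (a hypothetical sketch for comparison purposes; it is not the transvariation-probability rule proposed in the paper) can be written as:

```python
# Simplified maximum-depth classifier using Mahalanobis depth (hypothetical code).
import numpy as np

def mahalanobis_depth(y, sample):
    """Mahalanobis depth of point y with respect to a sample (rows are observations)."""
    mu = sample.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = y - mu
    return 1.0 / (1.0 + diff @ S_inv @ diff)

def max_depth_classify(y, samples):
    """Assign y to the population in which it attains the largest depth."""
    return int(np.argmax([mahalanobis_depth(y, s) for s in samples]))

rng = np.random.default_rng(0)
pop1 = rng.normal(loc=[0.0, 0.0], size=(200, 2))
pop2 = rng.normal(loc=[3.0, 3.0], size=(200, 2))
print(max_depth_classify(np.array([2.5, 2.8]), [pop1, pop2]))   # expected: 1
```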

12.
A dissimilarity D on a finite set S is said to be Robinsonian if S can be totally ordered in such a way that, for every i < j < k, D(i, j) ≤ D(i, k) and D(j, k) ≤ D(i, k). Intuitively, D is Robinsonian if S can be represented by points on a line. Recognizing Robinsonian dissimilarities has many applications in seriation and classification. In this paper, we present an optimal O(n^2) algorithm to recognize Robinsonian dissimilarities, where n is the cardinality of S. Our result improves on previously known algorithms.
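For contrast with the optimal O(n^2) algorithm, a brute-force recognizer (hypothetical code, roughly O(n! · n^3), feasible only for very small n) simply tries every total order of S:

```python
# Brute-force recognizer included purely to make the definition executable (hypothetical code).
from itertools import permutations
import numpy as np

def is_robinsonian(D):
    """Does some total order make D Robinsonian?  Exhaustive search over all orders."""
    n = len(D)
    for order in permutations(range(n)):
        if all(D[order[i], order[k]] >= max(D[order[i], order[j]], D[order[j], order[k]])
               for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n)):
            return True
    return False

pts = np.array([0.0, 1.0, 3.0, 7.0])                      # distances of points on a line
print(is_robinsonian(np.abs(pts[:, None] - pts[None, :])))  # True
```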

13.
L_p similarity function, the L_p-transform and the Minkowski-p distance. For triadic distance models defined by the L_p-transform we will prove that they do not model three-way association. Moreover, triadic distance models defined by the L_p-transform are restricted multiple dyadic distances, where each dyadic distance is defined for a two-way margin of the three-way table. Distance models for three-way two-mode data, called three-way distance models, do succeed in modeling three-way association.

14.
A cluster diagram is a rooted planar tree that depicts the hierarchical agglomeration of objects into groups of increasing size. On the null hypothesis that at each stage of the clustering procedure all possible joins are equally probable, we derive the probability distributions for two properties of these diagrams: (1) S, the number of single objects previously ungrouped that are joined in the final stages of clustering, and (2) m_k, the number of groups of k+1 objects that are formed during the process. Ecological applications of statistical tests for these properties are described and illustrated with data from weed communities of Saskatchewan fields. This work was supported by the Natural Sciences and Engineering Research Council of Canada.
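The null model lends itself to a quick Monte Carlo sketch (hypothetical code; the paper derives the distributions analytically): at every step two of the current groups are chosen uniformly at random and joined, and the counts m_k of newly formed groups of each size are recorded.

```python
# Monte Carlo sketch of the null model: all joins equally probable at every step (hypothetical code).
import random
from collections import Counter

def simulate_group_sizes(n, rng=random):
    sizes = [1] * n                                 # start from n singleton objects
    formed = Counter()
    while len(sizes) > 1:
        i, j = rng.sample(range(len(sizes)), 2)     # pick two current groups uniformly at random
        new = sizes[i] + sizes[j]
        formed[new] += 1                            # a group of this size has just been formed
        sizes = [s for t, s in enumerate(sizes) if t not in (i, j)] + [new]
    return formed

# Estimate the null distribution of m_1 (number of 2-object groups formed) for n = 10 objects.
counts = Counter(simulate_group_sizes(10)[2] for _ in range(5000))
print(sorted(counts.items()))
```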

15.
In this paper we discuss two approaches to the axiomatization of scientific theories in the context of the so-called semantic approach, according to which (roughly) a theory can be seen as a class of models. The two approaches are associated respectively to Suppes' and to da Costa and Chuaqui's works. We argue that theories can be developed both in a way more akin to the usual mathematical practice (Suppes), in an informal set theoretical environment, writing the set theoretical predicate in the language of set theory itself, or, more rigorously (da Costa and Chuaqui), by employing formal languages that help us in writing the postulates to define a class of structures. Both approaches are called internal, for we work within a mathematical framework, here taken to be first-order ZFC. We contrast these approaches with an external one, here discussed briefly. We argue that each one has its strong and weak points, whose discussion is relevant for the philosophical foundations of science.

16.
We present an O(n^3)-time, O(n^2)-space algorithm to test whether a dissimilarity d on an n-object set X is Robinsonian, i.e., whether X admits an ordering such that i ≤ j ≤ k implies d(x_i, x_k) ≥ max{d(x_i, x_j), d(x_j, x_k)}.

17.
18.
Optimization Strategies for Two-Mode Partitioning (total citations: 2, self-citations: 2, citations by others: 0)
Two-mode partitioning is a relatively new form of clustering that clusters both rows and columns of a data matrix. In this paper, we consider deterministic two-mode partitioning methods in which a criterion similar to k-means is optimized. A variety of optimization methods have been proposed for this type of problem. However, it is still unclear which method should be used, as various methods may lead to non-global optima. This paper reviews and compares several optimization methods for two-mode partitioning. Several known methods are discussed, and a new fuzzy steps method is introduced. The fuzzy steps method is based on the fuzzy c-means algorithm of Bezdek (1981) and the fuzzy steps approach of Heiser and Groenen (1997) and Groenen and Jajuga (2001). The performances of all methods are compared in a large simulation study. In our simulations, a two-mode k-means optimization method most often gives the best results. Finally, an empirical data set is used to give a practical example of two-mode partitioning. We would like to thank two anonymous referees whose comments have improved the quality of this paper. We are also grateful to Peter Verhoef for providing the data set used in this paper.
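A bare-bones alternating scheme for the two-mode k-means-type criterion (a hypothetical sketch, not any of the optimization strategies compared in the paper; real implementations interleave mean updates and guard against empty clusters) looks like this:

```python
# Hypothetical sketch: rows and columns are alternately reassigned so as to reduce the
# sum of squared deviations of the data from their block means.
import numpy as np

def two_mode_kmeans(X, p, q, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    rows = rng.integers(p, size=n)              # random initial row clustering
    cols = rng.integers(q, size=m)              # random initial column clustering
    for _ in range(iters):
        # block means for the current two-mode partition
        means = np.zeros((p, q))
        for a in range(p):
            for b in range(q):
                block = X[np.ix_(rows == a, cols == b)]
                means[a, b] = block.mean() if block.size else X.mean()
        # reassign each row to its best row cluster, holding the column clustering fixed
        row_cost = np.array([[((X[i] - means[a, cols]) ** 2).sum() for a in range(p)]
                             for i in range(n)])
        rows = row_cost.argmin(axis=1)
        # reassign each column to its best column cluster, holding the row clustering fixed
        col_cost = np.array([[((X[:, j] - means[rows, b]) ** 2).sum() for b in range(q)]
                             for j in range(m)])
        cols = col_cost.argmin(axis=1)
    return rows, cols

rng = np.random.default_rng(1)
rows, cols = two_mode_kmeans(rng.normal(size=(20, 12)), p=3, q=2)
```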

19.
A random sample of size N is divided into k clusters that minimize the within-cluster sum of squares locally. Some large-sample properties of this k-means clustering method (as k approaches ∞ with N) are obtained. In one dimension, it is established that the sample k-means clusters are such that the within-cluster sums of squares are asymptotically equal, and that the sizes of the cluster intervals are inversely proportional to the one-third power of the underlying density at the midpoints of the intervals. The difficulty involved in generalizing the results to the multivariate case is mentioned. This research was supported in part by the National Science Foundation under Grant MCS75-08374. The author would like to thank John Hartigan and David Pollard for helpful discussions and comments.
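A quick empirical check of the one-dimensional result (hypothetical code, using scikit-learn's KMeans, which only finds a locally optimal clustering) samples from a normal density and compares the within-cluster sums of squares:

```python
# Empirical illustration: for 1-D data, the within-cluster sums of squares of a (locally
# optimal) k-means solution become roughly equal as N and k grow (hypothetical code).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
x = rng.normal(size=(20000, 1))                    # sample from a standard normal density
k = 20
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(x)
wss = np.array([((x[km.labels_ == j] - km.cluster_centers_[j]) ** 2).sum() for j in range(k)])
print(wss.round(2))                                # the k within-cluster sums of squares
print(wss.max() / wss.min())                       # ratio expected to be close to 1
```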

20.
I consider a new problem of classification into n (n ≥ 2) disjoint classes based on features of unclassified data. It is assumed that the data are grouped into m (m ≥ n) disjoint sets and that within each set the distribution of features is a mixture of distributions corresponding to the particular classes. Moreover, the mixing proportions are assumed to be known and to form a matrix of rank n. The idea of the solution is, first, to estimate the feature densities in all the groups and then to solve the linear system for the component densities. The proposed classification method is asymptotically optimal, provided a consistent method of density estimation is used. For illustration, the method is applied to determining perfusion status in myocardial infarction patients, using creatine kinase measurements.
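The estimation idea can be sketched for one-dimensional features (hypothetical code; the function names and the kernel density estimator are illustrative choices, and the recovered component densities are not guaranteed to be nonnegative):

```python
# Hypothetical sketch: estimate a density per observed group, then recover the class
# (component) densities from the known mixing proportions P by a pointwise least-squares solve.
import numpy as np
from scipy.stats import gaussian_kde

def component_densities(groups, P, grid):
    """groups: list of m 1-D samples; P: m x n matrix of known mixing proportions (rank n)."""
    group_dens = np.vstack([gaussian_kde(g)(grid) for g in groups])   # m x len(grid)
    return np.linalg.pinv(P) @ group_dens                             # n x len(grid)

def classify(y, groups, P, grid):
    """Assign each value in y to the class with the largest estimated component density."""
    comp = component_densities(groups, P, grid)
    idx = np.clip(np.searchsorted(grid, y), 0, len(grid) - 1)
    return comp[:, idx].argmax(axis=0)
```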

