Similar Documents
20 similar documents found.
1.
Measurements of p variables for n samples are collected into an n×p matrix X, where the samples belong to one of k groups. The group means are separated by Mahalanobis distances. CVA optimally represents the group means of X in an r-dimensional space. This can be done by maximizing a ratio criterion (basically one-dimensional) or, more flexibly, by minimizing a rank-constrained least-squares fitting criterion (which is not confined to being one-dimensional but depends on defining an appropriate Mahalanobis metric). In modern n < p problems, where W is not of full rank, the ratio criterion is shown not to be coherent, but the fit criterion, with attention to associated metrics, readily generalizes. In this context we give a unified generalization of CVA, introducing two metrics, one in the range space of W and the other in the null space of W, that have links with Mahalanobis distance. This generalization is computationally efficient, since it requires only the spectral decomposition of an n×n matrix.
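As a point of reference for the abstract above, here is a minimal numpy sketch of the classical, full-rank ingredient of CVA: the Mahalanobis distance between two group means under the pooled within-groups covariance W. This is the textbook n > p case, not the paper's n < p generalization; all data and names are illustrative.

```python
import numpy as np

# Toy sketch: Mahalanobis distance between two group means using the
# pooled within-groups covariance W (full rank here, since n > p).
rng = np.random.default_rng(0)
X1 = rng.normal(0.0, 1.0, size=(30, 3))        # group 1: n1 x p
X2 = rng.normal(1.0, 1.0, size=(30, 3))        # group 2: n2 x p

# Pooled within-groups covariance.
S1 = np.cov(X1, rowvar=False)
S2 = np.cov(X2, rowvar=False)
W = ((len(X1) - 1) * S1 + (len(X2) - 1) * S2) / (len(X1) + len(X2) - 2)

# Mahalanobis distance between the group means: sqrt(d' W^{-1} d).
d = X1.mean(axis=0) - X2.mean(axis=0)
mahal = float(np.sqrt(d @ np.linalg.solve(W, d)))
print(round(mahal, 3))
```

When n < p, W is singular and `np.linalg.solve` fails, which is exactly the situation the paper's range-space/null-space metrics are designed to handle.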

2.
Reduced K-means (RKM) and Factorial K-means (FKM) are two data reduction techniques incorporating principal component analysis and K-means into a unified methodology to obtain a reduced set of components for variables and an optimal partition for objects. RKM finds clusters in a reduced space by maximizing the between-clusters deviance without imposing any condition on the within-clusters deviance, so that clusters are isolated but may be heterogeneous. On the other hand, FKM identifies clusters in a reduced space by minimizing the within-clusters deviance without imposing any condition on the between-clusters deviance. Thus, clusters are homogeneous, but may not be isolated. The two techniques give different results because the total deviance in the reduced space is not constant across the two methodologies; hence the minimization of the within-clusters deviance is not equivalent to the maximization of the between-clusters deviance. In this paper a modification of the two techniques is introduced to avoid the aforementioned weaknesses. It is shown that the two modified methods give the same results, thus merging RKM and FKM into a new methodology. It is called Factor Discriminant K-means (FDKM), because it combines Linear Discriminant Analysis and K-means. The paper examines several theoretical properties of FDKM and its performance in a simulation study. An application on real-world data is presented to show the features of FDKM.
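The abstract's key premise is the deviance decomposition: in the full variable space, total deviance = between-clusters + within-clusters, so minimizing one term is equivalent to maximizing the other. The sketch below verifies this identity on an arbitrary partition of random data (in an RKM/FKM reduced space the total is no longer fixed, which is the paper's point).

```python
import numpy as np

# Deviance decomposition for a fixed partition: total = within + between.
rng = np.random.default_rng(1)
X = rng.normal(size=(12, 4))
labels = np.repeat([0, 1, 2], 4)               # an arbitrary 3-cluster partition

grand = X.mean(axis=0)
total = ((X - grand) ** 2).sum()

within = 0.0
between = 0.0
for g in np.unique(labels):
    Xg = X[labels == g]
    cg = Xg.mean(axis=0)
    within += ((Xg - cg) ** 2).sum()           # deviance around cluster centroids
    between += len(Xg) * ((cg - grand) ** 2).sum()  # centroid deviance, weighted

assert np.isclose(total, within + between)
```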

3.
Between-Group Metrics
In canonical analysis with more variables than samples, it is shown that, as well as the usual canonical means in the range space of the within-groups dispersion matrix, canonical means may be defined in its null space. In the range space we have the usual Mahalanobis metric; in the null space explicit expressions are given and interpreted for a new metric.

4.
We consider an NJ × K data matrix that represents N individuals' choices among K categories over J time points. The row and column scores of this univariate data matrix cannot be chosen uniquely by any standard optimal scaling technique. To approach this difficulty, we present a regularized method, in which the scores of individuals over time points (i.e. row scores) are represented using natural cubic splines. The loss of their smoothness is combined with the loss of homogeneity underlying the standard technique to form a penalized loss function, which is minimized under a normalization constraint. A graphical representation of the resulting scores allows us to easily grasp the longitudinal changes in individuals. A simulation analysis is performed to evaluate how well the method recovers true scores, and real data are analyzed for illustration.
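The smoothness penalty in the abstract can be illustrated with a discrete analogue: penalized least squares with a second-difference roughness penalty. This is an assumption-level sketch of the general idea, not the paper's natural-cubic-spline formulation.

```python
import numpy as np

# Discrete roughness-penalized smoothing: minimize
#   ||y - x||^2 + lam * ||D2 x||^2,
# solved in closed form by (I + lam * D2'D2) x = y.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.3, size=t.size)

n = t.size
D2 = np.diff(np.eye(n), n=2, axis=0)           # (n-2) x n second-difference operator
lam = 10.0
x = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, y)

# The smoothed scores are strictly less rough than the noisy input.
rough_y = float(((D2 @ y) ** 2).sum())
rough_x = float(((D2 @ x) ** 2).sum())
assert rough_x < rough_y
```

Increasing `lam` trades fidelity to `y` for smoothness, which is the same trade-off the paper's penalized loss function controls.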

5.
In this paper I offer a fresh interpretation of Leibniz’s theory of space, in which I explain the connection of his relational theory to both his mathematical theory of analysis situs and his theory of substance. I argue that the elements of his mature theory are not bare bodies (as on a standard relationalist view) nor bare points (as on an absolutist view), but situations. Regarded as an accident of an individual body, a situation is the complex of its angles and distances to other co-existing bodies, founded in the representation or state of the substance or substances contained in the body. The complex of all such mutually compatible situations of co-existing bodies constitutes an order of situations, or instantaneous space. Because these relations of situation change from one instant to another, space is an accidental whole that is continuously changing and becoming something different, and therefore a phenomenon. As Leibniz explains to Clarke, it can be represented mathematically by supposing some set of existents hypothetically (and counterfactually) to remain in a fixed mutual relation of situation, and gauging all subsequent situations in terms of transformations with respect to this initial set. Space conceived in terms of such allowable transformations is the subject of Analysis Situs. Finally, insofar as space is conceived in abstraction from any bodies that might individuate the situations, it encompasses all possible relations of situation. This abstract space, the order of all possible situations, is an abstract entity, and therefore ideal.

6.
It is common practice to perform a principal component analysis (PCA) on a correlation matrix to represent graphically the relations among numerous variables. In such a situation, the variables may be considered as points on the unit hypersphere of a Euclidean space, and PCA provides a sort of best fit of these points within a subspace. Taking this particular position into account, this paper suggests representing the variables on an optimal three-dimensional unit sphere.
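The "variables on the unit hypersphere" view can be checked directly: in a PCA of a correlation matrix, each variable's full loading vector has unit length, since the loadings reproduce a correlation matrix with unit diagonal. A small numpy sketch with made-up data:

```python
import numpy as np

# PCA of a correlation matrix: with all components kept, every variable's
# loading vector lies exactly on the unit hypersphere.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
X[:, 1] += 0.8 * X[:, 0]                       # induce some correlation

R = np.corrcoef(X, rowvar=False)
evals, evecs = np.linalg.eigh(R)               # eigenvalues in ascending order
loadings = evecs * np.sqrt(np.clip(evals, 0.0, None))

# loadings @ loadings.T == R, and diag(R) == 1, so each row has norm 1.
norms = np.linalg.norm(loadings, axis=1)
print(np.round(norms, 6))
```

Truncating to the top components (as in the paper's three-dimensional sphere) moves the points inside the sphere, and the fitting problem is to keep them as close to it as possible.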

7.
The additive biclustering model for two-way two-mode object by variable data implies overlapping clusterings of both the objects and the variables together with a weight for each bicluster (i.e., a pair of an object and a variable cluster). In the data analysis, an additive biclustering model is fitted to given data by means of minimizing a least squares loss function. To this end, two alternating least squares algorithms (ALS) may be used: (1) PENCLUS, and (2) Baier’s ALS approach. However, both algorithms suffer from some inherent limitations, which may hamper their performance. As a way out, based on theoretical results regarding optimally designing ALS algorithms, in this paper a new ALS algorithm will be presented. In a simulation study this algorithm will be shown to outperform the existing ALS approaches.

8.
To reveal the structure underlying two-way two-mode object by variable data, Mirkin (1987) has proposed an additive overlapping clustering model. This model implies an overlapping clustering of the objects and a reconstruction of the data, with the reconstructed variable profile of an object being a summation of the variable profiles of the clusters it belongs to. Grasping the additive (overlapping) clustering structure of object by variable data may, however, be seriously hampered when the data include a very large number of variables. To deal with this problem, we propose a new model that simultaneously clusters the objects in overlapping clusters and reduces the variable space; as such, the model implies that the cluster profiles and, hence, the reconstructed data profiles are constrained to lie in a low-dimensional space. An alternating least squares (ALS) algorithm to fit the new model to a given data set will be presented, along with a simulation study and an illustrative example that makes use of empirical data.
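The additive overlapping reconstruction described above can be written as X ≈ M P, with M a binary membership matrix whose rows may contain several 1s. A tiny sketch with made-up numbers:

```python
import numpy as np

# Additive overlapping clustering: an object's reconstructed profile is
# the SUM of the profiles of the clusters it belongs to.
M = np.array([[1, 0],                          # object 1 in cluster A only
              [0, 1],                          # object 2 in cluster B only
              [1, 1]])                         # object 3 in both (overlap)
P = np.array([[2.0, 0.0, 1.0],                 # variable profile of cluster A
              [0.0, 3.0, 1.0]])                # variable profile of cluster B

X_hat = M @ P
print(X_hat[2])                                # object 3 adds both profiles
```

In the paper's extension, P is additionally constrained to lie in a low-dimensional subspace of the variable space.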

9.
We extend De Soete's optimal variable weighting procedure for ultrametric and additive tree clustering to K-means partitioning. We also describe some new features and improvements to the algorithm proposed by De Soete. Monte Carlo simulations have been conducted using different error conditions. In all cases (i.e., ultrametric or additive trees, or K-means partitioning), the simulation results indicate that the optimal weighting procedure should be used for analyzing data containing noisy variables that do not contribute relevant information to the classification structure. However, if the data involve error-perturbed variables that are relevant to the classification, or outliers, it seems better to cluster or partition the entities using variables with equal weights. A new computer program, OVW, which is available to researchers as freeware, implements improved algorithms for optimal variable weighting for ultrametric and additive tree clustering, and includes a new algorithm for optimal variable weighting for K-means partitioning.

10.
Block-Relaxation Approaches for Fitting the INDCLUS Model
A well-known clustering model to represent I × I × J data blocks, the J frontal slices of which consist of I × I object by object similarity matrices, is the INDCLUS model. This model implies a grouping of the I objects into a prespecified number of overlapping clusters, with each cluster having a slice-specific positive weight. An INDCLUS model is fitted to a given data set by means of minimizing a least squares loss function. The minimization of this loss function has appeared to be a difficult problem for which several algorithmic strategies have been proposed. At present, the best available option seems to be the SYMPRES algorithm, which minimizes the loss function by means of a block-relaxation algorithm. Yet, SYMPRES is conjectured to suffer from a severe local optima problem. As a way out, based on theoretical results with respect to optimally designing block-relaxation algorithms, five alternative block-relaxation algorithms are proposed. In a simulation study it appears that the alternative algorithms with overlapping parameter subsets perform best and clearly outperform SYMPRES in terms of optimization performance and cluster recovery.
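For orientation, the INDCLUS model approximates each frontal slice S_j by P diag(w_j) P' plus a slice-specific additive constant, with P a binary (possibly overlapping) membership matrix. The sketch below only evaluates this model equation on made-up numbers; it is not a fitting algorithm.

```python
import numpy as np

# INDCLUS model equation for one slice: S_model = P diag(w) P' + c.
P = np.array([[1, 0],
              [1, 1],
              [0, 1]])                         # 3 objects, 2 overlapping clusters
w = np.array([0.5, 2.0])                       # positive weights for this slice
c = 0.1                                        # additive constant for this slice

S_model = P @ np.diag(w) @ P.T + c
print(np.round(S_model, 2))                    # symmetric model similarities
```

Fitting INDCLUS means choosing P, the w_j, and the c_j to minimize the squared difference between S_model and each observed slice, which is the hard least-squares problem the block-relaxation algorithms attack.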

11.
Clustering with a criterion which minimizes the sum of squared distances to cluster centroids is usually done in a heuristic way. An exact polynomial algorithm, with a complexity in O(N^(p+1) log N), is proposed for minimum sum-of-squares hierarchical divisive clustering of points in a p-dimensional space with small p. The empirical complexity is one order of magnitude lower. Data sets with N = 20000 for p = 2, N = 1000 for p = 3, and N = 200 for p = 4 are clustered in a reasonable computing time.

12.
Cross-sectional approach for clustering time varying data
Cluster analysis is to be performed on a three-mode data matrix of type: units, variables, time. A general model for calculating the distance between two units varying in time is proposed. One particular model is developed and used in an example concerned with clustering of 23 European countries according to the similarity of energy consumption in the years 1976–1982. Supported in part by the Research Council of Slovenia, Yugoslavia.
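One simple instance of a distance between units varying in time, in the spirit of the abstract (an illustrative choice, not necessarily the paper's particular model): sum the cross-sectional distances over the time points.

```python
import numpy as np

def time_varying_distance(a, b):
    """Sum of cross-sectional Euclidean distances over time.

    a, b: arrays of shape (T, p) -- one variable profile per time point.
    """
    return float(np.linalg.norm(a - b, axis=1).sum())

a = np.array([[1.0, 0.0], [2.0, 0.0]])         # unit 1 at T = 2 time points
b = np.array([[1.0, 0.0], [2.0, 3.0]])         # unit 2
print(time_varying_distance(a, b))             # 0 at t=1, 3 at t=2 -> 3.0
```

The resulting unit-by-unit distance matrix can then be fed to any standard clustering routine.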

13.
In this paper, we consider several generalizations of the popular Ward’s method for agglomerative hierarchical clustering. Our work was motivated by clustering software, such as the R function hclust, which accepts a distance matrix as input and applies Ward’s definition of inter-cluster distance to produce a clustering. The standard version of Ward’s method uses squared Euclidean distance to form the distance matrix. We explore the effect on the clustering of using other definitions of distance, such as the Minkowski distance.
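A minimal sketch of the generalization's ingredient: computing a Minkowski distance matrix that could be handed to a Ward-style agglomerative routine (e.g. R's hclust accepts an arbitrary distance matrix). Here p = 2 recovers Euclidean distance and p = 1 the Manhattan distance; the data are made up.

```python
import numpy as np

def minkowski_matrix(X, p):
    """Pairwise Minkowski distance matrix of order p for rows of X."""
    diff = np.abs(X[:, None, :] - X[None, :, :])
    return (diff ** p).sum(axis=2) ** (1.0 / p)

X = np.array([[0.0, 0.0], [3.0, 4.0], [0.0, 1.0]])
D2 = minkowski_matrix(X, 2)                    # Euclidean
D1 = minkowski_matrix(X, 1)                    # Manhattan
print(D2[0, 1], D1[0, 1])                      # 5.0 7.0
```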

14.
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, our approach leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots, we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.

15.
Analysis of between-group differences using canonical variates assumes equality of population covariance matrices. Sometimes these matrices are sufficiently different for the null hypothesis of equality to be rejected, but there exist some common features which should be exploited in any analysis. The common principal component model is often suitable in such circumstances, and this model is shown to be appropriate in a practical example. Two methods for between-group analysis are proposed when this model replaces the equal dispersion matrix assumption. One method is by extension of the two-stage approach to canonical variate analysis using sequential principal component analyses as described by Campbell and Atchley (1981). The second method is by definition of a distance function between populations satisfying the common principal component model, followed by metric scaling of the resulting between-populations distance matrix. The two methods are compared with each other and with ordinary canonical variate analysis on the previously introduced data set.

16.
By associating a whole distance matrix with a single point in a parameter space, a family of matrices (e.g., all those obeying the triangle inequality) can be shown as a cloud of points. Pictures of the cloud form a family portrait, and its characteristic shape and interrelationship with the portraits of other families can be explored. Critchley (unpublished) used this approach to illustrate, for distances between three points, algebraic results on the nesting relations between various metrics. In this paper, these diagrams are further investigated and then generalized. In the first generalization, projective geometry is used to allow the geometric representation of Additive Mixture, Additive Constant, and Missing Data problems. Then the six-dimensional portraits of four-point distance matrices are studied, revealing differences between the Euclidean, Additive Tree, and General Metric families. The paper concludes with caveats and insights concerning families of general n-point metric matrices.

17.
We present a hierarchical classification based on n-ary relations of the entities. Starting from the finest partition that can be obtained from the attributes, we distinguish between entities having the same attributes by using relations between entities. The classification that we get is thus a refinement of this finest partition. It can be computed in O(n + m^2) space and O(n · p · m^(5/2)) time, where n is the number of entities, p the number of classes of the resulting hierarchy (p is the size of the output; p < 2n), and m the maximum number of relations an entity can have (usually, m ≪ n). So we can treat sets with millions of entities.

18.
A natural extension of classical metric multidimensional scaling is proposed. The result is a new formulation of nonmetric multidimensional scaling in which the strain criterion is minimized subject to order constraints on the disparity variables. Innovative features of the new formulation include: the parametrization of the p-dimensional distance matrices by the positive semidefinite matrices of rank ≤ p; optimization of the (squared) disparity variables, rather than the configuration coordinate variables; and a new nondegeneracy constraint, which restricts the set of (squared) disparities rather than the set of distances. Solutions are obtained using an easily implemented gradient projection method for numerical optimization. The method is applied to two published data sets.
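The classical metric MDS that the abstract extends has a short closed-form solution: double-center the squared distance matrix and take the top eigenvectors of the resulting Gram matrix. A self-contained numpy sketch on a toy configuration:

```python
import numpy as np

# Classical metric MDS: recover a planar configuration from its
# Euclidean distance matrix via double-centering + eigendecomposition.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

n = len(D)
J = np.eye(n) - np.ones((n, n)) / n            # centering matrix
B = -0.5 * J @ (D ** 2) @ J                    # double-centered Gram matrix
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]                # sort eigenvalues descending
coords = evecs[:, order[:2]] * np.sqrt(evals[order[:2]])

# For Euclidean input, the recovered configuration reproduces D exactly.
D_rec = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
assert np.allclose(D_rec, D)
```

The paper's nonmetric formulation replaces the fixed squared distances by order-constrained disparity variables while keeping the same strain-style criterion.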

19.
Graphical displays which show inter-sample distances are important for the interpretation and presentation of multivariate data. Except when the displays are two-dimensional, however, they are often difficult to visualize as a whole. A device, based on multidimensional unfolding, is described for presenting some intrinsically high-dimensional displays in fewer, usually two, dimensions. This goal is achieved by representing each sample by a pair of points, say R_i and r_i, so that a theoretical distance between the i-th and j-th samples is represented twice, once by the distance between R_i and r_j and once by the distance between R_j and r_i. Self-distances between R_i and r_i need not be zero. The mathematical conditions for unfolding to exhibit symmetry are established. Algorithms for finding approximate fits, not constrained to be symmetric, are discussed and some examples are given.

20.
In this study, we consider the type of interval data that summarizes original samples (individuals) given as classical point data. This type of interval data is termed interval symbolic data in a new research domain called symbolic data analysis. Most of the existing research, such as the (centre, radius) and [lower boundary, upper boundary] representations, represents an interval using only its boundaries. However, these representations hold true only under the assumption that the individuals contained in the interval follow a uniform distribution. In practice, such representations may be inconsistent with the facts, since the individuals are usually not uniformly distributed, and may lose information by ignoring the point data within the intervals. In this study, we propose a new representation of interval symbolic data that takes the point data contained in the intervals into account. We then apply the city-block distance metric to the new representation and propose a dynamic clustering approach for interval symbolic data. A simulation experiment is conducted to evaluate the performance of our method. The results show that, when the individuals contained in the interval do not follow a uniform distribution, the proposed method significantly outperforms the Hausdorff and city-block distances based on the traditional representation in the context of dynamic clustering. Finally, we give an application example on an automobile data set.
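For concreteness, here is a sketch of the boundary-only distances the abstract uses as baselines (illustrative only; the paper's new representation additionally uses the points inside each interval). A small aside worth noting: for a single interval, the Hausdorff distance coincides with the city-block distance on the (centre, radius) coding, since max(|Δc − Δr|, |Δc + Δr|) = |Δc| + |Δr|.

```python
def hausdorff_interval(a, b):
    """Hausdorff distance between intervals a = (a1, a2) and b = (b1, b2)."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def cityblock_interval(a, b):
    """City-block distance on the (centre, radius) representation."""
    ca, ra = (a[0] + a[1]) / 2, (a[1] - a[0]) / 2
    cb, rb = (b[0] + b[1]) / 2, (b[1] - b[0]) / 2
    return abs(ca - cb) + abs(ra - rb)

a, b = (0.0, 4.0), (1.0, 7.0)
print(hausdorff_interval(a, b), cityblock_interval(a, b))  # both 3.0
```

Neither distance changes if the points inside the intervals are redistributed, which is exactly the information loss the proposed representation addresses.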


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号