Similar literature: 20 results found
1.
A modified CANDECOMP algorithm is presented for fitting the metric version of the Extended INDSCAL model to three-way proximity data. The Extended INDSCAL model assumes, in addition to the common dimensions, a unique dimension for each object. The modified CANDECOMP algorithm fits the Extended INDSCAL model in a dimension-wise fashion and ensures that the subject weights for the common and the unique dimensions are nonnegative. A Monte Carlo study is reported to illustrate that the method is fairly insensitive to the choice of the initial parameter estimates. A second Monte Carlo study shows that the method is able to recover an underlying Extended INDSCAL structure if present in the data. Finally, the method is applied for illustrative purposes to some empirical data on pain relievers. In the final section, some other possible uses of the new method are discussed. Geert De Soete is supported as “Bevoegdverklaard Navorser” of the Belgian “Nationaal Fonds voor Wetenschappelijk Onderzoek”.
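The model form can be made concrete with a small sketch. Assuming the usual INDSCAL-style weighted Euclidean part plus nonnegative per-object unique terms (the function name, array layout, and the exact placement of the unique contributions are assumptions of this sketch, not the paper's code):

```python
import numpy as np

def extended_indscal_sqdist(X, W, U):
    """Model squared distances under an Extended INDSCAL-style model.

    X : (n, p) common-dimension coordinates of the n objects.
    W : (m, p) nonnegative subject weights for the p common dimensions.
    U : (m, n) nonnegative subject-specific contributions of each object's
        unique dimension. Returns (m, n, n) squared distances per subject.
    """
    n = X.shape[0]
    diff2 = (X[:, None, :] - X[None, :, :]) ** 2        # (n, n, p)
    D2 = np.einsum('kp,ijp->kij', W, diff2)             # weighted common part
    D2 += U[:, :, None] + U[:, None, :]                 # unique-dimension part
    D2[:, np.arange(n), np.arange(n)] = 0.0             # zero self-distances
    return D2
```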

2.
An asymmetric multidimensional scaling model and an associated nonmetric algorithm to analyze two-mode three-way proximities (object × object × source) are introduced. The model consists of a common object configuration and two kinds of weights, i.e., for both symmetry and asymmetry. In the common object configuration, each object is represented by a point and a circle (sphere, hypersphere) in a Euclidean space. The common object configuration represents pairwise proximity relationships between pairs of objects for the ‘group’ of all sources. Each source has its own symmetry weight and a set of asymmetry weights. Symmetry weights represent individual differences among sources of data in symmetric proximity relationships, and asymmetry weights represent individual differences among sources in asymmetric proximity relationships. The associated nonmetric algorithm, based on Kruskal’s (1964b) nonmetric multidimensional scaling algorithm, is an extension of the algorithm for the asymmetric multidimensional scaling of one-mode two-way proximities developed earlier (Okada and Imaizumi 1987). As an illustrative example, we analyze intergenerational occupational mobility from 1955 to 1985 in Japan among eight occupational categories.
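A minimal sketch of the point-plus-radius idea may help. For the one-mode two-way precursor (Okada and Imaizumi 1987), the modeled proximity from object j to object k is commonly written as the interpoint distance corrected by the two radii; the sign convention below is an assumption of this sketch, and the three-way model above additionally scales these quantities by source-specific symmetry and asymmetry weights:

```python
import numpy as np

def point_radius_model(X, r):
    """Asymmetric model values for a configuration of points with radii.

    X : (n, d) object coordinates; r : (n,) nonnegative radii.
    Modeled proximity from j to k: d(X[j], X[k]) - r[j] + r[k].
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    return D - r[:, None] + r[None, :]
```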

3.
Graphical representation of nonsymmetric relationships data has usually proceeded via separate displays for the symmetric and the skew-symmetric parts of a data matrix. DEDICOM avoids splitting the data into symmetric and skew-symmetric parts, but lacks a graphical representation of the results. Chino's GIPSCAL combines features of both models, but may have a poor goodness-of-fit compared to DEDICOM. We simplify and generalize Chino's method in such a way that it fits the data better. We develop an alternating least squares algorithm for the resulting method, called Generalized GIPSCAL, and adjust it to handle GIPSCAL as well. In addition, we show that Generalized GIPSCAL is a constrained variant of DEDICOM and derive necessary and sufficient conditions for equivalence of the two models. Because these conditions are rather mild, we expect that in many practical cases DEDICOM and Generalized GIPSCAL are (nearly) equivalent, and hence that the graphical representation from Generalized GIPSCAL can be used to display the DEDICOM results graphically. Such a representation is given for an illustration. Finally, we show Generalized GIPSCAL to be a generalization of another method for joint representation of the symmetric and skew-symmetric parts of a data matrix. This research has been made possible by a fellowship from the Royal Netherlands Academy of Arts and Sciences to the first author, and by research grant number A6394 to the second author, from the Natural Sciences and Engineering Research Council of Canada. The authors are obliged to Jos ten Berge and Naohito Chino for stimulating comments.
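The split that the first sentence refers to is elementary and worth stating: any real square matrix decomposes uniquely into a symmetric and a skew-symmetric part, which is what the separate-display tradition models one piece at a time. A minimal illustration:

```python
import numpy as np

def split_sym_skew(A):
    """Unique decomposition A = M + N with M symmetric, N skew-symmetric."""
    M = (A + A.T) / 2.0
    N = (A - A.T) / 2.0
    return M, N

A = np.array([[0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0],
              [2.0, 2.0, 0.0]])
M, N = split_sym_skew(A)
assert np.allclose(M + N, A) and np.allclose(N, -N.T)
```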

4.
In this paper two alternative loss criteria for the least squares Procrustes problem are studied. These alternative criteria are based on the Huber function and on the more radical biweight function, which are designed to be resistant to outliers. Using iterative majorization it is shown how a convergent reweighted least squares algorithm can be developed. In a simulation study it turns out that the proposed methods perform well over a specific range of contamination. When a uniform dilation factor is included, mixed results are obtained. The methods also yield a set of weights that can be used for diagnostic purposes.
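A convergent reweighted least squares scheme of the kind described can be sketched as follows. Each pass solves a weighted orthogonal Procrustes subproblem in closed form, then refreshes the row weights through the Huber function; the tuning constant, the rotation-only formulation (no dilation factor), and the stopping rule are assumptions of this sketch:

```python
import numpy as np

def huber_procrustes(X, Y, c=1.345, n_iter=50, tol=1e-8):
    """Rotation R minimizing a Huber-type resistant Procrustes loss.

    Iteratively reweighted least squares: each step solves a weighted
    orthogonal Procrustes problem via the SVD, then recomputes row
    weights w(r) = min(1, c / r) from the residual row norms.
    """
    n = X.shape[0]
    w = np.ones(n)
    prev = np.inf
    for _ in range(n_iter):
        U, _, Vt = np.linalg.svd(X.T @ (w[:, None] * Y))
        R = U @ Vt                                      # weighted LS rotation
        r = np.linalg.norm(Y - X @ R, axis=1)           # residual row norms
        w = np.minimum(1.0, c / np.maximum(r, 1e-12))   # Huber weights
        loss = np.sum(np.where(r <= c, 0.5 * r**2, c * r - 0.5 * c**2))
        if abs(prev - loss) < tol * max(prev, 1.0):
            break
        prev = loss
    return R, w
```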

5.
In a real-world data set, there is always the possibility, rather high in our opinion, that different features may have different degrees of relevance. Most machine learning algorithms deal with this fact by either selecting or deselecting features in the data preprocessing phase. However, we maintain that even among relevant features there may be different degrees of relevance, and this should be taken into account during the clustering process. With over 50 years of history, K-Means is arguably the most popular partitional clustering algorithm there is. The first K-Means-based clustering algorithm to compute feature weights was designed just over 30 years ago. Various such algorithms have been designed since, but there has not been, to our knowledge, a survey integrating empirical evidence of cluster recovery ability, common flaws, and possible directions for future research. This paper elaborates on the concept of feature weighting and addresses these issues by critically analyzing some of the most popular, or innovative, feature weighting mechanisms based on K-Means.
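To make the idea of weighting features during (not before) clustering concrete, here is a sketch in the style of the W-k-means family: cluster assignments, centroids, and feature weights are updated in turn, with the closed-form weight update rewarding features of low within-cluster dispersion. The exponent beta (> 1), the initialization, and the zero-dispersion guard are assumptions of this sketch rather than any single surveyed algorithm:

```python
import numpy as np

def wkmeans(X, k, beta=2.0, n_iter=100, rng=None):
    """Feature-weighted K-Means sketch (W-k-means style), beta > 1.

    Alternates (1) assignments under sum_v w_v**beta * (x_v - c_v)**2,
    (2) centroid updates, (3) weight update w_v ~ (1 / D_v)**(1/(beta-1)),
    where D_v is the within-cluster dispersion of feature v.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    C = X[rng.choice(n, k, replace=False)].astype(float)  # random centroids
    w = np.full(p, 1.0 / p)
    for _ in range(n_iter):
        d = ((X[:, None, :] - C[None, :, :]) ** 2 * w**beta).sum(axis=2)
        lab = d.argmin(axis=1)
        for j in range(k):                                # centroid update
            if np.any(lab == j):
                C[j] = X[lab == j].mean(axis=0)
        D = np.array([((X[lab == j] - C[j]) ** 2).sum(axis=0)
                      for j in range(k)]).sum(axis=0)     # per-feature dispersion
        D = np.maximum(D, 1e-12)
        w = (1.0 / D) ** (1.0 / (beta - 1.0))
        w /= w.sum()                                      # normalize weights
    return lab, C, w
```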

6.
Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool that may offer several advantages in comparison to a standard LC model. When using an LC model for density estimation, a considerable number of increasingly large LC models may have to be estimated before sufficient model fit is achieved. A DLC model consists of a sequence of small LC models. Therefore, a DLC model can be estimated much faster and can easily utilize multiple processor cores, meaning that this model is more widely applicable and practical. In this study we describe the algorithm of fitting a DLC model, and discuss the various settings that indirectly influence the precision of a DLC model as a density estimation tool. These settings are illustrated using a synthetic data example, and the best-performing algorithm is applied to a real-data example. The synthetic data example showed that, using specific decision rules, a DLC model is able to correctly model complex associations amongst categorical variables.
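The elementary building block of such a divisive scheme is a small two-class LC model fitted by EM, applied recursively to split the data. A sketch for binary items under local independence (the recursion, stopping criterion, and initialization are assumptions not spelled out here):

```python
import numpy as np

def lc2_em(X, n_iter=200, tol=1e-8, rng=None):
    """EM for a two-class latent class model on binary items.

    X : (n, p) array of 0/1 responses. Returns class proportions pi,
    item-response probabilities theta (2, p), and posterior memberships.
    """
    rng = np.random.default_rng(rng)
    n, p = X.shape
    pi = np.array([0.5, 0.5])
    theta = rng.uniform(0.3, 0.7, size=(2, p))
    ll_old = -np.inf
    for _ in range(n_iter):
        # E step: log p(x, class) under local independence
        L = (np.log(pi)
             + X @ np.log(theta).T
             + (1 - X) @ np.log(1 - theta).T)            # (n, 2)
        m = L.max(axis=1, keepdims=True)
        post = np.exp(L - m)
        post /= post.sum(axis=1, keepdims=True)
        ll = float(np.sum(m[:, 0] + np.log(np.exp(L - m).sum(axis=1))))
        # M step: update mixing proportions and response probabilities
        Nk = np.maximum(post.sum(axis=0), 1e-12)
        pi = Nk / n
        theta = np.clip((post.T @ X) / Nk[:, None], 1e-6, 1 - 1e-6)
        if ll - ll_old < tol * (abs(ll) + 1.0):
            break
        ll_old = ll
    return pi, theta, post
```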

7.
Pruning a decision tree is considered by some researchers to be the most important part of tree building in noisy domains. While there are many approaches to pruning, the alternative of averaging over decision trees has not received as much attention. The basic idea of tree averaging is to produce a weighted sum of decisions. We consider the set of trees used for the averaging process, and how weights should be assigned to each tree in this set. We define the concept of a fanned set for a tree, and examine how the Minimum Message Length paradigm of learning may be used to average over decision trees. We perform an empirical evaluation of two averaging approaches, and a Minimum Message Length approach. This work has been carried out with the support of the Defence Research Agency, Malvern.
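The averaging step itself is simple once a tree set and weights are fixed; under an MML view the weights would typically decay with each tree's message length. A minimal sketch with a hypothetical predict_proba interface:

```python
import numpy as np

def averaged_decision(trees, weights, x):
    """Weighted sum of decisions over a set of trees.

    trees   : objects exposing predict_proba(x) -> class-probability vector
              (hypothetical interface, standing in for any tree model).
    weights : nonnegative tree weights, e.g. derived from message-length
              considerations; normalized here.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    probs = sum(w * np.asarray(t.predict_proba(x))
                for w, t in zip(weights, trees))
    return probs.argmax(), probs   # predicted class, averaged probabilities
```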

8.
Mokken scale analysis uses an automated bottom-up stepwise item selection procedure that suffers from two problems. First, items that satisfy the scaling conditions when selected during the procedure may fail to do so after the scale has been completed. Second, the procedure is approximate and thus may not produce the optimal item partitioning. This study investigates a variation on Mokken’s item selection procedure, which alleviates the first problem, and proposes a genetic algorithm, which alleviates both problems. The genetic algorithm is an approximation to checking all possible partitionings. A simulation study shows that the genetic algorithm leads to better scaling results than the other two procedures.
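A sketch of the genetic-algorithm ingredient, simplified here to selecting a single scale (a subset of items) rather than a full partitioning into multiple scales. The fitness callable, e.g. a scalability criterion such as Loevinger's H computed on the selected items, is supplied by the caller and hypothetical in this sketch:

```python
import numpy as np

def ga_item_selection(fitness, n_items, pop=60, gens=200, pmut=0.02, rng=None):
    """Genetic algorithm over boolean item-inclusion vectors.

    Standard GA ingredients: tournament selection of two parents,
    uniform crossover, and bit-flip mutation; returns the best subset
    found and its fitness value.
    """
    rng = np.random.default_rng(rng)
    P = rng.random((pop, n_items)) < 0.5            # random initial population
    f = np.array([fitness(ind) for ind in P])
    for _ in range(gens):
        children = []
        for _ in range(pop):
            i, j = rng.integers(pop, size=2)        # tournament of two
            a = P[i] if f[i] >= f[j] else P[j]
            i, j = rng.integers(pop, size=2)
            b = P[i] if f[i] >= f[j] else P[j]
            mask = rng.random(n_items) < 0.5        # uniform crossover
            child = np.where(mask, a, b)
            child ^= rng.random(n_items) < pmut     # bit-flip mutation
            children.append(child)
        P = np.array(children)
        f = np.array([fitness(ind) for ind in P])
    return P[f.argmax()], f.max()
```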

9.
We construct a weighted Euclidean distance that approximates any distance or dissimilarity measure between individuals that is based on a rectangular cases-by-variables data matrix. In contrast to regular multidimensional scaling methods for dissimilarity data, our approach leads to biplots of individuals and variables while preserving all the good properties of dimension-reduction methods that are based on the singular-value decomposition. The main benefits are the decomposition of variance into components along principal axes, which provide the numerical diagnostics known as contributions, and the estimation of nonnegative weights for each variable. The idea is inspired by the distance functions used in correspondence analysis and in principal component analysis of standardized data, where the normalizations inherent in the distances can be considered as differential weighting of the variables. In weighted Euclidean biplots, we allow these weights to be unknown parameters, which are estimated from the data to maximize the fit to the chosen distances or dissimilarities. These weights are estimated using a majorization algorithm. Once this extra weight-estimation step is accomplished, the procedure follows the classical path in decomposing the matrix and displaying its rows and columns in biplots.
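Once the variable weights have been estimated, the biplot step is classical: weighted Euclidean distances between rows of X equal ordinary Euclidean distances between rows of X scaled by the square roots of the weights, so the singular-value decomposition applies directly. A sketch with the weights taken as known (the majorization step that estimates them is omitted):

```python
import numpy as np

def weighted_euclidean_biplot(X, w, ndim=2):
    """Biplot coordinates under given nonnegative variable weights w.

    Folding sqrt(w) into the columns reduces the weighted distance to an
    ordinary Euclidean one, after which the classical SVD biplot applies.
    """
    Xc = X - X.mean(axis=0)                   # center columns
    Y = Xc * np.sqrt(w)                       # fold weights into the data
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    rows = U[:, :ndim] * s[:ndim]             # principal row coordinates
    cols = Vt[:ndim].T                        # standard column coordinates
    explained = s[:ndim] ** 2 / np.sum(s ** 2)
    return rows, cols, explained
```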

10.
The use of Candecomp to fit scalar products in the context of Indscal is based on the assumption that, due to the symmetry of the data matrices involved, two component matrices will become equal when Candecomp converges. Bennani Dosse and Ten Berge (2008) have shown that, in the single component case, the assumption can only be violated at saddle points in the case of Gramian matrices. This paper again considers Candecomp applied to symmetric matrices, but with an orthonormality constraint on the components. This constrained version of Candecomp, when applied to symmetric matrices, has long been known under the acronym Indort. When the data matrices are positive definite, or have become positive semidefinite due to double centering, and the saliences are nonnegative (by chance or by constraint), the component matrices resulting from Indort are shown to be equal. Because Indort is also free from so-called degeneracy problems, it is a highly attractive alternative to Candecomp in the present context. We also consider a well-known successive approach to the orthogonally constrained Indscal problem and compare its results, on simulated and real data sets, with those given by the simultaneous (Indort) approach.

11.
Efficient algorithms for agglomerative hierarchical clustering methods
Whenever n objects are characterized by a matrix of pairwise dissimilarities, they may be clustered by any of a number of sequential, agglomerative, hierarchical, nonoverlapping (SAHN) clustering methods. These SAHN clustering methods are defined by a paradigmatic algorithm that usually requires O(n³) time, in the worst case, to cluster the objects. An improved algorithm (Anderberg 1973), while still requiring O(n³) worst-case time, can reasonably be expected to exhibit O(n²) expected behavior. By contrast, we describe a SAHN clustering algorithm that requires O(n² log n) time in the worst case. When SAHN clustering methods exhibit reasonable space distortion properties, further improvements are possible. We adapt a SAHN clustering algorithm, based on the efficient construction of nearest neighbor chains, to obtain a reasonably general SAHN clustering algorithm that requires in the worst case O(n²) time and space.

Whenever n objects are characterized by k-tuples of real numbers, they may be clustered by any of a family of centroid SAHN clustering methods. These methods are based on a geometric model in which clusters are represented by points in k-dimensional real space and points being agglomerated are replaced by a single (centroid) point. For this model, we have solved a class of special packing problems involving point-symmetric convex objects and have exploited it to design an efficient centroid clustering algorithm. Specifically, we describe a centroid SAHN clustering algorithm that requires O(n²) time, in the worst case, for fixed k and for a family of dissimilarity measures including the Manhattan, Euclidean, Chebychev and all other Minkowski metrics. This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung.
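The nearest-neighbor-chain idea is concrete enough to sketch: grow a chain of nearest neighbors until a reciprocal pair appears, merge it, and continue; reducibility of the linkage method guarantees correctness. Below is a quadratic-time, quadratic-space sketch using the Lance-Williams update for group-average linkage (the tie handling and the choice of linkage are assumptions of this sketch):

```python
import numpy as np

def nn_chain_average(D):
    """Agglomerative clustering via nearest-neighbor chains.

    D is an (n, n) dissimilarity matrix. Uses the Lance-Williams update
    for the (reducible) group-average method; the same chain trick works
    for single, complete and Ward linkage. Returns merges as tuples
    (cluster_a, cluster_b, height).
    """
    D = D.astype(float).copy()
    n = D.shape[0]
    size = np.ones(n)
    active = set(range(n))
    np.fill_diagonal(D, np.inf)
    merges, chain = [], []
    while len(active) > 1:
        if not chain:
            chain = [next(iter(active))]          # restart from any cluster
        while True:
            a = chain[-1]
            # nearest active neighbor of a
            b = min((j for j in active if j != a), key=lambda j: D[a, j])
            if len(chain) > 1 and D[a, b] >= D[a, chain[-2]]:
                b = chain[-2]                     # reciprocal pair found
                break
            chain.append(b)
        chain.pop(); chain.pop()                  # remove the mutual pair
        merges.append((a, b, D[a, b]))
        # Lance-Williams group-average update: merge b into a
        for j in active - {a, b}:
            D[a, j] = D[j, a] = ((size[a] * D[a, j] + size[b] * D[b, j])
                                 / (size[a] + size[b]))
        size[a] += size[b]
        active.remove(b)
        D[b, :] = D[:, b] = np.inf
    return merges
```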

12.
The Meaning of Life in a Developing Universe
The evolution of life on Earth has produced an organism that is beginning to model and understand its own evolution and the possible future evolution of life in the universe. These models and associated evidence show that evolution on Earth has a trajectory. The scale over which living processes are organized cooperatively has increased progressively, as has its evolvability. Recent theoretical advances raise the possibility that this trajectory is itself part of a wider developmental process. According to these theories, the developmental process has been shaped by a yet larger evolutionary dynamic that involves the reproduction of universes. This evolutionary dynamic has tuned the key parameters of the universe to increase the likelihood that life will emerge and produce outcomes that are successful in the larger process (e.g. a key outcome may be to produce life and intelligence that intentionally reproduces the universe and tunes the parameters of ‘offspring’ universes). Theory suggests that when life emerges on a planet, it moves along this trajectory of its own accord. However, at a particular point evolution will continue to advance only if organisms emerge that decide to advance the developmental process intentionally. The organisms must be prepared to make this commitment even though the ultimate nature and destination of the process is uncertain, and may forever remain unknown. Organisms that complete this transition to intentional evolution will drive the further development of life and intelligence in the universe. Humanity’s increasing understanding of the evolution of life in the universe is rapidly bringing it to the threshold of this major evolutionary transition.

13.
To reveal the structure underlying two-way two-mode object-by-variable data, Mirkin (1987) has proposed an additive overlapping clustering model. This model implies an overlapping clustering of the objects and a reconstruction of the data, with the reconstructed variable profile of an object being a summation of the variable profiles of the clusters it belongs to. Grasping the additive (overlapping) clustering structure of object-by-variable data may, however, be seriously hampered in case the data include a very large number of variables. To deal with this problem, we propose a new model that simultaneously clusters the objects in overlapping clusters and reduces the variable space; as such, the model implies that the cluster profiles and, hence, the reconstructed data profiles are constrained to lie in a low-dimensional space. An alternating least squares (ALS) algorithm to fit the new model to a given data set will be presented, along with a simulation study and an illustrative example that makes use of empirical data.
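The model equation is compact enough to state directly. Writing the cluster profiles as C = B @ A so that they lie in a d-dimensional subspace of the variable space, an object's reconstructed profile is the sum of the profiles of the clusters it belongs to; the symbol names and this particular factorization are notational assumptions of the sketch:

```python
import numpy as np

def aoc_lowdim_reconstruction(P, B, A):
    """Reconstructed data under additive overlapping clustering with a
    reduced variable space.

    P : (n, r) binary object-by-cluster memberships (overlap allowed).
    B : (r, d) cluster coordinates in a d-dimensional reduced space.
    A : (d, p) loadings mapping the reduced space to the p variables,
        so the cluster profiles B @ A span at most d dimensions.
    """
    return P @ (B @ A)   # each row: sum of the profiles of its clusters
```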

14.
We extend optimal variable weighting for ultrametric and additive tree clustering to K-means partitioning. We also describe some new features and improvements to the algorithm proposed by De Soete. Monte Carlo simulations have been conducted using different error conditions. In all cases (i.e., ultrametric or additive trees, or K-means partitioning), the simulation results indicate that the optimal weighting procedure should be used for analyzing data containing noisy variables that do not contribute relevant information to the classification structure. However, if the data involve error-perturbed variables that are relevant to the classification, or outliers, it seems better to cluster or partition the entities by using variables with equal weights. A new computer program, OVW, which is available to researchers as freeware, implements improved algorithms for optimal variable weighting for ultrametric and additive tree clustering, and includes a new algorithm for optimal variable weighting for K-means partitioning.

15.
Consider N entities to be classified, with given weights, and a matrix of dissimilarities between pairs of them. The split of a cluster is the smallest dissimilarity between an entity in that cluster and an entity outside it. The single-linkage algorithm provides partitions into M clusters for which the smallest split is maximum. We consider the problems of finding maximum split partitions with exactly M clusters and with at most M clusters subject to the additional constraint that the sum of the weights of the entities in each cluster never exceeds a given bound. These two problems are shown to be NP-hard and reducible to a sequence of bin-packing problems. An O(N²) algorithm for the particular case M = N of the second problem is also presented. Computational experience is reported. Acknowledgments: Work of the first author was supported in part by AFOSR grants 0271 and 0066 to Rutgers University and was done in part during a visit to GERAD, Ecole Polytechnique de Montréal, whose support is gratefully acknowledged. Work of the second and third authors was supported by NSERC grant GP0036426 and by FCAR grant 89EQ4144. We are grateful to Silvano Martello and Paolo Toth for making available to us their program MTP for the bin-packing problem and to three anonymous referees for comments which helped to improve the presentation of the paper.
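The unconstrained version of the problem has a classical solution worth sketching: the partition into M clusters whose smallest split is maximum is obtained by cutting the M - 1 largest edges of a minimum spanning tree, which is what single linkage effectively does. This sketch assumes strictly positive dissimilarities (exact zeros would vanish in the sparse representation) and omits the capacity-constrained, bin-packing side of the paper:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, connected_components

def max_split_partition(D, M):
    """Partition into M clusters maximizing the smallest split.

    Cutting the M - 1 largest edges of a minimum spanning tree of the
    dissimilarity graph yields a maximum-split partition.
    """
    mst = minimum_spanning_tree(csr_matrix(D)).tocoo()
    order = np.argsort(mst.data)[::-1]          # MST edges, largest first
    keep = order[M - 1:]                         # drop the M - 1 largest
    n = D.shape[0]
    forest = csr_matrix((mst.data[keep], (mst.row[keep], mst.col[keep])),
                        shape=(n, n))
    _, labels = connected_components(forest, directed=False)
    return labels
```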

16.
Given a set of objects and a symmetric matrix of dissimilarities between them, Unidimensional Scaling is the problem of finding a representation by locating points on a continuum. Approximating dissimilarities by the absolute value of the difference between coordinates on a line constitutes a serious computational problem. This paper presents an algorithm that implements Simulated Annealing in a new way, via a strategy based on a weighted alternating process that uses permutations and point-wise translations to locate the optimal configuration. Explicit implementation details are given for least squares loss functions and for least absolute deviations. The weighted, alternating process is shown to outperform earlier implementations of Simulated Annealing and other optimization strategies for Unidimensional Scaling in run time efficiency, in solution quality, or in both.
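A sketch of a Simulated Annealing loop with the two move types described, a coordinate swap (permutation move) and a point-wise translation, under the least squares loss; the temperature schedule and the 50/50 move mix are assumptions of this sketch, not the paper's tuned strategy:

```python
import numpy as np

def uds_anneal(Delta, T0=1.0, cool=0.995, n_iter=20000, rng=None):
    """Simulated Annealing for least squares Unidimensional Scaling.

    Delta : (n, n) symmetric dissimilarities. Minimizes
    sum_{i<j} (Delta_ij - |x_i - x_j|)^2 over coordinates x.
    """
    rng = np.random.default_rng(rng)
    n = Delta.shape[0]
    x = rng.standard_normal(n)
    iu = np.triu_indices(n, 1)

    def loss(x):
        d = np.abs(x[:, None] - x[None, :])
        return np.sum((Delta[iu] - d[iu]) ** 2)

    cur, T = loss(x), T0
    for _ in range(n_iter):
        y = x.copy()
        if rng.random() < 0.5:                       # permutation move
            i, j = rng.choice(n, 2, replace=False)
            y[i], y[j] = y[j], y[i]
        else:                                        # translation move
            y[rng.integers(n)] += T * rng.standard_normal()
        new = loss(y)
        if new < cur or rng.random() < np.exp((cur - new) / max(T, 1e-12)):
            x, cur = y, new                          # Metropolis acceptance
        T *= cool
    return x, cur
```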

17.
Block-Relaxation Approaches for Fitting the INDCLUS Model
A well-known clustering model to represent I × I × J data blocks, the J frontal slices of which consist of I × I object-by-object similarity matrices, is the INDCLUS model. This model implies a grouping of the I objects into a prespecified number of overlapping clusters, with each cluster having a slice-specific positive weight. An INDCLUS model is fitted to a given data set by means of minimizing a least squares loss function. The minimization of this loss function has appeared to be a difficult problem for which several algorithmic strategies have been proposed. At present, the best available option seems to be the SYMPRES algorithm, which minimizes the loss function by means of a block-relaxation algorithm. Yet, SYMPRES is conjectured to suffer from a severe local optima problem. As a way out, based on theoretical results with respect to optimally designing block-relaxation algorithms, five alternative block-relaxation algorithms are proposed. In a simulation study it appears that the alternative algorithms with overlapping parameter subsets perform best and clearly outperform SYMPRES in terms of optimization performance and cluster recovery.
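The loss function being minimized is easy to state given the standard INDCLUS model form, in which slice j is reconstructed as P diag(w_j) Pᵀ plus a slice-specific additive constant; a sketch of its evaluation (the array names are assumptions):

```python
import numpy as np

def indclus_loss(S, P, W, c):
    """Least squares loss of an INDCLUS decomposition.

    S : (J, I, I) stack of object-by-object similarity slices.
    P : (I, R) binary membership matrix of R possibly overlapping clusters.
    W : (J, R) slice-specific nonnegative cluster weights.
    c : (J,) slice-specific additive constants.
    """
    loss = 0.0
    for j in range(S.shape[0]):
        Shat = P @ np.diag(W[j]) @ P.T + c[j]   # model for slice j
        loss += np.sum((S[j] - Shat) ** 2)
    return loss
```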

18.
Radical Constructivism has been defined as an ‘unconventional approach to the problem of knowledge and knowing’. Its unconventionality is summarised by its claim that it is impossible to attribute unique meaning to experience, as no mind-independent yardstick can be assumed to exist against which to identify uniqueness, and hence to produce knowledge and knowing. In other words, it is claimed that there is no ‘reality’ that is knowable to all individual knowers. This claim appears indefensible by itself, as it does not explain why the successes of traditional science appear as such. However, it is defensible in the context of numerous failures to achieve unique attributions, or of the history of science. Even so, what is missing are concrete methods and research designs. This often leaves Radical Constructivism to be critical only, to concentrate on justifying the impossibility of success without contributing itself. Where this is the case it reduces scientists to individuals considered unable to communicate with others on public (and unique) attributions, who may do so only by borrowing methods from previous approaches. It is argued that a more valuable contribution is possible if Radical Constructivism is seen as a response to the challenge defined by frequent failures of traditional approaches. The latter may be extended such that the extensions converge to Radical Constructivism. Such extensions are based on reported observations, rather than on experiences in general, and are to be attributed meanings, uniquely as well as non-uniquely, by way of a collective. The latter should allow its ‘actors’ to restrict what maintains the collective to what is observable to others, as well as use the collective to restrict their own observations. The study of collectives thus allows for the study of restrictions or values, and hence for including subjective or constructivist experiences beyond (reportable) observations.

19.
In various data settings, it is necessary to compare observations from disparate data sources. We assume the data is in the dissimilarity representation (Pękalska and Duin, 2005) and investigate a joint embedding method (Priebe et al., 2013) that results in a commensurate representation of disparate dissimilarities. We further assume that there are “matched” observations from different conditions which can be considered to be highly similar, for the sake of inference. The joint embedding results in the joint optimization of fidelity (preservation of within-condition dissimilarities) and commensurability (preservation of between-condition dissimilarities between matched observations). We show that the tradeoff between these two criteria can be made explicit using weighted raw stress as the objective function for multidimensional scaling. In our investigations, we use a weight parameter, w, to control the tradeoff, and choose match detection as the inference task. Our results show weights that are optimal (with respect to the inference task) are different than equal weights for commensurability and fidelity and the proposed weighted embedding scheme provides significant improvements in statistical power.
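The objective can be sketched directly: raw stress with pair weights that take one value on within-condition pairs (fidelity) and another on matched between-condition pairs (commensurability). The 0/1 masks and the convex-combination parameterization below are assumptions of this sketch:

```python
import numpy as np

def weighted_raw_stress(Z, Delta, Wfid, Wcom, w):
    """Weighted raw stress trading off fidelity and commensurability.

    Z     : (n, d) joint embedding of observations from both conditions.
    Delta : (n, n) observed dissimilarities.
    Wfid  : (n, n) 0/1 mask of within-condition pairs (fidelity terms).
    Wcom  : (n, n) 0/1 mask of matched between-condition pairs
            (commensurability terms).
    w     : tradeoff in [0, 1]; w = 0.5 corresponds to equal weighting.
    """
    D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
    R2 = (D - Delta) ** 2
    return np.sum(((1.0 - w) * Wfid + w * Wcom) * R2)
```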

20.
The weighted linear choice model is one of the most popular models in the social sciences. In this model the utility of a choice object is represented as a weighted sum of attribute-level desirabilities, where the weights are attribute importances. In many empirical contexts the choice objects are such that individuals are highly correlated in terms of their desirability ordering of levels within attributes (e.g., price levels, durability levels, etc.) but may differ appreciably in terms of their evaluations of each attribute's importance. In this paper we address the problem of how dissimilar two individuals may be, in a rank correlation sense, given that they agree completely on the desirability ordering of levels within attributes, but may disagree considerably regarding the importance they attach to the attributes themselves. The problem has interesting implications regarding the potential value of clustering individuals' utility functions for market segmentation or other such purposes. The authors would like to thank the editor and three anonymous reviewers for their excellent comments on an earlier draft of the paper.
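A three-object, two-attribute toy example makes the paper's question tangible: two individuals with identical within-attribute desirabilities but opposite attribute importances can produce perfectly reversed rankings (a Kendall rank correlation of -1). All numbers below are illustrative:

```python
import numpy as np
from scipy.stats import kendalltau

# Shared level desirabilities per object on attributes A and B.
desir = np.array([[0.9, 0.2],    # object 1
                  [0.5, 0.5],    # object 2
                  [0.1, 0.8]])   # object 3
w1 = np.array([0.8, 0.2])        # individual 1: attribute importances
w2 = np.array([0.2, 0.8])        # individual 2: reversed importances

u1, u2 = desir @ w1, desir @ w2  # weighted linear utilities
tau, _ = kendalltau(u1, u2)
print(u1, u2, tau)               # u1 ranks 1>2>3, u2 ranks 3>2>1, tau = -1
```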
