Similar Documents
20 similar documents found.
1.
A latent class vector model for preference ratings
A latent class formulation of the well-known vector model for preference data is presented. Assuming preference ratings as input data, the model simultaneously clusters the subjects into a small number of homogeneous groups (or latent classes) and constructs a joint geometric representation of the choice objects and the latent classes according to a vector model. The distributional assumptions on which the latent class approach is based are analogous to the distributional assumptions that are consistent with the common practice of fitting the vector model to preference data by least squares methods. An EM algorithm for fitting the latent class vector model is described as well as a procedure for selecting the appropriate number of classes and the appropriate number of dimensions. Some illustrative applications of the latent class vector model are presented and some possible extensions are discussed. Geert De Soete is supported as “Bevoegdverklaard Navorser” of the Belgian “Nationaal Fonds voor Wetenschappelijk Onderzoek.”
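The abstract does not spell out the update equations, but the E- and M-steps take a familiar mixture-model form. Below is a minimal sketch of one EM pass, assuming Gaussian rating errors with a common variance and treating the object coordinates X as fixed within the pass (the full algorithm also re-estimates X and the variance); all names are illustrative.

```python
import numpy as np

def em_step(Y, X, B, pi, sigma2):
    """One EM pass. Y: (n_subjects, n_objects) preference ratings;
    X: (n_objects, n_dims) object coordinates; B: (n_classes, n_dims)
    class vectors; pi: (n_classes,) class priors."""
    T = B.shape[0]
    # E-step: log-likelihood of each subject's ratings under each class vector.
    mu = X @ B.T                                    # (n_objects, n_classes)
    ll = np.stack([-0.5 * np.sum((Y - mu[:, t]) ** 2, axis=1) / sigma2
                   for t in range(T)], axis=1) + np.log(pi)
    r = np.exp(ll - ll.max(axis=1, keepdims=True))
    r /= r.sum(axis=1, keepdims=True)               # responsibilities (n, T)
    # M-step: each class vector is a weighted least-squares fit; priors update.
    for t in range(T):
        y_bar = (r[:, t, None] * Y).sum(axis=0) / r[:, t].sum()
        B[t], *_ = np.linalg.lstsq(X, y_bar, rcond=None)
    return B, r.mean(axis=0), r
```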

2.
The issue of determining “the right number of clusters” in K-Means has attracted considerable interest, especially in recent years. Cluster intermix appears to be the factor that most affects clustering results. This paper proposes an experimental setting for comparing different approaches on data generated from Gaussian clusters with controlled parameters of between- and within-cluster spread to model cluster intermix. The setting allows for evaluating centroid recovery on par with the conventional evaluation of cluster recovery. The subjects of our interest are two versions of the “intelligent” K-Means method, iK-Means, which find the “right” number of clusters by extracting “anomalous patterns” from the data one by one. We compare them with seven other methods, including Hartigan’s rule, averaged Silhouette width, and the Gap statistic, under different between- and within-cluster spread-shape conditions. Several consistent patterns emerge from our experiments, such as that the right K is reproduced best by Hartigan’s rule – but not the clusters or their centroids. This leads us to propose an adjusted version of iK-Means, which performs well in the current experimental setting.
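For concreteness, here is a hedged sketch of Hartigan’s rule, the best performer for recovering K in the experiments above. The threshold of 10 is Hartigan’s conventional cut-off, and the scikit-learn estimator is a stand-in for whatever K-Means implementation one prefers.

```python
import numpy as np
from sklearn.cluster import KMeans

def hartigan_k(X, k_max=10, threshold=10.0):
    """Return the smallest K whose Hartigan statistic drops below `threshold`."""
    n = X.shape[0]
    # Within-cluster sums of squares for K = 1 .. k_max + 1.
    W = [KMeans(n_clusters=k, n_init=10).fit(X).inertia_
         for k in range(1, k_max + 2)]
    for k in range(1, k_max + 1):
        H = (W[k - 1] / W[k] - 1.0) * (n - k - 1)
        if H <= threshold:          # adding another cluster no longer pays off
            return k
    return k_max
```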

3.
A modified CANDECOMP algorithm is presented for fitting the metric version of the Extended INDSCAL model to three-way proximity data. The Extended INDSCAL model assumes, in addition to the common dimensions, a unique dimension for each object. The modified CANDECOMP algorithm fits the Extended INDSCAL model in a dimension-wise fashion and ensures that the subject weights for the common and the unique dimensions are nonnegative. A Monte Carlo study is reported to illustrate that the method is fairly insensitive to the choice of the initial parameter estimates. A second Monte Carlo study shows that the method is able to recover an underlying Extended INDSCAL structure if present in the data. Finally, the method is applied for illustrative purposes to some empirical data on pain relievers. In the final section, some other possible uses of the new method are discussed. Geert De Soete is supported as “Bevoegdverklaard Navorser” of the Belgian “Nationaal Fonds voor Wetenschappelijk Onderzoek”.
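The nonnegativity constraint on the subject weights can be enforced dimension-wise with nonnegative least squares. The sketch below shows one such update for a single common dimension, assuming the model is fit to residual scalar products; the function name, data shapes, and one-dimension-at-a-time framing are illustrative assumptions, not the paper’s exact algorithm.

```python
import numpy as np
from scipy.optimize import nnls

def update_subject_weights(R, x):
    """R: (n_subjects, n_obj, n_obj) residual scalar products;
    x: (n_obj,) object coordinates on the current dimension.
    Returns one nonnegative weight per subject for this dimension."""
    outer = np.outer(x, x).ravel()   # rank-1 contribution of the dimension
    A = outer[:, None]               # design matrix with a single column
    # nnls solves min ||A w - b|| subject to w >= 0, one subject at a time.
    return np.array([nnls(A, R[s].ravel())[0][0] for s in range(R.shape[0])])
```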

4.
After sketching the historical development of “emergence” and noting several recent problems relating to “emergent properties”, this essay proposes that properties may be either “emergent” or “mergent” and either “intrinsic” or “extrinsic”. These two distinctions define four basic types of change: stagnation, permanence, flux, and evolution. To illustrate how emergence can operate in a purely logical system, the Geometry of Logic is introduced. This new method of analyzing conceptual systems involves the mapping of logical relations onto geometrical figures, following either an analytic or a synthetic pattern (or both together). Evolution is portrayed as a form of discontinuous change characterized by emergent properties that take on an intrinsic quality with respect to the object(s) or proposition(s) involved. Causal leaps, not continuous development, characterize the evolution of human life in a developing foetus, of a thought out of certain brain states, of a new idea (or insight) out of ordinary thoughts, and of a great person out of a set of historical experiences. The tendency to assume that understanding evolutionary change requires a step-by-step explanation of the historical development that led to the appearance of a certain emergent property is thereby discredited.

5.
The aim of this contribution is to critically examine the metaphysical presuppositions that prevail in Stewart’s (Found Sci 15(4):395–409, 2010a) answer to the question “are we in the midst of a developmental process?”, as expressed in his statement “that humanity has discovered the trajectory of past evolution and can see how it is likely to continue in the future”.

6.
Bayesian classification is currently of considerable interest. It provides a strategy for eliminating the uncertainty associated with a particular choice of classifier model parameters, and is the optimal decision-theoretic choice under certain circumstances when there is no single “true” classifier for a given data set. Modern computing capabilities can easily support the Markov chain Monte Carlo sampling that is necessary to carry out the calculations involved, but the information available in these samples is not at present being fully utilised. We show how it can be allied to known results concerning the “reject option” in order to produce an assessment of the confidence that can be ascribed to particular classifications, and how these confidence measures can be used to compare the performances of classifiers. Incorporating these confidence measures can alter the apparent ranking of classifiers as given by straightforward success or error rates. Several possible methods for obtaining confidence assessments are described, and compared on a range of data sets using the Bayesian probabilistic nearest-neighbour classifier.
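As an illustration of how posterior samples can feed a “reject option”, consider the following sketch. The 0.8 threshold and the shape of prob_samples are assumptions for the example, not values from the paper.

```python
import numpy as np

def classify_with_reject(prob_samples, threshold=0.8):
    """prob_samples: (n_draws, n_classes) class-membership probabilities for a
    single test point, one row per MCMC posterior draw."""
    p_mean = prob_samples.mean(axis=0)   # posterior predictive probabilities
    best = int(np.argmax(p_mean))
    # Reject (return None) when even the best class is insufficiently probable.
    return best if p_mean[best] >= threshold else None
```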

7.
Taxonomy Based Modeling was applied to describe drivers’ mental models of variable message signs (VMS’s) displayed on expressways. Progress in road telematics has made it possible to introduce such signs: sensors embedded in the carriageway every 500 m record certain variables (speed, flow rate, etc.) that are transformed in real time into “driving times” to a given destination if road conditions do not change. VMS systems are auto-regulative Man-Machine (AMMI) systems which incorporate a model of the user: if the traffic flow is too high, then drivers should choose alternative routes, and in so doing the traffic flow should decrease. The model of the user is based on suppositions such as: people do not like to waste time, they fully understand the displayed messages, they trust the displayed values, and they know of alternative routes. However, people also have a model of the way the system functions, and if they do not believe the contents of the message, they will not act as expected. We collected data through interviews with drivers using the critical incidents technique (Flanagan, 1985). Results show that the mental models drivers have of the way the VMS system works are varied but not numerous, and that most of them differ from the “ideal expert” mental model. It is clear that users do not have an adequate model of how the VMS system works and that VMS planners have a model of user behaviour that does not correspond to the behaviour of the drivers we interviewed. Finally, Taxonomy Based Modeling is discussed as a tool for mental model remediation.

8.
Towards a Hierarchical Definition of Life, the Organism, and Death
Despite hundreds of definitions, no consensus exists on a definition of life or on the closely related and problematic definitions of the organism and death. These problems retard practical and theoretical development in, for example, exobiology, artificial life, biology and evolution. This paper suggests improving this situation by basing definitions on a theory of a generalized particle hierarchy. This theory uses the common denominator of the “operator” for a unified ranking of both particles and organisms, from elementary particles to animals with brains. Accordingly, this ranking is called “the operator hierarchy”. This hierarchy allows life to be defined as: matter with the configuration of an operator, and that possesses a complexity equal to, or even higher than, that of the cellular operator. Living is then synonymous with the dynamics of such operators, and the word organism refers to a select group of operators that fit the definition of life. The minimum condition defining an organism is its existence as an operator, construction thus being more essential than metabolism, growth or reproduction. In the operator hierarchy, every organism is associated with a specific closure, for example, the nucleus in eukaryotes. This allows death to be defined as: the state in which an organism has lost its closure following irreversible deterioration of its organization. The generality of the operator hierarchy also offers a context to discuss “life as we do not know it”. The paper ends with testing the definition’s practical value against a range of examples.

9.
Investigations with electrometers in the 1770s led Volta to envision mobile charge in electrical conductors as a compressible fluid. A pressure-like condition in this fluid, which Volta described as the fluid’s “effort to push itself out” of its conducting container, was the causal agent that made the fluid move. In this paper I discuss Volta’s use of analogy and imagery in model building, and compare it with a successful contemporary conceptual approach to introducing ideas about electric potential in instruction. The concept that today is called “electric potential” was defined mathematically by Poisson in 1811. It was understood after about 1850 to predict the same results in conducting matter as Volta’s pressure-like concept—and to predict electrostatic effects in the exterior space where Volta’s concept had nothing to say. Complete quantification in addition to greater generality made the mathematical concept a superior research tool for scientists. However, its spreading use in instruction has marginalized approaches to model building based on the analogy and imagery resources that students bring into the classroom. Data from pre- and post-testing in high schools show greater conceptual and confidence gains using the new conceptual approach than using conventional instruction. This provides evidence for reviving Volta’s compressible fluid model as an intuitive foundation which can then be modified to include electrostatic distant action. Volta tried to modify his compressible fluid model to include distant action, using imagery borrowed from distant heating by a flame. This project remained incomplete, because he did not envision an external field mediating the heating. However, pursuing Volta’s strategy of model modification to completion now enables students taught with the new conceptual approach to add distant action to an initial compressible fluid model. I suggest that a partial correspondence to the evolving model sequence that works for beginning students can help illuminate Volta’s use of intermediate explanatory models.

10.
Possibly the most fundamental scientific problem is the origin of time and causality. The inherent difficulty is that all scientific theories of origins and evolution consider the existence of time and causality as given. We tackle this problem by starting from the concept of self-organization, which is seen as the spontaneous emergence of order out of primordial chaos. Self-organization can be explained by the selective retention of invariant or consistent variations, implying a breaking of the initial symmetry exhibited by randomness. In the case of time, we start from a random graph connecting primitive “events”. Selection on the basis of consistency eliminates cyclic parts of the graph, so that transitive closure can transform it into a partial order relation of precedence. Causality is assumed to be carried by causal “agents” which undergo a more traditional variation and selection, giving rise to causal laws that are partly contingent, partly necessary.

11.
A clustering that consists of a nested set of clusters may be represented graphically by a tree. In contrast, a clustering that includes non-nested overlapping clusters (sometimes termed a “nonhierarchical” clustering) cannot be represented by a tree. Graphical representations of such non-nested overlapping clusterings are usually complex and difficult to interpret. Carroll and Pruzansky (1975, 1980) suggested representing non-nested clusterings with multiple ultrametric or additive trees. Corter and Tversky (1986) introduced the extended tree (EXTREE) model, which represents a non-nested structure as a tree plus overlapping clusters that are represented by marked segments in the tree. We show here that the problem of finding a nested (i.e., tree-structured) set of clusters in an overlapping clustering can be reformulated as the problem of finding a clique in a graph. Thus, clique-finding algorithms can be used to identify sets of clusters in the solution that can be represented by trees. This formulation provides a means of automatically constructing a multiple tree or extended tree representation of any non-nested clustering. The method, called “clustrees”, is applied to several non-nested overlapping clusterings derived using the MAPCLUS program (Arabie and Carroll 1980).
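The clique reformulation is easy to state in code: make each cluster a node and connect two clusters when they are tree-compatible, i.e., nested or disjoint. A maximal clique is then a maximal set of clusters representable by a single tree. The sketch below, using networkx, is illustrative rather than the MAPCLUS/clustrees implementation.

```python
import networkx as nx

def tree_compatible(a, b):
    """Two clusters can coexist in one tree iff nested or disjoint."""
    a, b = set(a), set(b)
    return a <= b or b <= a or not (a & b)

def treeable_cluster_sets(clusters):
    G = nx.Graph()
    G.add_nodes_from(range(len(clusters)))
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            if tree_compatible(clusters[i], clusters[j]):
                G.add_edge(i, j)
    # Each maximal clique indexes a maximal tree-representable set of clusters.
    return list(nx.find_cliques(G))
```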

12.
Certain cognitive and philosophical aspects of the concept of conceivability with intended or established diversion from (putative) reality are discussed. The “coherence gap problem” arises when certain fragments of the real world are replaced with imaginary situations while most details are (intentionally or not) ignored. Another issue, “the spectator problem”, concerns the participation of the conceiver himself in the world conceived. Three different examples of conceivability are used to illustrate our points, namely thought experiments in physics, a hypothetical world devoid of consciousness (zombie world), and virtual reality.

13.
In recent years a general consensus has been developing in the philosophy of science to the effect that strong social constructivist accounts are unable to adequately account for scientific practice. Recently, however, a number of commentators have formulated an attenuated version of constructivism that purports to avoid the difficulties that plague the stronger claims of its predecessors. Interestingly, this attenuated form of constructivism finds philosophical support from a relatively recent turn in the literature concerning scientific realism. Arthur Fine and a number of other commentators have argued that the realism debate ought to be abandoned. The rationale for this argument is that the debate is sterile, for it has, it is claimed, no consequence for actual scientific practice, and therefore does not advance our understanding of science or its practice. Recent “softer” accounts of social constructivism also hold a similarly agnostic stance on the realism question. I provide a survey of these various agnostic stances and show how they form a general position that I shall refer to as “the anti-philosophical stance”. I then demonstrate that the anti-philosophical stance fails by identifying difficulties that attend its proposal to ban philosophical interpretation. I also provide examples of instances where philosophical stances on the realism question affect scientific practice.

14.
All attempts to find the justification of the privileged evolution of phenomena exclusively in the external world must refer to the inescapable fact that we are living in such an asymmetric universe. This leads us to look for the origin of the “arrow of time” in the relationship between the subject and the world. The anthropic argument shows that the arrow of time is the condition of the possibility of the emergence and maintenance of life in the universe. Moreover, according to Bohr’s, Poincaré’s and Watanabe’s analyses, this agreement between the earlier-later direction of entropy increase and the past-future direction of life is the very condition of the possibility of meaningful action, representation and creation. Beyond this relationship of logical necessity between the meaning process and the arrow of time, the question of their possible physical connection is explored. To answer this question affirmatively, the meaning process is modelled as an evolving tree-like structure, called “Semantic Time”, where thermodynamic irreversibility can be shown. Time is the substance I am made of. Time is a river which sweeps me along, but I am the river; it is a tiger which destroys me, but I am the tiger; it is a fire which consumes me, but I am the fire. – (Jorge Luis Borges)

15.
Scientific anomalies are observations and facts that contradict current scientific theories, and they are instrumental in scientific theory change. Philosophers of science have approached scientific theory change from different perspectives, as Darden (Theory change in science: Strategies from Mendelian genetics, 1991) observes: Lakatos (In: Lakatos, Musgrave (eds) Criticism and the growth of knowledge, 1970) approaches it as progressive “research programmes” consisting of incremental improvements (“monster barring” in Lakatos, Proofs and refutations: The logic of mathematical discovery, 1976), Kuhn (The structure of scientific revolutions, 1996) observes that changes in “paradigms” are instigated by a crisis from some anomaly, and Hanson (In: Feigl, Maxwell (eds) Current issues in the philosophy of science, 1961) proposes that discovery does not begin with hypothesis but with some “problematic phenomena requiring explanation”. Even though anomalies are important in all of these approaches to scientific theory change, there have been only a few investigations into the specific role anomalies play in scientific theory change. Furthermore, many of these approaches focus on the theories themselves and not on how scientists and their experiments bring about scientific change (Gooding, Experiment and the making of meaning: Human agency in scientific observation and experiment, 1990). To address these issues, this paper approaches scientific anomaly resolution from a meaning construction point of view. Conceptual integration theory (Fauconnier and Turner, Cogn Sci 22:133–187, 1996; The way we think: Conceptual blending and mind’s hidden complexities, 2002) from cognitive linguistics describes how one constructs meaning from various stimuli, such as text and diagrams, through conceptual integration or blending. The conceptual integration networks that describe the conceptual integration process characterize cognition that occurs unconsciously during meaning construction. These same networks are used to describe some of the cognition involved in resolving an anomaly in molecular genetics called RNA interference (RNAi) in a case study. The RNAi case study is a cognitive-historical reconstruction (Nersessian, In: Giere (ed) Cognitive models of science, 1992) of how the RNAi anomaly was resolved. This reconstruction traces four relevant molecular genetics publications to describe the cognition necessary to account for how RNAi was resolved through strategies (Darden 1991), abductive reasoning (Peirce, In: Hartshorne, Weiss (eds) Collected papers, 1958), and experimental reasoning (Gooding 1990). The results of the case study show that experiments play a crucial role in formulating an explanation of the RNAi anomaly, and the integration networks describe the experiments’ role. Furthermore, these results suggest that RNAi anomaly resolution is embodied, in the sense that the cognition described in the cognitive-historical reconstruction is experientially based.

16.
Attempts to explain the causal paradoxes of Quantum Mechanics (QM) have tried to solve the problems within the framework of Quantum Electrodynamics (QED). We will show that this is impossible. The original theory of QED by Dirac (Proc Roy Soc A117:610, 1928) formulated in its preamble four preliminary requirements that the new theory should meet. The first of these requirements was that the theory must be causal. Causality is not to be derived as a consequence of the theory, since it was a precondition for the formulation of the theory; the theory was constructed so as to be causal. Therefore, causal paradoxes logically cannot be explained within the framework of QED. To transcend this problem we should consider the following points: Dirac himself stated in his original paper (1928) that his theory was only an approximation. When he returned to improve the theory later (Proc Roy Soc A209, 1951), he noted that the new theory “involves only the ratio e/m, not e and m separately”. This is a sign that although the electromagnetic effects (whose source is e) are orders of magnitude stronger than the gravitational effects (whose source is m), the two are coupled. Already in 1919, Einstein noted that “the elementary formations which go to make up the atom” are influenced by gravitational forces. Although in that form the statement proved not to be exactly correct, the effects of gravitation on QM phenomena have been established. The conclusion is that we should seek a resolution of the causal paradoxes in the framework of the General Theory of Relativity (GTR)—in contrast to QED, which involves only the Special Theory of Relativity (STR). We show that causality is necessarily violated in GTR. This follows from the curvature of space-time. Although these effects are very small, one cannot ignore their influence in the case of the so-called “paradox phenomena”.

17.
The theory of the tight span, a cell complex that can be associated with every metric D, offers a unifying view of existing approaches for analyzing distance data, in particular for decomposing a metric D into a sum of simpler metrics as well as for representing it by certain specific edge-weighted graphs, often referred to as realizations of D. Many of these approaches involve the explicit or implicit computation of the so-called cutpoints of (the tight span of) D, such as the algorithm for computing the “building blocks” of optimal realizations of D recently presented by A. Hertz and S. Varone. The main result of this paper is an algorithm for computing the set of these cutpoints for a metric D on a finite set with n elements in O(n³) time. As a direct consequence, this improves the run time of the aforementioned O(n⁶) algorithm by Hertz and Varone by “three orders of magnitude”.
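As a loose illustration only (not the paper’s O(n³) algorithm, which works on the tight span of D directly): on a given edge-weighted graph realization of D, the cutpoints are the graph’s articulation points, and removing them separates the realization into the kind of “building blocks” mentioned above. The toy graph here is an assumption for the example.

```python
import networkx as nx

# A small realization: leaves a, b hang off x; leaves c, d hang off y.
G = nx.Graph()
G.add_weighted_edges_from([("a", "x", 1), ("b", "x", 1), ("x", "y", 2),
                           ("c", "y", 1), ("d", "y", 1)])
print(sorted(nx.articulation_points(G)))   # ['x', 'y'] -- the cutpoints
```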

18.
This paper suggests an epistemic interpretation of Belnap’s branching space-times theory based on Everett’s relative state formulation of the measurement operation in quantum mechanics. The informational branching models of the universe are evolving structures defined from a partial ordering relation on the set of memory states of the impersonal observer. The totally ordered set of their information contents defines a linear “time” scale to which the decoherent alternative histories of the informational universe can be referred—which is quite necessary for assigning them a probability distribution. The “historical” state of a physical system is represented in an appropriate extended Hilbert space and an algebra of multi-branch operators is developed. An age operator computes the informational depth of historical states and its standard deviation can be used to provide a universal information/energy uncertainty relation. An information operator computes the encoding complexity of historical states, the rate of change of its average value accounting for the process of correlation destruction inherent to the branching dynamics. In the informational branching models of the universe, the asymmetry of phenomena in nature appears as a mere consequence of the subject’s activity of measuring, which defines the flow of time-information.

19.
A common practice in cross validation research in the behavioral sciences is to employ either the product moment correlation or a simple tabulation of first-choice “hits” for measuring the accuracy with which various preference models predict subjects’ responses to a holdout sample of choice objects. We propose a nonparametric approach for summarizing the accuracy of predicted rankings across a set of holdout-sample options. The methods that we develop contain a novel way to deal with ties and an approach to the different weighting of rank positions.
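A tie-aware rank correlation such as Kendall’s tau-b illustrates the kind of summary proposed here, alongside the first-choice “hit” tabulation it is meant to improve on; tau-b is a stand-in for the paper’s own statistic, whose tie handling and rank weighting differ.

```python
import numpy as np
from scipy.stats import kendalltau

def first_choice_hit(pred, obs):
    """1 if the model's top-ranked holdout option matches the subject's."""
    return int(np.argmax(pred) == np.argmax(obs))

def rank_accuracy(pred, obs):
    """Tie-aware rank agreement across all holdout options (Kendall tau-b)."""
    tau, _ = kendalltau(pred, obs)
    return tau
```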

20.
Jan Greben criticized fine-tuning by taking seriously the idea that “nature is quantum mechanical”. I argue that this quantum view is limited, and that fine-tuning is real, in the sense that our current physical models require fine-tuning. Second, I examine and clarify many difficult and fundamental issues raised by Rüdiger Vaas’ comments on Cosmological Artificial Selection.
