20 similar documents found
1.
A probabilistic DEDICOM model was proposed for mobility tables. The model attempts to explain observed transition probabilities
by a latent mobility table and a set of transition probabilities from latent classes to observed classes. The model captures
asymmetry in observed mobility tables by asymmetric latent mobility tables. It may be viewed as a special case of both the
latent class model and DEDICOM with special constraints. A maximum penalized likelihood (MPL) method was developed for parameter
estimation. The EM algorithm was adapted for the MPL estimation. Two examples were given to illustrate the proposed method.
The work reported in this paper has been supported by grant A6394 to the first author from the Natural Sciences and Engineering
Research Council of Canada and by a fellowship of the Royal Netherlands Academy of Arts and Sciences to the second author.
We would like to thank the anonymous reviewers for their insightful comments.
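As a point of orientation, the decomposition sketched in this abstract can be written, in our own notation (a plausible formalization, not necessarily the authors' exact parameterization), as

    p_{ij} \;=\; \sum_{k=1}^{K}\sum_{l=1}^{K} \varphi_{kl}\,\tau_{i|k}\,\tau_{j|l},
    \qquad \sum_{k,l}\varphi_{kl}=1, \quad \sum_{j}\tau_{j|k}=1,

where \varphi_{kl} is the (possibly asymmetric) latent mobility table and \tau_{j|k} the probability of observed class j given latent class k. The DEDICOM-like structure arises because the same \tau parameters act on both the origin and the destination mode.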
2.
3.
We propose a non-negative real-valued model of hierarchical classes (HICLAS) for two-way two-mode data. Like the other members
of the HICLAS family, the non-negative real-valued model (NNRV-HICLAS) implies simultaneous hierarchically organized classifications
of all modes involved in the data. A distinctive feature of the novel model is that it yields continuous, non-negative real-valued
reconstructed data, which considerably expands the application range of the HICLAS family. The expansion implies a major algorithmic
challenge as it involves a move from the typical discrete optimization problems in HICLAS to a mixed discrete-continuous one.
To solve this mixed discrete-continuous optimization problem, a two-stage algorithm combining a simulated annealing stage and an
alternating local descent stage is proposed. Subsequently, it is evaluated in a simulation study. Finally, the NNRV-HICLAS model
is applied to an empirical data set on anger.
4.
Jacob Stegenga 《Foundations of Science》2016,21(1):35-49
Consensus conferences are social techniques which involve bringing together a group of scientific experts, and sometimes also non-experts, in order to increase the public role in science and related policy, to amalgamate diverse and often contradictory evidence for a hypothesis of interest, and to achieve scientific consensus or at least the appearance of consensus among scientists. For consensus conferences that set out to amalgamate evidence, I propose three desiderata: Inclusivity (the consideration of all available evidence), Constraint (the achievement of some agreement of intersubjective assessments of the hypothesis of interest), and Evidential Complexity (the evaluation of available evidence based on a plurality of relevant evidential criteria). Two examples suggest that consensus conferences can readily satisfy Inclusivity and Evidential Complexity, but consensus conferences do not as easily satisfy Constraint. I end by discussing the relation between social inclusivity and the three desiderata.
5.
L. Andries van der Ark, Peter G. M. van der Heijden, Dirk Sikkel 《Journal of Classification》1999,16(1):117-137
The latent budget model is also known as the end-member model.
A major drawback of the latent budget model is that, in general, the
model is not identifiable, which complicates the interpretation of the
model considerably. This paper studies the geometry and identifiability
of the latent budget model. Knowledge of the geometric structure of the
model is used to specify an appropriate criterion to identify the model.
The results are illustrated by an empirical data set.
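For reference, the latent budget model referred to here is standardly written as

    \pi_{j|i} \;=\; \sum_{k=1}^{K} \pi_{k|i}\,\pi_{j|k},
    \qquad \sum_{k}\pi_{k|i}=1, \quad \sum_{j}\pi_{j|k}=1,

so each observed row profile is a mixture of K latent budgets \pi_{\cdot|k}. The non-identifiability discussed above arises because the mixing weights and the latent budgets can be jointly transformed without changing the reconstructed probabilities \pi_{j|i}.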
6.
7.
Revisiting the Nature of Information through a Comparative Study of Informatics in Different Fields
Starting from a comparative study of informatics in different fields, this paper seeks to reveal what the various branches of informatics actually study and what the "information" they posit really refers to. It analyzes several common questions that arise across these fields, such as whether information is conserved, whether information can be divided into new and old, and the meaning and soundness of certain information concepts and related notions, and on this basis offers some reflections on the nature of information: what information is, what makes information information, and whether information is objective.
8.
Andrzej Młodak 《Journal of Classification》2011,28(3):327-362
The paper proposes a method of interval data clustering for social and economic objects characterized by many interval variables. This multivariate approach is based on an original conception of interval quantiles, constructed using a special definition derived from the notion of the Hausdorff distance. To improve the quality of classification, the obtained interval quantile classes can next be aggregated into larger merged classes. The efficiency of our method can be assessed using specially defined entropy indices and volume coefficients, the latter replacing the classical concept of area, which is not applicable in this case.
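The Hausdorff distance invoked above has a simple closed form for closed intervals on the real line, which is the elementary building block for such interval-based constructions; a minimal Python sketch (our illustration, not the author's code):

    def hausdorff_interval(a, b):
        """Hausdorff distance between closed intervals a = (a_lo, a_hi) and b = (b_lo, b_hi);
        for intervals on the real line it reduces to max(|a_lo - b_lo|, |a_hi - b_hi|)."""
        (a_lo, a_hi), (b_lo, b_hi) = a, b
        return max(abs(a_lo - b_lo), abs(a_hi - b_hi))

    # Example: the intervals [1, 4] and [2, 7] are at Hausdorff distance max(1, 3) = 3
    print(hausdorff_interval((1, 4), (2, 7)))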
9.
Charles Bouveyron 《Journal of Classification》2014,31(1):49-84
In supervised learning, an important issue usually not taken into account by classical methods is that a class represented in the test set may not have been encountered earlier in the learning phase. Classical supervised algorithms will automatically label such observations as belonging to one of the known classes in the training set and will not be able to detect new classes. This work introduces a model-based discriminant analysis method, called adaptive mixture discriminant analysis (AMDA), which can detect several unobserved groups of points and can adapt the learned classifier to the new situation. Two EM-based procedures are proposed for parameter estimation, and model selection criteria are used for selecting the actual number of classes. Experiments on artificial and real data demonstrate the ability of the proposed method to deal with complex and real-world problems. The proposed approach is also applied to the detection of unobserved communities in social network analysis.
10.
The fundamental reason Beijing's urban ills are more severe than those of other cities is that, in order to sustain its own operation, the city has in effect entered the race of industrial competition, and that industrialization has in turn been accelerated by the city's advantages as the political and cultural center. Building new satellite towns cannot solve this problem. A fundamental way out is to merge Beijing and Tianjin, letting Tianjin become Beijing's industrial park, so that stripping away economic functions relieves the load on the capital's urban core and ensures that its functions as the political and cultural center can be brought fully into play.
11.
On the Problems in Information Dissemination Arising from the Modern Revolution in Science and Technology, and Their Countermeasures
The modern revolution in science and technology has had an enormous impact on how information is disseminated. In the era of multimedia and network communication we face problems such as information security and intellectual property protection, information pollution, information barriers, the allocation of information resources, and information control. Countermeasures need to be studied to safeguard the effective dissemination of information and to promote the coordinated development of science and technology, the economy, and society.
12.
Daniël W. van der Palm, L. Andries van der Ark, Jeroen K. Vermunt 《Journal of Classification》2016,33(1):52-72
Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool that may offer several advantages in comparison to a standard LC model. When using an LC model for density estimation, a considerable number of increasingly large LC models may have to be estimated before sufficient model fit is achieved. A DLC model consists of a sequence of small LC models. Therefore, a DLC model can be estimated much faster and can easily utilize multiple processor cores, meaning that this model is more widely applicable and practical. In this study we describe the algorithm for fitting a DLC model, and discuss the various settings that indirectly influence the precision of a DLC model as a density estimation tool. These settings are illustrated using a synthetic data example, and the best performing algorithm is applied to a real-data example. The synthetic data example showed that, using specific decision rules, a DLC model is able to correctly model complex associations amongst categorical variables.
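As background, the density being estimated in this kind of approach is the standard latent class factorization of a joint categorical distribution (standard notation, not specific to this paper):

    P(y_1, \ldots, y_J) \;=\; \sum_{k=1}^{K} \pi_k \prod_{j=1}^{J} P(y_j \mid k),
    \qquad \sum_{k}\pi_k = 1,

i.e., the response variables are conditionally independent given the latent class; the divisive strategy described above arrives at such a mixture through a sequence of small LC models rather than by fitting one increasingly large model.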
13.
A posteriori blockmodeling for graphs is proposed. The model assumes that the vertices of the graph are partitioned into two unknown blocks
and that the probability of an edge between two vertices depends only on the blocks to which they belong. Statistical procedures
are derived for estimating the probabilities of edges and for predicting the block structure from observations of the edge
pattern only. ML estimators can be computed using the EM algorithm, but this strategy is practical only for small graphs.
A Bayesian estimator, based on Gibbs sampling, is proposed. This estimator is also practical for large graphs. When ML
estimators are used, the block structure can be predicted based on predictive likelihood. When Gibbs sampling is used, the
block structure can be predicted from posterior predictive probabilities. A side result is that when the number of vertices
tends to infinity while the probabilities remain constant, the block structure can be recovered correctly with probability
tending to 1.
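The generative assumption described here, namely edge probabilities that depend only on the latent block memberships of the two endpoints, is easy to simulate; a minimal Python sketch for the undirected two-block case (function name and parameter values are illustrative, not taken from the paper):

    import numpy as np

    def sample_two_block_graph(n, pi=0.5, p_within=0.3, p_between=0.05, seed=0):
        """Sample an undirected graph whose edge probabilities depend only on the
        (unobserved) blocks of the endpoints; returns the adjacency matrix and labels."""
        rng = np.random.default_rng(seed)
        z = rng.random(n) < pi                                    # latent block labels
        P = np.where(z[:, None] == z[None, :], p_within, p_between)
        A = np.triu(rng.random((n, n)) < P, 1)                    # Bernoulli edges, upper triangle
        return (A | A.T).astype(int), z.astype(int)

    # Example: A, z = sample_two_block_graph(200); recovering z from A alone is the
    # prediction problem addressed by the EM and Gibbs procedures described above.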
14.
Latent class (LC) analysis is used by social, behavioral, and medical science researchers, among others, as a tool for clustering (or unsupervised classification) with categorical response variables, for analyzing the agreement between multiple raters, for evaluating the sensitivity and specificity of diagnostic tests in the absence of a gold standard, and for modeling heterogeneity in developmental trajectories. Despite the increased popularity of LC analysis, little is known about statistical power and required sample size in LC modeling. This paper shows how to perform power and sample size computations in LC models using Wald tests for the parameters describing association between the categorical latent variable and the response variables. Moreover, the design factors affecting the statistical power of these Wald tests are studied. More specifically, we show how design factors which are specific to LC analysis, such as the number of classes, the class proportions, and the number of response variables, affect the information matrix. The proposed power computation approach is illustrated using realistic scenarios for the design factors. A simulation study conducted to assess the performance of the proposed power analysis procedure shows that it performs well in all situations one may encounter in practice.
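Once the information matrix has been used to obtain a noncentrality parameter for the effect and sample size of interest, the power computation described above reduces to a noncentral chi-square tail probability; a minimal Python sketch of this generic Wald-test power formula (our illustration, not the authors' software):

    from scipy.stats import chi2, ncx2

    def wald_test_power(noncentrality, df, alpha=0.05):
        """Power of a Wald test: probability that a noncentral chi-square variate
        exceeds the central chi-square critical value at level alpha."""
        critical_value = chi2.ppf(1 - alpha, df)
        return ncx2.sf(critical_value, df, noncentrality)

    # Example: power of a 3-degree-of-freedom Wald test with noncentrality 10 at the 5% level
    print(wald_test_power(10.0, df=3))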
15.
Probabilistic feature models (PFMs) can be used to explain binary rater judgements about the associations between two types of elements (e.g., objects and attributes) on the basis of binary latent features. In particular, to explain observed object-attribute associations, PFMs assume that respondents classify both objects and attributes with respect to a, usually small, number of binary latent features, and that the observed object-attribute association is derived as a specific mapping of these classifications. Standard PFMs assume that the object-attribute association probability is the same according to all respondents, and that all observations are statistically independent. As both assumptions may be unrealistic, a multilevel latent class extension of PFMs is proposed which allows object and/or attribute parameters to differ across latent rater classes, and which allows dependencies between associations with a common object (attribute) to be modeled by assuming that the link between features and objects (attributes) is fixed across judgements. Formal relationships with existing multilevel latent class models for binary three-way data are described. As an illustration, the models are used to study rater differences in product perception and to investigate individual differences in the situational determinants of anger-related behavior.
16.
The Contributions of Information Scientist Eugene Garfield to STS Research
This paper examines the contributions of the prominent American information scientist Eugene Garfield to science, technology and society (STS) research. Using the word-frequency analysis software WordSmithTools to analyze the titles of Garfield's 1,447 publications, together with selected abstracts and full texts, we obtain a fairly deep understanding of his activities and main achievements in the STS field. His contributions fall into two categories. The first is an indirect, methodological contribution: the large citation index databases he created, such as the SCI, supply STS research with empirical material in the form of data, and the citation analysis methods he helped establish provide STS research with a tool. The second is his direct work on STS questions, for example using citation analysis to study Nobel laureates and offering his own views on science and technology policy, the ethics of science and technology, and scholarly communication.
17.
From the Nature of Information to the Philosophy of Information: A Review and Summary of Half a Century of Philosophical Inquiry in Information Science
From Shannon's information theory to virtual reality, information science has travelled a course of more than half a century, and philosophical reflection has correspondingly developed from the nature of information to the philosophy of the virtual. This paper attempts a preliminary analysis and summary of these two trajectories and of the questions they have raised, and argues for the necessity of establishing a philosophy of information.
18.
By comparing how the gougu theorem and the Pythagorean theorem were discovered and proved, this paper argues that the discoveries in both China and the West meet the modern definition of a scientific discovery, and that the theorem was discovered independently and almost simultaneously in China and the West. Based on the principle of differences in time sensitivity, it sets out principles and criteria for establishing priority for scientific discoveries in antiquity, and notes that these independent discoveries opened up different modes of scientific development in China and the West.
19.
This paper examines the perspectives from which the prominent American information scientist Eugene Garfield studied the Nobel Prize. By analyzing the 51 original articles on the Nobel Prize collected on Garfield's personal website, we gain a fairly deep understanding of those perspectives. Garfield offered his own distinctive views on seven topics: whether citation analysis can predict Nobel Prizes; whether laureates have all written citation classics; whether prize-winning work lies at the research front of its field; the relationship among highest citation counts, winning the Nobel Prize, and scientific recognition; the importance of professional collaborators; Japan's scientific research environment; and the use of citation analysis to determine whether a contribution counts as an early discovery.
20.
The prevailing criteria of death are whole-brain death and brainstem death. With the development of brain science, however, these criteria are no longer sufficient to capture the meaning of death. An analysis of the prevailing criteria shows that both whole-brain death and brainstem death face fundamental difficulties, in theory as well as in practice. From the theoretical perspective of personal identity, death of the cerebral hemispheres and death of the neocortex are criteria of death that can be given a sound philosophical defense.