Retrieved 20 similar documents (search time: 781 ms)
1.
In this paper we describe in some detail a formal computer model of inferential discourse based on a belief system. The key
issue is that a logical model in a computer, based on rational sets, can usefully model a human situation based on irrational
sets. The background of this work is explained elsewhere, as is the issue of rational and irrational sets (Billinge and Addis,
in: Magnani and Dossena (eds.), Computing, philosophy and cognition, 2004; Stepney et al., Journey: Non-classical philosophy—socially
sensitive computing in journeys non-classical computation: A grand challenge for computing research, 2004). The model is based
on the Belief System (Addis and Gooding, Proceedings of the AISB’99 Symposium on Scientific Creativity, 1999) and it provides
a mechanism for choosing queries based on a range of belief. We explain how it provides a way to update the belief based on
query results, thus modelling others’ experience by inference. We also demonstrate that for the same internal experience,
different models can be built for different actors.
Tom Addis
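The query-selection and belief-update mechanism can be sketched in miniature. The toy model below is an illustration of the general idea only, not the Addis and Gooding Belief System: beliefs are scalars in [0, 1], the actor queries the proposition it is most uncertain about, and query results nudge belief toward the observed evidence.

```python
class Actor:
    """Toy actor holding graded beliefs in propositions.

    Illustrative only (not the Addis and Gooding Belief System): beliefs
    are scalars in [0, 1], the query chosen is the most uncertain
    proposition, and answers nudge belief toward the observed evidence.
    """

    def __init__(self, beliefs):
        self.beliefs = dict(beliefs)  # proposition -> degree of belief

    def choose_query(self):
        # Query where belief is closest to 0.5, i.e. most uncertain.
        return min(self.beliefs, key=lambda p: abs(self.beliefs[p] - 0.5))

    def update(self, proposition, answer, rate=0.3):
        # Move belief toward 1.0 on a positive result, toward 0.0 otherwise.
        target = 1.0 if answer else 0.0
        b = self.beliefs[proposition]
        self.beliefs[proposition] = b + rate * (target - b)

# Two actors with the same update rule but different priors will build
# different models from identical query results.
actor = Actor({"p": 0.9, "q": 0.5, "r": 0.2})
query = actor.choose_query()        # "q": belief 0.5 is most uncertain
actor.update(query, answer=True)    # belief in "q" rises toward 1.0
```

Running the same query sequence against actors initialized with different priors gives a crude picture of the paper's claim that different models can be built for different actors from the same experience.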
2.
Helena Knyazeva 《Foundations of Science》2009,14(3):167-179
The modern conception of enactive cognition is discussed from the standpoint of nonlinear dynamics and synergetics. The contribution of Francisco Varela and his precursors is considered. It is shown that perceptual and mental processes are bound up with the “architecture” of the human body, and that the nonlinear, circular links between the subject of cognition and the world he constructs can be metaphorically called a nonlinear cobweb of cognition. Cognition is an autopoietic activity because it is directed at the search for missing elements; it serves to complete integral structures.
Helena Knyazeva
3.
In spite of its success, Neo-Darwinism is faced with major conceptual barriers to further progress, deriving directly from
its metaphysical foundations. Most importantly, neo-Darwinism fails to recognize a fundamental cause of evolutionary change,
“niche construction”. This failure restricts the generality of evolutionary theory, and introduces inaccuracies. It also hinders
the integration of evolutionary biology with neighbouring disciplines, including ecosystem ecology, developmental biology,
and the human sciences. Ecology is forced to become a divided discipline, developmental biology is stubbornly difficult to
reconcile with evolutionary theory, and the majority of biologists and social scientists are still unhappy with evolutionary
accounts of human behaviour. The incorporation of niche construction as both a cause and a product of evolution removes these
disciplinary boundaries while greatly generalizing the explanatory power of evolutionary theory.
Kevin N. Laland
4.
Technology moves us toward a better world. We contend that through technology people can simplify and solve moral tasks when they face incomplete information and possess a diminished capacity to act morally. Many external things, usually inert from the moral point of view, can be transformed into so-called moral mediators. Hence, not all moral tools are inside the head; many are shared and distributed in “external” objects and structures which function as ethical devices.
Emanuele Bardone
5.
Stephen Palmquist 《Foundations of Science》2007,12(1):9-37
After sketching the historical development of “emergence” and noting several recent problems relating to “emergent properties”,
this essay proposes that properties may be either “emergent” or “mergent” and either “intrinsic” or “extrinsic”. These two
distinctions define four basic types of change: stagnation, permanence, flux, and evolution. To illustrate how emergence can
operate in a purely logical system, the Geometry of Logic is introduced. This new method of analyzing conceptual systems involves
the mapping of logical relations onto geometrical figures, following either an analytic or a synthetic pattern (or both together).
Evolution is portrayed as a form of discontinuous change characterized by emergent properties that take on an intrinsic quality
with respect to the object(s) or proposition(s) involved. Causal leaps, not continuous development, characterize the evolution
of human life in a developing foetus, of a thought out of certain brain states, of a new idea (or insight) out of ordinary
thoughts, and of a great person out of a set of historical experiences. The tendency to assume that understanding evolutionary
change requires a step-by-step explanation of the historical development that led to the appearance of a certain emergent
property is thereby discredited.
6.
Melvin S. Steinberg 《Foundations of Science》2008,13(2):163-175
Investigations with electrometers in the 1770s led Volta to envision mobile charge in electrical conductors as a compressible
fluid. A pressure-like condition in this fluid, which Volta described as the fluid’s “effort to push itself out” of its conducting
container, was the causal agent that makes the fluid move. In this paper I discuss Volta’s use of analogy and imagery in model
building, and compare it with a successful contemporary conceptual approach to introducing ideas about electric potential in
instruction. The concept that today is called “electric potential” was defined mathematically by Poisson in 1811. It was understood
after about 1850 to predict the same results in conducting matter as Volta’s pressure-like concept—and to predict electrostatic
effects in the exterior space where Volta’s concept had nothing to say. Complete quantification in addition to greater generality
made the mathematical concept a superior research tool for scientists. However, its spreading use in instruction has marginalized
approaches to model building based on the analogy and imagery resources that students bring into the classroom. Data from
pre- and post-testing in high schools show greater conceptual and confidence gains with the new conceptual approach than with
conventional instruction. This provides evidence for reviving Volta’s compressible fluid model as an intuitive foundation
which can then be modified to include electrostatic distant action. Volta tried to modify his compressible fluid model to
include distant action, using imagery borrowed from distant heating by a flame. This project remained incomplete, because
he did not envision an external field mediating the heating. However, pursuing Volta’s strategy of model modification to completion
now enables students taught with the new conceptual approach to add distant action to an initial compressible fluid model.
I suggest that a partial correspondence to the evolving model sequence that works for beginning students can help illuminate
Volta’s use of intermediate explanatory models.
Melvin S. Steinberg
7.
The “DNA is a program” metaphor is still widely used in Molecular Biology and its popularization. There are good historical
reasons for the use of such a metaphor or theoretical model. Yet we argue that both the metaphor and the model are essentially inadequate, from the point of view of Physics and Computer Science as well. Relevant work criticizing the programming paradigm has already been done in Biology. We will refer to empirical evidence
and theoretical writings in Biology, although our arguments will be mostly based on a comparison with the use of differential
methods (in Molecular Biology: a mutation or alike is observed or induced and its phenotypic consequences are observed) as
applied in Computer Science and in Physics, where this fundamental tool for empirical investigation originated and acquired
a well-justified status. In particular, as we will argue, the programming paradigm is not theoretically sound as a causal (as in Physics) or deductive (as in Programming) framework for relating the genome to the phenotype, in contrast to the physicalist and computational grounds that this paradigm claims
to propose.
Giuseppe Longo (http://www.di.ens.fr/users/longo)
8.
Franc Rottiers 《Foundations of Science》2012,17(1):39-41
The aim of this contribution is to critically examine the metaphysical presuppositions that prevail in Stewart's (Found Sci 15(4):395–409, 2010a) answer to the question “are we in the midst of a developmental process?”, as expressed in his statement “that humanity has discovered the trajectory of past evolution and can see how it is likely to continue in the future”.
9.
10.
The issue of determining “the right number of clusters” in K-Means has attracted considerable interest, especially in recent years. Cluster intermix appears to be the factor most affecting clustering results. This paper proposes an experimental setting for comparing different approaches on data generated from Gaussian clusters with controlled parameters of between- and within-cluster spread to model cluster intermix. The setting allows for evaluating centroid recovery on par with the conventional evaluation of cluster recovery. The subjects of our interest are two versions of the “intelligent” K-Means method, iK-Means, which find the “right” number of clusters by extracting “anomalous patterns” from the data one by one. We compare them with seven other methods, including Hartigan’s rule, the averaged Silhouette width, and the Gap statistic, under different between- and within-cluster spread-shape conditions. There are several consistent patterns in the results of our experiments, such as that the right K is reproduced best by Hartigan’s rule, but not the clusters or their centroids. This leads us to propose an adjusted version of iK-Means, which performs well in the current experimental setting.
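As a rough illustration of the kind of experiment the abstract describes, the sketch below generates well-separated Gaussian clusters and recovers K by maximizing the averaged silhouette width, one of the competing methods mentioned. The k-means implementation, the k-means++-style seeding, and all parameter values are illustrative choices of this sketch, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=100, n_init=5):
    """Lloyd's k-means with k-means++-style seeding; best of n_init restarts."""
    best, best_w = None, np.inf
    for _ in range(n_init):
        # Seeding: later centres drawn with probability ~ squared distance.
        centroids = [X[rng.integers(len(X))]]
        for _ in range(k - 1):
            d2 = ((X[:, None, :] - np.array(centroids)[None]) ** 2).sum(2).min(1)
            centroids.append(X[rng.choice(len(X), p=d2 / d2.sum())])
        centroids = np.array(centroids)
        for _ in range(iters):
            labels = ((X[:, None, :] - centroids[None]) ** 2).sum(2).argmin(1)
            new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        labels = ((X[:, None, :] - centroids[None]) ** 2).sum(2).argmin(1)
        w = ((X - centroids[labels]) ** 2).sum()  # within-cluster sum of squares
        if w < best_w:
            best, best_w = centroids, w
    return best

def mean_silhouette(X, labels):
    """Average silhouette width over all points (O(n^2); fine for small n)."""
    n = len(X)
    D = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(2))
    s = np.zeros(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        if not same.any():
            continue  # singleton cluster: silhouette taken as 0
        a = D[i][same].mean()
        b = min(D[i][labels == c].mean() for c in np.unique(labels) if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()

def choose_k(X, k_range=range(2, 7)):
    """Pick the K that maximizes the averaged silhouette width."""
    scores = {k: mean_silhouette(
        X, ((X[:, None, :] - kmeans(X, k)[None]) ** 2).sum(2).argmin(1))
        for k in k_range}
    return max(scores, key=scores.get)

# Three well-separated Gaussian clusters: small within- vs between-cluster spread.
X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in ([0, 0], [5, 5], [0, 5])])
k_hat = choose_k(X)
print(k_hat)
```

With large cluster intermix (within-cluster spread approaching the between-cluster distances), the silhouette criterion degrades, which is exactly the regime the paper's controlled experiments probe.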
11.
György Darvas 《Foundations of Science》2009,14(4):273-280
Attempts to explain the causal paradoxes of Quantum Mechanics (QM) have tried to solve the problems within the framework of Quantum Electrodynamics (QED). We will show that this is impossible. The original theory of QED by Dirac (Proc Roy Soc A117:610,
1928) formulated in its preamble four preliminary requirements that the new theory should meet. The first of these requirements
was that the theory must be causal. Causality is not to be derived as a consequence of the theory since it was a precondition
for the formulation of the theory; it was constructed so as to be causal. Therefore, causal paradoxes logically cannot
be explained within the framework of QED. To transcend this problem we should consider the following points: Dirac himself
stated in his original paper (1928) that his theory was only an approximation. When he returned to improve the theory later
(Proc Roy Soc A209, 1951), he noted that the new theory “involves only the ratio e/m, not e and m separately”. This is a sign that although the electromagnetic effects (whose source is e) are orders of magnitude stronger than the gravitational effects (whose source is m), the two are coupled. Already in 1919, Einstein noted that “the elementary formations which go to make up the atom” are
influenced by gravitational forces. Although in that form the statement proved not to be exactly correct, the effects of gravitation
on QM phenomena have been established. The conclusion is that we should seek a resolution for the causal paradoxes in the
framework of the General Theory of Relativity (GTR)—in contrast to QED, which involves only the Special Theory of Relativity
(STR). We show that causality is necessarily violated in GTR. This follows from the curvature of the space-time. Although
those effects are very small, one cannot ignore their influence in the case of the so-called “paradox phenomena”.
12.
Laurent Nottale 《Foundations of Science》2011,16(4):307-309
We give a “direction for use” of the scale relativity theory and apply it to an example of spontaneous multiscale integration
including four embedded levels of organization (intracellular, cell, tissue and organism-like levels). We conclude with an update of our analysis of the Arctic sea-ice melting.
13.
The introduction of the notion of family resemblance represented a major shift in Wittgenstein’s thoughts on the meaning of
words, moving away from a belief that words were well defined, to a view that words denoted less well defined categories of
meaning. This paper presents the use of the notion of family resemblance in the area of machine learning as an example of
the benefits that can accrue from adopting the kind of paradigm shift taken by Wittgenstein. The paper presents a model capable
of learning exemplars using the principle of family resemblance and adopting Bayesian networks for a representation of exemplars.
An empirical evaluation is presented on three data sets and shows promising results, suggesting that previous assumptions about the way we categorize need reopening.
Sunil Vadera
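A toy sketch can make the family-resemblance principle concrete. The following is an invented illustration, not the paper's Bayesian-network model: no single feature is necessary or sufficient for membership; a new item is categorized by its graded overlap with stored exemplars.

```python
# Hypothetical exemplars: each category is a bag of feature sets, with no
# feature shared by all members (no classical definition).
exemplars = {
    "game": [
        {"competitive", "rules", "players"},
        {"rules", "players", "luck"},
        {"competitive", "luck", "skill"},
    ],
    "work": [
        {"rules", "skill", "payment"},
        {"payment", "schedule", "skill"},
    ],
}

def resemblance(features, category):
    """Average Jaccard overlap between the item and the category's exemplars."""
    exs = exemplars[category]
    return sum(len(features & e) / len(features | e) for e in exs) / len(exs)

def categorize(features):
    # Assign the item to the category its features most resemble.
    return max(exemplars, key=lambda c: resemblance(features, c))

print(categorize({"competitive", "players", "luck"}))  # resembles "game" most
```

Note that the winning category matches no exemplar exactly; membership is carried by overlapping strands of similarity, which is the shift away from well-defined word meanings that the paper attributes to Wittgenstein.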
14.
The Self-Organization of Time and Causality: Steps Towards Understanding the Ultimate Origin
Francis Heylighen 《Foundations of Science》2010,15(4):345-356
Possibly the most fundamental scientific problem is the origin of time and causality. The inherent difficulty is that all
scientific theories of origins and evolution consider the existence of time and causality as given. We tackle this problem
by starting from the concept of self-organization, which is seen as the spontaneous emergence of order out of primordial chaos.
Self-organization can be explained by the selective retention of invariant or consistent variations, implying a breaking of
the initial symmetry exhibited by randomness. In the case of time, we start from a random graph connecting primitive “events”.
Selection on the basis of consistency eliminates cyclic parts of the graph, so that transitive closure can transform it into
a partial order relation of precedence. Causality is assumed to be carried by causal “agents” which undergo a more traditional
variation and selection, giving rise to causal laws that are partly contingent, partly necessary.
15.
16.
Andreas Dress Katharina T. Huber Jacobus Koolen Vincent Moulton Andreas Spillner 《Journal of Classification》2010,27(2):158-172
The theory of the tight span, a cell complex that can be associated to every metric D, offers a unifying view on existing approaches for analyzing distance data, in particular for decomposing a metric D into a sum of simpler metrics as well as for representing it by certain specific edge-weighted graphs, often referred to
as realizations of D. Many of these approaches involve the explicit or implicit computation of the so-called cutpoints of (the tight span of)
D, such as the algorithm for computing the “building blocks” of optimal realizations of D recently presented by A. Hertz and S. Varone. The main result of this paper is an algorithm for computing the set of these
cutpoints for a metric D on a finite set with n elements in O(n^3) time. As a direct consequence, this improves the run time of the aforementioned O(n^6) algorithm by Hertz and Varone by “three orders of magnitude”.
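For scale, the sketch below works with the same kind of input the cutpoint algorithm takes, a metric D given as an n-by-n matrix, and performs an O(n^3) pass over it: the naive check that D actually satisfies the metric axioms. This is an invented illustration of the input and complexity class only; it does not implement the Dress et al. cutpoint algorithm.

```python
import numpy as np

def is_metric(D, tol=1e-9):
    """Check symmetry, zero diagonal, positivity, and the O(n^3) triangle inequality."""
    D = np.asarray(D, dtype=float)
    n = len(D)
    if not (np.allclose(D, D.T) and np.allclose(np.diag(D), 0)):
        return False
    if np.any(D[~np.eye(n, dtype=bool)] <= 0):
        return False
    # O(n^3) triangle inequality: D[i,k] <= D[i,j] + D[j,k] for all i, j, k.
    return all(D[i, k] <= D[i, j] + D[j, k] + tol
               for i in range(n) for j in range(n) for k in range(n))

# Shortest-path distances of a 4-point path graph with unit edge weights;
# graph metrics of this kind are exactly what realizations reconstruct.
D = [[0, 1, 2, 3],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [3, 2, 1, 0]]
print(is_metric(D))
```

The point of the comparison is that even this trivial validation pass costs O(n^3), so an O(n^3) algorithm for the much richer cutpoint structure is, in this sense, optimal up to constants for dense metric input.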
17.
Dan Mcarthur 《Foundations of Science》2006,11(4):369-397
In recent years a general consensus has been developing in the philosophy of science to the effect that strong social constructivist
accounts are unable to adequately account for scientific practice. Recently, however, a number of commentators have formulated
an attenuated version of constructivism that purports to avoid the difficulties that plague the stronger claims of its predecessors.
Interestingly this attenuated form of constructivism finds philosophical support from a relatively recent turn in the literature
concerning scientific realism. Arthur Fine and a number of other commentators have argued that the realism debate ought to
be abandoned. The rationale for this argument is that the debate is sterile: it has, it is claimed, no consequence for
actual scientific practice, and therefore does not advance our understanding of science or its practice. Recent “softer” accounts
of social constructivism also hold a similar agnostic stance to the realism question. I provide a survey of these various
agnostic stances and show how they form a general position that I shall refer to as “the anti-philosophical stance”. I then
demonstrate that the anti-philosophical stance fails by identifying difficulties that attend its proposal to ban philosophical
interpretation. I also provide examples of instances where philosophical stances to the realism question affect scientific
practice.
18.
Brigitte Cambon de Lavalette Charles Tijus Christine Leproux Olivier Bauer 《Foundations of Science》2005,10(1):25-45
Taxonomy Based modeling was applied to describe drivers’ mental models of variable message signs (VMS’s) displayed on expressways.
Progress in road telematics has made it possible to introduce variable message signs (VMS’s). Sensors embedded in the carriageway
every 500 m record certain variables (speed, flow rate, etc.) that are transformed in real time into “driving times” to a given
destination if road conditions do not change.
VMS systems are auto-regulative Man-Machine (AMMI) systems which incorporate a model of the user: if the traffic flow is too
high, then drivers should choose alternative routes. In so doing, the traffic flow should decrease. The model of the user
is based on suppositions such as: people do not like to waste time, they fully understand the displayed messages, they trust
the displayed values, they know of alternative routes. However, people also have a model of the way the system functions.
And if they do not believe the contents of the message, they will not act as expected.
We collected data through interviews with drivers using the critical incidents technique (Flanagan, 1985). Results show that
the mental models that drivers have of the way the VMS system works are varied but not numerous, and that most of them differ from the “ideal expert” mental model. It is clear that users do not have an adequate model of how the VMS system works, and that VMS planners have a model of user behaviour that does not correspond to the behaviour of the drivers we interviewed. Finally, Taxonomy Based Modeling is discussed as a tool for mental-model remediation.
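The system side of the abstract, converting sensor readings into a displayed "driving time", reduces to a small calculation. The segment length comes from the abstract (sensors every 500 m); the speeds and the constant-conditions assumption are illustrative, not data from the study.

```python
SEGMENT_LENGTH_KM = 0.5  # sensors embedded in the carriageway every 500 m

def driving_time_minutes(segment_speeds_kmh):
    """Sum per-segment travel times, assuming road conditions do not change."""
    return sum(60 * SEGMENT_LENGTH_KM / v for v in segment_speeds_kmh)

# Hypothetical route: 20 free-flowing segments (~110 km/h) followed by
# 10 congested segments (~30 km/h) before the destination.
speeds = [110] * 20 + [30] * 10
print(round(driving_time_minutes(speeds), 1))  # 15.5 minutes
```

The displayed value is only valid under the "conditions do not change" assumption stated in the abstract, which is precisely the gap between the planners' model and the drivers' mental models that the interviews expose: a driver who distrusts that assumption will discount the message.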
19.
Giuseppe Longo 《Foundations of Science》2011,16(4):331-333
This short note develops some ideas along the lines of the stimulating paper by Heylighen (Found Sci 15(4):345–356, 2010a). It summarizes a theme in several writings with Francis Bailly, downloadable from this author’s web page. The “geometrization”
of time and causality is the common ground of the analysis hinted here and in Heylighen’s paper. Heylighen adds a logical
notion, consistency, in order to understand a possible origin of the selective process that may have originated this organization
of natural phenomena. We will join our perspectives by pointing to some gnoseological complexes, common to mathematics and physics, which may shed light on the issues raised by Heylighen.
20.
Ivan M. Havel 《Foundations of Science》1998,3(2):375-394
Certain cognitive and philosophical aspects of the concept of conceivability with intended or established diversion from (putative)
reality are discussed. The “coherence gap problem” arises when certain fragments of the real world are replaced with imaginary
situations while most details are (intentionally or not) ignored. Another issue, “the spectator problem”, concerns the participation
of the conceiver himself in the world conceived. Three different examples of conceivability are used to illustrate our points,
namely thought experiments in physics, a hypothetical world devoid of consciousness (zombie world), and virtual reality.
This revised version was published online in July 2006 with corrections to the Cover Date.