Found 20 similar documents; search took 101 ms
1.
Computational empiricism (cited 1 time: 1 self-citation, 0 by others)
Paul Humphreys 《Foundations of Science》1995,1(1):119-130
I argue here for a number of ways in which modern computational science requires a change in the way we represent the relationship between theory and applications. It requires a switch away from the logical reconstruction of theories in order to take surface mathematical syntax seriously. In addition, syntactically different versions of the same theory have important differences for applications, and this shows that the semantic account of theories is inappropriate for some purposes. I also argue against formalist approaches in the philosophy of science and for a greater role for perceptual knowledge, rather than propositional knowledge, in scientific empiricism.

The term computational empiricism was suggested to me in conversation at a philosophy conference in Venice, Italy, in June 1991 by someone whose name I have unfortunately forgotten. It seemed to capture perfectly the set of techniques I had described in my talk there, and I have since adopted it. I thank the originator of this term, whoever he is.
2.
Seungbae Park 《Foundations of Science》2011,16(1):21-30
Putnam in Realism in mathematics and Elsewhere, Cambridge University Press, Cambridge (1975) infers from the success of a
scientific theory to its approximate truth and the reference of its key term. Laudan in Philos Sci 49:19–49 (1981) objects
that some past theories were successful, and yet their key terms did not refer, so they were not even approximately true.
Kitcher in The advancement of science, Oxford University Press, New York (1993) replies that the past theories are approximately
true because their working posits are true, although their idle posits are false. In contrast, I argue that successful theories
which cohere with each other are approximately true, and that their key terms refer. My position is immune to Laudan’s counterexamples
to Putnam’s inference and yields a solution to a problem with Kitcher’s position.
3.
John J. Sung 《Foundations of Science》2008,13(2):177-193
Scientific anomalies are observations and facts that contradict current scientific theories, and they are instrumental in scientific theory change. Philosophers of science have approached scientific theory change from different perspectives, as Darden (Theory
change in science: Strategies from Mendelian genetics, 1991) observes: Lakatos (In: Lakatos, Musgrave (eds) Criticism and
the growth of knowledge, 1970) approaches it as progressive “research programmes” consisting of incremental improvements
(“monster barring” in Lakatos, Proofs and refutations: The logic of mathematical discovery, 1976), Kuhn (The structure of
scientific revolutions, 1996) observes that changes in “paradigms” are instigated by a crisis from some anomaly, and Hanson
(In: Feigl, Maxwell (eds) Current issues in the philosophy of science, 1961) proposes that discovery does not begin with hypothesis
but with some “problematic phenomena requiring explanation”. Even though anomalies are important in all of these approaches
to scientific theory change, there have been only a few investigations into the specific role anomalies play in scientific theory change. Furthermore, most of these approaches focus on the theories themselves and not on how scientists and their experiments
bring about scientific change (Gooding, Experiment and the making of meaning: Human agency in scientific observation and experiment,
1990). To address these issues, this paper approaches scientific anomaly resolution from a meaning construction point of view.
Conceptual integration theory (Fauconnier and Turner, Cogn Sci 22:133–187, 1998; The way we think: Conceptual blending and
mind’s hidden complexities, 2002) from cognitive linguistics describes how one constructs meaning from various stimuli, such
as text and diagrams, through conceptual integration or blending. The conceptual integration networks that describe the conceptual
integration process characterize cognition that occurs unconsciously during meaning construction. These same networks are
used to describe some of the cognition while resolving an anomaly in molecular genetics called RNA interference (RNAi) in
a case study. The RNAi case study is a cognitive-historical reconstruction (Nersessian, In: Giere (ed) Cognitive models of
science, 1992) that reconstructs how the RNAi anomaly was resolved. This reconstruction traces four relevant molecular genetics
publications in describing the cognition necessary in accounting for how RNAi was resolved through strategies (Darden 1991),
abductive reasoning (Peirce, In: Hartshorne, Weiss (eds) Collected papers, 1958), and experimental reasoning (Gooding 1990).
The results of the case study show that experiments play a crucial role in formulating an explanation of the RNAi anomaly
and the integration networks describe the experiments’ role. Furthermore, these results suggest that RNAi anomaly resolution
is embodied. It is embodied in a sense that cognition described in the cognitive-historical reconstruction is experientially
based.
4.
Krzysztof Wójtowicz 《Foundations of Science》1998,3(2):207-229
In this article the problem of the unification of mathematical theories is discussed. We argue that specific problems arise here which are quite different from those in the case of the empirical sciences. In particular, the notion of unification depends on the philosophical standpoint. We give an analysis of the notion of unification from the points of view of formalism, Gödel's platonism and Quine's realism. In particular, we show that the concept of “having the same object of study” should be made precise in the case of mathematical theories. In the appendix we give a working proposal for a certain understanding of this notion.
5.
The process of abstraction and concretisation is a label used for an explicative theory of scientific model-construction. In scientific theorising this process enters
at various levels. We could identify two principal levels of abstraction that are useful to our understanding of theory-application.
The first level is that of selecting a small number of variables and parameters abstracted from the universe of discourse and used to characterise the general laws of a theory. In classical mechanics, for example, we select position and momentum and establish a relation between the two variables, which we call Newton’s 2nd law. The specification of the unspecified elements of scientific laws, e.g. the force function in Newton’s 2nd law, is what establishes the link between the assertions of the theory and physical systems. In order to unravel how and with what conceptual resources scientific models are constructed, how they function and how they relate to theory, we need a view of theory-application that can accommodate our constructions of representation models. For this we need to expand our understanding of the process of abstraction so that it also explicates the process of specifying force functions etc. This is the second principal level at which abstraction enters into our theorising, and the one on which I focus. In this paper, I attempt to elaborate a general analysis of the process of abstraction and concretisation involved in scientific model construction, and argue why it provides an explication of the construction of models of nuclear structure.
6.
Vadim Batitsky 《Foundations of Science》2000,5(3):299-321
Putnam's “model-theoretic” argument against metaphysical realism presupposes that an ideal scientific theory is expressible in a first-order language. The central aim of this paper is to show that Putnam's “first-orderization” of science, although unchallenged by numerous critics, makes his argument unsound even for adequate theories, never mind an ideal one. To this end, I will argue that quantitative theories, which dominate the natural sciences, can be adequately interpreted and evaluated only with the help of so-called theories of measurement, whose epistemological and methodological purpose is to justify systematic assignments of quantitative values to objects in the world. And, in order to fulfill this purpose, theories of measurement must have an essentially higher-order logical structure. As a result, Putnam's argument fails because much of science turns out to rest on essentially higher-order theoretical assumptions about the world.
7.
Gianluigi Oliveri 《Foundations of Science》2006,11(1-2):41-79
The present paper aims at showing that there are times when set-theoretical knowledge increases in a non-cumulative way. In other words, what we call ‘set theory’ is not one theory which grows by the simple addition of one theorem after another, but a finite sequence of theories T_1, ..., T_n in which T_{i+1}, for 1 ≤ i < n, supersedes T_i. This thesis has great philosophical significance because it implies that there is a sense in which mathematical theories, like the theories belonging to the empirical sciences, are fallible and that, consequently, mathematical knowledge has a quasi-empirical nature. The way I have chosen to provide evidence in favour of the correctness of the main thesis of this article consists in arguing that Cantor–Zermelo set theory is a Lakatosian Mathematical Research Programme (MRP).
8.
Massimiliano Badino 《Foundations of Science》2006,11(4):323-347
The foundation of statistical mechanics and the explanation of the success of its methods rest on the fact that the theoretical
values of physical quantities (phase averages) may be compared with the results of experimental measurements (infinite time
averages). In the 1930s, this problem, called the ergodic problem, was dealt with by ergodic theory that tried to resolve
the problem by making reference above all to considerations of a dynamic nature. In the present paper, this solution will
be analyzed first, highlighting the fact that its very general nature does not duly consider the specificities of the systems
of statistical mechanics. Second, Khinchin’s approach will be presented, which, starting with more specific assumptions about the nature of systems, achieves an asymptotic version of the result obtained with ergodic theory. Third, the statistical meaning of Khinchin’s approach will be analyzed, and a comparison between this and the point of view of ergodic theory is proposed.
It will be demonstrated that the difference consists principally of two different perspectives on the ergodic problem: that
of ergodic theory puts the state of equilibrium at the center, while Khinchin’s attempts to generalize the result to non-equilibrium
states.
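The agreement between phase averages and infinite time averages that drives the ergodic problem can be illustrated with a toy computation. The sketch below is not from the paper; the irrational rotation and the observable are illustrative choices standing in for the far richer systems of statistical mechanics:

```python
import math

def orbit_time_average(f, alpha=math.sqrt(2), x0=0.0, n=100_000):
    """Birkhoff time average of the observable f along one orbit of the
    irrational rotation x -> (x + alpha) mod 1, a standard ergodic system."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / n

# The phase average of f(x) = cos(2*pi*x) over the invariant (uniform)
# measure on [0, 1) is exactly 0; the ergodic theorem guarantees that the
# time average along every orbit of this rotation converges to it.
time_avg = orbit_time_average(lambda x: math.cos(2 * math.pi * x))
```

Here the two averages agree in the limit by construction; the philosophical question the abstract raises is why, and under which assumptions about the system, such agreement should hold for the actual systems of statistical mechanics.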
9.
Ontological Frameworks for Scientific Theories (cited 1 time: 0 self-citations, 1 by others)
Jonas R. B. Arenhart 《Foundations of Science》2012,17(4):339-356
A close examination of the literature on ontology may strike one with roughly two distinct senses of this word. According to the first of them, which we shall call traditional ontology, ontology is characterized as the a priori study of various “ontological categories”. In a second sense, which may be called naturalized ontology, ontology relies on our best scientific theories and from them tries to derive the ultimate furniture of the world. From a methodological point of view these two senses of ontology are very far apart. Here, we discuss a possible relationship between these senses and argue that they may be made compatible and complement each other. We also examine how logic, understood as a linguistic device dealing with the conceptual framework of a theory and its basic inference patterns, must be taken into account in this kind of study. The idea guiding our proposal may be put as follows: naturalized ontology checks for the applicability of the ontological categories proposed by traditional ontology and gives substantial feedback to it. The adequate expression of some of the resulting ontological frameworks may require a different logic. We conclude with a discussion of the case of orthodox quantum mechanics, arguing that this theory exemplifies the kind of relationship between the two senses of ontology. We also argue that the view proposed here may shed some light on ontological questions concerning this theory.
10.
Sciences are often regarded as providing the best, or, ideally, exact, knowledge of the world, especially in providing laws
of nature. Ilya Prigogine, who was awarded the Nobel Prize for his theory of non-equilibrium chemical processes—this being
also an important attempt to bridge the gap between exact and non-exact sciences [mentioned in the Presentation Speech by
Professor Stig Claesson (nobelprize.org, The Nobel Prize in Chemistry 1977)]—has had this ideal in mind when trying to formulate
a new kind of science. Philosophers of science distinguish between theory and reality, examining the relations between the two. Nancy Cartwright’s distinction between fundamental and phenomenological laws, Rein Vihalemm’s conception of the peculiarity of the exact sciences, and Ronald Giere’s account of models in science, and of science as a set of models, are deployed in this article to criticise the common view of science and to analyse Ilya Prigogine’s view in particular. We will conclude that, on a more abstract, philosophical level, Prigogine’s understanding of science does not differ from the common understanding.
Piret Kuusk
11.
Dimensionality reduction techniques are used for representing higher dimensional data by a more parsimonious and meaningful
lower dimensional structure. In this paper we will study two such approaches, namely Carroll’s Parametric Mapping (abbreviated
PARAMAP) (Shepard and Carroll, 1966) and Tenenbaum’s Isometric Mapping (abbreviated Isomap) (Tenenbaum, de Silva, and Langford,
2000). The former relies on iterative minimization of a cost function while the latter applies classical MDS after a preprocessing
step involving the use of a shortest path algorithm to define approximate geodesic distances. We will develop a measure of
congruence based on the preservation of local structure between the input data and the mapped low-dimensional embedding, and compare the different approaches on various sets of data, including points located on the surface of a sphere, the so-called “Swiss roll” data, and truncated spheres.
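The Isomap pipeline described above (approximate geodesic distances via a shortest-path algorithm on a neighborhood graph, followed by classical MDS) can be sketched as follows. This is a minimal illustration, not the authors' code; the neighborhood size is an arbitrary choice:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def isomap(X, n_neighbors=10, n_components=2):
    """Isomap: approximate geodesic distances on the data manifold via
    shortest paths in a k-nearest-neighbor graph, then embed the points
    with classical MDS."""
    # 1. k-nearest-neighbor graph with Euclidean edge weights.
    graph = kneighbors_graph(X, n_neighbors, mode="distance")
    # 2. Approximate geodesics: shortest paths through the graph.
    D = shortest_path(graph, method="D", directed=False)
    # 3. Classical MDS: double-center the squared distance matrix and
    #    keep the top eigenvectors scaled by the root eigenvalues.
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
```

Applied to the Swiss roll data mentioned in the abstract, this recovers a roughly rectangular unrolling of the manifold, which is what the local-structure congruence measure is designed to assess.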
12.
Federica Russo 《Foundations of Science》2006,11(3):221-247
A careful analysis of Salmon’s Theoretical Realism and van Fraassen’s Constructive Empiricism shows that both share a common origin: the requirement of a literal construal of theories, inherited from the Standard View. However, despite this common starting point, Salmon and van Fraassen strongly disagree on the existence of unobservable entities. I argue that their different ontological commitments towards the existence of unobservables trace back to their different views on the interpretation of probability, via different conceptions of induction. In fact, inferences to statements claiming the existence of unobservable entities are inferences to probabilistic statements, whence the crucial importance of the interpretation of probability.
13.
Frank Waaldijk 《Foundations of Science》2005,10(3):249-324
We discuss the foundations of constructive mathematics, including recursive mathematics and intuitionism, in relation to classical
mathematics. There are connections with the foundations of physics, due to the way in which the different branches of mathematics
reflect reality. Many different axioms and their interrelationship are discussed. We show that there is a fundamental problem
in BISH (Bishop’s school of constructive mathematics) with regard to its current definition of ‘continuous function’. This problem
is closely related to the definition in BISH of ‘locally compact’. Possible approaches to this problem are discussed. Topology seems to be a key to understanding many
issues. We offer several new simplifying axioms, which can form bridges between the various branches of constructive mathematics
and classical mathematics (‘reuniting the antipodes’). We give a simplification of basic intuitionistic theory, especially
with regard to so-called ‘bar induction’. We then plead for a limited number of axiomatic systems, which differentiate between
the various branches of mathematics. Finally, in the appendix we offer BISH an elegant topological definition of ‘locally compact’, which unlike the current definition is equivalent to the usual classical
and/or intuitionistic definition in classical and intuitionistic mathematics, respectively.
14.
Melvin S. Steinberg 《Foundations of Science》2008,13(2):163-175
Investigations with electrometers in the 1770s led Volta to envision mobile charge in electrical conductors as a compressible
fluid. A pressure-like condition in this fluid, which Volta described as the fluid’s “effort to push itself out” of its conducting
container, was the causal agent that makes the fluid move. In this paper I discuss Volta’s use of analogy and imagery in model building, and compare it with a successful contemporary conceptual approach to introducing ideas about electric potential in
instruction. The concept that today is called “electric potential” was defined mathematically by Poisson in 1811. It was understood
after about 1850 to predict the same results in conducting matter as Volta’s pressure-like concept—and to predict electrostatic
effects in the exterior space where Volta’s concept had nothing to say. Complete quantification in addition to greater generality
made the mathematical concept a superior research tool for scientists. However, its spreading use in instruction has marginalized
approaches to model building based on the analogy and imagery resources that students bring into the classroom. Data from pre- and post-testing in high schools show greater conceptual and confidence gains using the new conceptual approach than using
conventional instruction. This provides evidence for reviving Volta’s compressible fluid model as an intuitive foundation
which can then be modified to include electrostatic distant action. Volta tried to modify his compressible fluid model to
include distant action, using imagery borrowed from distant heating by a flame. This project remained incomplete, because
he did not envision an external field mediating the heating. However, pursuing Volta’s strategy of model modification to completion
now enables students taught with the new conceptual approach to add distant action to an initial compressible fluid model.
I suggest that a partial correspondence to the evolving model sequence that works for beginning students can help illuminate
Volta’s use of intermediate explanatory models.
15.
Steffen Ducheyne 《Foundations of Science》2006,11(4):419-447
In this paper an analysis of Newton’s argument for universal gravitation is provided. In the past, the complexity of the argument
has not been fully appreciated. Recent authors like George E. Smith and William L. Harper have done a far better job. Nevertheless,
a thorough account of the argument is still lacking. Both authors seem to stress the importance of only one methodological
component. Smith stresses the procedure of approximative deductions backed up by the laws of motion. Harper stresses “systematic
dependencies” between theoretical parameters and phenomena. I will argue that Newton used a variety of different inferential
strategies: causal parsimony considerations, deductions, demonstrative inductions, abductions and thought-experiments. Each
of these strategies is part of Newton’s famous argument.
16.
Peter Verdée 《Foundations of Science》2013,18(4):655-680
In this paper I propose a new approach to the foundation of mathematics: non-monotonic set theory. I present two completely different methods to develop set theories based on adaptive logics. For both theories there is a finitistic non-triviality proof, and both theories contain (a subtle version of) the comprehension axiom schema. The first theory contains only a maximal selection of instances of the comprehension schema that do not lead to inconsistencies. The second allows all instances, including the inconsistent ones, but restricts the conclusions one can draw from them in order to avoid triviality. The theories have enough expressive power to form a justification/explication for most of the established results of classical mathematics. They are therefore not limited by Gödel’s incompleteness theorems. This remarkable result is possible because of the non-recursive character of the final proofs of theorems of non-monotonic theories. I shall argue that, precisely because of the computational complexity of these final proofs, we cannot claim that non-monotonic theories are ideal foundations for mathematics. Nevertheless, thanks to their strength, first-order language and the recursive dynamic (defeasible) proofs of theorems of the theory, the non-monotonic theories form (what I call) interesting pragmatic foundations.
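For reference, the naive comprehension axiom schema at issue, a subtle version of which both theories contain, reads:

```latex
\exists y \, \forall x \, \bigl( x \in y \leftrightarrow \varphi(x) \bigr)
```

for every formula φ(x) in which y does not occur free. Taking φ(x) to be x ∉ x yields Russell's paradox, which is why the first theory retains only a maximal non-trivializing selection of instances, while the second keeps all instances but restricts what may be inferred from the inconsistent ones.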
17.
We analyze the developments in mathematical rigor from the viewpoint of a Burgessian critique of nominalistic reconstructions.
We apply such a critique to the reconstruction of infinitesimal analysis accomplished through the efforts of Cantor, Dedekind,
and Weierstrass; to the reconstruction of Cauchy’s foundational work associated with the work of Boyer and Grabiner; and to
Bishop’s constructivist reconstruction of classical analysis. We examine the effects of a nominalist disposition on historiography,
teaching, and research.
18.
The issue of determining “the right number of clusters” in K-Means has attracted considerable interest, especially in the
recent years. Cluster intermix appears to be a factor most affecting the clustering results. This paper proposes an experimental
setting for comparison of different approaches at data generated from Gaussian clusters with the controlled parameters of
between- and within-cluster spread to model cluster intermix. The setting allows for evaluating the centroid recovery on par
with conventional evaluation of the cluster recovery. The subjects of our interest are two versions of the “intelligent” K-Means method, iK-Means, which find the “right” number of clusters by extracting “anomalous patterns” from the data one by one. We compare them with seven other methods, including Hartigan’s rule, the averaged Silhouette width and the Gap statistic, under different between- and within-cluster spread-shape conditions. There are several consistent patterns in the results of our experiments, such as that the right K is reproduced best by Hartigan’s rule – but not the clusters or their centroids. This leads us to propose an adjusted version of iK-Means, which performs well in the current experimental setting.
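Of the criteria compared above, the averaged Silhouette width is the easiest to sketch. The following is an illustrative baseline, not the paper's iK-Means procedure or experimental setting; the candidate range for K is an arbitrary choice:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_k_silhouette(X, k_range=range(2, 10), random_state=0):
    """Pick the number of clusters maximizing the averaged silhouette
    width, one of the baselines the paper compares iK-Means against."""
    best_k, best_score = None, -1.0
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(X)
        score = silhouette_score(X, labels)  # mean over all points
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score
```

On data generated from well-separated Gaussian clusters (the paper's setting, with between- and within-cluster spread controlled), this criterion typically recovers the generating K; the experiments above probe how it degrades as the clusters intermix.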
19.
Uwe V. Riss 《Foundations of Science》2011,16(4):337-351
In this paper it is argued that the fundamental difference between the formal and the informal positions in the philosophy of mathematics results from the collision of an object-centric and a process-centric perspective on mathematics. This collision can be overcome
by means of dialectical analysis, which shows that both perspectives essentially depend on each other. This is illustrated
by the example of mathematical proof and its formal and informal nature. A short overview of the employed materialist dialectical
approach is given that rationalises mathematical development as a process of model production. It aims at placing more emphasis
on the application aspects of mathematical results. Moreover, it is shown how such production realises subjective capacities
as well as objective conditions, where the latter are mediated by mathematical formalism. The approach is further sustained
by Polanyi’s theory of problem solving and Stegmaier’s philosophy of orientation. In particular, the tool and application
perspective illuminates the role computer-based proofs can play in mathematics.
20.
The “DNA is a program” metaphor is still widely used in Molecular Biology and its popularization. There are good historical
reasons for the use of such a metaphor or theoretical model. Yet we argue that both the metaphor and the model are essentially
inadequate also from the point of view of Physics and Computer Science. Relevant work has already been done, in Biology, criticizing the programming paradigm. We will refer to empirical evidence
and theoretical writings in Biology, although our arguments will be mostly based on a comparison with the use of differential
methods (in Molecular Biology: a mutation or alike is observed or induced and its phenotypic consequences are observed) as
applied in Computer Science and in Physics, where this fundamental tool for empirical investigation originated and acquired
a well-justified status. In particular, as we will argue, the programming paradigm is not theoretically sound as a causal (as in Physics) or deductive (as in Programming) framework for relating the genome to the phenotype, in contrast to the physicalist and computational grounds that this paradigm claims
to propose.
Giuseppe Longo: http://www.di.ens.fr/users/longo