Similar Documents (20 results)
1.
Recent years have seen the development of an approach both to general philosophy and philosophy of science often referred to as ‘experimental philosophy’ or just ‘X-Phi’. Philosophers often make or presuppose empirical claims about how people would react to hypothetical cases, but their evidence for claims about what ‘we’ would say is usually very limited indeed. Philosophers of science have largely relied on their more or less intimate knowledge of their field of study to draw hypothetical conclusions about the state of scientific concepts and the nature of conceptual change in science. What they are lacking is some more objective quantitative data supporting their hypotheses. A growing number of philosophers (of science), along with a few psychologists and anthropologists, have tried to remedy this situation by designing experiments aimed at systematically exploring people’s reactions to philosophically important thought experiments or scientists’ use of their scientific concepts. Many of the results have been surprising and some of the conclusions drawn from them have been more than a bit provocative. This symposium attempts to provide a window into this new field of philosophical inquiry and to show how experimental philosophy provides crucial tools for the philosopher and encourages two-way interactions between scientists and philosophers.

2.
James McAllister’s 2003 article, ‘Algorithmic randomness in empirical data’, claims that empirical data sets are algorithmically random, and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter in the framework of Minimum Message Length (MML) inference.
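As a minimal illustration of the compressibility point (this sketch is not from the paper; the toy data and the use of zlib are assumptions chosen only for demonstration), structured empirical-style data typically compresses well under a general-purpose coder, whereas algorithmically random data of the same length does not:

    # Illustrative sketch only: compare how well a structured byte sequence
    # and a pseudo-random one compress; data and compressor are assumptions.
    import math
    import random
    import zlib

    random.seed(0)

    # "Empirical-style" data: a smoothly varying, coarsely quantised signal.
    structured = bytes(round(50 + 10 * math.sin(x / 20)) for x in range(400))

    # Pseudo-random data of the same length.
    noise = bytes(random.randrange(256) for _ in range(400))

    for name, data in [("structured", structured), ("random", noise)]:
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name}: compressed to {ratio:.2f} of original size")

MML inference itself involves considerably more than a general-purpose compressor, but the size comparison conveys the basic point about compressible regularities in data.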

3.
Two complementary debates from the turn of the nineteenth to the twentieth century are examined here: the debate on the legitimacy of hypotheses in the natural sciences and the debate on intentionality and ‘representations without object’ in philosophy. Both are shown to rest on two core issues: the attitude of the subject and the mode of presentation chosen to display a domain of phenomena. An orientation other than the one which contributed to shape twentieth-century philosophy of science is explored through the analysis of the role given to assumptions in Boltzmann’s research strategy, where assumptions are contrasted with hypotheses, axioms, and principles, and in Meinong’s criticism of the privileged status attributed to representations in mental activities. Boltzmann’s computational style in mathematics and Meinong’s criticism of the confusion between representation and judgment give prominence to an indirect mode of presentation, adopted in a state of suspended belief which is characteristic of assumptions and which enables one to grasp objects that cannot be reached through direct representation or even analogies. The discussion shows how assumptions and the movement to fiction can be essential steps in the quest for objectivity. The conclusion restates the issues of the two debates in a contemporary perspective and shows how recent developments in philosophy of science and philosophy of language and mind can be brought together by arguing for a twofold conception of reference.

4.
David Stump (2007) has recently argued that Pierre Duhem can be interpreted as a virtue epistemologist. Stump’s claims have been challenged by Milena Ivanova (2010) on the grounds that Duhem’s ‘epistemic aims’ are more modest than those of virtue epistemologists. I challenge Ivanova’s criticism of Stump by arguing that she does not distinguish between ‘reliabilist’ and ‘responsibilist’ virtue epistemologies. Once this distinction is drawn, Duhem clearly emerges as a ‘virtue-responsibilist’ in a way that complements Ivanova’s positive proposal that Duhem’s ‘good sense’ reflects a conception of the ‘ideal scientist’. I support my proposal that Duhem is a ‘virtue-responsibilist’ by arguing that his rejection of the possibility of our producing a ‘perfect theory’ reflects the key responsibilist virtue of ‘intellectual humility’.

5.
The subject of this investigation is the role of conventions in the formulation of Thomas Reid’s theory of the geometry of vision, which he calls the ‘geometry of visibles’. In particular, we will examine the work of N. Daniels and R. Angell who have alleged that, respectively, Reid’s ‘geometry of visibles’ and the geometry of the visual field are non-Euclidean. As will be demonstrated, however, the construction of any geometry of vision is subject to a choice of conventions regarding the construction and assignment of its various properties, especially metric properties, and this fact undermines the claim for a unique non-Euclidean status for the geometry of vision. Finally, a suggestion is offered for trying to reconcile Reid’s direct realist theory of perception with his geometry of visibles. While Thomas Reid is well-known as the leading exponent of the Scottish ‘common-sense’ school of philosophy, his role in the history of geometry has only recently been drawing the attention of the scholarly community. In particular, several influential works, by N. Daniels and R. B. Angell, have claimed Reid as the discoverer of non-Euclidean geometry; an achievement, moreover, that pre-dates the geometries of Lobachevsky, Bolyai, and Gauss by over a half century. Reid’s alleged discovery appears within the context of his analysis of the geometry of the visual field, which he dubs the ‘geometry of visibles’. In summarizing the importance of Reid’s philosophy in this area, Daniels is led to conclude that ‘there can remain little doubt that Reid intends the geometry of visibles to be an alternative to Euclidean geometry’ [1]; while Angell, similarly inspired by Reid, draws a much stronger inference: ‘The geometry which precisely and naturally fits the actual configurations of the visual field is a non-Euclidean, two-dimensional, elliptical geometry. In substance, this thesis was advanced by Thomas Reid in 1764 ...’ [2] The significance of these findings has not gone unnoticed in mathematical and scientific circles, moreover, for Reid’s name is beginning to appear more frequently in historical surveys of the development of geometry and the theories of space. [3] Implicit in the recent work on Reid’s ‘geometry of visibles’, or GOV, are two closely related but distinct arguments: first, that Reid did in fact formulate a non-Euclidean geometry, and second, that the GOV is non-Euclidean. This essay will investigate mainly the latter claim, although a lengthy discussion will be accorded to the first. Overall, in contrast to the optimistic reports of a non-Euclidean GOV, it will be argued that there is a great deal of conceptual freedom in the construction of any geometry pertaining to the visual field. Rather than single out a non-Euclidean structure as the only geometry consistent with visual phenomena, an examination of Reid, Daniels, and Angell will reveal the crucial role of geometric ‘conventions’, especially of the metric sort, in the formulation of the GOV (where a ‘metric’ can be simply defined as a system for determining distances, the measures of angles, etc.). Consequently, while a non-Euclidean geometry is consistent with Reid’s GOV, it is only one of many different geometrical structures that a GOV can possess. Angell’s theory that the GOV can only be construed as non-Euclidean is thus incorrect.
After an exploration of Reid’s theory and the alleged non-Euclidean nature of the GOV, in Sections 1 and 2 respectively, the focus will turn to the tacit role of conventionalism in Daniels’ reconstruction of Reid’s GOV argument, and in the contemporary treatment of a non-Euclidean visual geometry offered by Angell (Sections 3 and 4). Finally, in the conclusion, a suggestion will be offered for a possible reconstruction of Reid’s GOV that does not violate his avowed ‘direct realist’ theory of perception, since this epistemological thesis largely prompted his formulation of the GOV.
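For reference only (this illustration is not drawn from the paper): the usual route to an elliptical geometry of the visual field treats visibles as directions from the eye, that is, as points on a sphere of directions. Girard's theorem then gives, for a spherical triangle with angles \alpha, \beta, \gamma and area A on a sphere of radius R,

    \alpha + \beta + \gamma = \pi + \frac{A}{R^{2}},

so the angle sum exceeds two right angles. Which distances and angle measures are assigned to those directions, however, is exactly the kind of metric convention the paper argues remains open to choice.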

6.
Turner [The past vs. the tiny: Historical science and the abductive arguments for realism. Studies in History and Philosophy of Science 35A (2004) 1] claims that the arguments in favor of realism do not support both classes of realism with the same force, since they supply stronger reasons for experimental realism than for historical realism. I would like to make two comments, which should be seen as amplifications inspired by his proposal, rather than as a criticism. First, it is important to highlight that Turner’s distinction between ‘tiny’ and ‘past unobservables’ is neither mutually exclusive nor exhaustive. Second, even if we agreed with everything that Turner says regarding the arguments for realism and their relative weight in order to justify the experimental or historical version, there is an aspect that Turner does not consider and that renders historical realism less problematic than experimental realism.

7.
In this paper, I examine William Whewell’s (1794–1866) ‘Discoverer’s Induction’, and argue that it supplies a strikingly accurate characterization of the logic behind many statistical methods, exploratory data analysis (EDA) in particular. Such methods are additionally well-suited as a point of evaluation of Whewell’s philosophy since the central techniques of EDA were not invented until after Whewell’s death, and so couldn’t have influenced his views. The fact that the quantitative details of some very general methods designed to suggest hypotheses would so closely resemble Whewell’s views of how theories are formed is, I suggest, a strongly positive comment on his views.
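To make "methods designed to suggest hypotheses" concrete, here is a minimal sketch in the spirit of Tukey-style exploratory data analysis (the data and the particular median-based fit are illustrative assumptions, not taken from the paper): a resistant line suggested by medians of the outer thirds of the data.

    # Illustrative EDA-style sketch (not from the paper): a resistant line fit
    # that uses medians of the outer thirds of the data to *suggest* a linear
    # hypothesis rather than to test one.
    from statistics import median

    x = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 14.1, 15.8, 18.3]

    third = len(x) // 3
    xl, yl = median(x[:third]), median(y[:third])      # left-third medians
    xr, yr = median(x[-third:]), median(y[-third:])    # right-third medians

    slope = (yr - yl) / (xr - xl)
    intercept = median(yi - slope * xi for xi, yi in zip(x, y))

    print(f"suggested line: y = {slope:.2f} * x + {intercept:.2f}")

The point of such a procedure is exploratory: it proposes a candidate regularity from the data itself, which is the hypothesis-suggesting role the abstract links to Whewell.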

8.
The acceptance of Newton’s ideas and Newtonianism in the early German Enlightenment is usually described as hesitant and slow. Two reasons help to explain this phenomenon. One is that those who might have adopted Newtonian arguments were critics of Wolffianism. These critics, however, drew on indigenous currents of thought, pre-dating the reception of Newton in Germany and independent of Newtonian science. The other reason is that the controversies between Wolffians and their critics focused on metaphysics. Newton’s reputation, however, was that of a mathematician, and one point, on which Wolffians and their opponents agreed, was that mathematics was of no use in the solution of metaphysical questions. The appeal to Newton as an authority in metaphysics, it was argued, was the fault of Newton’s over-zealous disciples in Britain, who tried to transform him from a mathematician into the author of a general philosophical system. It is often argued that the Berlin Academy after 1743 included a Newtonian group, but even there the reception of Newtonianism was selective. Philosophers such as Leonhard Euler were also reluctant to be labelled ‘Newtonians’, because this implied a dogmatic belief in Newton’s ideas. Only after the mid-eighteenth century is ‘Newtonianism’ increasingly accepted in the sense of a philosophical system.

9.
Astronomy has a long tradition of translating data into different visual representations and scholars have noted a division between ‘pretty pictures’ and scientific images. A series of drawings and engravings of M51 derived from Lord Rosse’s observations at Birr Castle and Hubble Space Telescope images of the same object offer an opportunity to examine shifts in the object’s representation within a given period, as well as over the long history of observing it. This demonstrates both the consistent interest of astronomy in structure and improved resolution and the subjective treatment of light and color. Furthermore, while the distinction between ‘pretty pictures’ and scientific images offers a starting point for analyzing the translations within a given period, the line between the two blurs, suggesting the complexity of aesthetic choices within astronomical images.

10.
The ‘received view’ about computation is that all computations must involve representational content. Egan and Piccinini argue against the received view. In this paper, I focus on Egan’s arguments, claiming that they fall short of establishing that computations do not involve representational content. I provide positive arguments explaining why computation has to involve representational content, and how that representational content may be of any type (distal, broad, etc.). I also argue (contra Egan and Fodor) that there is no need for computational psychology to be individualistic. Finally, I draw out a number of consequences for computational individuation, proposing necessary conditions on computational identity and necessary and sufficient conditions on computational I/O equivalence of physical systems.

11.
In this paper I argue that the Strong Programme’s aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized—even relativized—they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle’s emphasis on ‘sameness of type’ is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of ‘similarity’ is to have any content, then we are not going to classify beliefs that are formed in accordance with deeply entrenched epistemic norms as ‘the same’ as ones formed without reflection on these norms, or as ones formed in spite of these norms. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to allow us to delineate the unique impact epistemic norms have on those individuals who subscribe to them.

12.
13.
In his response to my (2010), Ian Kidd claims that my argument against Stump’s interpretation of Duhem’s concept of ‘good sense’ is unsound because it ignores an important distinction within virtue epistemology. In light of the distinction between reliabilist and responsibilist virtue epistemology, Kidd argues that Duhem can be seen as supporting the latter, which he further illustrates with a discussion of Duhem’s argument against ‘perfect theory’. I argue that no substantive argument is offered to show that the distinction is relevant and can establish that Duhem’s ‘good sense’ can be understood within responsibilist virtue epistemology. I furthermore demonstrate that Kidd’s attempt to support his contention relies on a crucial misreading of Duhem’s general philosophy of science, and in doing so highlight the importance of understanding ‘good sense’ in its original context, that of theory choice.

14.
In a recent paper, Luc Faucher and others have argued for the existence of deep cultural differences between ‘Chinese’ and ‘East Asian’ ways of understanding the world and those of ‘ancient Greeks’ and ‘Americans’. Rejecting Alison Gopnik’s speculation that the development of modern science was driven by the increasing availability of leisure and information in the late Renaissance, they claim instead—following Richard Nisbett—that the birth of mathematical science was aided by ‘Greek’, or ‘Western’, cultural norms that encouraged analytic, abstract and rational theorizing. They argue that ‘Chinese’ and ‘East Asian’ cultural norms favoured, by contrast, holistic, concrete and dialectical modes of thinking. After clarifying some of the things that can be meant by ‘culture’ and ‘mentality’, the present paper shows that Faucher and his colleagues make a number of appeals—to the authority of comparative studies and history of science, to the psychological studies of Nisbett and his colleagues, and to a hidden assumption of strong cultural continuity in the West. It is argued that every one of these appeals is misguided, and, further, that the psychological findings of Nisbett and others have little bearing on questions concerning the origins of modern science. Finally, it is suggested that the ‘Needham question’ about why the birth of modern science occurred in Europe rather than anywhere else is itself multiply confused to the extent that it may express no significant query.

15.
This paper examines the origin, range and meaning of the Principle of Action and Reaction in Kant’s mechanics. On the received view, it is a version of Newton’s Third Law. I argue that Kant meant his principle as a foundation for a Leibnizian mechanics. To find a ‘Newtonian’ law of action and reaction, we must look to Kant’s ‘dynamics,’ or theory of matter.

16.
Historians have long sought putative connections between different areas of Newton’s scientific work, while recently scholars have argued that there were causal links between even more disparate fields of his intellectual activity. In this paper I take an opposite approach, and attempt to account for certain tensions in Newton’s ‘scientific’ work by examining his great sensitivity to the disciplinary divisions that both conditioned and facilitated his early investigations in science and mathematics. These momentous undertakings, exemplified by research that he wrote up in two separate notebooks, obey strict distinctions between approaches appropriate to both new and old ‘natural philosophy’ and those appropriate to the mixed mathematical sciences. He retained a fairly rigid demarcation between them until the early eighteenth century. At the same time as Newton presented the ‘mathematical principles’ of natural philosophy in his magnum opus of 1687, he remained equally committed to a separate and more private world or ontology that he publicly denigrated as hypothetical or conjectural. This is to say nothing of the worlds implicit in his work on mathematics and alchemy. He did not lurch from one overarching ontological commitment to the next (for example, moving tout court from radical aetherial explanations to strictly vacuist accounts) but instead simultaneously—and often radically—developed generically distinct concepts and ontologies that were appropriate to specific settings and locations (for example, private, qualitative, causal natural philosophy versus public quantitative mixed mathematics) as well as to relevant styles of argument. Accordingly I argue that the concepts used by Newton throughout his career were intimately bound up with these appropriate generic or quasi-disciplinary ‘structures’. His later efforts to bring together active principles, aethers and voids in various works were not failures that resulted from his ‘confusion’ but were bold attempts to meld together concepts or ontologies that belonged to distinct enquiries. His analysis could not be ‘coherent’ because the structures in which these concepts and ontologies appeared were fundamentally incompatible.

17.
It is generally accepted that Popper’s degree of corroboration, though “inductivist” in a very general and weak sense, is not inductivist in a strong sense, i.e. when by ‘inductivism’ we mean the thesis that the right measure of evidential support has a probabilistic character. The aim of this paper is to challenge this common view by arguing that Popper can be regarded as an inductivist, not only in the weak broad sense but also in a narrower, probabilistic sense. In section 2, I begin by briefly characterizing the relevant notion of inductivism that is at stake here; I then present and discuss the main Popperian argument against it and show that, in the only reading in which the argument is formally valid, it is restricted to cases of predicted evidence, and that even so restricted the argument is materially unsound. In section 3, I analyze the desiderata that, according to Popper, any acceptable measure of evidential support must satisfy, clean away their ad hoc components, and show that all the remaining desiderata are satisfied by inductivist-in-the-strict-sense measures. In section 4 I demonstrate that two of these desiderata, accepted by Popper, imply that in cases of predicted evidence any measure that satisfies them is qualitatively indistinguishable from conditional probability. Finally, I argue that this amounts to a kind of strong inductivism that conflicts with Popper’s anti-inductivist argument and declarations, and that this conflict does not depend on the incremental versus non-incremental distinction for evidential-support measures, making Popper’s position inconsistent on any reading.
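For orientation only, here is a commonly cited formulation of Popper's degree of corroboration (not necessarily the measure or the desiderata the paper itself works with), which makes the predicted-evidence case concrete:

    C(h, e) = \frac{p(e \mid h) - p(e)}{p(e \mid h) - p(h \wedge e) + p(e)}.

When the evidence is predicted, p(e \mid h) = 1 and hence p(h \wedge e) = p(h), so

    C(h, e) = \frac{1 - p(e)}{1 - p(h) + p(e)}, \qquad p(h \mid e) = \frac{p(h)}{p(e)},

and both expressions increase with p(h) and decrease with p(e), which is the kind of ordinal agreement with conditional probability that the abstract describes.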

18.
This paper offers a solution to a problem in Herschel studies by drawing on the dynamic frame model for concept representation offered by cognitive psychology. Applying the frame model to represent the conceptual frameworks of the particle and wave theories, this paper shows that discontinuity between the particle and wave frameworks consists mainly in the transition from a particle notion ‘side’ to a wave notion ‘phase difference’. By illustrating intraconceptual relations within concepts, the frame representations reveal the ontological differences between these two concepts. ‘Side’ is an object concept built on spatial relations, but ‘phase difference’ is an event concept built on temporal relations. The conceptual analyses display a possible cognitive source of Herschel’s misconception of polarization. Limited by his experimental works and his philosophical beliefs, Herschel comprehended polarization solely in terms of spatial relations, which prevented him from replacing the object concept ‘side’ with the event concept ‘phase difference’, and eventually resulted in his failure to understand the wave account of polarization.
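As a rough indication of what an attribute-value frame looks like (the attributes and values below are illustrative assumptions; the paper's actual frames for these concepts will differ), the contrast between an object concept and an event concept might be sketched like this:

    # Illustrative sketch only: toy attribute-value frames contrasting an
    # object concept built on spatial relations with an event concept built
    # on temporal relations. Attribute names and values are assumptions.
    side_frame = {
        "kind": "object concept",
        "attributes": {
            "spatial_orientation": ["up", "down", "left", "right"],
            "relation_to_ray": ["same side", "opposite side"],
        },
    }

    phase_difference_frame = {
        "kind": "event concept",
        "attributes": {
            "temporal_lag": ["quarter period", "half period", "full period"],
            "relation_between_vibrations": ["in phase", "out of phase"],
        },
    }

    for name, frame in (("side", side_frame), ("phase difference", phase_difference_frame)):
        print(name, "->", frame["kind"], "| attributes:", ", ".join(frame["attributes"]))

The point of the representation is that the intraconceptual relations (spatial versus temporal) are carried by the attributes themselves, which is what the abstract means by frames revealing ontological differences between the two concepts.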

19.
In 1895 sociologist and philosopher Georg Simmel published a paper: ‘On a connection of selection theory to epistemology’. It was focussed on the question of how behavioural success and the evolution of the cognitive capacities that underlie it are to be related to knowing and truth. Subsequently, Simmel’s ideas were largely lost, but recently (2002) an English translation was published by Coleman in this journal. While Coleman’s contextual remarks are solely concerned with a preceding evolutionary epistemology, it will be argued here that Simmel pursues a more unorthodox, more radically biologically based and pragmatist approach to epistemology in which the presumption of a wholly interests-independent truth is abandoned, concepts are accepted as species-specific and truth tied intimately to practical success. Moreover, Simmel’s position, shorn of one too-radical commitment, shares its key commitments with the recently developed interactivist–constructivist framework for understanding biological cognition and naturalistic epistemology. There Simmel’s position can be given a natural, integrated, three-fold elaboration in interactivist re-analysis, unified evolutionary epistemology and learnable normativity.

20.
Kant’s philosophy of science takes on sharp contour in terms of his interaction with the practicing life scientists of his day, particularly Johann Blumenbach and the latter’s student, Christoph Girtanner, who in 1796 attempted to synthesize the ideas of Kant and Blumenbach. Indeed, Kant’s engagement with the life sciences played a far more substantial role in his transcendental philosophy than has been recognized hitherto. The theory of epigenesis, especially in light of Kant’s famous analogy in the first Critique (B167), posed crucial questions regarding the ‘looseness of fit’ between the constitutive and the regulative in Kant’s theory of empirical law. A detailed examination of Kant’s struggle with epigenesis between 1784 and 1790 demonstrates his grave reservations about its hylozoist implications, leading to his even stronger insistence on the discrimination of constitutive from regulative uses of reason. The continuing relevance of these issues for Kant’s philosophy of science is clear from the work of Buchdahl and its contemporary reception.
