Similar Documents
20 similar documents found (search time: 37 ms)
1.
Meyer originally raised the question of whether non-contextual hidden variable models can, despite the Kochen–Specker theorem, simulate the predictions of quantum mechanics to within any fixed finite experimental precision (Phys. Rev. Lett. 83 (1999) 3751). Meyer's result was extended by Kent (Phys. Rev. Lett. 83 (1999) 3755). Clifton and Kent later presented constructions of non-contextual hidden variable theories which, they argued, indeed simulate quantum mechanics in this way (Proc. Roy. Soc. Lond. A 456 (2000) 2101). These arguments have evoked some controversy. Among other things, it has been suggested that the Clifton–Kent models do not in fact reproduce correctly the predictions of quantum mechanics, even when finite precision is taken into account. It has also been suggested that careful analysis of the notion of contextuality in the context of finite precision measurement motivates definitions which imply that the Clifton–Kent models are in fact contextual. Several critics have also argued that the issue can be definitively resolved by experimental tests of the Kochen–Specker theorem or experimental demonstrations of the contextuality of Nature. One aim of this paper is to respond to and rebut criticisms of the Meyer–Clifton–Kent papers. We thus elaborate in a little more detail how the Clifton–Kent models can reproduce the predictions of quantum mechanics to arbitrary precision. We analyse in more detail the relationship between classicality, finite precision measurement and contextuality, and defend the claims that the Clifton–Kent models are both essentially classical and non-contextual. We also examine in more detail the senses in which a theory can be said to be contextual or non-contextual, and in which an experiment can be said to provide evidence on the point. In particular, we criticise the suggestion that a decisive experimental verification of contextuality is possible, arguing that the idea rests on a conceptual confusion.

2.
This paper studies the different conceptions of both centrality and the principle or starting point of motion in the Universe held by Aristotle and later on by Copernicanism until Kepler and Bruno. According to Aristotle, the true centre of the Universe is the sphere of the fixed stars. This is also the starting point of motion. From this point of view, the diurnal motion is the fundamental one. Our analysis gives pride of place to De caelo II, 10, a chapter of Aristotle’s text which curiously allows an ‘Alpetragian’ reading of the transmission of motion. In Copernicus and the Copernicans, natural centrality is identified with the geometrical centre and, therefore, the Sun is acknowledged as the body through which the Deity acts on the world and it also plays the role of the principle and starting point of cosmic motion. This motion, however, is no longer diurnal motion, but the annual periodical motion of the planets. Within this context, we pose the question of to what extent it is possible to think that, before Kepler, there is a tacit attribution of a dynamic or motive role to the Sun by Copernicus, Rheticus, and Digges. For Bruno, since the Universe is infinite and homogeneous and the relationship of the Deity with it is one of indifferent presence everywhere, the Universe has no absolute centre, for any point is a centre. By the same token, there is no place that enjoys the prerogative of being—as the seat of God—the motionless principle and starting point of motion.

3.
Croston's method is widely used to predict inventory demand when it is intermittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston's method and three related methods, and we show that any underlying model will be inconsistent with the properties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. Copyright © 2005 John Wiley & Sons, Ltd.
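For readers unfamiliar with the method, the following is a minimal sketch of Croston's procedure as it is standardly described: non-zero demand sizes and the intervals between non-zero demands are smoothed separately by simple exponential smoothing, and the point forecast is the ratio of the two smoothed values. The function name and the smoothing constant alpha are illustrative assumptions, not taken from the paper.

import numpy as np

def croston_forecast(demand, alpha=0.1):
    """Minimal sketch of Croston's method for intermittent demand.

    Non-zero demand sizes and inter-demand intervals are smoothed
    separately with simple exponential smoothing; the point forecast
    is their ratio. `alpha` is an illustrative smoothing constant.
    """
    demand = np.asarray(demand, dtype=float)
    nonzero = np.flatnonzero(demand)
    if nonzero.size == 0:
        return 0.0  # no demand observed yet

    # Initialise with the first non-zero observation.
    size = demand[nonzero[0]]            # smoothed demand size
    interval = float(nonzero[0] + 1)     # smoothed inter-demand interval
    periods_since = 0

    for t in range(nonzero[0] + 1, demand.size):
        periods_since += 1
        if demand[t] > 0:
            size += alpha * (demand[t] - size)
            interval += alpha * (periods_since - interval)
            periods_since = 0

    # Expected demand per period.
    return size / interval

# Example on a sparse weekly demand series.
print(croston_forecast([0, 0, 3, 0, 0, 0, 2, 0, 4, 0]))

The sketch also makes the paper's point easy to see: the procedure is purely algorithmic, with no explicit stochastic model from which these smoothing updates and a matching prediction interval would follow.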

4.
Carman argues, in ‘The electrons of the dinosaurs and the center of the Earth’, that we may have more reason to be realists about dinosaurs than about electrons, because there are plenty of observable analogues for dinosaurs but not for electrons. These observable analogues severely restrict the range of plausible ontologies, thus reducing the threat of underdetermination. In response to this argument, I show that the observable analogues for ancient organisms are a mixed epistemic blessing at best, and I discuss some cases from the history of paleontology in which the observable analogues—ducks, shrimp, and lizards—have led scientists into persisting error. I also give reasons for thinking that underdetermination will be just as serious a problem in historical as in experimental science. I conclude that Carman has not succeeded in showing that dinosaurs ‘come off better’ than electrons.

5.
In this paper I defend the classical computational account of reasoning against a range of highly influential objections, sometimes called relevance problems. Such problems are closely associated with the frame problem in artificial intelligence and, to a first approximation, concern the issue of how humans are able to determine which of a range of representations are relevant to the performance of a given cognitive task. Though many critics maintain that the nature and existence of such problems provide grounds for rejecting classical computationalism, I show that this is not so. Some of these putative problems are a cause for concern only on highly implausible assumptions about the extent of our cognitive capacities, whilst others are a cause for concern only on similarly implausible views about the commitments of classical computationalism. Finally, some versions of the relevance problem are not really objections but hard research issues that any satisfactory account of cognition needs to address. I conclude by considering the diagnostic issue of why accounts of cognition in general—and classical computational accounts, in particular—have fared so poorly in addressing such research issues.

6.
Between 1940 and 1945, while still a student of theoretical physics and without any contact with the history of science, Thomas S. Kuhn developed a general outline of a theory of the role of belief in science. This theory was well rooted in the philosophical tradition of Emerson Hall, Harvard, and particularly in H. M. Sheffer’s and C. I. Lewis’s logico-philosophical works—Kuhn was, actually, a graduate student of the former in 1945. In this paper I reconstruct the development of that general outline after Kuhn’s first years at Harvard. I examine his works on moral and aesthetic issues—where he displayed an already ‘anti-Whig’ stance concerning historiography—as well as his first ‘Humean’ approach to science and realism, where his earliest concern with belief is evident. Then I scrutinise his graduate work to show how his first account of the role of belief was developed. The main aim of this paper is to show that the history of science illustrated for Kuhn the epistemic role and effects of belief he had already been theorising about since around 1941.

7.
Experimental modeling is the construction of theoretical models hand in hand with experimental activity. As explained in Section 1, experimental modeling starts with claims about phenomena that use abstract concepts, concepts whose conditions of realization are not yet specified; and it ends with a concrete model of the phenomenon, a model that can be tested against data. This paper argues that this process from abstract concepts to concrete models involves judgments of relevance, which are irreducibly normative. In Section 2, we show, on the basis of several case studies, how these judgments contribute to the determination of the conditions of realization of the abstract concepts and, at the same time, of the quantities that characterize the phenomenon under study. Then, in Section 3, we compare this view on modeling with other approaches that also have acknowledged the role of relevance judgments in science. To conclude, in Section 4, we discuss the possibility of a plurality of relevance judgments and introduce a distinction between locally and generally relevant factors.

8.
No-conspiracy is the requirement that measurement settings should be probabilistically independent of the elements of reality responsible for the measurement outcomes. In this paper we investigate what role no-conspiracy generally plays in a physical theory; how it influences the semantical role of the event types of the theory; and how it relates to such other concepts as separability, compatibility, causality, locality and contextuality.
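In the notation standardly used in such discussions (an illustrative formalization, not quoted from the paper), no-conspiracy says that the choice of measurement setting $a$ is probabilistically independent of the hidden state $\lambda$ that determines the outcomes:

$$ p(\lambda \mid a) = p(\lambda), \qquad\text{equivalently}\qquad p(\lambda, a) = p(\lambda)\,p(a). $$

That is, which measurement happens to be performed carries no information about the elements of reality responsible for its outcome.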

9.
In this paper, we propose a multivariate time series model for over-dispersed discrete data to explore the market structure based on sales count dynamics. We first discuss the microstructure to show that over-dispersion is inherent in the modeling of market structure based on sales count data. The model is built on the likelihood function induced by decomposing sales count response variables according to products' competitiveness and conditioning on their sum, and it augments them to higher levels by using the Poisson–multinomial relationship in a hierarchical way, represented as a tree structure for the market definition. State space priors are applied to the structured likelihood to develop dynamic generalized linear models for discrete outcomes. To handle over-dispersion, gamma compound Poisson variables for product sales counts and Dirichlet compound multinomial variables for their shares are connected in a hierarchical fashion. Instead of working with the density functions of the compound distributions, we propose a data augmentation approach for more efficient posterior computation in terms of the generated augmented variables, particularly for generating forecasts and predictive densities. We present an empirical application using weekly product sales time series in a store to compare the proposed over-dispersed models with alternative models that ignore over-dispersion, using several model selection criteria, including in-sample fit, out-of-sample forecasting errors and information criteria. The empirical results show that the proposed modeling based on compound Poisson variables works well and provides improved results compared with models that take no account of over-dispersion. Copyright © 2014 John Wiley & Sons, Ltd.
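To make the compound distributions mentioned above concrete, here is a minimal simulation sketch showing how a gamma compound Poisson total (equivalently, a negative binomial count) and a Dirichlet compound multinomial split both generate over-dispersion relative to plain Poisson/multinomial sampling. All parameter values and variable names are illustrative assumptions, not the authors' specification.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (not from the paper).
n_weeks = 52
mean_total = 40.0                           # expected total category sales per week
shape = 5.0                                 # gamma shape; smaller => more over-dispersion
concentration = np.array([6.0, 3.0, 1.0])   # Dirichlet weights for three products

total_sales = np.empty(n_weeks, dtype=int)
product_sales = np.empty((n_weeks, concentration.size), dtype=int)

for t in range(n_weeks):
    # Gamma compound Poisson total: the Poisson rate is itself gamma-distributed,
    # which makes the total count over-dispersed (negative binomial).
    rate = rng.gamma(shape, mean_total / shape)
    total_sales[t] = rng.poisson(rate)

    # Dirichlet compound multinomial shares: product shares are drawn from a
    # Dirichlet, then the weekly total is split multinomially.
    shares = rng.dirichlet(concentration)
    product_sales[t] = rng.multinomial(total_sales[t], shares)

# Over-dispersion check: the variance of the totals exceeds their mean,
# whereas a plain Poisson model would force them to be equal.
print(total_sales.mean(), total_sales.var())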

10.
In this paper I argue that the Strong Programme’s aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized—even relativized—they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle’s emphasis on ‘sameness of type’ is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of ‘similarity’ is to have any content, then we are not going to classify beliefs that are formed in accordance with deeply entrenched epistemic norms as ‘the same’ as ones formed without reflection on these norms, or ones formed in spite of these norms. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to allow us to delineate the unique impact epistemic norms have on those individuals who subscribe to them.

11.
One finds, in Maxwell's writings on thermodynamics and statistical physics, a conception of the nature of these subjects that differs in interesting ways from the way they are usually conceived. In particular, though—in agreement with the currently accepted view—Maxwell maintains that the second law of thermodynamics, as originally conceived, cannot be strictly true, the replacement he proposes is different from the version accepted by most physicists today. The modification of the second law accepted by most physicists is a probabilistic one: although statistical fluctuations will result in occasional spontaneous differences in temperature or pressure, there is no way to predictably and reliably harness these to produce large violations of the original version of the second law. Maxwell advocates a version of the second law that is strictly weaker: on his view, the validity of even this probabilistic version is of limited scope, restricted to situations in which we are dealing with large numbers of molecules en masse and have no ability to manipulate individual molecules. Connected with this is his conception of the thermodynamic concepts of heat, work, and entropy; on the Maxwellian view, these are concepts that must be relativized to the means we have available for gathering information about and manipulating physical systems. The Maxwellian view is one that deserves serious consideration in discussions of the foundations of statistical mechanics. It has relevance for the project of recovering thermodynamics from statistical mechanics because, in such a project, it matters which version of the second law we are trying to recover.

12.
In 2006, in a special issue of this journal, several authors explored what they called the dual nature of artefacts. The core idea is simple, but attractive: to make sense of an artefact, one needs to consider both its physical nature—its being a material object—and its intentional nature—its being an entity designed to further human ends and needs. The authors construe the intentional component quite narrowly, though: it just refers to the artefact’s function, its being a means to realize a certain practical end. Although such strong focus on functions is quite natural (and quite common in the analytic literature on artefacts), I argue in this paper that an artefact’s intentional nature is not exhausted by functional considerations. Many non-functional properties of artefacts—such as their marketability and ease of manufacture—testify to the intentions of their users/designers; and I show that if these sorts of considerations are included, one gets much more satisfactory explanations of artefacts, their design, and normativity.

13.
Philosophers of science have paid little attention, positive or negative, to Lyotard’s book The postmodern condition, even though it has been popular in other fields. We set out some of the reasons for this neglect. Lyotard thought that sciences could be justified by non-scientific narratives (a position he later abandoned). We show why this is unacceptable, and why many of Lyotard’s characterisations of science are either implausible or are narrowly positivist. One of Lyotard’s themes is that the nature of knowledge has changed and thereby so has society itself. However, much of what Lyotard says muddles epistemological matters about the definition of ‘knowledge’ with sociological claims about how information circulates in modern society. We distinguish two kinds of legitimation of science: epistemic and socio-political. In proclaiming ‘incredulity towards metanarratives’, Lyotard has nothing to say about how epistemic and methodological principles are to be justified (legitimated). He also gives a bad argument as to why there can be no epistemic legitimation, which is based on an act/content confusion, and a confusion between making an agreement and the content of what is agreed to. As for socio-political legitimation, Lyotard’s discussion remains at the abstract level of science as a whole rather than at the level of the particular applications of sciences. Moreover, his positive points can be accepted without taking on board any of his postmodernist account of science. Finally, we argue that Lyotard’s account of paralogy, which is meant to provide a ‘postmodern’ style of justification, is a failure.

14.
15.
Ultraviolet radiation is generally considered to have been discovered by Johann Wilhelm Ritter in 1801. In this article, we study the reception of Ritter’s experiment during the first decade after the event—Ritter’s remaining lifetime. Drawing on the attributional model of discovery, we are interested in whether the German physicists and chemists granted Ritter’s observation the status of a discovery and, if so, of what. Two things are remarkable concerning the early reception, and both have to do more with neglect than with (positive) reception. Firstly, Ritter’s observation was sometimes accepted as a fact but, with the exception of C. J. B. Karsten’s theory of invisible light, it played almost no role in the lively debate about the nature of heat and light. We argue that it was the prevalent discourse based on the metaphysics of Stoffe that prevented a broader reception of Ritter’s invisible rays, not the fact that Ritter himself made his findings a part of his Naturphilosophie. Secondly, with the exception of C. E. Wünsch’s experiments on the visual spectrum, there was no experimental examination of the experiment. We argue that theorizing about ontological systems was more common than experimenting, because, given its social and institutional situation, this was the appropriate way of contributing to physics. Consequently, it was less clear in 1810 than in 1801 what, if anything, had been discovered by Ritter.

16.
Modern scientific knowledge is increasingly collaborative. Much analysis in social epistemology models scientists as self-interested agents motivated by external inducements and sanctions. However, less research exists on the epistemic import of scientists’ moral concern for their colleagues. I argue that scientists’ trust in their colleagues’ moral motivations is a key component of the rationality of collaboration. On the prevailing account, trust is a matter of mere reliance on the self-interest of one’s colleagues. That is, scientists merely rely on external compulsion to motivate self-interested colleagues to be trustworthy collaborators. I show that this self-interest account has significant limitations. First, it cannot fully account for trust by relatively powerless scientists. Second, reliance on self-interest can be self-defeating. For each limitation, I show that moral trust can bridge the gap—when members of the scientific community cannot rely on the self-interest of their colleagues, they rationally place trust in the moral motivations of their colleagues. Case studies of mid-twentieth-century industrial laboratories and exploitation of junior scientists show that such moral trust justifies collaboration when mere reliance on the self-interest of colleagues would be irrational. Thus, this paper provides a more complete and realistic account of the rationality of scientific collaboration.

17.
I propose a new perspective with which to understand scientific revolutions. This is a conversion from an object-only perspective to one that properly treats object and process concepts as distinct kinds. I begin with a re-examination of the Copernican revolution. Recent findings from the history of astronomy suggest that the Copernican revolution was a move from a conceptual framework built around an object concept to one built around a process concept. Drawing from studies in the cognitive sciences, I then show that process concepts are independent of object concepts, grounded in specific regions of the brain and involving unique representational mechanisms. There are cognitive obstacles to the transformation from object to process concepts, and an object bias—a tendency to treat processes as objects—makes this kind of conceptual change difficult. Consequently, transformation from object to process concepts is disruptive and revolutionary. Finally, I explore the implications of this new perspective on scientific revolutions for both the history and philosophy of science.

18.
Rheological properties of living cells determine how cells interact with their mechanical microenvironment and influence their physiological functions. Numerous experimental studies have shown that mechanical contractile stress borne by the cytoskeleton and weak power-law viscoelasticity are governing principles of cell rheology, and that the controlling physics is at the level of integrative cytoskeletal lattice properties. Based on these observations, two concepts have emerged as leading models of cytoskeletal mechanics. One is the tensegrity model, which explains the role of the contractile stress in cytoskeletal mechanics, and the other is the soft glass rheology model, which explains the weak power-law viscoelasticity of cells. While these two models are conceptually disparate, the phenomena that they describe are often closely associated in living cells for reasons that are largely unknown. In this review, we discuss current understanding of cell rheology by emphasizing the underlying biophysical mechanism and critically evaluating the existing rheological models. Received 25 May 2008; received after revision 19 June 2008; accepted 1 July 2008
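As an illustration of the ‘weak power-law viscoelasticity’ referred to above, the complex shear modulus of cells is commonly fitted with a single power-law (structural damping) form; the symbols $G_0$, $\omega_0$ and the exponent $\alpha$ below are generic placeholders rather than values from this review:

$$ G^{*}(\omega) = G'(\omega) + iG''(\omega) = G_0\left(\frac{i\omega}{\omega_0}\right)^{\alpha}, \qquad 0 < \alpha \ll 1, $$

$$ G'(\omega) = G_0\left(\frac{\omega}{\omega_0}\right)^{\alpha}\cos\frac{\pi\alpha}{2}, \qquad G''(\omega) = G_0\left(\frac{\omega}{\omega_0}\right)^{\alpha}\sin\frac{\pi\alpha}{2}, \qquad \frac{G''}{G'} = \tan\frac{\pi\alpha}{2}. $$

The small exponent $\alpha$ means that storage and loss moduli rise only weakly with frequency and keep a nearly constant ratio, which is the behaviour the soft glass rheology model is invoked to explain.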

19.
In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational depth. We then compare the fitness-for-providing understanding of process-based to data-driven models that are built with machine learning. We show that at first glance, data-driven models seem either unnecessary or inadequate for understanding. However, a case study from atmospheric research demonstrates that this is a false dilemma. Data-driven models can be useful tools for understanding, specifically for phenomena for which scientists can argue from the coherence of the models with background knowledge to their representational accuracy and for which the model complexity can be reduced such that they are graspable to a satisfactory extent.

20.
How a conformationally disordered polypeptide chain rapidly and efficiently achieves its well-defined native structure is still a major question in modern structural biology. Although much progress has been made towards rationalizing the principles of protein structure and dynamics, the mechanism of the folding process and the determinants of the final fold are not yet known in any detail. One protein for which folding has been studied in great detail by a combination of diverse techniques is hen lysozyme. In this article we review the present state of our knowledge of the folding process of this enzyme and focus in particular on recent experiments to probe some of its specific features. These results are then discussed in the context of the ‘new view’ of protein folding based on energy surfaces and landscapes. It is shown that a schematic energy surface for lysozyme folding, which is broadly consistent with our experimental data, begins to provide a unified model for protein folding through which experimental and theoretical ideas can be brought together.
