Similar documents
Found 20 similar documents (search time: 937 ms)
1.
The symmetries of a physical theory are often associated with two things: conservation laws (via e.g. Noether's and Schur's theorems) and representational redundancies ("gauge symmetry"). But how can a physical theory's symmetries give rise to interesting (in the sense of non-trivial) conservation laws, if symmetries are transformations that correspond to no genuine physical difference? In this paper, I argue for a disambiguation in the notion of symmetry. The central distinction is between what I call "analytic" and "synthetic" symmetries, so called because of an analogy with analytic and synthetic propositions. "Analytic" symmetries are the turning of idle wheels in a theory's formalism, and correspond to no physical change; "synthetic" symmetries cover all the rest. I argue that analytic symmetries are distinguished because they act as fixed points or constraints in any interpretation of a theory, and as such are akin to Poincaré's conventions or Reichenbach's 'axioms of co-ordination', or 'relativized constitutive a priori principles'.

2.
3.
In his book, The Material Theory of Induction, Norton argues that the quest for a universal formal theory or 'schema' for analogical inference should be abandoned. In its place, he offers the "material theory of analogy": each analogical inference is "powered" by a local fact of analogy rather than by any formal schema. His minimalist model promises a straightforward, fact-based approach to the evaluation and justification of analogical inferences. This paper argues that although the rejection of universal schemas is justified, Norton's positive theory is limited in scope: it works well only for a restricted class of analogical inferences. Both facts and quasi-formal criteria have roles to play in a theory of analogical reasoning.

4.
The term "analogy" stands for a variety of methodological practices all related in one way or another to the idea of proportionality. We claim that in his first substantial contribution to electromagnetism James Clerk Maxwell developed a methodology of analogy which was completely new at the time or, to borrow John North's expression, Maxwell's methodology was a "newly contrived analogue". In his initial response to Michael Faraday's experimental researches in electromagnetism, Maxwell did not seek an analogy with some physical system in a domain different from electromagnetism as advocated by William Thomson; rather, he constructed an entirely artificial one to suit his needs. Following North, we claim that the modification which Maxwell introduced to the methodology of analogy has not been properly appreciated. In view of our examination of the evidence, we argue that Maxwell gave a new meaning to analogy; in fact, it comes close to modeling in current usage.

5.
"Teleosemantic" or "biosemantic" theories form a strong naturalistic programme in the philosophy of mind and language. They seek to explain the nature of mind and language by recourse to a natural history of "proper functions" as selected-for effects of language- and thought-producing mechanisms. However, they remain vague with respect to the nature of the proposed analogy between selected-for effects on the biological level and phenomena that are not strictly biological, such as reproducible linguistic and cultural forms. This essay critically explores various interpretations of this analogy. It suggests that these interpretations can be explicated by contrasting adaptationist with pluralist readings of the evolutionary concept of adaptation. Among the possible interpretations of the relations between biological adaptations and their analogues in language and culture, the two most relevant are a linear, hierarchical, signalling-based model that takes its cues from the evolution of co-operation and joint intentionality and a mutualistic, pluralist model that takes its cues from mimesis and symbolism in the evolution of human communication. Arguing for the merits of the mutualistic model, the present analysis indicates a path towards an evolutionary pluralist version of biosemantics that will align with theories of cognition as being environmentally "scaffolded". Language and other cultural forms are partly independent reproducible structures that acquire proper functions of their own while being integrated with organism-based cognitive traits in co-evolutionary fashion.

6.
In this paper I address Descartes' use of analogy in physics. First, I introduce Descartes' hypothetical reasoning, distinguishing between analogy and hypothesis. Second, I examine in detail Descartes' use of analogy to both discover causes and add plausibility to his hypotheses—even though not always explicitly stated, Descartes' practice assumes a unified view of the subject matter of physics as the extension of bodies in terms of their size, shape and the motion of their parts. Third, I present Descartes' unique "philosophy of analogy", where the absence of analogy serves as a criterion for falsifying proposed explanations in physics. I conclude by defending Descartes' philosophy of analogy by appeal to the value scientists assign to simplicity in their explanations.

7.
This article examines the problem of the origins of the correspondence principle formulated by Bohr in 1920 and tests the correctness of the argument that the essential elements of that principle were already present in the 1913 "trilogy". In contrast to this point of view, which is widely shared in the literature, this article argues that it is possible to find a connection between the formulation of the correspondence principle and the assessment that led Bohr to abandon the search for a Planck-type theory. In fact, a thorough examination of Bohr's works shows that the birth of this principle coincided with the exhaustion of a research program whose origins may date back to Bohr's stay at Rutherford's laboratory (summer 1912). Finally, this article argues that the original research program was abandoned when it became clear that Planck's quantum hypothesis for the harmonic oscillator did not provide adequate support for the theoretical architecture of atomic physics; namely, there was enough evidence to justify a most drastic conclusion, according to Bohr: "I do not think that a theory of the Planck type can be made logical consistent".

8.
Recent insights into the conceptual structure of localization in QFT (modular localization) have led to clarifications of old unsolved problems. The oldest one is the Einstein–Jordan conundrum, which led Jordan in 1925 to the discovery of quantum field theory. This comparison of fluctuations in subsystems of heat bath systems (Einstein) with those resulting from the restriction of the QFT vacuum state to an open subvolume (Jordan) leads to a perfect analogy: the globally pure vacuum state becomes, upon local restriction, a strongly impure KMS state. This phenomenon of localization-caused thermal behavior, as well as the vacuum-polarization clouds at the causal boundary of the localization region, places localization in QFT in sharp contrast with quantum mechanics and justifies the attribute "holistic". In fact it places the E–J Gedankenexperiment in the same conceptual category as the cosmological constant problem and the Unruh Gedankenexperiment. The holistic structure of QFT resulting from "modular localization" also leads to a revision of the conceptual origin of the crucial crossing property, which entered particle theory at the time of the bootstrap S-matrix approach but suffered from incorrect use in the S-matrix settings of the dual model and string theory. The new holistic point of view, which strengthens the autonomous aspect of QFT, also comes with new messages for gauge theory by exposing the clash between Hilbert space structure and localization and by presenting alternative solutions based on the use of string-local fields in Hilbert space. Among other things this leads to a reformulation of the Englert–Higgs symmetry-breaking mechanism.

9.
In 1981, David Hubel and Torsten Wiesel received the Nobel Prize for their research on cortical columns—vertical bands of neurons with similar functional properties. This success led to the view that "cortical column" refers to the basic building block of the mammalian neocortex. Since the 1990s, however, critics have questioned this building block picture of "cortical column" and debated whether the concept is useless and should be replaced with successor concepts. This paper inquires which experimental results after 1981 challenged the building block picture and whether these challenges warrant the elimination of "cortical column" from neuroscientific discourse. I argue that the proliferation of experimental techniques led to a patchwork of locally adapted uses of the column concept. Each use refers to a different kind of cortical structure, rather than a neocortical building block. Once we acknowledge this diverse-kinds picture of "cortical column", the elimination of the column concept becomes unnecessary. Rather, I suggest that "cortical column" has reached conceptual retirement: although it cannot be used to identify a neocortical building block, column research is still useful as a guide and cautionary tale for ongoing research. At the same time, neuroscientists should search for alternative concepts when studying the functional architecture of the neocortex.
Keywords: cortical column, conceptual development, history of neuroscience, patchwork, eliminativism, conceptual retirement.

10.
This essay utilizes the concept "exploratory experimentation" as a probe into the relation between historiography and philosophy of science. The essay traces the emergence of the historiographical concept "exploratory experimentation" in the late 1990s. The reconstruction of the early discussions about exploratory experimentation shows that the introduction of the concept had unintended consequences: Initially designed to debunk philosophical ideas about theory testing, the concept "exploratory experimentation" quickly exposed the poverty of our conceptual tools for the analysis of experimental practice. Looking back at a number of detailed analyses of experimental research, we can now appreciate that the concept of exploratory experimentation is too vague and too elusive to fill the desideratum whose existence it revealed.

11.
In 1918, H. Weyl proposed a unified theory of gravity and electromagnetism based on a generalization of Riemannian geometry. In spite of its elegance and beauty, a serious objection was raised by Einstein, who argued that Weyl's theory was not suitable as a physical theory. According to Einstein, the theory led to the prediction of a "second clock effect", which has never been observed experimentally. We briefly revisit this point and argue that a preliminary discussion of the very notion of proper time is needed in order to assess Einstein's critical point of view. We also point out that Weyl's theory is basically incomplete in its original version, and that its completion may lead to a rich and interesting new approach to gravity.

12.
In the second half of the eighteenth century a lively debate was going on in Germany about the nature of light. One important contribution to this discussion, namely a paper by Nicolas Béguelin, is studied in this article. In his essay, Béguelin compared the Newtonian emission theory of light with the wave theory of Leonhard Euler. Whereas others opted for one of the two theories by invoking arguments or authorities, Béguelin made a systematic search for experiments which he hoped would settle the dispute. Two of these experiments were especially original. The first, which Béguelin himself performed, concerned light rays grazing a glass surface. For several reasons it did not have the impact it deserved. The second was a thought experiment meant to illustrate a major tenet of the wave theory, namely the analogy between light and sound. An analysis is given of these two experiments, and it is shown that neither of them brought the debate to an end.

13.
Following demands to regulate biomedicine in the post-war period, Sweden saw several political debates about research ethics in the 1970s. Many of the debates centered on fetal research and animal experiments. At stake were questions of moral permissibility, public transparency, and scientific freedom. However, these debates did not only reveal ethical disagreement—they also contributed to constructing new boundaries between life-forms. Taking a post-Marxist approach to discursive policy analysis, we argue that the meaning of both the "human" and the "animal" in these debates was shaped by a need to manage a legitimacy crisis for medical science. By analyzing Swedish government bills, motions, parliamentary debates, and committee memorials from the 1970s, we map out how fetal and animal research were constituted as policy problems. We place particular emphasis on the problematization of fetal and animal vulnerability. By comparing the debates, we trace out how a particular vision of the ideal life defined the human-animal distinction.

14.
This paper describes a long-standing, though little known, debate between Dirac and Heisenberg over the nature of scientific methodology, theory change, and intertheoretic relations. Following Heisenberg's terminology, their disagreements can be summarized as a debate over whether the classical and quantum theories are "open" or "closed." A close examination of this debate sheds new light on the philosophical views of two of the great founders of quantum theory.

15.
Inferentialists about scientific representation hold that an apparatus's representing a target system consists in the apparatus allowing "surrogative inferences" about the target. I argue that a serious problem for inferentialism arises from the fact that many scientific theories and models contain internal inconsistencies. Inferentialism, left unamended, implies that inconsistent scientific models have unlimited representational power, since an inconsistency permits any conclusion to be inferred. I consider a number of ways that inferentialists can respond to this challenge before suggesting my own solution. I develop an analogy to exploitable glitches in a game. Even though inconsistent representational apparatuses may in some sense allow for contradictions to be generated within them, doing so violates the intended function of the apparatus's parts and hence violates representational "gameplay".
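The premise driving this challenge, that an inconsistency permits any conclusion, is the classical principle of explosion (ex falso quodlibet). It can be stated in a few lines of Lean (a standard textbook illustration, not taken from the paper):

```lean
-- Principle of explosion: from P and ¬P together, any Q whatsoever follows.
example (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.1 h.2   -- h.1 : P, h.2 : ¬P; absurd closes any goal from them
```

Any representational apparatus whose background logic validates this principle will, once inconsistent, licence every surrogative inference, which is exactly the unlimited representational power at issue in the paper.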

16.
In this paper I assess whether the recently proposed "No De-Coupling" (NDC) theory of constitutive relevance in mechanisms is a useful tool to reconstruct constitutive relevance investigations in scientific practice. The NDC theory has been advanced as a framework theoretically superior to the mutual manipulability (MM) account of constitutive relevance in mechanisms but, in contrast to the MM account, has not yet been applied to detailed case studies. I argue that the NDC account is also applicable to empirical practice and that it fares better than the MM account on both theoretical and empirical grounds. I elaborate these claims in terms of applications of the NDC theory to two case studies of cognitive science research on the role of eye movements in mechanisms for cognitive capacities.

17.
I take the phrase "the theory of nonlinear oscillations" to identify a historical phenomenon. Under this heading a powerful school in Soviet science, L. I. Mandelstam's school, developed its version of what was later called "nonlinear dynamics". The theory of nonlinear oscillations was formed around the concept of self-oscillations, which was elaborated by Mandelstam's graduate student A. A. Andronov. This concept determined the paradigm of the theory of nonlinear oscillations as well as its ideology, that is, a set of characteristic ideas which, together with the corresponding examples and analogues, allowed the expansion of the theory into associated areas where it indicated new interesting phenomena and posed new problems. It was the ideology that made possible the broader application of the theory of nonlinear oscillations, whose domain was originally lumped systems, to continuous media and its subsequent progress toward synergetics. In the course of its ideological application, the concept of self-oscillations was greatly extended, became vague and diffuse, and related concepts such as self-waves and self-structures appeared.
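The canonical textbook example of a self-oscillation in Andronov's sense is the van der Pol oscillator: trajectories settle onto a limit cycle whose amplitude is fixed by the system itself, not by the initial conditions. A minimal numerical sketch (an illustration of the concept only, not material from the paper; parameter values are arbitrary):

```python
def vdp_step(x, v, mu, dt):
    # One RK4 step for the van der Pol equation:
    #   x'' - mu * (1 - x**2) * x' + x = 0
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def limit_cycle_amplitude(mu=0.5, x0=0.01, v0=0.0, dt=0.01, steps=20000):
    # Start from a tiny perturbation: the oscillation grows until it
    # settles onto the limit cycle (the "self-oscillation"), whose
    # amplitude is set by the system rather than by the initial state.
    x, v = x0, v0
    peak = 0.0
    for i in range(steps):
        x, v = vdp_step(x, v, mu, dt)
        if i > steps // 2:            # measure only the settled portion
            peak = max(peak, abs(x))
    return peak

print(limit_cycle_amplitude())  # settles near 2.0 for small mu
```

Changing `x0` leaves the settled amplitude essentially unchanged, which is the qualitative signature distinguishing self-oscillations from forced or conservative oscillations.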

18.
This paper analyses the practice of model-building "beyond the Standard Model" in contemporary high-energy physics and argues that its epistemic function can be grasped by regarding models as mediating between the phenomenology of the Standard Model and a number of "theoretical cores" of hybrid character, in which mathematical structures are combined with verbal narratives ("stories") and analogies referring back to empirical results in other fields ("empirical references"). Borrowing a metaphor from a physics research paper, model-building is likened to the search for a Rosetta stone, whose significance does not lie in its immediate content, but rather in the chance it offers to glimpse at and manipulate the components of hybrid theoretical constructs. I shall argue that the rise of hybrid theoretical constructs was prompted by the increasing use of nonrigorous mathematical heuristics in high-energy physics. Support for my theses will be offered in the form of a historical–philosophical analysis of the emergence and development of the theoretical core centring on the notion that the Higgs boson is a composite particle. I will follow the heterogeneous elements which would eventually come to form this core from their individual emergence in the 1960s and 1970s, through their collective life as a theoretical core from 1979 until the present day.

19.
It is widely acknowledged that the patient's perspective should be considered when making decisions about how her care will be managed. Patient participation in the decision-making process may play an important role in bringing to light and incorporating her perspective. The GRADE framework is touted as an evidence-based process for determining recommendations for clinical practice, i.e. for determining how care ought to be managed. GRADE recommendations are categorized as "strong" or "weak" based on several factors, including the "values and preferences" of a "typical" patient. The strength of the recommendation also instructs the clinician about when and how patients should participate in the clinical encounter, and thus whether an individual patient's values and preferences will be heard in her clinical encounter. That is, a "strong" recommendation encourages "paternalism" and a "weak" recommendation encourages shared decision making. We argue that adoption of the GRADE framework is problematic for patient participation and may result in care that is not respectful of the individual patient's values and preferences. We argue that the root of the problem is the conception of "values and preferences" in GRADE: the framework favours population thinking (e.g. "typical" patient "values and preferences"), despite the fact that "values and preferences" are individual in the sense that they are deeply personal. We also show that tying the strength of a recommendation to a model of decision making (paternalism or shared decision making) constrains patient participation and is not justified (theoretically and/or empirically) in the GRADE literature.

20.
In the years 1878 and 1879 the American physicist Alfred Marshall Mayer (1836–1897) published his experiments with floating magnets as a didactic illustration of molecular actions and forms. A number of physicists made use of this analogy of molecular structure. For William Thomson they were a mechanical illustration of the kinetic equilibrium of groups of columnar vortices revolving in circles round their common centre of gravity (1878). A number of modifications of Mayer's experiments were described, which gave configurations more or less analogous to Mayer's arrangements. It was Joseph John Thomson who, in publications between 1897 and 1907, used Mayer's results to obtain a good deal of insight into the general laws which govern the configuration of the electrons in his atomic model. This article is mainly concerned with Mayer's experiments with floating magnets and their use by a number of physicists. Through his experiments Mayer made a significant, although small, contribution to the theory of atomic structure.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号