Similar Articles
20 similar articles found (search time: 109 ms)
1.
What if gravity satisfied the Klein–Gordon equation? Both particle physics from the 1920–30s and the 1890s Neumann–Seeliger modification of Newtonian gravity with exponential decay suggest considering a “graviton mass term” for gravity, which is algebraic in the potential. Unlike Nordström's “massless” theory, massive scalar gravity is strictly special relativistic in the sense of being invariant under the Poincaré group but not the 15-parameter Bateman–Cunningham conformal group. It therefore exhibits the whole of Minkowski space–time structure, albeit only indirectly concerning volumes. Massive scalar gravity is plausible in terms of relativistic field theory, while violating most interesting versions of Einstein's principles of general covariance, general relativity, equivalence, and Mach. Geometry is a poor guide to understanding massive scalar gravity(s): matter sees a conformally flat metric due to universal coupling, but gravity also sees the rest of the flat metric (barely or on long distances) in the mass term. What is the ‘true’ geometry, one might wonder, in line with Poincaré's modal conventionality argument? Infinitely many theories exhibit this bimetric ‘geometry,’ all with the total stress–energy's trace as source; thus geometry does not explain the field equations. The irrelevance of the Ehlers–Pirani–Schild construction to a critique of conventionalism becomes evident when multi-geometry theories are contemplated. Much as Seeliger envisaged, the smooth massless limit indicates underdetermination of theories by data between massless and massive scalar gravities—indeed an unconceived alternative.
At least one version easily could have been developed before General Relativity; it then would have motivated thinking of Einstein's equations along the lines of Einstein's newly re-appreciated “physical strategy” and particle physics and would have suggested a rivalry from massive spin 2 variants of General Relativity (massless spin 2, Pauli and Fierz found in 1939). The Putnam–Grünbaum debate on conventionality is revisited with an emphasis on the broad modal scope of conventionalist views. Massive scalar gravity thus contributes to a historically plausible rational reconstruction of much of 20th–21st century space–time philosophy in the light of particle physics. An appendix reconsiders the Malament–Weatherall–Manchak conformal restriction of conventionality and constructs the ‘universal force’ influencing the causal structure. Subsequent works will discuss how massive gravity could have provided a template for a more Kant-friendly space–time theory that would have blocked Moritz Schlick's supposed refutation of synthetic a priori knowledge, and how Einstein's false analogy between the Neumann–Seeliger–Einstein modification of Newtonian gravity and the cosmological constant Λ generated lasting confusion that obscured massive gravity as a conceptual possibility.
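The “graviton mass term” and its Yukawa-type exponential decay can be displayed schematically (a standard textbook form, supplied here for illustration, not an equation taken from the paper):

```latex
% Schematic sketch (standard textbook form, not from the paper):
% a mass term, algebraic in the potential, added to the Poisson equation,
\nabla^2 \phi - m^2 \phi = 4\pi G \rho ,
% with the Yukawa-type solution for a point mass M,
\phi(r) = -\frac{G M}{r}\, e^{-m r},
% which recovers the Newtonian potential smoothly as m \to 0 --
% the smooth massless limit behind the underdetermination claim.
```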

2.
In the Bayesian approach to quantum mechanics, probabilities—and thus quantum states—represent an agent's degrees of belief, rather than corresponding to objective properties of physical systems. In this paper we investigate the concept of certainty in quantum mechanics. In particular, we show how the probability-1 predictions derived from pure quantum states highlight a fundamental difference between our Bayesian approach, on the one hand, and Copenhagen and similar interpretations on the other. We first review the main arguments for the general claim that probabilities always represent degrees of belief. We then argue that a quantum state prepared by some physical device always depends on an agent's prior beliefs, implying that the probability-1 predictions derived from that state also depend on the agent's prior beliefs. Quantum certainty is therefore always some agent's certainty. Conversely, if facts about an experimental setup could imply agent-independent certainty for a measurement outcome, as in many Copenhagen-like interpretations, that outcome would effectively correspond to a preexisting system property. The idea that measurement outcomes occurring with certainty correspond to preexisting system properties is, however, in conflict with locality. We emphasize this by giving a version of an argument of Stairs [(1983). Quantum logic, realism, and value-definiteness. Philosophy of Science, 50, 578], which applies the Kochen–Specker theorem to an entangled bipartite system.

3.
The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation—a mathematical transformation that takes the time variable t to negative imaginary time −it—was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra and the criterion for formal equivalence that there is a “translation manual” between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible.
That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.
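The analytic continuation at issue, often called a Wick rotation, can be written out schematically (a standard presentation, not notation drawn from the paper):

```latex
% Schematic form of the analytic continuation (Wick rotation):
t \;\longmapsto\; -i\tau, \qquad \tau \in \mathbb{R},
% which turns the Minkowski line element into a Euclidean one,
ds^2 = -dt^2 + d\vec{x}^{\,2} \;\longmapsto\; d\tau^2 + d\vec{x}^{\,2},
% and the oscillatory path-integral weight into a damped one,
e^{iS/\hbar} \;\longmapsto\; e^{-S_E/\hbar},
% making perturbative integrals and non-perturbative constructions tractable.
```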

4.
Except for a few brief periods, Einstein was uninterested in analysing the nature of the spacetime singularities that appeared in solutions to his gravitational field equations for general relativity. The existence of such monstrosities reinforced his conviction that general relativity was an incomplete theory which would be superseded by a singularity-free unified field theory. Nevertheless, on a number of occasions between 1916 and the end of his life, Einstein was forced to confront singularities. His reactions show a strange asymmetry: he tended to be more disturbed by (what today we would call) merely apparent singularities and less disturbed by (what we would call) real singularities. Einstein had strong a priori ideas about what results a correct physical theory should deliver. In the process of searching through theoretical possibilities, he tended to push aside technical problems and jump over essential difficulties. Sometimes this method of working produced brilliant new ideas—such as the Einstein–Rosen bridge—and sometimes it led him to miss important implications of his theory of gravity—such as gravitational collapse.

5.
In this paper I consider the structures that chemists and physicists attribute at the molecular scale to substances and materials of various kinds, and how they relate to structures and processes at other scales. I argue that the structure of a substance is the set of properties and relations which are preserved across all the conditions in which it can be said to exist. In short, structure is abstraction. On the basis of this view, and using concrete examples, I argue that structures, and therefore the chemical substances and other materials to which they are essential, are emergent. Firstly, structures themselves are scale-dependent because they can only exist within certain physical conditions, and a single substance may have different structures at different scales (of length, time and energy). Secondly, the distinctness of both substances and structures is a scale-dependent relationship: above a certain point, two distinct possibilities may become one. Thirdly, the necessary conditions for composition, for both substances and molecular species, are scale-dependent. To know whether a group of nuclei and electrons form a molecule it is not enough to consider energy alone: one also has to know about their environment and the lifetime over which the group robustly hangs together.

6.
David Albert claims that classical electromagnetic theory is not time reversal invariant. He acknowledges that all physics books say that it is, but claims they are “simply wrong” because they rely on an incorrect account of how the time reversal operator acts on magnetic fields. On that account, electric fields are left intact by the operator, but magnetic fields are inverted. Albert sees no reason for the asymmetric treatment, and insists that neither field should be inverted. I argue, to the contrary, that the inversion of magnetic fields makes good sense and is, in fact, forced by elementary geometric considerations. I also suggest a way of thinking about the time reversal invariance of classical electromagnetic theory—one that makes use of the invariant four-dimensional formulation of the theory—that makes no reference to magnetic fields at all. It is my hope that it will be of interest in its own right, Albert aside. It has the advantage that it allows for arbitrary curvature in the background spacetime structure, and is therefore suitable for the framework of general relativity. The only assumption one needs is temporal orientability.
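The transformation under dispute can be made explicit (this is the standard textbook convention the abstract describes, set down here only for orientation):

```latex
% The standard time-reversal convention at issue:
T:\quad t \mapsto -t, \qquad
\mathbf{E}(t,\mathbf{x}) \mapsto \mathbf{E}(-t,\mathbf{x}), \qquad
\mathbf{B}(t,\mathbf{x}) \mapsto -\mathbf{B}(-t,\mathbf{x}).
% Under these assignments Faraday's law, for instance, is invariant:
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
% since the sign flip of B cancels the sign flip of \partial/\partial t.
```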

7.
Constitutive mechanistic explanations are said to refer to mechanisms that constitute the phenomenon-to-be-explained. The most prominent approach of how to understand this relation is Carl Craver's mutual manipulability approach (MM) to constitutive relevance. Recently, MM has come under attack (Baumgartner and Casini 2017; Baumgartner and Gebharter 2015; Harinen 2014; Kästner 2017; Leuridan 2012; Romero 2015). It is argued that MM is inconsistent because, roughly, it is spelled out in terms of interventionism (which is an approach to causation), whereas constitutive relevance is said to be a non-causal relation. In this paper, I will discuss a strategy of how to resolve this inconsistency—so-called fat-handedness approaches (Baumgartner and Casini 2017; Baumgartner and Gebharter 2015; Romero 2015). I will argue that these approaches are problematic. I will present a novel suggestion for how to consistently define constitutive relevance in terms of interventionism. My approach is based on a causal interpretation of manipulability in terms of causal relations between the mechanism's components and what I will call temporal EIO-parts of the phenomenon. Still, this interpretation accounts for the fundamental difference between constitutive relevance and causal relevance.

8.
9.
Entanglement has long been the subject of discussion by philosophers of quantum theory, and has recently come to play an essential role for physicists in their development of quantum information theory. In this paper we show how the formalism of algebraic quantum field theory (AQFT) provides a rigorous framework within which to analyse entanglement in the context of a fully relativistic formulation of quantum theory. What emerges from the analysis are new practical and theoretical limitations on an experimenter's ability to perform operations on a field in one spacetime region that can disentangle its state from the state of the field in other spacelike-separated regions. These limitations show just how deeply entrenched entanglement is in relativistic quantum field theory, and yield a fresh perspective on the ways in which the theory differs conceptually from both standard non-relativistic quantum theory and classical relativistic field theory.

10.
We distinguish two orientations in Weyl's analysis of the fundamental role played by the notion of symmetry in physics, namely an orientation inspired by Klein's Erlangen program and a phenomenological-transcendental orientation. By privileging the former to the detriment of the latter, we sketch a group(oid)-theoretical program—which we call the Klein-Weyl program—for the interpretation of both gauge theories and quantum mechanics in a single conceptual framework. This program is based on Weyl's notion of a “structure-endowed entity” equipped with a “group of automorphisms”. First, we analyze what Weyl calls the “problem of relativity” in the frameworks provided by special relativity, general relativity, and Yang-Mills theories. We argue that both general relativity and Yang-Mills theories can be understood in terms of a localization of Klein's Erlangen program: while the latter describes the group-theoretical automorphisms of a single structure (such as homogeneous geometries), local gauge symmetries and the corresponding gauge fields (Ehresmann connections) can be naturally understood in terms of the groupoid-theoretical isomorphisms in a family of identical structures. Second, we argue that quantum mechanics can be understood in terms of a linearization of Klein's Erlangen program. This stance leads us to an interpretation of the fact that quantum numbers are “indices characterizing representations of groups” (Weyl, 1931a, p. xxi) in terms of a correspondence between the ontological categories of identity and determinateness.

11.
In his biography of Emil Artin, Richard Brauer describes the years 1931–1941 as a time when “Artin spoke through his students and through the members of his mathematical circle” rather than through written publications. This paper explores these seemingly quiet years, when Artin immigrated to America and disseminated his ideas about algebraic number theory through his collaboration with George Whaples, a young American mathematician who had just completed his Ph.D. at the University of Wisconsin. The main result of their work is the use of the product formula for valuations to give an axiomatic characterization of both algebraic number fields and algebraic function fields with a finite field of constants. These two families of fields are exactly the fields for which class field theory is known to hold. We situate their mathematical work in the broader context of algebraic number theory, and their lives within the broader historical context.
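The product formula for valuations that anchors the Artin–Whaples characterization can be stated and checked on a small example (a standard formulation over the rationals, supplied here for illustration):

```latex
% Product formula over \mathbb{Q} (standard statement, for illustration):
\prod_{v} |x|_v \;=\; |x|_\infty \cdot \prod_{p\ \mathrm{prime}} |x|_p \;=\; 1
\qquad \text{for all } x \in \mathbb{Q}^{\times}.
% Worked check for x = 12 = 2^2 \cdot 3:
% |12|_\infty = 12,\quad |12|_2 = 2^{-2} = \tfrac14,\quad |12|_3 = \tfrac13,
% and |12|_p = 1 for every other prime p, so
12 \cdot \tfrac{1}{4} \cdot \tfrac{1}{3} \;=\; 1.
```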

12.
Originally, the expression “Teichmüller theory” referred to the theory that Oswald Teichmüller developed on deformations and on moduli spaces of marked Riemann surfaces. This theory is not an isolated field in mathematics. At different stages of its development, it received strong impetuses from analysis, geometry, and algebraic topology, and it had a major impact on other fields, including low-dimensional topology, algebraic topology, hyperbolic geometry, geometric group theory, representations of discrete groups in Lie groups, symplectic geometry, topological quantum field theory, theoretical physics, and certainly others. Of course, the impacts on these various fields are not equally important, but in some cases (namely, low-dimensional topology, algebraic geometry, and physics) the impact was crucial. At the same time, Teichmüller theory established important connections between the fields mentioned. This is, in part, a consequence of the diversity and richness of the structure that Teichmüller space itself carries. From a more subjective point of view, pondering these connections and applications demonstrates the unity of mathematics. The aim of this paper is to survey the origin of Teichmüller theory and the development of its early major ideas.

13.
Thomas Kuhn suggested that symbolic generalizations are applied to concrete systems by a process involving exemplars and analogical reasoning. Using the related concepts of theoretical and formal templates, I argue that the process of applying templates can in some cases be made explicit and that we do not need to rely on similarity relations and tacit knowledge. In so doing I show how some formal models can be transferred from one scientific field to another. Examples include scale-free networks, the Lotka-Volterra model from biology, and the Goodwin model in economics. I also argue that this explicit approach has advantages over the more psychologically oriented approach of Kuhn and explain the sense in which templates do and do not produce unification.
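One of the formal templates named here, the Lotka-Volterra predator-prey model, is simple enough to sketch in code; the parameter values, initial state, and step size below are hypothetical, chosen only to exhibit the characteristic cycles that make the template transferable across fields.

```python
# Illustrative sketch of the Lotka-Volterra template (hypothetical
# parameters, not values from the paper), integrated with a simple
# fixed-step fourth-order Runge-Kutta scheme.

def lotka_volterra(state, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    """Right-hand side: prey grows and is eaten; predators eat and die."""
    x, y = state  # x = prey, y = predator
    return (alpha * x - beta * x * y, delta * x * y - gamma * y)

def rk4_step(f, state, dt):
    """One classical RK4 step for a system given as a tuple-valued f."""
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (10.0, 5.0)          # hypothetical initial prey/predator numbers
trajectory = [state]
for _ in range(2000):        # integrate to t = 20 with dt = 0.01
    state = rk4_step(lotka_volterra, state, 0.01)
    trajectory.append(state)
```

The same pair of coupled equations, reinterpreted, is what reappears in the Goodwin model: the formal structure transfers even though the subject matter does not.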

14.
Most of our knowledge of Greek and Roman scientific practice and its place in ancient culture is derived from our study of ancient texts. In the last few decades, this written evidence—ancient technical or specialist literature—has begun to be studied using tools of literary analysis to help answer questions about, for instance, how these works were composed, their authors’ intentions and the expectations of their readers. This introduction to Structures and strategies in ancient Greek and Roman technical writing provides an overview of recent scholarship in the area, and of the difficulty of pinning down what ‘technical/specialist literature’ might mean in an ancient context, since Greek and Roman authors communicated scientific knowledge using a wide variety of styles and forms of text (e.g. poetry, dialogues, letters). An outline of the three sections is provided: Form as a mirror of method, in which Sabine Föllinger and Alexander Mueller explore ways in which the structures of texts by Aristotle and Plutarch may reflect methodological concerns; Authors and their implied readers, with contributions by Oliver Stoll, David Creese, Boris Dunsch and Paula Olmos, which examines what ancient texts can tell us about the place of technical knowledge in antiquity; and Science and the uses of poetry, with articles by Jochen Althoff, Michael Coxhead and Laurence Totelin, and a new English translation of the Aetna poem by Harry Hine, which explores the (to us) unexpected roles of poetry in ancient scientific culture.

15.
16.
What have recently been dubbed two ‘miracles’ of general relativity—(1) that all non-gravitational interactions are locally governed by Poincaré invariant dynamical laws; and (2) that, in the regime of experimental practice in which curvature effects may be ignored, the local Poincaré symmetries of the dynamical laws governing matter fields coincide with the local Poincaré symmetries of the dynamical metric field—remain unaccounted for in that theory. In this paper, I demonstrate that these two ‘miracles’ admit of a natural explanation in one particular successor theory to general relativity—namely, perturbative string theory. I argue that this point has important implications when considering both the ‘chronogeometricity’ (that is, the object in question being surveyed by rods and clocks built from matter fields) and spatiotemporal status of the dynamical metric field in both general relativity and perturbative string theory.

17.
In social science in particular, although we have developed methods for reliably discovering the existence of causal relationships, we are not very good at using these to design effective social policy. Cartwright argues that in order to improve our ability to use causal relationships, it is essential to develop a theory of causation that makes explicit the connections between the nature of causation, our best methods for discovering causal relationships, and the uses to which these are put. I argue that Woodward's interventionist theory of causation is uniquely suited to meet Cartwright's challenge. More specifically, interventionist mechanisms can provide the bridge from ‘hunting causes’ to ‘using them’, if interventionists (i) tell us more about the nature of these mechanisms, and (ii) endorse the claim that it is these mechanisms—or whatever constitutes them—that make causal claims true. I illustrate how having an understanding of interventionist mechanisms can allow us to put causal knowledge to use via a detailed example from organic chemistry.

18.
In contrast with some recent theories of infinitesimals as non-Archimedean entities, Leibniz’s mature interpretation was fully in accord with the Archimedean Axiom: infinitesimals are fictions, whose treatment as entities incomparably smaller than finite quantities is justifiable wholly in terms of variable finite quantities that can be taken as small as desired, i.e. syncategorematically. In this paper I explain this syncategorematic interpretation, and how Leibniz used it to justify the calculus. I then compare it with the approach of Smooth Infinitesimal Analysis, as propounded by John Bell. I find some salient differences, especially with regard to higher-order infinitesimals. I illustrate these differences by a consideration of how each approach might be applied to propositions of Newton’s Principia concerning the derivation of force laws for bodies orbiting in a circle and an ellipse. “If the Leibnizian calculus needs a rehabilitation because of too severe treatment by historians in the past half century, as Robinson suggests (1966, 250), I feel that the legitimate grounds for such a rehabilitation are to be found in the Leibnizian theory itself.”—(Bos 1974–1975, 82–83).
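The syncategorematic reading can be glossed in modern quantifier notation (a gloss supplied here for illustration, not the paper's own formalism): a statement ostensibly about an infinitesimal dx abbreviates a claim about variable finite quantities that can be taken as small as desired.

```latex
% A modern gloss on the syncategorematic reading (illustrative only):
% "dx is infinitesimal" abbreviates a quantified claim about finite values.
\frac{d}{dx}\,x^2 = 2x
\quad\text{abbreviates}\quad
\forall \varepsilon > 0\;\; \exists \delta > 0\;\;
\forall e \left( 0 < |e| < \delta \;\Rightarrow\;
\left| \tfrac{(x+e)^2 - x^2}{e} - 2x \right| < \varepsilon \right),
% with no non-Archimedean entity required: the difference quotient
% equals 2x + e, and |e| can be taken smaller than any given \varepsilon.
```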

19.
A newly emerged field within economics, known as geographical economics, claims to have provided a unified approach to the study of spatial agglomerations at different spatial scales by showing how these can be traced back to the same basic economic mechanisms. We analyse this contemporary episode of explanatory unification in relation to major philosophical accounts of unification. In particular, we examine the role of argument patterns in unifying derivations, the role of ontological convictions and mathematical structures in shaping unification, the distinction between derivational and ontological unification, the issue of how explanation and unification relate, and finally the idea that unification comes in degrees.

20.
I explore how the nature, scope, and limits of the knowledge obtained in orbital dynamics has changed in recent years. Innovations in the design of spacecraft trajectories, as well as in astronomy, have led to new logics of theory-testing—that is, new research methodologies—in orbital dynamics. These methodologies—which combine resonance overlap theories, numerical experiments, and the implementation of space missions—were developed in response to the discovery of chaotic dynamical systems in our solar system. In the past few decades, they have replaced the methodology that dominated orbital research in the centuries following Newton's Principia. As a result, the kind of knowledge achieved by orbital research has changed: we can know how orbiting bodies in chaotic systems behave, but only over sufficiently short time scales; and we can reliably measure those temporal limitations, using Lyapunov time.
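The role Lyapunov time plays here can be illustrated on a toy system (the logistic map, a standard chaotic example, not one of the orbital systems the paper studies): the Lyapunov time is the reciprocal of the Lyapunov exponent, the e-folding timescale for the divergence of nearby trajectories, and hence the horizon beyond which prediction degrades.

```python
import math

# Toy illustration (not from the paper): estimating the Lyapunov exponent
# of the logistic map x -> r*x*(1-x) by averaging log|f'(x)| along an
# orbit. The Lyapunov time is its reciprocal.

def lyapunov_exponent(r, x0=0.2, transient=1000, samples=100_000):
    x = x0
    for _ in range(transient):        # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(samples):
        total += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)|
        x = r * x * (1 - x)
    return total / samples

lam = lyapunov_exponent(4.0)     # for r = 4 the exact value is ln 2
lyapunov_time = 1.0 / lam        # roughly 1.44 iterations for r = 4
```

Nearby orbits separate like exp(t / lyapunov_time), so a small initial uncertainty grows by a factor of e every Lyapunov time, which is the precise sense in which chaotic systems are knowable "only over sufficiently short time scales."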


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号