20 similar records found (search time: 31 ms)
1.
Martin Niss 《Archive for History of Exact Sciences》2009,63(3):243-287
This is the second in a series of three papers that charts the history of the Lenz–Ising model (commonly called just the Ising
model in the physics literature) in considerable detail, from its invention in the early 1920s to its recognition as an important
tool in the study of phase transitions by the late 1960s. By focusing on the development in physicists’ perception of the
model’s ability to yield physical insight—in contrast to the more technical perspective in previous historical accounts, for
example, Brush (Rev Modern Phys 39: 883–893, 1967) and Hoddeson et al. (Out of the Crystal Maze. Chapters from the History
of Solid-State Physics. Oxford University Press, New York, pp. 489–616, 1992)—the series aims to cover and explain in depth
why this model went from relative obscurity to a prominent position in modern physics, and to examine the consequences of
this change. In the present paper, which is self-contained, I deal with the development from the early 1950s to the 1960s
and document that this period witnessed a major change in the perception of the model: In the 1950s it was not in the cards
that the model was to become a pivotal tool of theoretical physics in the following decade. In fact, I show, based upon recollections
and research papers, that many of the physicists in the 1950s interested in understanding phase transitions saw the model
as irrelevant for this endeavor because it oversimplifies the nature of the microscopic constituents of the physical systems
exhibiting phase transitions. However, one group, Cyril Domb’s in London, held a more positive view during this decade. To
bring out the basis for their view, I analyze in detail their motivation and work. In the last part of the paper I document
that the model was seen as much more physically relevant in the early 1960s and examine the development that led to this change
in perception. I argue that the main factor behind the change was the realization of the surprising and striking agreement
between aspects of the model, notably its critical behavior, and empirical features of the physical phenomena.
2.
Shaul Katzir 《Archive for History of Exact Sciences》2008,62(5):469-487
In 1918–1919 Walter G. Cady was the first to recognize the significant electrical consequences of the fact that piezoelectric
crystals resonate at very sharp, precise and stable frequencies. Cady was also the first to suggest the employment of these
properties, first as frequency standards and then to control frequencies of electric circuits—an essential component in electronic
technology. Cady’s discovery originated in the course of research on piezoelectric ultrasonic devices for submarine detection
(sonar) during World War I. However, to make the discovery, Cady had to shift his research programme to crystal resonance. This
change followed Cady’s experimental findings and the scientific curiosity that they raised, and was helped by the termination
of the war. Cady’s transition was also a move from “applied” research, aimed at improving a specific technology, to “pure”
research lacking a clear practical aim. This article examines how Cady reached the discovery and his early ideas for its use.
It shows that the discovery was not an instantaneous but a gradual achievement. It further suggests that disinterested “scientific”
research (rather than “engineering” research) was needed in this process, while research aimed at design was required for
the subsequent development of technological devices.
I am very grateful to Chris McGahey for providing me with his research notes taken from Walter Cady’s diaries kept by the
Rhode Island Historical Society, henceforth Diaries. I would like to thank Aharon (Arkee) Eviatar for linguistic comments, Ido Yavetz for our helpful discussion and Jed Buchwald
for his thoughtful comments and editorial work. I thank the Lemelson Center in the National Museum for American History for
a grant that enabled me to study Walter Guyton Cady Papers, 1903–1974, Archives Center, National Museum of American History
(henceforth, ACNMAH) and the staff of the center, especially Alison Oswald, for their help. The following abbreviations are
used: NB—Cady’s research notebooks kept at ACNMAH, AIP - Niels Bohr Library, American Institute of Physics, Cady’s dossier.
3.
From Problems to Structures: the Cousin Problems and the Emergence of the Sheaf Concept
Renaud Chorlay 《Archive for History of Exact Sciences》2010,64(1):1-73
Historical work on the emergence of sheaf theory has mainly concentrated on the topological origins of sheaf cohomology in the period from 1945 to 1950 and on subsequent developments. However, a shift of emphasis both in time-scale and disciplinary
context can help gain new insight into the emergence of the sheaf concept. This paper concentrates on Henri Cartan’s work in the theory of analytic functions of several complex variables and the
strikingly different roles it played at two stages of the emergence of sheaf theory: the definition of a new structure and
formulation of a new research programme in 1940–1944; the unexpected integration into sheaf cohomology in 1951–1952. In order
to bring this two-stage structural transition into perspective, we will concentrate more specifically on a family of problems,
the so-called Cousin problems, from Poincaré (1883) to Cartan. This medium-term narrative provides insight into two more general
issues in the history of contemporary mathematics. First, we will focus on the use of problems in theory-making. Second, we will consider the history of the design of structures in geometrically flavoured contexts—such as the sheaf and fibre-bundle structures—which helps provide a more comprehensive view of the structuralist moment, a moment whose algebraic component has so far been the main focus of historical work.
4.
Einstein’s quantum theory of the monatomic ideal gas: non-statistical arguments for a new statistics
In this article, we analyze the third of three papers, in which Einstein presented his quantum theory of the ideal gas of
1924–1925. Although it failed to attract the attention of Einstein’s contemporaries and although also today very few commentators
refer to it, we argue for its significance in the context of Einstein’s quantum researches. It contains an attempt to extend
and exhaust the characterization of the monatomic ideal gas without appealing to combinatorics. Its ambiguities illustrate
Einstein’s confusion with his initial success in extending Bose’s results and in realizing the consequences of what later
came to be called Bose–Einstein statistics. We discuss Einstein’s motivation for writing a non-combinatorial paper, partly
in response to criticism by his friend Ehrenfest, and we paraphrase its content. Its arguments are based on Einstein’s belief
in the complete analogy between the thermodynamics of light quanta and of material particles and invoke considerations of
adiabatic transformations as well as of dimensional analysis. These techniques were well known to Einstein from earlier work
on Wien’s displacement law, Planck’s radiation theory and the specific heat of solids. We also investigate the possible role
of Ehrenfest in the gestation of the theory.
5.
Thomas Hawkins 《Archive for History of Exact Sciences》2008,62(6):655-717
The theory of nonnegative matrices is an example of a theory motivated in its origins and development by purely mathematical
concerns that later proved to have a remarkably broad spectrum of applications to such diverse fields as probability theory,
numerical analysis, economics, dynamic programming, and demography. At the heart of the theory is what is usually known
as the Perron–Frobenius Theorem. It was inspired by a theorem of Oskar Perron on positive matrices, usually called Perron’s
Theorem. This paper is primarily concerned with the origins of Perron’s Theorem in his masterful work on ordinary and generalized
continued fractions (1907) and its role in inspiring the remarkable work of Frobenius on nonnegative matrices (1912) that
produced, inter alia, the Perron–Frobenius Theorem. The paper is not at all intended exclusively for readers with expertise
in the theory of nonnegative matrices. Anyone with a basic grounding in linear algebra should be able to read this article
and come away with a good understanding of the Perron–Frobenius Theorem as well as its historical origins. The final section
of the paper considers the first major application of the Perron–Frobenius Theorem, namely, to the theory of Markov chains.
When he introduced the eponymous chains in 1908, Markov adumbrated several key notions and results of the Perron–Frobenius
theory albeit within the much simpler context of stochastic matrices; but it was by means of Frobenius’ 1912 paper that the
linear algebraic foundations of Markov’s theory for nonnegative stochastic matrices were first established by R. von Mises
and V. I. Romanovsky.
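For readers who want to see Perron’s Theorem in action, the following sketch runs power iteration on a small positive matrix; the matrix and the code are illustrative assumptions of mine, not material from Hawkins’s paper.

```python
# Power iteration on a positive matrix. Perron's Theorem guarantees that a
# matrix with strictly positive entries has a simple, real, positive dominant
# eigenvalue whose eigenvector can be chosen with strictly positive entries.
# The 3x3 matrix below is purely illustrative.
A = [[2.0, 1.0, 1.0],
     [1.0, 3.0, 1.0],
     [1.0, 1.0, 4.0]]

def matvec(M, v):
    """Plain matrix-vector product for small dense matrices."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# Iterate v -> A v / ||A v||; the iterates converge to the Perron eigenvector.
v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = matvec(A, v)
    norm = max(abs(x) for x in w)
    v = [x / norm for x in w]

# Rayleigh-quotient estimate of the dominant (Perron) eigenvalue.
Av = matvec(A, v)
perron_root = sum(Av[i] * v[i] for i in range(3)) / sum(v[i] * v[i] for i in range(3))
# perron_root is strictly positive, and every entry of v is strictly positive.
```

For merely nonnegative matrices, this convergence can fail; Frobenius’s 1912 extension shows what survives under the additional hypothesis of irreducibility, which is exactly the case relevant to Markov chains.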
6.
We reconstruct essential features of Lagrange’s theory of analytical functions by exhibiting its structure and basic assumptions,
as well as its main shortcomings. We explain Lagrange’s notions of function and algebraic quantity, and we concentrate on
power-series expansions, on the algorithm for derivative functions, and the remainder theorem—especially on the role this
theorem has in solving geometric and mechanical problems. We thus aim to provide a better understanding of Enlightenment mathematics
and to show that the foundations of mathematics did not, for Lagrange, concern the solidity of its ultimate bases, but rather
purity of method—the generality and internal organization of the discipline.
7.
Reinhard Siegmund-Schultze 《Archive for History of Exact Sciences》2006,60(5):431-515
The correspondence between Richard von Mises and George Pólya of 1919/20 contains reflections on two well-known articles by
von Mises on the foundations of probability in the Mathematische Zeitschrift of 1919, and one paper from the Physikalische Zeitschrift of 1918. The topics touched on in the correspondence are: the proof of the central limit theorem of probability theory, von
Mises' notion of randomness, and a statistical criterion for integer-valuedness of physical data. The investigation will hint
at both the fruitfulness and the limits of several of von Mises’ notions such as ‘collective’, ‘distribution’ and ‘complex
adjuncts’ (characteristic functions) for further developments in probability theory and in ‘directional statistics’. By
pointing to the selectiveness of Pólya's criticism, the historical analysis shows the differing expectations of the two men
with respect to the further development of the theory of probability and its applications. The paper thus gives a glimpse
of the provisional state of the theory around 1920, before others such as P. Lévy (1886–1971) and A. N. Kolmogorov (1903–1987)
stepped in and created a new paradigm for probability theory.
8.
According to the received view, the first spyglass was assembled without any theory of how the instrument magnifies. Galileo,
who was the first to use the device as a scientific instrument, improved the power of magnification up to 30 times. How did
he accomplish this feat? Galileo does not tell us what he did. We hold that such improvement of magnification is too intricate
a problem to be solved by trial and error, accidentally stumbling upon a complex procedure. We construct a plausibility argument
and submit that Galileo had a theory of the telescope. He could develop it by analogical reasoning based on the phenomenon
of reflection in mirrors—as it was put to use in surveying instruments—and applied to refraction in sets of lenses. Galileo
could appeal to this analogy and assume Della Porta’s theory of refraction. He could thus turn the spyglass into a revolutionary
scientific instrument—the telescope.
9.
Steffen Ducheyne 《Archive for History of Exact Sciences》2011,65(2):181-227
This article seeks to provide a historically well-informed analysis of an important post-Newtonian area of research in experimental
physics between 1798 and 1898, namely the determination of the mean density of the earth and, by the end of the nineteenth
century, the gravitational constant. Traditionally, research on these matters is seen as a case of “puzzle solving.” In this
article, the author shows that such focus does not do justice to the evidential significance of eighteenth- and nineteenth-century
experimental research on the mean density of the earth and the gravitational constant. As Newton’s theory of universal gravitation
was mainly based on astronomical observation, it remained to be shown that Newton’s law of universal gravitation did not break
down at terrestrial distances. In this context, Cavendish’s experiment and related nineteenth-century experiments played a
decisive role, for they provided converging and increasingly stronger evidence for the universality of Newton’s theory of
gravitation. More precisely, the author argues that, as the accuracy and precision of the experimental apparatuses and of the procedures for eliminating external disturbances improved, the empirical support for the universality of Newton’s theory of gravitation grew correspondingly.
10.
11.
Maarten Bullynck 《Archive for History of Exact Sciences》2009,63(5):553-580
C. F. Gauss’s computational work in number theory attracted renewed interest in the twentieth century due, on the one hand, to the edition of Gauss’s Werke and, on the other hand, to the birth of the digital electronic computer. The involvement of the American mathematicians Derrick Henry Lehmer and Daniel Shanks with Gauss’s work is analysed, especially their continuation of work on topics such as arccotangents, factors of n² + a², and the composition of binary quadratic forms. In general, this strand in Gauss’s reception is part of a more general phenomenon, namely the influence of the computer on mathematics and one of its effects, the reappraisal of mathematical exploration.
I would like to thank the Alexander-von-Humboldt-Stiftung for funding this research. For their comments I would like to thank
Catherine Goldstein, Norbert Schappacher and especially John Brillhart.
12.
A microorganism has to adapt to changing environmental conditions in order to survive. Cells could follow one of two basic
strategies to address such environmental fluctuations. On the one hand, cells could anticipate a fluctuating environment by
spontaneously generating a phenotypically diverse population of cells, with each subpopulation exhibiting different capacities
to flourish in the different conditions. Alternatively, cells could sense changes in the surrounding conditions – such as
temperature, nutritional availability or the presence of other individuals – and modify their behavior to provide an appropriate
response to that information. As we describe, examples of both strategies abound among different microorganisms. Moreover,
successful application of either strategy requires a level of memory and information processing that has not been normally
associated with single cells, suggesting that such organisms do in fact have the capacity to ‘think’.
Received 3 January 2007; accepted 4 April 2007
13.
Zinkernagel RM 《Cellular and molecular life sciences : CMLS》2012,69(10):1635-1640
So-called ‘immunological memory’ is, in my view, a typical example where a field of enquiry, i.e. to understand long-term
protection to survive reexposure to infection, has been overtaken by ‘l’art pour l’art’ of ‘basic immunology’. The aim of
this critical review is to point out some key differences between academic text book-defined immunological memory and protective
immunity as viewed from a co-evolutionary point of view, both from the host and the infectious agents. A key conclusion is
that ‘immunological memory’ of course exists, but only in particular experimental laboratory models measuring ‘quicker and
better’ responses after an earlier immunization. These often do correlate with, but are not the key mechanisms of, protection.
Protection depends on pre-existing neutralizing antibodies or pre-activated T cells at the time of infection—as documented
by the importance of maternal antibodies around birth for survival of the offspring. Importantly, both high levels of antibodies
and of activated T cells are antigen driven. This conclusion has serious implications for our thinking about vaccines and
maintaining a level of protection in the population to deal with old and new infectious diseases.
14.
15.
16.
In 1905, Albert Einstein proposed that the forces that cause the random Brownian motion of a particle also underlie the resistance
to macroscopic motion when a force is applied. This insight, of a coupling between fluctuation (stochastic behavior) and responsiveness
(non-stochastic behavior), founded an important branch of physics. Here we argue that his insight may also be relevant for
understanding evolved biological systems, and we present a ‘fluctuation–response relationship’ for biology. The relationship
is consistent with the idea that biological systems are similarly canalized to stochastic, environmental, and genetic perturbations.
It is also supported by in silico evolution experiments, and by the observation that ‘noisy’ gene expression is often both
more responsive and more ‘evolvable’. More generally, we argue that in biology there is (and always has been) an important
role for macroscopic theory that considers the general behavior of systems without concern for their intimate molecular details.
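For orientation, the 1905 coupling the authors generalize is usually summarized by the Einstein relation; this is a textbook statement of the physics, not a formula quoted from the article.

```latex
% Einstein relation: D measures spontaneous fluctuation (diffusion),
% \mu measures response (drift velocity per unit applied force),
% and the two are linked by the thermal energy k_B T.
D = \mu \, k_B T
```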
17.
Athanase Papadopoulos 《Archive for History of Exact Sciences》2017,71(4):319-336
Nicolas-Auguste Tissot (1824–1897) published a series of papers on cartography in which he introduced a tool which became known later on, among geographers, under the name of the Tissot indicatrix. This tool was broadly used during the twentieth century in the theory and in the practical aspects of the drawing of geographical maps. The Tissot indicatrix is a graphical representation of a field of ellipses on a map that describes its distortion. Tissot studied extensively, from a mathematical viewpoint, the distortion of mappings from the sphere onto the Euclidean plane that are used in drawing geographical maps, and more generally he developed a theory for the distortion of mappings between general surfaces. His ideas are at the heart of the work on quasiconformal mappings that was developed several decades after him by Grötzsch, Lavrentieff, Ahlfors and Teichmüller. Grötzsch mentions the work of Tissot, and he uses the terminology related to his name (in particular, Grötzsch uses the Tissot indicatrix). Teichmüller mentions the name of Tissot in a historical section in one of his fundamental papers where he claims that quasiconformal mappings were used by geographers, but without giving any hint about the nature of Tissot’s work. The name of Tissot is missing from all the historical surveys on quasiconformal mappings. In the present paper, we report on this work of Tissot. We shall mention some related works on cartography, on the differential geometry of surfaces, and on the theory of quasiconformal mappings. This will place Tissot’s work in its proper context. 相似文献
18.
Polyphenolic phytochemicals are ubiquitous in plants, in which they function in various protective roles. A ‘recommended’
human diet contains significant quantities of polyphenolics, as they have long been assumed to be ‘antioxidants’ that scavenge
excessive, damaging, free radicals arising from normal metabolic processes. There is recent evidence that polyphenolics also
have ‘indirect’ antioxidant effects through induction of endogenous protective enzymes. There is also increasing evidence
for many potential benefits through polyphenolic-mediated regulation of cellular processes such as inflammation. Inductive
or signalling effects may occur at concentrations much lower than required for effective radical scavenging. Over the last
2–3 years, there have been many exciting new developments in the elucidation of the in vivo mechanisms of the health benefits of polyphenolics. We summarise the current knowledge of the intake, bio-availability and
metabolism of polyphenolics, their antioxidant effects, regulatory effects on signalling pathways, neuro-protective effects
and regulatory effects on energy metabolism and gut health.
Received 14 May 2007; received after revision 27 June 2007; accepted 24 July 2007
19.
The application of fractal dimension-based constructs to probe the protein interior dates back to the development of the concept
of fractal dimension itself. Numerous approaches have been tried and tested over a course of (almost) 30 years with the aim
of elucidating the various facets of symmetry of self-similarity prevalent in the protein interior. In the last 5 years especially,
there has been a startling upsurge of research that innovatively stretches the limits of fractal-based studies to present
an array of unexpected results on the biophysical properties of protein interior. In this article, we introduce readers to
the fundamentals of fractals, reviewing the commonality (and the lack of it) between these approaches before exploring the
patterns in the results that they produced. Clustering the approaches in major schools of protein self-similarity studies,
we describe the evolution of fractal dimension-based methodologies. The genealogy of approaches (and results) presented here
portrays a clear picture of the contemporary state of fractal-based studies in the context of the protein interior. To underline
the utility of fractal dimension-based measures further, we have performed a correlation dimension analysis on all of the
available non-redundant protein structures, both at the level of an individual protein and at the level of structural domains.
In this investigation, we were able to separately quantify the self-similar symmetries in spatial correlation patterns amongst
peptide–dipole units, charged amino acids, residues with the π-electron cloud and hydrophobic amino acids. The results revealed
that electrostatic environments in the interiors of proteins belonging to ‘α/α toroid’ (all-α class) and ‘PLP-dependent transferase-like’
domains (α/β class) are highly conducive. In contrast, the interiors of ‘zinc finger design’ (‘designed proteins’) and ‘knottins’
(‘small proteins’) were identified as folds with the least conducive electrostatic environments. The fold ‘conotoxins’ (peptides)
could be unambiguously identified as one type with the least stability. The same analyses revealed that peptide–dipoles in
the α/β class of proteins, in general, are more correlated to each other than are the peptide–dipoles in proteins belonging
to the all-α class. The highly favorable electrostatic milieu in the interiors of TIM-barrel and α/β-hydrolase structures could explain
their remarkably conserved (evolutionary) stability in a new light. Finally, we point out certain inherent limitations of
fractal constructs before attempting to identify the areas and problems where the implementation of fractal dimension-based
constructs can be of paramount help to unearth latent information on protein structural properties.
20.
Enric Pérez 《Archive for History of Exact Sciences》2009,63(1):81-125
I discuss in detail the contents of the adiabatic hypothesis, formulated by Ehrenfest in 1916. I focus especially on the paper
he published in 1916 and 1917 in three different journals. I briefly review its precedents and thoroughly analyze its reception
until 1918, including Burgers’s developments and Bohr’s assimilation of them into his own theory. I show that until 1918 the
adiabatic hypothesis did not play an important role in the development of quantum theory.
An erratum to this article is available.