Found 20 similar documents; search took 601 ms.
1.
Bruce Pourciau 《Archive for History of Exact Sciences》2003,57(4):267-311
The first proposition of the Principia records two fundamental properties of an orbital motion: the Fixed Plane Property (that the orbit lies in a fixed plane)
and the Area Property (that the radius sweeps out equal areas in equal times). Taking at the start the traditional view, that
by an orbital motion Newton means a centripetal motion – this is a motion “continually deflected from the tangent toward
a fixed center” – we describe two serious flaws in the Principia's argument for Proposition 1, an argument based on a polygonal impulse approximation. First, the persuasiveness of the argument
depends crucially on the validity of the Impulse Assumption: that every centripetal motion can be represented as a limit of polygonal impulse motions. Yet Newton tacitly takes the Impulse Assumption for granted. The resulting gap in the argument for Proposition 1 is serious,
for only a nontrivial analysis, involving the careful estimation of accumulating local errors, verifies the Impulse Assumption.
Second, Newton's polygonal approximation scheme has an inherent and ultimately fatal disability: it does not establish nor
can it be adapted to establish the Fixed Plane Property. Taking then a different view of what Newton means by an orbital motion
– namely that an orbital motion is by definition a limit of polygonal impulse motions – we show in this case that polygonal approximation can be used to establish both the fixed plane and area properties without too much trouble, but that Newton's own argument still
has flaws. Moreover, a crucial question, haunted by error accumulation and planarity problems, now arises: How plentiful are
these differently defined orbital motions? Returning to the traditional view, that Newton's orbital motions are by definition
centripetal motions, we go on to give three proofs of the Area Property which Newton “could have given” – two using polygonal
approximation and a third using curvature – as well as a proof of the Fixed Plane Property which he “almost could have given.”
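A minimal numerical sketch (a modern illustration, not Newton's own construction) makes the Area Property of the polygonal impulse scheme concrete: the body coasts inertially for an interval, then receives an instantaneous impulse aimed at the fixed center. Because every impulse is parallel to the radius, the cross product r × v never changes, so each swept triangle has the same area. The inverse-square impulse strength and all parameter values below are illustrative assumptions.

```python
def polygonal_impulse_orbit(r0, v0, k, dt, steps):
    """Polygonal impulse motion in the plane: coast inertially for dt, then
    receive an instantaneous kick directed at the fixed center (the origin).
    An inverse-square kick strength k is assumed purely for illustration."""
    (x, y), (vx, vy) = r0, v0
    points = [(x, y)]
    for _ in range(steps):
        x, y = x + vx * dt, y + vy * dt                      # inertial segment
        d3 = (x * x + y * y) ** 1.5
        vx, vy = vx - k * dt * x / d3, vy - k * dt * y / d3  # radial impulse
        points.append((x, y))
    return points

def swept_area(p, q):
    # Area of the triangle with vertices: center (origin), p, q.
    return 0.5 * abs(p[0] * q[1] - p[1] * q[0])

pts = polygonal_impulse_orbit(r0=(1.0, 0.0), v0=(0.0, 1.0), k=1.0, dt=0.01, steps=200)
areas = [swept_area(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
# Area Property: every swept triangle has the same area, because each
# impulse points along the radius and so leaves r x v unchanged.
assert max(areas) - min(areas) < 1e-12
```

Note that nothing in this sketch verifies the Impulse Assumption itself; it only exhibits the equal-areas behavior of one polygonal impulse motion.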
(Received August 14, 2002)
Published online March 26, 2003
Communicated by G. Smith
2.
Melvin Cohn 《Cellular and molecular life sciences : CMLS》2010,67(17):2851-2862
This essay was written to illustrate how one might think about the immune system. The formulation of valid theories is the
basic component of how-to-think, because the reduction of large and complex data sets by the use of logic into a succinct model
with predictability and explanatory power is the only way that we have to arrive at “understanding”. Whether it is to achieve
effective manipulation of the system or for pure pleasure, “understanding” is a universally agreed upon goal. It is in the
nature of science that theories are there to be disproven. An experimentally disproven theory is a successful one. As they
fail experimental tests one by one, we end up with a default theory, that is, one that has yet to fail. Here, using the self–nonself
discrimination as an example, how-to-think as I see it will be illustrated.
3.
J. Bruce Brackenridge 《Archive for History of Exact Sciences》2003,57(4):313-336
In the 1687 Principia, Newton gave a solution to the direct problem (given the orbit and center of force, find the central force) for a conic-section
with a focal center of force (answer: a reciprocal square force) and for a spiral orbit with a polar center of force (answer:
a reciprocal cube force). He did not, however, give solutions for the two corresponding inverse problems (given the force
and center of force, find the orbit). He gave a cryptic solution to the inverse problem of a reciprocal cube force, but offered no solution for the reciprocal square force. Some take this omission as an indication that Newton could not solve the reciprocal square force, for, they ask, why else
would he not select this important problem? Others claim that “it is child's play” for him, as evidenced by his 1671 catalogue
of quadratures (tables of integrals). The answer to that question is obscured for all who attempt to work through Newton's
published solution of the reciprocal cube force because it is done in the synthetic geometric style of the 1687 Principia rather than in the analytic algebraic style that Newton employed until 1671. In response to a request from David Gregory
in 1694, however, Newton produced an analytic version of the body of the proof, but one which still had a geometric conclusion.
Newton's charge is to find both “the orbit” and “the time in orbit.” In the determination of the dependence of the time on orbital position, t(r), Newton
evaluated an integral of the form ∫dx/xⁿ to calculate a finite algebraic equation for the area swept out as a function of the radius, but he did not write out the
analytic expression for time t = t(r), even though he knew that the time t is proportional to that area. In the determination
of the orbit, θ(r), Newton obtained an integral of the form ∫dx/√(1−x²) for the area that is proportional to the angle θ, an integral he had shown in his 1669 On Analysis by Infinite Equations to be equal to arcsin(x). Since the solution must therefore contain a transcendental function, he knew that a finite
algebraic solution for θ = θ(r) did not exist for “the orbit” as it had for “the time in orbit.” In contrast to these two
solutions for the inverse cube force, however, it is not possible in the inverse square solution to generate a finite algebraic
expression for either “the orbit” or “the time in orbit.” In fact, in Lemma 28, Newton offers a demonstration that the
area of an ellipse cannot be given by a finite equation. I claim that the limitation of Lemma 28 forces Newton to reject the
inverse square force as an example and to choose instead the reciprocal cube force as his example in Proposition 41.
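The contrast between the two integrals in the inverse-cube problem can be checked numerically (a modern illustration, not Newton's method). The "time" integral is a power-rule integral with a finite algebraic antiderivative, while the "orbit" integral agrees with the transcendental arcsine. The trapezoid helper, the exponent n = 3, and the integration limits are all illustrative choices.

```python
import math

def trapezoid(f, a, b, steps=100_000):
    """Plain trapezoidal rule; accurate enough to illustrate the point."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    for i in range(1, steps):
        total += f(a + i * h)
    return total * h

# The "time in orbit" integral reduces to int dx/x^n, which has a finite
# algebraic antiderivative; with n = 3 it is -1/(2 x^2).
time_area = trapezoid(lambda x: x ** -3, 1.0, 2.0)
assert abs(time_area - 0.375) < 1e-6          # -1/8 - (-1/2) = 3/8

# The "orbit" integral is int dx/sqrt(1 - x^2) = arcsin(x), a
# transcendental function with no finite algebraic expression.
orbit_area = trapezoid(lambda x: 1 / math.sqrt(1 - x * x), 0.0, 0.5)
assert abs(orbit_area - math.asin(0.5)) < 1e-6
```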
(Received August 14, 2002)
Published online March 26, 2003
Communicated by G. Smith
4.
Jeffrey A. Oaks 《Archive for History of Exact Sciences》2009,63(2):169-203
It is shown in this article that the two sides of an equation in medieval Arabic algebra are aggregations of the algebraic
“numbers” (powers) with no operations present. Unlike an expression such as our 3x + 4, the Arabic polynomial “three things and four dirhams” is merely a collection of seven objects of two different types.
Ideally, the two sides of an equation were polynomials so the Arabic algebraists preferred to work out all operations of the
enunciation to a problem before stating an equation. Some difficult problems which involve square roots and divisions cannot
be handled nicely by this basic method, so we do find square roots of polynomials and expressions of the form “A divided by B” in some equations. But rather than initiate a reconsideration of the notion of equation, these developments were used only
for particularly complex problems. Also, the algebraic notation practiced in the Maghreb in the later middle ages was developed
with the “aggregations” interpretation in mind, so it had no noticeable impact on the concept of polynomial. Arabic algebraists
continued to solve problems by working operations before setting up an equation to the end of the medieval period.
I thank Mahdi Abdeljaouad, who provided comments on an earlier version of this paper, and Haitham Alkhateeb, for his help
with some of the translations.
Notes on references: When page numbers are separated by a “ / ”, the first number is to the Arabic text, and the second to
the translation. Also, a semicolon separates page number from line number. Example: [Al-Khwārizmī, 1831, 31;6/43] refers to
page 31 line 6 of the Arabic text, and page 43 of the translation.
5.
The continuing disappearance of “pure” Ca2+ buffers (total citations: 1; self-citations: 1; citations by others: 0)
B. Schwaller 《Cellular and molecular life sciences : CMLS》2009,66(2):275-300
Advances in the understanding of a class of Ca2+-binding proteins usually referred to as “Ca2+ buffers” are reported. Proteins historically embraced within this group include parvalbumins (α and β), calbindin-D9k, calbindin-D28k
and calretinin. Within the last few years a wealth of data has accumulated that allow a better understanding of the functions
of particular family members of the >240 identified EF-hand Ca2+-binding proteins encoded by the human genome. Studies often involving transgenic animal models have revealed that they exert
their specific functions within an intricate network consisting of many proteins and cellular mechanisms involved in Ca2+ signaling and Ca2+ homeostasis, and are thus an essential part of the Ca2+ homeostasome. Recent results indicate that calbindin-D28k, possibly also calretinin and oncomodulin, the mammalian β parvalbumin,
might have additional Ca2+ sensor functions, leaving parvalbumin and calbindin-D9k as the only “pure” Ca2+ buffers.
Received 10 September 2008; received after revision 15 October 2008; accepted 4 November 2008
6.
M. Nauenberg 《Archive for History of Exact Sciences》2010,64(3):269-300
The translation of Newton’s geometrical Propositions in the Principia into the language of the differential calculus in the form developed by Leibniz and his followers has been the subject of
many scholarly articles and books. One of the most vexing problems in this translation concerns the transition from the discrete
polygonal orbits and force impulses in Prop. 1 to the continuous orbits and forces in Prop. 6. Newton justified this transition
by Lemma 1 on prime and ultimate ratios, which was a concrete formulation of a limit, but it took another century before this
concept was established on a rigorous mathematical basis. This difficulty was mirrored in the newly developed calculus which
dealt with differentials that vanish in this limit, and therefore were considered to be fictional quantities by some mathematicians.
Despite these problems, early practitioners of the differential calculus like Jacob Hermann, Pierre Varignon, and Johann Bernoulli
succeeded without apparent difficulties in applying the differential calculus to the solution of the fundamental problem of
orbital motion under the action of inverse square central forces. By following their calculations and describing some essential
details that have been ignored in the past, I clarify the reason why the lack of rigor in establishing the continuum limit
was not a practical problem.
7.
The aim of this paper is to examine the work of Tschirnhaus, La Hire and Leibniz on the theory of caustics, a subject whose history is closely linked to geometrical optics. The curves in question were examined by the most eminent mathematicians of the 17th century such as Huygens, Barrow and Newton and were subsequently studied analytically from the time of Tschirnhaus until the 19th century. Leibniz was interested in caustics and the subject probably inspired him in his discovery of the concept of envelopes of lines.
8.
Stewart Duncan 《Studies in history and philosophy of science》2010,41(1):11-18
This paper discusses Leibniz’s interpretation and criticism of Hobbesian materialism in the period 1698-1705. Leibniz had continued to be interested in Hobbes’s work, despite not engaging with it as intensively as he did earlier (around 1670). Leibniz offers an interpretation of Hobbes that explains Hobbes’s materialism as derived from his imagistic theory of ideas. Leibniz then criticizes Hobbes’s view as being based on a faulty theory of ideas, and as having problematic consequences, particularly with regard to what one says about God. Some of this criticism is found in the New essays, but equally significant is Leibniz’s correspondence with Damaris Masham, who proposed an argument for materialism very much like that which Leibniz attributed to Hobbes. The paper concludes by discussing the suggestion that Leibniz at this time, particularly in the New essays, himself adopted Hobbesian ideas. Though Leibniz did use some of Hobbes’s examples, and did think at this time that all souls were associated with bodies, the resulting position is still rather distant from Hobbesian materialism.
9.
10.
Sreerupa Challa Francis Ka-Ming Chan 《Cellular and molecular life sciences : CMLS》2010,67(19):3241-3253
Recent evidence indicates that cell death can be induced through multiple mechanisms. Strikingly, the same death signal can
often induce apoptotic as well as non-apoptotic cell death. For instance, inhibition of caspases often converts an apoptotic
stimulus to one that causes necrosis. Because a dedicated molecular circuitry distinct from that controlling apoptosis is
required for necrotic cell injury, terms such as “programmed necrosis” or “necroptosis” have been used to distinguish stimulus-dependent
necrosis from those induced by non-specific traumas (e.g., heat shock) or secondary necrosis induced as a consequence of apoptosis.
In several experimental models, programmed necrosis/necroptosis has been shown to be a crucial control point for pathogen-
or injury-induced inflammation. In this review, we will discuss the molecular mechanisms that regulate programmed necrosis/necroptosis
and its biological significance in pathogen infections, drug-induced cell injury, and trauma-induced tissue damage.
11.
Lage H 《Cellular and molecular life sciences : CMLS》2008,65(20):3145-3167
Although various mechanisms involved in anticancer multidrug resistance (MDR) can be identified, it remains a major problem
in oncology. Beyond that, the introduction of new “targeted” drugs has not solved the problem. On the contrary, it has been
demonstrated that the “classical” MDR-associated mechanisms are similar or identical to those causing resistance to these
novel agents. These mechanisms include the enhanced activity of drug pumps, i.e. ABC or alternative transporters; modulation
of cellular death pathways; alteration and repair of target molecules; and various less common mechanisms. Together they build
a complex network of cellular pathways and molecular mechanisms mediating an individual MDR phenotype. Although the application
of new high-throughput “-omics” technologies has identified multiple new gene-/protein expression signatures or factors associated
with drug resistance, so far none of these findings has been useful for creating improved diagnostic assays, for prediction
of individual therapy response, or for development of updated chemosensitizers.
Received 05 March 2008; received after revision 21 May 2008; accepted 23 May 2008
12.
Cohn M 《Cellular and molecular life sciences : CMLS》2012,69(3):405-412
Jacques Monod used to say, “Never trust an experiment that is not supported by a good theory.” Theory or conceptualization
permits us to put order or structure into a vast amount of data in a way that increases understanding. Validly competing theories
are most useful when they make testably disprovable predictions. Illustrating the theory–experiment interaction is the goal
of this exercise. Stated bleakly, the answers derived from the theory-based experiments described here would impact dramatically
on how we understand immune behavior.
13.
Shaul Katzir 《Archive for History of Exact Sciences》2008,62(5):469-487
In 1918–1919 Walter G. Cady was the first to recognize the significant electrical consequences of the fact that piezoelectric
crystals resonate at very sharp, precise and stable frequencies. Cady was also the first to suggest the employment of these
properties, first as frequency standards and then to control frequencies of electric circuits—an essential component in electronic
technology. Cady’s discovery originated in the course of research on piezoelectric ultrasonic devices for submarine detection
(sonar) during World War I. However, to make the discovery Cady had to change his research programme to crystal resonance. This
change followed Cady’s experimental findings and the scientific curiosity that they raised, and was helped by the termination
of the war. Cady’s transition was also a move from “applied” research, aimed at improving a specific technology, to “pure”
research lacking a clear practical aim. This article examines how Cady reached the discovery and his early ideas for its use.
It shows that the discovery was not an instantaneous but a gradual achievement. It further suggests that disinterested “scientific”
research (rather than “engineering” research) was needed in this process, while research aimed at design was required for
the subsequent development of technological devices.
I am very grateful to Chris McGahey for providing me with his research notes taken from Walter Cady’s diaries kept by the
Rhode Island Historical Society, henceforth Diaries. I would like to thank Aharon (Arkee) Eviatar for linguistic comments, Ido Yavetz for our helpful discussion and Jed Buchwald
for his thoughtful comments and editorial work. I thank the Lemelson Center in the National Museum for American History for
a grant that enabled me to study Walter Guyton Cady Papers, 1903–1974, Archives Center, National Museum of American History
(henceforth, ACNMAH) and the staff of the center, especially Alison Oswald, for their help. The following abbreviations are
used: NB—Cady’s research notebooks kept at ACNMAH, AIP—Niels Bohr Library, American Institute of Physics, Cady’s dossier.
14.
Penha Maria Cardoso Dias 《Archive for History of Exact Sciences》1999,54(1):67-86
In 1751, LEONHARD EULER established “harmony” between two principles that had been stated by PIERRE-LOUIS-MOREAU DE MAUPERTUIS a few years earlier. These principles are intended to be the foundations of Mechanics; they are the principle of rest and the principle of least action. My claim is that the way in which “harmony” is achieved sets the foundations of so-called Analytical Mechanics: it discloses the physical bases of the general ideas, concepts, and motivations of the formalism. My paper intends to show what those physical bases are, and how a picture of the formalism issues from them. This picture is shown to be recast in JOSEPH-LOUIS LAGRANGE's justification of the formalism, which strengthens my claim.
15.
A letter written by Christiaan Huygens to David Gregory (19 January 1694) is published here for the first time. After an introduction about the contacts between the two correspondents, an annotated English translation of the letter is given. The letter forms part of the wider correspondence about the ‘new calculus’, in which L'Hospital and Leibniz also participated, and gives some new evidence about Huygens's ambivalent attitude towards the new developments. Therefore, two mathematical passages in the letter are discussed separately. An appendix contains the original Latin text.
16.
17.
In this paper we will try to explain how Leibniz justified the idea of an exact arithmetical quadrature. We will do this by comparing Leibniz's exposition with that of John Wallis. In short, we will show that the idea of exactitude in matters of quadratures relies on two fundamental requisites that, according to Leibniz, the infinite series have, namely, that of regularity and that of completeness. In the first part of this paper, we will go deeper into three main features of Leibniz's method, that is: it is an infinitesimal method, it looks for an arithmetical quadrature and it proposes a result that is not approximate, but exact. After that, we will deal with the requisite of the regularity of the series, pointing out that, unlike the inductive method proposed by Wallis, Leibniz propounded some sort of intellectual recognition of what is invariant in the series. Finally, we will consider the requisite of completeness of the series. We will see that, although both Wallis and Leibniz introduced the supposition of completeness, the German thinker went beyond the English mathematician, since he recognized that it is not necessary to look for a number for the quadrature of the circle, given that we have a series that is equal to the area of that curvilinear figure.
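A small sketch of the idea in modern terms (an illustration, not Leibniz's own presentation): his arithmetical quadrature of the circle gives π/4 as the infinite series 1 − 1/3 + 1/5 − 1/7 + …, every partial sum of which is an exact rational number. Only the complete series, not any finite partial sum, equals the area. The function name and the term count below are illustrative.

```python
from fractions import Fraction
import math

def leibniz_partial_sum(terms):
    """Partial sum of Leibniz's series pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    Computed with exact rational arithmetic: each partial sum is a
    Fraction, i.e. an 'arithmetical' (number-based) approximation."""
    return sum(Fraction((-1) ** k, 2 * k + 1) for k in range(terms))

# The first partial sums are exact rationals: 1, 2/3, 13/15, ...
assert leibniz_partial_sum(2) == Fraction(2, 3)

# The sums converge (slowly) toward pi/4; no finite sum reaches it.
s = leibniz_partial_sum(1000)
assert abs(float(s) - math.pi / 4) < 1e-3
```

The slow convergence is part of the historical point: the series is exact as a completed whole, even though any truncation is only approximate.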
18.
T. Hubert S. Grimal P. Carroll A. Fichard-Carroll 《Cellular and molecular life sciences : CMLS》2009,66(7):1223-1238
Collagens are extracellular proteins characterized by a structure in triple helices. There are 28 collagen types which differ
in size, structure and function. Their architectural and functional roles in connective tissues have been widely assessed.
In the nervous system, collagens are rare in the vicinity of the neuronal soma, occupying mostly a “marginal” position, such
as the meninges, the basement membranes and the sensory end organs. In neural development, however, where various ECM molecules
are known to be determinant, recent studies indicate that collagens are no exception, participating in axonal guidance, synaptogenesis
and Schwann cell differentiation. Insights into collagen function in the brain have also been derived from neural pathophysiological
conditions. This review summarizes the significant advances which underscore the function and importance of collagens in the
nervous system.
Received 09 September 2008; received after revision 24 October 2008; accepted 28 October 2008
19.
W. C. Wimsatt 《Archive for History of Exact Sciences》2012,66(4):359-396
A square tabular array was introduced by R. C. Punnett in 1907 to visualize systematically and economically the combination of gametes to make genotypes according to Mendel’s theory. This mode of representation evolved and rapidly became standardized as the canonical way of representing like problems in genetics. Its advantages over other contemporary methods are discussed, as are ways in which it evolved to increase its power and efficiency, and responded to changing theoretical perspectives. It provided a natural visual decomposition of a complex problem into a number of inter-related stages. This explains its computational and conceptual power, for one could simply “read off” answers to a wide variety of questions simply from the “right” visual representation of the problem, and represent multiple problems, and multiple layers of problems in the same diagram. I relate it to prior work on the evolution of Weismann diagrams by Griesemer and Wimsatt (What Philosophy of Biology Is, Martinus-Nijhoff, the Hague, 1989), and discuss a crucial change in how it was interpreted that midwifed its success.
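The combinatorial bookkeeping a Punnett square performs can be sketched in a few lines (a modern illustration; Punnett of course worked on paper): rows are one parent's gametes, columns the other's, and genotype ratios can be "read off" the cells.

```python
def punnett_square(gametes1, gametes2):
    """Punnett's tabular array: rows are one parent's gametes, columns the
    other's; each cell combines the two into a genotype (alleles sorted so
    that 'aA' and 'Aa' are written the same way)."""
    return [[''.join(sorted(g1 + g2)) for g2 in gametes2] for g1 in gametes1]

# Monohybrid cross Aa x Aa: each heterozygous parent yields gametes A and a.
square = punnett_square(['A', 'a'], ['A', 'a'])
for row in square:
    print(row)      # ['AA', 'Aa'] then ['Aa', 'aa']

# The 1 AA : 2 Aa : 1 aa genotype ratio is simply "read off" the table.
cells = [cell for row in square for cell in row]
assert [cells.count(g) for g in ('AA', 'Aa', 'aa')] == [1, 2, 1]
```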
20.
Sahotra Sarkar 《Archive for History of Exact Sciences》2012,66(4):397-426
This paper reconstructs the history of the introduction and use of iterative algorithms in conservation biology in the 1980s
and early 1990s in order to prioritize areas for protection as nature reserves. The importance of these algorithms was that
they led to greater economy in spatial extent (“efficiency”) in the selection of areas to represent biological features adequately
(that is, to a specified level) compared to older methods of scoring and ranking areas using criteria such as biotic “richness”
(the number of features of interest). The development of these algorithms was critical to producing a research program for
conservation biology that was distinct from ecology and eventually led to what came to be called systematic conservation planning.
Very similar algorithmic approaches were introduced independently in the 1980–1990 period in Australia, South Africa, and
(arguably) the United Kingdom. The key rules in these algorithms were the use of rarity and what came to be called complementarity
(the number of new or under-represented features in an area relative to those that had already been selected). Because these
algorithms were heuristic, they were not guaranteed to produce optimal (most “efficient”) solutions. However, complementarity
came to be seen as a principle rather than a rule in an algorithm and its use was also advocated for the former reason. Optimal
solutions could be produced by reformulating the reserve selection problem in a mathematical programming formalism and using
exact algorithms developed in that context. A dispute over the relevance of full optimality arose and was never resolved.
Moreover, exact algorithms could not easily incorporate criteria determining the spatial configuration of networks of selected
areas, in contrast to heuristic algorithms. Meanwhile metaheuristic algorithms emerged in the 1990s and came to be seen as
a credible, more effective alternative to the heuristic algorithms. Ultimately, what was important about these developments
was that the reserve selection problem came to be viewed as a complex optimal decision problem under uncertainty, resource, and
other constraints. It was a type of problem that had no antecedent in traditional ecology.