20 similar documents found; search took 0 ms
1.
Elly Dekker, Annals of Science, 2013, 70(6): 541-566
The impact on globe making of the change from a Ptolemaic to a Copernican world-view is examined. As well as showing a map of the Earth and the Heavens, the main use of globes was originally to demonstrate natural phenomena as they are observed from a geocentric perspective. In the second half of the eighteenth century, some belated attempts were made to construct so-called Copernican globes for this purpose. This late response did not stop the production and use of the common Ptolemaic globe. It is argued that the technological developments of the nineteenth century made the globe's role as a demonstration model superfluous, and thus contributed more to the downfall of the common Ptolemaic globe than any revolution in science did.
4.
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes in three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure took over the lead from centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market extends far beyond the circle of quantum theory experts. These claims are substantiated by an investigation of so-called density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.
6.
George Abraham, Archive for History of Exact Sciences, 1984, 30(1): 1-6
B. L. van der Waerden's thesis, that the motion of Mars in the Stobart Tables was computed by linear methods based on the assumption of a piecewise constant velocity, is discussed in this paper, and some further evidence is presented.
8.
In ancient Greece, two models were proposed to explain planetary motion: the homocentric spheres of Eudoxus and the epicycle-and-deferent system. At least in a qualitative way, both models could explain retrograde motion, the most challenging phenomenon to account for using circular motions. Nevertheless, there is another explanandum: during retrograde motion the planets increase in brightness. It is natural to interpret a change of brightness, i.e., of apparent size, as a change in distance. Now, while according to the Eudoxian model the planet is always equidistant from the earth, according to the epicycle-and-deferent system the planet changes its distance from the earth, approaching it during retrograde motion, just as observed. So it is usually affirmed that the main reason for the rejection of Eudoxus' homocentric spheres in favor of the epicycle-and-deferent system was that the first cannot explain the manifest increase in planetary brightness during retrograde motion, while the second can. In this paper I show that this historical hypothesis is not as firmly founded as it is usually believed to be.
9.
The paper examines philosophical issues that arise in contexts where one has many different models for treating the same system. I show why in some cases this appears relatively unproblematic (models of turbulence), while others present genuine difficulties when attempting to interpret the information that models provide (nuclear models). What the examples show is that while complementary models needn't be a hindrance to knowledge acquisition, the kind of inconsistency present in nuclear cases is, since it is indicative of a lack of genuine theoretical understanding. It is important to note that the differences in modeling do not result directly from the status of our knowledge of turbulent flows as opposed to nuclear dynamics—both face fundamental theoretical problems in the construction and application of models. However, as we shall see, the 'problem context(s)' in which the modeling takes place play a decisive role in evaluating the epistemic merit of the models themselves. Moreover, the theoretical difficulties that give rise to inconsistent as opposed to complementary models (in the cases I discuss) impose epistemic and methodological burdens that cannot be overcome by invoking philosophical strategies like perspectivism, paraconsistency or partial structures.
10.
Microbial model systems have a long history of fruitful use in fields that include evolution and ecology. In order to develop further insight into modelling practice, we examine how the competitive exclusion and coexistence of competing species have been modelled mathematically and materially over the course of a long research history. In particular, we investigate how microbial models of these dynamics interact with mathematical or computational models of the same phenomena. Our cases illuminate the ways in which microbial systems and equations work as models, and what happens when they generate inconsistent findings about shared targets. We reveal an iterative strategy of comparative modelling in different media, and suggest reasons why microbial models have a special degree of epistemic tractability in multimodel inquiry.
12.
How can false models be explanatory? And how can they help us to understand the way the world works? Sometimes scientists have little hope of building models that approximate the world they observe. Even in such cases, I argue, the models they build can have explanatory import. The basic idea is that scientists provide causal explanations of why the regularity entailed by an abstract and idealized model fails to obtain. They do so by relaxing some of its unrealistic assumptions. This method of ‘explanation by relaxation’ captures the explanatory import of some important models in economics. I contrast this method with the accounts that Daniel Hausman and Nancy Cartwright have provided of explanation in economics. Their accounts are unsatisfactory because they require that the economic model regularities obtain, which is rarely the case. I go on to argue that counterfactual regularities play a central role in achieving ‘understanding by relaxation.’ This has a surprising implication for the relation between explanation and understanding: Achieving scientific understanding does not require the ability to explain observed regularities.
13.
It is widely recognized that scientific theories are often associated with strictly inconsistent models, but there is little agreement concerning the epistemic consequences. Some argue that model inconsistency supports a strong perspectivism, according to which claims serving as interpretations of models are inevitably and irreducibly perspectival. Others argue that in at least some cases, inconsistent models can be unified as approximations to a theory with which they are associated, thus undermining this kind of perspectivism. I examine the arguments for perspectivism, and contend that its strong form is defeasible in principle, not merely in special cases. The argument rests on the plausibility of scientific knowledge concerning non-perspectival, dispositional facts about modelled systems. This forms the basis of a novel suggestion regarding how to understand the knowledge these models afford, in terms of a contrastive theory of what-questions.
14.
Lydia Patton, Studies in History and Philosophy of Science, 2009, 40(3): 281-289
The Marburg neo-Kantians argue that Hermann von Helmholtz’s empiricist account of the a priori does not account for certain knowledge, since it is based on a psychological phenomenon, trust in the regularities of nature. They argue that Helmholtz’s account raises the ‘problem of validity’ (Gültigkeitsproblem): how to establish a warranted claim that observed regularities are based on actual relations. I reconstruct Heinrich Hertz’s and Ludwig Wittgenstein’s Bild theoretic answer to the problem of validity: that scientists and philosophers can depict the necessary a priori constraints on states of affairs in a given system, and can establish whether these relations are actual relations in nature. The analysis of necessity within a system is a lasting contribution of the Bild theory. However, Hertz and Wittgenstein argue that the logical and mathematical sentences of a Bild are rules, tools for constructing relations, and the rules themselves are meaningless outside the theory. Carnap revises the argument for validity by attempting to give semantic rules for translation between frameworks. Russell and Quine object that pragmatics better accounts for the role of a priori reasoning in translating between frameworks. The conclusion of the tale, then, is a partial vindication of Helmholtz’s original account.
15.
H. Bedouelle, M. Hofnung, Comptes rendus des séances de l'Académie des sciences. Série D, Sciences naturelles, 1978, 287(9): 891-894
The concepts of validation, evaluation, and predictive values of short-term tests for carcinogenicity are rigorously defined. The relationships between the parameters measuring these concepts are established. This allows an estimation of the practical efficiency of the Ames Salmonella test (1). A new strategy for the evaluation of tests is proposed.
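The relationship the abstract alludes to, between a test's intrinsic parameters and its predictive values, can be illustrated with the standard Bayes'-rule identities. This is a minimal sketch with hypothetical figures, not the paper's definitions or data:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value of a screening test,
    given its sensitivity, specificity, and the prevalence of the
    condition in the tested population (all in [0, 1])."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Hypothetical example: a fairly accurate test applied where only
# 1% of substances are carcinogenic; PPV stays modest at low prevalence.
ppv, npv = predictive_values(sensitivity=0.90, specificity=0.95, prevalence=0.01)
```

The point of the sketch is that PPV depends strongly on prevalence, which is why validation and evaluation strategies for short-term tests must be analyzed together rather than in isolation.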
17.
This paper deals with the economic interpretation of the unobserved components model in the light of the apparent problem posed by previous work in that several practiced methodologies seem to lead to very different models of certain economic variables. A detailed empirical analysis is carried out to show how the failure in obtaining quasi-orthogonal components can seriously bias the interpretation of some decomposition procedures. Finally, the forecasting performance (in both the short and long run) of these decomposition models is analyzed in comparison with other alternatives.
18.
Empirical studies in the area of sovereign debt have used statistical models singularly to predict the probability of debt rescheduling. Unfortunately, researchers have made few efforts to test the reliability of these model predictions or to identify a superior prediction model among competing models. This paper tested neural network, OLS, and logit models' predictive abilities regarding debt rescheduling of less developed countries (LDC). All models predicted well out-of-sample. The results demonstrated a consistent performance of all models, indicating that researchers and practitioners can rely on neural networks or on the traditional statistical models to give useful predictions. Copyright © 2001 John Wiley & Sons, Ltd.
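The logit approach compared in this study turns a linear score of country indicators into a rescheduling probability via the logistic function. A minimal sketch follows; the coefficients and the two indicators are purely hypothetical stand-ins, not the paper's estimates:

```python
import math

def logit_predict(weights, bias, features):
    """Predicted probability from a fitted logit model:
    p = 1 / (1 + exp(-(bias + w . x)))."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted coefficients on two illustrative indicators
# (e.g. debt-service ratio and reserves-to-imports ratio):
p = logit_predict(weights=[2.0, -1.5], bias=-0.5, features=[0.8, 0.3])

# Out-of-sample classification then applies a chosen cutoff:
will_reschedule = p > 0.5
```

OLS would instead fit the 0/1 rescheduling indicator directly, and a neural network would replace the linear score with a learned nonlinear one; the study's finding is that all three yield comparably useful out-of-sample predictions.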
19.
Existing scholarship on animal models tends to foreground either of the two major roles research organisms play in different epistemic contexts, treating their representational and instrumental roles separately. Based on an empirical case study, this article explores the changing relationship between the two epistemic roles of a research organism over the span of a decade, while the organism was used to achieve various knowledge ends. This rat model was originally intended as a replica of human susceptibility to cardiac arrest. In a fortunate stroke of serendipity, however, the experimenters detected the way mother-infant interactions regulated the pups’ resting cardiac rate. This intriguing outcome thus became the model’s new representational target and began driving the development of an experimental system. Henceforth, the model acquired an instrumental function, serving to detect and measure system-specific differences. Its subsequent development involved creating stimulus-response measures to explain and theorize those differences. It was this instrumental use of the model that pushed the experimenters into uncharted territory and conferred on the model an ability to adapt to varied epistemic contexts. Despite the prominence of this instrumental role, however, the model’s representational power continued to guide research. The model’s representational target was widened beyond heart rate to reflect other functional phenomena, such as behavioral activity and sleep/wake rhythm. The rat model was thus transformed from an experimental organism designed to instantiate cardiac regulation to a model organism taken to represent the development of a whole, intact animal under the regulatory influence of maternal care. This article examines this multifaceted transformation within the context of the salient shifts in modeling practice and variations in the model’s representational power. It thus explores how the relationship between the representational and instrumental uses of the model changed with respect to the varying exigencies of the investigative context, foregrounding its contextual versatility.
20.
Though it is held that some models in science have explanatory value, there is no conclusive agreement on what provides them with this value. One common view is that models have explanatory value vis-à-vis some target systems because they are developed using an abstraction process (i.e., a process which involves omitting features). Though I think this is correct, I believe it is not the whole picture. In this paper, I argue that, in addition to the well-known process of abstraction understood as an omission of features or information, there is also a family of abstraction processes that involve aggregation of features or information, and that these processes play an important role in endowing the models they are used to build with explanatory value. After offering a taxonomy of abstraction processes involving aggregation, I show by considering in detail several models drawn from different sciences that the abstraction processes involving aggregation that are used to build these models are responsible (at least partially) for their having explanatory value.