20 similar documents found.
1.
Inferentialists about scientific representation hold that an apparatus's representing a target system consists in the apparatus allowing “surrogative inferences” about the target. I argue that a serious problem for inferentialism arises from the fact that many scientific theories and models contain internal inconsistencies. Inferentialism, left unamended, implies that inconsistent scientific models have unlimited representational power, since an inconsistency permits any conclusion to be inferred. I consider a number of ways that inferentialists can respond to this challenge before suggesting my own solution. I develop an analogy to exploitable glitches in a game. Even though inconsistent representational apparatuses may in some sense allow for contradictions to be generated within them, doing so violates the intended function of the apparatus's parts and hence violates representational “gameplay”.
2.
It is widely recognized that scientific theories are often associated with strictly inconsistent models, but there is little agreement concerning the epistemic consequences. Some argue that model inconsistency supports a strong perspectivism, according to which claims serving as interpretations of models are inevitably and irreducibly perspectival. Others argue that in at least some cases, inconsistent models can be unified as approximations to a theory with which they are associated, thus undermining this kind of perspectivism. I examine the arguments for perspectivism, and contend that its strong form is defeasible in principle, not merely in special cases. The argument rests on the plausibility of scientific knowledge concerning non-perspectival, dispositional facts about modelled systems. This forms the basis of a novel suggestion regarding how to understand the knowledge these models afford, in terms of a contrastive theory of what-questions.
3.
Microbial model systems have a long history of fruitful use in fields that include evolution and ecology. In order to develop further insight into modelling practice, we examine how the competitive exclusion and coexistence of competing species have been modelled mathematically and materially over the course of a long research history. In particular, we investigate how microbial models of these dynamics interact with mathematical or computational models of the same phenomena. Our cases illuminate the ways in which microbial systems and equations work as models, and what happens when they generate inconsistent findings about shared targets. We reveal an iterative strategy of comparative modelling in different media, and suggest reasons why microbial models have a special degree of epistemic tractability in multimodel inquiry.
4.
A distinction is made between theory-driven and phenomenological models. It is argued that phenomenological models are significant means by which theory is applied to phenomena. They act both as sources of knowledge of their target systems and as explanations of the latter's behavior. A version of the shell model of nuclear structure is analyzed, and it is explained why such a model cannot be understood as being subsumed under the theory structure of Quantum Mechanics. Thus its representational capacity does not stem from its close link to theory. It is shown that the shell model yields knowledge about the target and is explanatory of certain behaviors of nuclei. Aspects of the process by which the shell model acquires its representational capacity are analyzed. It is argued that these point to the conclusion that the representational status of the model is a function of its capacity to serve as a source of knowledge and its capacity to postulate and explain underlying mechanisms that give rise to the observed behavior of its target.
5.
Models such as the simple pendulum, isolated populations, and perfectly rational agents play a central role in theorising. It is now widely acknowledged that a study of scientific representation should focus on the role of such imaginary entities in scientists’ reasoning. However, the question is most often cast as follows: How can fictional or abstract entities represent the phenomena? In this paper, I show that this question is not well posed. First, I clarify the notion of representation, and I emphasise the importance of what I call the “format” of a representation for the inferences agents can draw from it. Then, I show that the very same model can be presented under different formats, which do not enable scientists to perform the same inferences. Assuming that the main function of a representation is to allow one to draw predictions and explanations of the phenomena by reasoning with it, I conclude that imaginary models in abstracto are not used as representations: scientists always reason with formatted representations. Therefore, the problem of scientific representation does not lie in the relationship of imaginary entities with real systems. One should rather focus on the variety of formats that are used in scientific practice.
6.
During the 1930s and 1940s, American physical organic chemists employed electronic theories of reaction mechanisms to construct models offering explanations of organic reactions. But two molecular rearrangements presented enormous challenges to model construction. The Claisen and Cope rearrangements were largely inaccessible to experimental investigation, and they confounded explanation in theoretical terms. Drawing on the idea that models can be autonomous agents in the production of scientific knowledge, I argue that one group of models in particular was functionally autonomous from the Hughes–Ingold theory. Cope and Hardy’s models of the Claisen and Cope rearrangements were resources for the exploration of the Hughes–Ingold theory, which otherwise lacked explanatory power. By generating ‘how-possibly’ explanations, these models explained how these rearrangements could happen rather than why they did happen. Furthermore, although these models were apparently closely connected to theory in terms of their construction, I argue that their partial autonomy issued from extra-logical factors concerning the attitudes of American chemists to the Hughes–Ingold theory. And in the absence of a complete theoretical hegemony, a degree of consensus was reached concerning modelling the Claisen rearrangement mechanism.
7.
Though it is held that some models in science have explanatory value, there is no conclusive agreement on what provides them with this value. One common view is that models have explanatory value vis-à-vis some target systems because they are developed using an abstraction process (i.e., a process which involves omitting features). Though I think this is correct, I believe it is not the whole picture. In this paper, I argue that, in addition to the well-known process of abstraction understood as an omission of features or information, there is also a family of abstraction processes that involve aggregation of features or information and that these processes play an important role in endowing the models they are used to build with explanatory value. After offering a taxonomy of abstraction processes involving aggregation, I show by considering in detail several models drawn from different sciences that the abstraction processes involving aggregation that are used to build these models are responsible (at least partially) for their having explanatory value.
8.
From sunspots to the Southern Oscillation: confirming models of large-scale phenomena in meteorology
Christopher Pincock, Studies in History and Philosophy of Science, 2009, 40(1): 45–56
The epistemic problem of assessing the support that some evidence confers on a hypothesis is considered using an extended example from the history of meteorology. In this case, and presumably in others, the problem is to develop techniques of data analysis that will link the sort of evidence that can be collected to hypotheses of interest. This problem is solved by applying mathematical tools to structure the data and connect them to the competing hypotheses. I conclude that mathematical innovations provide crucial epistemic links between evidence and theories precisely because the evidence and theories are mathematically described.
9.
Well-known epistemologies of science have implications for how best to understand knowledge transfer (KT). Yet, to date, no serious attempt has been made to explicate these particular implications. This paper infers views about KT from two popular epistemologies: what we characterize as incommensurabilitist views (after Devitt, 2001; Bird, 2002, 2008; Sankey and Hoyningen-Huene, 2013) and voluntarist views (after Van Fraassen, 1984; Dupré, 2001; Chakravartty, 2015). We argue that views of the former sort define the methodological, ontological, and social conditions under which research operates within ‘different worlds’ (to use Kuhn's expression), and entail that genuine KTs under those conditions should be difficult or even impossible. By contrast, more liberal voluntarist views recognize epistemological processes that allow for transfers across different sciences even under such conditions. After outlining these antithetical positions, we identify two kinds of KTs present in well-known episodes in the history of ecology—specifically, successful model transfers from chemical kinetics and thermodynamics into areas of ecological research—which reveal significant limitations of incommensurabilitist views. We conclude by discussing how the selected examples support a pluralistic voluntarism regarding KT.
10.
In this article, I view realist and non-realist accounts of scientific models within the larger context of the cultural significance of scientific knowledge. I begin by looking at the historical context and origins of the problem of scientific realism, and claim that it is originally of cultural, and not only philosophical, significance. The cultural significance of debates on the epistemological status of scientific models is then related to the question of ‘intelligibility’ and how science, through models, can give us knowledge of the world by presenting us with an ‘intelligible account/picture of the world’, thus fulfilling its cultural-epistemic role. Realists typically assert that science can perform this role, while non-realists deny this. The various strategies adopted by realists and non-realists in making good their respective claims are then traced to their cultural motivations. Finally, I discuss the cultural implications of adopting realist or non-realist views of models through a discussion of the views of Rorty, Gellner, Van Fraassen and Clifford Hooker on the cultural significance of scientific knowledge.
11.
12.
In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term ‘psychometrics’. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers.
13.
14.
15.
This paper defends the deflationary character of two recent views regarding scientific representation, namely R. I. G. Hughes's DDI model and the inferential conception. It is first argued that these views' deflationism is akin to the position of the same name in discussions regarding the nature of truth. There, we are invited to consider the platitudes that the predicate “true” obeys at the level of practice, disregarding any deeper, or more substantive, account of its nature. More generally, for any concept X, a deflationary approach is then defined in opposition to a substantive approach, where a substantive approach to X is an analysis of X in terms of some property P, or relation R, accounting for and explaining the standard use of X. It then becomes possible to characterize a deflationary view of scientific representation in three distinct senses, namely: a “no-theory” view, a “minimalist” view, and a “use-based” view—in line with three standard deflationary responses in the philosophical literature on truth. It is then argued that both the DDI model and the inferential conception may be suitably understood in any of these three senses. The application of these deflationary ‘hermeneutics’ moreover yields significant improvements on the DDI model, which bring it closer to the inferential conception. It is finally argued that what these approaches have in common—the key to any deflationary account of scientific representation—is the denial that scientific representation may be ultimately reduced to any substantive explanatory property of sources, or targets, or their relations.
16.
The “universality” of critical phenomena is much discussed in the philosophy of scientific explanation, idealization, and the philosophy of physics. Lange and Reutlinger recently opposed Batterman concerning the role of certain deliberate distortions in unifying a large class of phenomena, regardless of microscopic constitution. They argue for an essential explanatory role for “commonalities” rather than for idealizations. Building on Batterman's insight, this article aims to show that assessing the differences between the universality of critical phenomena and two paradigmatic cases of the “commonality strategy”—the ideal gas model and the harmonic oscillator model—is necessary to avoid the objections raised by Lange and Reutlinger. Taking these universal explanations as benchmarks for critical phenomena reveals the importance of the different roles played by the analogies underlying the use of the models. A special combination of physical and formal analogies allows one to explain the epistemic autonomy of the universality of critical phenomena through an explicative loop.
17.
How can false models be explanatory? And how can they help us to understand the way the world works? Sometimes scientists have little hope of building models that approximate the world they observe. Even in such cases, I argue, the models they build can have explanatory import. The basic idea is that scientists provide causal explanations of why the regularity entailed by an abstract and idealized model fails to obtain. They do so by relaxing some of its unrealistic assumptions. This method of ‘explanation by relaxation’ captures the explanatory import of some important models in economics. I contrast this method with the accounts that Daniel Hausman and Nancy Cartwright have provided of explanation in economics. Their accounts are unsatisfactory because they require that the economic model regularities obtain, which is rarely the case. I go on to argue that counterfactual regularities play a central role in achieving ‘understanding by relaxation.’ This has a surprising implication for the relation between explanation and understanding: Achieving scientific understanding does not require the ability to explain observed regularities.
18.
Many disciplines and scientific fields have undergone a computational turn in the past several decades. This paper analyzes this sort of turn by investigating the case of computational quantum chemistry. The main claim is that the transformation from quantum to computational quantum chemistry involved changes along three dimensions. First, on the side of instrumentation, small computers and a networked infrastructure displaced the centralized mainframe architecture. Second, a new conception of computational modeling became feasible and assumed a crucial role. And third, the field of computational quantum chemistry became organized in a market-like fashion, and this market extends far beyond the circle of quantum theory experts. These claims are substantiated by an investigation of density functional theory (DFT), arguably the pivotal theory in the turn to computational quantum chemistry around 1990.
19.
Lydia Patton, Studies in History and Philosophy of Science, 2009, 40(3): 281–289
The Marburg neo-Kantians argue that Hermann von Helmholtz’s empiricist account of the a priori does not account for certain knowledge, since it is based on a psychological phenomenon, trust in the regularities of nature. They argue that Helmholtz’s account raises the ‘problem of validity’ (Gültigkeitsproblem): how to establish a warranted claim that observed regularities are based on actual relations. I reconstruct Heinrich Hertz’s and Ludwig Wittgenstein’s Bild-theoretic answer to the problem of validity: that scientists and philosophers can depict the necessary a priori constraints on states of affairs in a given system, and can establish whether these relations are actual relations in nature. The analysis of necessity within a system is a lasting contribution of the Bild theory. However, Hertz and Wittgenstein argue that the logical and mathematical sentences of a Bild are rules, tools for constructing relations, and that the rules themselves are meaningless outside the theory. Carnap revises the argument for validity by attempting to give semantic rules for translation between frameworks. Russell and Quine object that pragmatics better accounts for the role of a priori reasoning in translating between frameworks. The conclusion of the tale, then, is a partial vindication of Helmholtz’s original account.
20.
C. A. Gearhart, Archive for History of Exact Sciences, 1985, 32(3–4): 207–222
A theoretical analysis of the potential accuracy of early modern planetary models employing compound circles suggests that fairly simple extensions of those models can be sufficiently accurate to meet the demands of Tycho Brahe's observations in both ecliptic longitude and latitude. Some of these steps, such as the substitution of the true sun for the mean sun, had already been taken by Kepler before he abandoned circular models. Other extensions, involving one or two extra epicycles, were well within the mathematical capabilities of sixteenth- and seventeenth-century astronomers. Hence neither the failure of astronomers before Kepler to correct errors in planetary positions nor Kepler's decision to abandon circular models was a consequence of inherent limitations in those models.