Similar Documents
20 similar documents found.
1.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. Here, however, agreement among philosophers of science has tended to end, as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting with regard to the epistemic value of modelling, given its focus on the relationship between a single model and its supposed target system and its neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions and enable learning by means of construction and manipulation.

2.
The notion of template has been advocated by Paul Humphreys and others as an illuminating unit of analysis in the philosophy of scientific modelling. Templates are supposed to have the dual functions of representing target systems and of facilitating quantitative manipulation. A resulting worry is that wide-ranging cross-disciplinary use of templates might compromise their representational function and reduce them to mere formalisms. In this paper, we argue that templates are valuable units of analysis in reconstructing cross-disciplinary modelling. Central to our discussion are the ways in which Lotka-Volterra (LV) models are used to analyse processes of technology diffusion. We illuminate both the similarities and differences between contributions to this case of cross-disciplinary modelling by reconstructing them as transfer of a template, without reducing the template to a mere formalism or a computational model. This requires differentiating the interpretation of templates from that of the models based on them. This differentiation allows us to claim that the LV models of technology diffusion that we review are the result of template transfer: conformist in some contributions, creative in others.
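The kind of template transfer at issue can be made concrete with a toy sketch: the two-species Lotka-Volterra competition template is integrated numerically and given a technology-diffusion reading. The variable names, parameter values, and functions below are hypothetical illustrations invented for this sketch, not taken from the contributions the paper reviews.

```python
# Illustrative sketch (hypothetical parameters): one formal template,
# two interpretations. Under the ecological reading x, y are population
# sizes of competing species; under the technology-diffusion reading
# they are adoption levels of two competing technologies, with K1, K2
# read as market capacities instead of carrying capacities.

def lv_competition_step(x, y, dt, r1, r2, K1, K2, a12, a21):
    """One Euler step of the LV competition template:
       dx/dt = r1*x*(1 - (x + a12*y)/K1)
       dy/dt = r2*y*(1 - (y + a21*x)/K2)
    """
    dx = r1 * x * (1 - (x + a12 * y) / K1)
    dy = r2 * y * (1 - (y + a21 * x) / K2)
    return x + dt * dx, y + dt * dy

def simulate(x0, y0, steps=5000, dt=0.01, **params):
    """Integrate the template forward and return the final state."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = lv_competition_step(x, y, dt, **params)
    return x, y

if __name__ == "__main__":
    # Incumbent technology x starts widespread; entrant y starts small.
    # With both interaction coefficients below 1, the template predicts
    # stable coexistence of the two technologies.
    x, y = simulate(0.9, 0.01, r1=0.5, r2=1.0, K1=1.0, K2=1.0,
                    a12=0.8, a21=0.6)
    print(f"long-run adoption levels: x={x:.2f}, y={y:.2f}")
```

Note that nothing in the mathematics distinguishes the two readings; the difference lies entirely in how the symbols are interpreted, which is the distinction between a template and the models based on it.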

3.
In this paper I examine the relationship between historians, philosophers and sociologists of science, and indeed scientists themselves. I argue that (i) they co-habit a shared intellectual territory (science and its past); and (ii) they should be able to do so peacefully, and with mutual respect, even if they disagree radically about how to describe the methods and results of science. I then go on to explore some of the challenges to mutually respectful cohabitation between history, philosophy and sociology of science. I conclude by identifying a familiar kind of project in the philosophy of science which seeks to explore the worldview of a particular scientific discipline, and argue that it too has a right to explore the shared territory even though some historians and sociologists may find it methodologically suspect.

4.
We examine the diversity of strategies of modelling networks in (micro) economics and (analytical) sociology. Field-specific conceptions of what explaining (with) networks amounts to or systematic preference for certain kinds of explanatory factors are not sufficient to account for differences in modelling methodologies. We argue that network models in both sociology and economics are abstract models of network mechanisms and that differences in their modelling strategies derive to a large extent from field-specific conceptions of the way in which a good model should be a general one. Whereas the economics models aim at unification, the sociological models aim at a set of mechanism schemas that are extrapolatable to the extent that the underlying psychological mechanisms are general. These conceptions of generality induce specific biases in mechanistic explanation and are related to different views of when knowledge from different fields should be seen as relevant.

5.
“Colligation”, a term first introduced into the philosophy of science by William Whewell (1840), today sparks renewed interest beyond Whewell scholarship. In this paper, we argue that adopting the notion of colligation in current debates in philosophy of science can contribute to our understanding of scientific models. Specifically, studying colligation allows us to have a better grasp of how integrating diverse model components (empirical data, theory, useful idealization, visual and other representational resources) in a creative way may produce novel generalizations about the phenomenon investigated. Our argument is built both on a theoretical appraisal of Whewell’s philosophy of science and on a historical rehabilitation of his scientific work on tides. Adopting a philosophy-of-science-in-practice perspective, we show how colligation emerged from Whewell’s empirical work on tides. The production of idealized maps (“cotidal maps”) illustrates the unifying and creative power of the activity of colligating in scientific practice. We show the importance of colligation in modelling practices more generally by looking at its epistemic role in the construction of the San Francisco Bay Model.

6.
The demise of the Superconducting Supercollider (SSC) is often explained in terms of the strain that it placed on the federal budget of the United States and a change in national security interests with the end of the Cold War. Recent work by Steve Fuller provides a framework for re-examining this episode in epistemological terms using the work of Kuhn and Popper. Using this framework, it is tempting to explain the demise as resulting from the overly Kuhnian character of its proponents, who supposedly argued for its construction by appealing to the importance of testing the predictions of a specific paradigm (i.e. the Standard Model). On this reading, the SSC case appears as an example of how Kuhn’s paradigm-driven view of science was invoked to keep science closed and autonomous from society. I argue that the SSC episode should not be viewed as supporting the displacement of Kuhn’s view of science in favour of Popper’s, and that such a displacement is detrimental to the project of integrating discussion of science into the public sphere. Drawing upon Rouse and Wimsatt, I argue that understanding paradigms as practices blunts some criticisms against Kuhn’s model, and that his model should play an important epistemological role in the aforementioned project.

7.
I bring out the limitations of four important views of what the target of useful climate model assessment is. Three of these views are drawn from philosophy. They include the views of Elisabeth Lloyd and Wendy Parker, and an application of Bayesian confirmation theory. The fourth view I criticise is based on the actual practice of climate model assessment. In bringing out the limitations of these four views, I argue that an approach to climate model assessment that neither demands too much of such assessment nor threatens to be unreliable will, in typical cases, have to aim at something other than the confirmation of claims about how the climate system actually is. This means, I suggest, that the Intergovernmental Panel on Climate Change’s (IPCC’s) focus on establishing confidence in climate model explanations and predictions is misguided. It also means that standard epistemologies of science with pretensions to generality, e.g., Bayesian epistemologies, fail to illuminate the assessment of climate models. I go on to outline a view that neither demands too much nor threatens to be unreliable, a view according to which useful climate model assessment typically aims to show that certain climatic scenarios are real possibilities and, when the scenarios are determined to be real possibilities, partially to determine how remote they are.

8.
To study climate change, scientists employ computer models, which approximate target systems with varying levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley’s (2004) distinction between evidential strength and security, and Lloyd’s (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of the evidence supporting climatological inferences, including the finding that global warming is occurring and that its primary causes are anthropogenic.
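The core move of robustness analysis in Levins's sense can be illustrated with a deliberately simple sketch: a qualitative hypothesis counts as robust when it survives across an ensemble of structurally different models of the same target. The "models", forcing series, and function names below are hypothetical stand-ins invented for illustration, not actual climate models.

```python
# Toy sketch of robustness analysis: does the qualitative result
# "the trend is positive under increasing forcing" hold across
# structurally different (hypothetical) models of the same target?

def linear_trend(series):
    """Least-squares slope of a time series (index used as time)."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

def model_a(forcing):
    """Responds linearly and instantaneously to forcing."""
    return [0.5 * f for f in forcing]

def model_b(forcing):
    """Damped, lagged response: state relaxes towards the forcing."""
    out, state = [], 0.0
    for f in forcing:
        state += 0.3 * (f - state)
        out.append(state)
    return out

def model_c(forcing):
    """Saturating response to forcing."""
    return [f / (1 + 0.2 * f) for f in forcing]

def robust_warming(models, forcing):
    """The hypothesis 'warming trend > 0' is robust across the ensemble
    if every model yields a positive trend under the same forcing."""
    return all(linear_trend(m(forcing)) > 0 for m in models)

forcing = [0.1 * t for t in range(50)]  # steadily increasing forcing
print(robust_warming([model_a, model_b, model_c], forcing))
```

The philosophical disputes the abstract refers to concern what, if anything, such cross-model agreement confirms; the sketch only shows the structure of the inference, not its evidential force.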

9.
During the 1930s and 1940s, American physical organic chemists employed electronic theories of reaction mechanisms to construct models offering explanations of organic reactions. But two molecular rearrangements presented enormous challenges to model construction. The Claisen and Cope rearrangements were largely inaccessible to experimental investigation, and they confounded explanation in theoretical terms. Drawing on the idea that models can be autonomous agents in the production of scientific knowledge, I argue that one group of models in particular was functionally autonomous from the Hughes–Ingold theory. Cope and Hardy’s models of the Claisen and Cope rearrangements were resources for the exploration of the Hughes–Ingold theory, which otherwise lacked explanatory power. By generating ‘how-possibly’ explanations, these models explained how these rearrangements could happen rather than why they did happen. Furthermore, although these models were apparently closely connected to theory in terms of their construction, I argue that their partial autonomy stemmed from extra-logical factors concerning the attitudes of American chemists towards the Hughes–Ingold theory. And in the absence of complete theoretical hegemony, a degree of consensus was reached concerning modelling of the Claisen rearrangement mechanism.

10.
Though it is held that some models in science have explanatory value, there is no conclusive agreement on what provides them with this value. One common view is that models have explanatory value vis-à-vis some target systems because they are developed using an abstraction process (i.e., a process which involves omitting features). Though I think this is correct, I believe it is not the whole picture. In this paper, I argue that, in addition to the well-known process of abstraction understood as an omission of features or information, there is also a family of abstraction processes that involve aggregation of features or information and that these processes play an important role in endowing the models they are used to build with explanatory value. After offering a taxonomy of abstraction processes involving aggregation, I show by considering in detail several models drawn from different sciences that the abstraction processes involving aggregation that are used to build these models are responsible (at least partially) for their having explanatory value.

11.
Model organisms are at once scientific models and concrete living things. It is widely assumed by philosophers of science that (1) model organisms function much like other kinds of models, and (2) that insofar as their scientific role is distinctive, it is in virtue of representing a wide range of biological species and providing a basis for generalizations about those targets. This paper uses the case of human embryonic stem cells (hESC) to challenge both assumptions. I first argue that hESC can be considered model organisms, analogous to classic examples such as Escherichia coli and Drosophila melanogaster. I then discuss four contrasts between the epistemic role of hESC in practice, and the assumptions about model organisms noted above. These contrasts motivate an alternative view of model organisms as a network of systems related constructively and developmentally to one another. I conclude by relating this result to other accounts of model organisms in recent philosophy of science.

12.
Thomas Kuhn suggested that symbolic generalizations are applied to concrete systems by a process involving exemplars and analogical reasoning. Using the related concepts of theoretical and formal templates, I argue that the process of applying templates can in some cases be made explicit and that we do not need to rely on similarity relations and tacit knowledge. In so doing I show how some formal models can be transferred from one scientific field to another. Examples include scale-free networks, the Lotka-Volterra model from biology, and the Goodwin model in economics. I also argue that this explicit approach has advantages over the more psychologically oriented approach of Kuhn and explain the sense in which templates do and do not produce unification.

13.
Although the interdisciplinary nature of contemporary biological sciences has been addressed by philosophers, historians, and sociologists of science, the different ways in which engineering concepts and methods have been applied in biology have been somewhat neglected. We examine, using the mechanistic philosophy of science as an analytic springboard, the transfer of network methods from engineering to biology through the cases of two biology laboratories operating at the California Institute of Technology. The two laboratories study gene regulatory networks, but in remarkably different ways. The research strategy of the Davidson lab fits squarely into the traditional mechanist philosophy in its aim to decompose and reconstruct, in detail, gene regulatory networks of a chosen model organism. In contrast, the Elowitz lab constructs minimal models that do not attempt to represent any particular naturally evolved genetic circuits. Instead, it studies the principles of gene regulation through a template-based approach that is applicable to any kind of network, whether biological or not. We call on mechanists to consider whether the latter approach can be accommodated by the mechanistic approach, and what kinds of modifications it would imply for the mechanistic paradigm of explanation, if it were to address modelling more generally.

14.
This article addresses knowledge transfer dynamics in agent-based computational social science. The goal of the text is twofold. First, it describes the tensions arising from the convergence of different disciplinary traditions in the emergence of this new area of study; second, it shows how these tensions are dealt with through the articulation of distinctive practices of knowledge production and transmission. To achieve this goal, three major instances of knowledge transfer dynamics in agent-based computational social science are analysed. The first instance is the emergence of the research field: relations of knowledge transfer and cross-fertilisation between agent-based computational social science and wider, more established disciplinary areas (complexity science, computational science and social science) are discussed. The second instance is the approach to scientific modelling in the field. It is shown how the practice of agent-based modelling is affected by the conflicting coexistence of shared methodological commitments transferred from both empirical and formal disciplines. Lastly, the third instance pertains to internal practices of knowledge production and transmission. Through the discussion of these practices, the tensions arising from converging dissimilar disciplinary traditions in agent-based computational social science are highlighted.

15.
Recent philosophy of science has seen a number of attempts to understand scientific models by looking to theories of fiction. In previous work, I have offered an account of models that draws on Kendall Walton’s ‘make-believe’ theory of art. According to this account, models function as ‘props’ in games of make-believe, like children’s dolls or toy trucks. In this paper, I assess the make-believe view through an empirical study of molecular models. I suggest that the view gains support when we look at the way that these models are used and the attitude that users take towards them. Users’ interaction with molecular models suggests that they do imagine the models to be molecules, in much the same way that children imagine a doll to be a baby. Furthermore, I argue, users of molecular models imagine themselves viewing and manipulating molecules, just as children playing with a doll might imagine themselves looking at a baby or feeding it. Recognising this ‘participation’ in modelling, I suggest, points towards a new account of how models are used to learn about the world, and helps us to understand the value that scientists sometimes place on three-dimensional, physical models over other forms of representation.

16.
In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by greater graspability, state-of-the-art models have higher representational accuracy and representational depth. We then compare the fitness of process-based models for providing understanding with that of data-driven models built with machine learning. We show that at first glance, data-driven models seem either unnecessary or inadequate for understanding. However, a case study from atmospheric research demonstrates that this is a false dilemma. Data-driven models can be useful tools for understanding, specifically for phenomena for which scientists can argue from the coherence of the models with background knowledge to their representational accuracy, and for which model complexity can be reduced such that the models are graspable to a satisfactory extent.

17.
In this paper, I investigate the nature of empirical findings that provide evidence for the characterization of a scientific phenomenon, and the defeasible nature of this evidence. To do so, I explore an exemplary instance of the rejection of a characterization of a scientific phenomenon: memory transfer. I examine the reasons why the characterization of memory transfer was rejected, and analyze how this rejection was tied to researchers’ failures to resolve experimental issues relating to replication and confounds. I criticize the presentation of the case by Harry Collins and Trevor Pinch, who claim that no sufficient reason was provided to abandon research on memory transfer. I argue that skeptics about memory transfer adopted what I call a defeater strategy, in which researchers exploit the defeasibility of the evidence for a characterization of a phenomenon.

18.
Current controversies about knowledge integration reflect conflicting ideas of what it means to “take Indigenous knowledge seriously”. While there is increased interest in integrating Indigenous and Western scientific knowledge in various disciplines such as anthropology and ethnobiology, integration projects are often accused of recognizing Indigenous knowledge only insofar as it is useful for Western scientists. The aim of this article is to use tools from philosophy of science to develop a model of both successful integration and integration failures. On the one hand, I argue that cross-cultural recognition of property clusters leads to an ontological overlap that makes knowledge integration often epistemically productive and socially useful. On the other hand, I argue that knowledge integration is limited by ontological divergence. Adequate models of Indigenous knowledge will therefore have to take integration failures seriously and I argue that integration efforts need to be complemented by a political notion of ontological self-determination.

20.
Over the last decades, science has grown increasingly collaborative and interdisciplinary, and has come to depart in important ways from the classical accounts of the development of science offered by historically inclined philosophers of science half a century ago. In this paper, I shall provide a new account of the structure and development of contemporary science based on analyses of, first, cognitive resources and their relations to domains, and second, the distribution of cognitive resources among collaborators and the epistemic dependence that this distribution implies. Against this background I shall describe different ideal types of research activities and analyze how they differ. Finally, analyzing the values that drive science towards different kinds of research activities, I shall sketch the main mechanisms underlying the perceived tension between disciplines and interdisciplinarity, and argue for a redefinition of accountability and quality control for interdisciplinary and collaborative science.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号