20 related records found.
1.
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and in the predictions/retrodictions, especially once supported by a variety-of-evidence framework. I present climate models of greenhouse-gas global warming in the 20th century as an example, and emphasize climate scientists' discussions of robust models and causal aspects. The account is intended to be applicable to a broad array of sciences that use complex modeling techniques.
2.
Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be adjusted to fit the available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation.
3.
The translation of a mathematical model into a numerical one employs various modifications in order to make the model accessible for computation. Such modifications include discretizations, approximations, heuristic assumptions, and other methods. The paper investigates the divergent styles of mathematical and numerical models in the case of a specific piece of code in a current atmospheric model. Cognizance of these modifications means that the question of the role and function of scientific models has to be reworked. Numerical models are neither pure intermediaries between theory and data nor autonomous tools of inquiry. Instead, theory and data are transformed into a new symbolic form of research, because computation has become an essential requirement of scientific practice. The question therefore arises: what do numerical (climate) models really represent?
4.
Cities are not only major contributors to global climate change but also stand at the forefront of climate change impact. Quantifying and assessing the risk potentially induced by climate change is of great significance for cities undertaking proactive climate adaptation and risk prevention. However, most previous studies focus on the global, national or regional scale; only a few have attempted to examine climate change risk at the urban scale, and even fewer have reviewed the recent literature on it. As a result, quantitative assessment of climate change risk for cities remains highly challenging. To fill this gap, the article critically reviews the recent literature on urban-scale climate change risk assessment and classifies it into four major categories of studies, which jointly constitute a stepwise modelling chain from global climate change to urban-scale risk assessment. On this basis, the study summarizes recent research progress and discusses the major challenges still to be overcome: the seamless coupling of climate simulations across scales, the reproduction of compound climate events, the incorporation of non-market and long-lasting impacts, and the representation of risk transmission within or beyond a city. Furthermore, future directions for advancing quantitative assessment of urban-scale climate change risk are highlighted, with fresh insights into improving study methodology, enriching knowledge of climate change impacts on cities, enhancing the abundance and accessibility of data, and exploring best practices for providing city-specific climate risk services.
5.
Ankeny and Leonelli (2016) propose “repertoires” as a new way to understand the stability of certain research programs as well as scientific change in general. By bringing a more complete range of social, material, and epistemic elements into one framework, they position their work as a correction to the Kuhnian impulse in philosophy of science and other areas of science studies. I argue that this “post-Kuhnian” move is not complete, and that repertoires maintain an internalist perspective. Comparison with an alternative framework, the “sociotechnical imaginaries” of Jasanoff and Kim (2015), illustrates precisely which elements of practice are externalized by Ankeny and Leonelli. Specifically, repertoires discount the role of audience, without whom the repertoires of science are unintelligible, and lack an explicit place for ethical and political imagination, which provides meaning for the otherwise mechanical promotion of particular research programs. This comparison reveals, I suggest, two distinct modes of scholarship, one internalist and the other critical. While repertoires can be modified to meet the needs of critical STS scholars and to reject Kuhn's internalism completely, whether or not we do so depends on what we want our scholarship to achieve.
6.
Nicholaos Jones, Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 2009, 40(2): 124-132
General Relativity and the Standard Model are often touted as the most rigorously and extensively confirmed scientific hypotheses of all time. Nonetheless, these theories appear to have consequences that are inconsistent with evidence about phenomena for which, respectively, quantum effects and gravity matter. This paper suggests an explanation for why the theories are not disconfirmed by such evidence. The key to this explanation is an approach to scientific hypotheses that allows their actual content to differ from their apparent content. This approach does not appeal to ceteris-paribus qualifiers or counterfactuals or similarity relations. And it helps to explain why some highly idealized hypotheses are not treated in the way that a thoroughly refuted theory is treated but instead as hypotheses with limited domains of applicability.
7.
Non-epistemic values pervade climate modelling, as is now well documented and widely discussed in the philosophy of climate science. Recently, Parker and Winsberg have drawn attention to what can be termed “epistemic inequality”: the risk that climate models might more accurately represent the future climates of the geographical regions prioritised by the values of the modellers. In this paper, we promote value management as a way of overcoming epistemic inequality. We argue that value management can be seriously considered as soon as the value-free ideal and inductive-risk arguments commonly used to frame discussions of value influence in climate science are replaced by alternative social accounts of objectivity. We consider objectivity in Longino's sense, as well as strong objectivity in Harding's sense, to be relevant options here, because they offer concrete proposals that can guide scientific practice in evaluating and designing so-called multi-model ensembles and, ultimately, improve their capacity to quantify and express uncertainty in climate projections.
8.
First, I argue that scientific progress is possible in the absence of increasing verisimilitude in science's theories. Second, I argue that increasing theoretical verisimilitude is not the central, or primary, dimension of scientific progress. Third, I defend my previous argument that unjustified changes in scientific belief may be progressive. Fourth, I illustrate how false beliefs can promote scientific progress in ways that cannot be explicated by appeal to verisimilitude.
9.
According to the foundationalist picture, shared by many rationalists and positivist empiricists, science makes cognitive progress by accumulating justified truths. Fallibilists, who point out that complete certainty cannot be achieved in empirical science, can still argue that even successions of false theories may progress toward the truth. This proposal was supported by Karl Popper with his notion of truthlikeness or verisimilitude. Popper's own technical definition failed, but the idea that scientific progress means increasing truthlikeness can be expressed by defining degrees of truthlikeness in terms of similarities between states of affairs. This paper defends the verisimilitude approach against Alexander Bird, who argues that the “semantic” definition (in terms of truth or truthlikeness alone) is not sufficient to define progress, and that the “epistemic” definition referring to justification and knowledge is more adequate. Here Bird ignores the crucial distinction between real progress and estimated progress, explicated by the difference between absolute (and usually unknown) degrees of truthlikeness and their evidence-relative expected values. Further, it is argued that Bird's idea of returning to the cumulative model of growth requires an implausible trick of transforming past false theories into true ones.
10.
To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley's (2004) distinction between evidential strength and security, and Lloyd's (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of the evidence supporting climatological inferences, including the finding that global warming is occurring and that its primary causes are anthropogenic.
11.
Calls for research on climate engineering have increased in the last two decades, but there remains widespread agreement that many climate engineering technologies (in particular, forms involving global solar radiation management) present significant ethical risks and require careful governance. However, proponents of research argue, ethical restrictions on climate engineering research should not be imposed on early-stage work such as in silico modeling studies. Such studies, it is argued, do not pose risks to the public, and the knowledge gained from them is necessary for assessing the risks and benefits of climate engineering technologies. I argue that this position, which I call the “broad research-first” stance, cannot be maintained in light of the entrance of nonepistemic values into climate modeling. I analyze the roles that can be played by nonepistemic political and ethical values in the design, tuning, and interpretation of climate models. I then argue that, in the context of early-stage climate engineering research, the embeddedness of values will lead to value judgments that could harm stakeholder groups or impose researcher values on non-consenting populations. I conclude by calling for more robust reflection on the ethics and governance of early-stage climate engineering research.
12.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. Here, however, agreement among philosophers of science has tended to end, as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling, given its focus on the relationship between a single model and its supposed target system and its neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.
13.
We propose a framework to describe, analyze, and explain the conditions under which scientific communities organize themselves to do research, particularly within large-scale, multidisciplinary projects. The framework centers on the notion of a research repertoire, which encompasses well-aligned assemblages of the skills, behaviors, and material, social, and epistemic components that a group may use to practice certain kinds of science, and whose enactment affects the methods and results of research. This account provides an alternative to the idea of Kuhnian paradigms for understanding scientific change in the following ways: (1) it does not frame change as primarily generated and shaped by theoretical developments, but rather takes account of administrative, material, technological, and institutional innovations that contribute to change and explicitly questions whether and how such innovations accompany, underpin, and/or undercut theoretical shifts; (2) it thus allows for tracking of the organization, continuity, and coherence in research practices which Kuhn characterized as ‘normal science’ without relying on the occurrence of paradigmatic shifts and revolutions to be able to identify relevant components; and (3) it requires particular attention be paid to the performative aspects of science, whose study Kuhn pioneered but which he did not extensively conceptualize. We provide a detailed characterization of repertoires and discuss their relationship with communities, disciplines, and other forms of collaborative activities within science, building on an analysis of historical episodes and contemporary developments in the life sciences, as well as cases drawn from social and historical studies of physics, psychology, and medicine.
14.
In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational depth. We then compare the fitness-for-providing-understanding of process-based models with that of data-driven models built with machine learning. We show that at first glance, data-driven models seem either unnecessary or inadequate for understanding. However, a case study from atmospheric research demonstrates that this is a false dilemma. Data-driven models can be useful tools for understanding, specifically for phenomena for which scientists can argue from the coherence of the models with background knowledge to their representational accuracy, and for which the model complexity can be reduced such that they are graspable to a satisfactory extent.
15.
Studies in History and Philosophy of Science, 2013, 44(4): 643-651
For many years, scientific heritage has received attention from multiple actors in different spheres of activity: archives, museums, and scientific institutions. Beyond the heterogeneity revealed when examining the place of scientific heritage in these different settings, an authentic patrimonial configuration emerges, taking the form of a nebula of claims and accomplishments that result, in some cases, in institutional and political recognition at the national level, in various countries around the world. At the international level, the creation of the international committee dedicated to University Museums and Collections (UMAC) within the International Council of Museums (ICOM) certainly testifies to this rising interest in academic heritage and to the existence of a specific community concerned with it. This article presents numerous initiatives for the preservation of scientific heritage in France, with the goal of analysing the relationship scientists have with their heritage. We argue that scientific communities have a special relationship with heritage, one characterized by a number of ambiguities. We show that these ambivalences allow analysis of the identity-related, disciplinary, professional, and social issues operative in defining heritage and being redefined by heritage. To explore these dimensions, we have chosen to present three different case studies. The first traces the institutional uses of heritage by a scientific institution, the Commissariat à l'énergie atomique (CEA), through the transformation of the first French atomic reactor (ZOE) into a museum. The second example describes the initiatives of French astronomers from the mid-1990s onward to construct a national programme for the protection of astronomy heritage. Lastly, we recount the case of universities, with the example of the Université de Strasbourg.
17.
Edward J. Lusk, Journal of Forecasting, 1983, 2(1): 77-83
A case is discussed where a failure to adequately criticize an ARIMA model led to erroneous inferences about the process underlying the data. A follow-up analysis, which permitted model criticism, suggested a different interpretation. The case is suggested for classroom presentation.
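As an illustration of what such residual-based model criticism can look like in practice, the following is a minimal sketch (not the paper's own analysis) assuming Python with the statsmodels library and a hypothetical simulated series: a candidate ARIMA model is fitted and its residuals are checked for leftover structure.

```python
# Minimal sketch of residual-based ARIMA model criticism.
# Hypothetical data; assumes statsmodels is installed. Not taken from Lusk (1983).
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))      # hypothetical series: a random walk

fit = ARIMA(y, order=(1, 0, 0)).fit()    # candidate AR(1) model

# Model criticism: residuals of an adequate model should resemble white noise.
# The Ljung-Box test flags remaining autocorrelation; small p-values suggest
# the fitted model misrepresents the process that generated the data.
print(fit.summary())
print(acorr_ljungbox(fit.resid, lags=[10]))
```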
18.
Recently, many historians of science have chosen to present their historical narratives from the ‘actors’-eye view’. Scientific knowledge not available within the actors’ culture is not permitted to do explanatory work. Proponents of the Sociology of Scientific Knowledge (SSK) purport to ground this historiography on epistemological relativism. I argue that they are making an unnecessary mistake: unnecessary because the historiographical genre in question can be defended on aesthetic and didactic grounds; and a mistake because the argument from relativism is in any case incoherent. The argument of the present article is self-contained, but steers clear of metaphysical debates in the philosophy of science. To allay fears of hidden assumptions, the sequel, to be published in the following issue, will consider SSK's prospects of succour from scientific realism, instrumentalism, and a metaphysical system of Bruno Latour's own devising.
19.
José A. Díez, Studies in History and Philosophy of Science, 2011, 42(1): 105-116
It is generally accepted that Popper's degree of corroboration, though “inductivist” in a very general and weak sense, is not inductivist in a strong sense, i.e. when by ‘inductivism’ we mean the thesis that the right measure of evidential support has a probabilistic character. The aim of this paper is to challenge this common view by arguing that Popper can be regarded as an inductivist, not only in the weak broad sense but also in a narrower, probabilistic sense. In section 2, I first briefly characterize the relevant notion of inductivism at stake here; second, I present and discuss the main Popperian argument against it and show that, on the only reading in which the argument is formally valid, it is restricted to cases of predicted evidence, and that even when restricted in this way the argument, though formally valid, is materially unsound. In section 3, I analyze the desiderata that, according to Popper, any acceptable measure of evidential support must satisfy, clean away their ad hoc components, and show that all the remaining desiderata are satisfied by inductivist-in-the-strict-sense measures. In section 4, I demonstrate that two of these desiderata, accepted by Popper, imply that in cases of predicted evidence any measure that satisfies them is qualitatively indistinguishable from conditional probability. Finally, I argue that this amounts to a kind of strong inductivism that conflicts with Popper's anti-inductivist arguments and declarations, and that this conflict does not depend on the incremental versus non-incremental distinction for evidential-support measures, making Popper's position inconsistent on any reading.
20.
In his 1966 paper “The Strategy of Model Building in Population Biology”, Richard Levins argues that no single model in population biology can be maximally realistic, precise, and general at the same time. This is because these desirable model properties trade off against one another. Recently, philosophers have developed Levins' claims, arguing that trade-offs between these desiderata are generated by practical limitations on scientists, or by formal aspects of models and how they represent the world. However, this project is not complete. The trade-offs discussed by Levins had a noticeable effect on modelling in population biology, but not on other sciences. This raises the question of why such a difference holds. I claim that in order to explain this finding, we must pay due attention to the properties of the systems, or targets, modelled by the different branches of science.