Similar literature
20 similar documents found (search time: 328 ms)
1.
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and in the predictions/retrodictions, especially once supported by a variety-of-evidence framework. I present climate models of 20th-century greenhouse-gas global warming as an example, and emphasize climate scientists' discussions of robust models and causal aspects. The account is intended to be applicable to a broad array of sciences that use complex modeling techniques.

2.
In previous works, I examined inferential methods employed in Probabilistic Weather Event Attribution studies (PEAs) and explored various ways they can be used to aid climate policy decisions and decision-making about climate justice issues. This paper evaluates limitations of PEAs and considers how PEA researchers’ attributions of “liability” to specific countries for specific extreme weather events could be made more ethical. In sum, I show that it is routinely presupposed that PEA methods are not prone to inductive risks, and hence that PEA researchers bear no epistemic responsibilities for their attributions of liability. I argue that although PEAs are nevertheless crucially useful for practical decision-making, the attributions of liability made by PEA researchers are in fact prone to inductive risks and are influenced by non-epistemic values that PEA researchers should make transparent in order to make such studies more ethical. Finally, I outline possible normative approaches for making sciences, including PEAs, more ethical, and discuss implications of my arguments for the ongoing debate about how PEAs should guide climate policy and relevant legal decisions.

3.
Based on the disproportionate amount of attention paid by climate scientists to the supposed global warming hiatus, it has recently been argued that contrarian discourse has “seeped into” climate science. While I agree that seepage has occurred, its effects remain unclear. This lack of clarity may give the impression that climate science has been compromised in a way that it hasn't—a conclusion that should be resisted. To that end, I argue that the effects of seepage should be analyzed in terms of objectivity. I use seven meanings of objectivity to analyze contrarian discourse's impact on climate science. The resulting account supports the important point that, despite the reality of seepage, climate science has not been compromised in a way that invalidates the conclusions its scientists have drawn.

4.
Calls for research on climate engineering have increased in the last two decades, but there remains widespread agreement that many climate engineering technologies (in particular, forms involving global solar radiation management) present significant ethical risks and require careful governance. However, proponents of research argue, ethical restrictions on climate engineering research should not be imposed in early-stage work like in silico modeling studies. Such studies, it is argued, do not pose risks to the public, and the knowledge gained from them is necessary for assessing the risks and benefits of climate engineering technologies. I argue that this position, which I call the “broad research-first” stance, cannot be maintained in light of the entrance of nonepistemic values in climate modeling. I analyze the roles that can be played by nonepistemic political and ethical values in the design, tuning, and interpretation of climate models. Then, I argue that, in the context of early-stage climate engineering research, the embeddedness of values will lead to value judgments that could harm stakeholder groups or impose researcher values on non-consenting populations. I conclude by calling for more robust reflection on the ethics and governance of early-stage climate engineering research.

5.
Philip Kitcher's The Advancement of Science sets out, programmatically, a new naturalistic view of science as a process of building consensus practices. Detailed historical case studies—centrally, the Darwinian revolution—are intended to support this view. I argue that Kitcher's expositions in fact support a more conservative view, which I dub ‘Legend Naturalism’. Using four historical examples that increasingly challenge Kitcher's discussions, I show that neither Legend Naturalism nor the less conservative programmatic view gives an adequate account of scientific progress. I argue for a naturalism that is more informed by psychology, and for a normative account that is both more social and less realist than the views articulated in The Advancement of Science.

6.
Non-epistemic values pervade climate modelling, as is now well documented and widely discussed in the philosophy of climate science. Recently, Parker and Winsberg have drawn attention to what can be termed “epistemic inequality”: this is the risk that climate models might more accurately represent the future climates of the geographical regions prioritised by the values of the modellers. In this paper, we promote value management as a way of overcoming epistemic inequality. We argue that value management can be seriously considered as soon as the value-free ideal and inductive risk arguments commonly used to frame the discussions of value influence in climate science are replaced by alternative social accounts of objectivity. We consider objectivity in Longino's sense as well as strong objectivity in Harding's sense to be relevant options here, because they offer concrete proposals that can guide scientific practice in evaluating and designing so-called multi-model ensembles and, ultimately, improve their capacity to quantify and express uncertainty in climate projections.

7.
In climate science, climate models are one of the main tools for understanding phenomena. Here, we develop a framework to assess the fitness of a climate model for providing understanding. The framework is based on three dimensions: representational accuracy, representational depth, and graspability. We show that this framework does justice to the intuition that classical process-based climate models give understanding of phenomena. While simple climate models are characterized by a larger graspability, state-of-the-art models have a higher representational accuracy and representational depth. We then compare the fitness-for-providing understanding of process-based to data-driven models that are built with machine learning. We show that at first glance, data-driven models seem either unnecessary or inadequate for understanding. However, a case study from atmospheric research demonstrates that this is a false dilemma. Data-driven models can be useful tools for understanding, specifically for phenomena for which scientists can argue from the coherence of the models with background knowledge to their representational accuracy and for which the model complexity can be reduced such that they are graspable to a satisfactory extent.

8.
To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley’s (2004) distinction between evidential strength and security, and Lloyd’s (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models, and despite epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of the evidence supporting climatological inferences, including the finding that global warming is occurring and that its primary causes are anthropogenic.

9.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: “How can ensemble studies be designed so that they probe uncertainty in desired ways?” This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. On the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by “plugging and playing” with interchangeable modules and parameterisations. On the second interpretation, models are aggregations of expert judgements resulting from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they can be used in a complementary manner to explore ways of designing better MMEs.
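The "plugging and playing" picture of MME design described in this abstract can be illustrated with a toy sketch. The module names and sensitivity numbers below are entirely hypothetical, not drawn from any real GCM; the point is only that an ensemble arises as the product of interchangeable module choices, and that ensemble spread then serves as a crude proxy for structural uncertainty.

```python
import itertools

# Hypothetical interchangeable "modules": two convection schemes and two
# cloud parameterisations, each a crude function of a warming input.
convection = {"schemeA": lambda t: 1.10 * t, "schemeB": lambda t: 0.95 * t}
clouds = {"low_sens": lambda t: t + 0.2, "high_sens": lambda t: t + 0.6}

def run_model(conv, cloud, forcing):
    # A "model" is one choice of convection module composed with one
    # choice of cloud parameterisation.
    return cloud(conv(forcing))

# "Plug and play": the multi-model ensemble is every combination of modules.
forcing = 1.0
ensemble = [run_model(c, cl, forcing)
            for c, cl in itertools.product(convection.values(), clouds.values())]

# Ensemble spread as a rough proxy for structural uncertainty.
spread = max(ensemble) - min(ensemble)
```

With these toy numbers the four module combinations give projections from 1.15 to 1.7, so the structural spread is 0.55.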

10.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling given the focus on the relationship between a single model and its supposed target system, and the neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.

11.
Cities are not only major contributors to global climate change but also stand at the forefront of its impacts. Quantifying and assessing the risks potentially induced by climate change is of great significance for cities undertaking proactive climate adaptation and risk prevention. However, most previous studies focus on global, national, or regional dimensions; only a few have examined climate change risk at an urban scale, and even fewer have reviewed the recent literature on it. As a result, quantitative assessment of climate change risk for cities remains highly challenging. To fill this gap, this article provides a critical review of the recent literature on urban-scale climate change risk assessment and classifies it into four major categories of studies, which jointly constitute a stepwise modelling chain from global climate change to urban-scale risk assessment. On this basis, the study summarizes recent research progress and discusses the major challenges to be overcome: the seamless coupling of climate simulations across scales, the reproduction of compound climate events, the incorporation of non-market and long-lasting impacts, and the representation of risk transmission inside or beyond a city. Furthermore, future directions for advancing quantitative assessment of urban-scale climate change risk are highlighted, with fresh insights into improving study methodology, enriching knowledge of climate change impacts on cities, enhancing the abundance and accessibility of data, and exploring best practices for providing city-specific climate risk services.

12.
Well-known epistemologies of science have implications for how best to understand knowledge transfer (KT). Yet, to date, no serious attempt has been made to explicate these particular implications. This paper infers views about KT from two popular epistemologies: what we characterize as incommensurabilitist views (after Devitt, 2001; Bird, 2002, 2008; Sankey and Hoyningen-Huene, 2013) and voluntarist views (after Van Fraassen, 1984; Dupré, 2001; Chakravartty, 2015). We argue that views of the former sort define the methodological, ontological, and social conditions under which research operates within ‘different worlds’ (to use Kuhn's expression), and entail that genuine KTs under those conditions should be difficult or even impossible. By contrast, more liberal voluntarist views recognize epistemological processes that allow for transfers across different sciences even under such conditions. After outlining these antithetical positions, we identify two kinds of KTs present in well-known episodes in the history of ecology—specifically, successful model transfers from chemical kinetics and thermodynamics into areas of ecological research—which reveal significant limitations of incommensurabilitist views. We conclude by discussing how the selected examples support a pluralistic voluntarism regarding KT.

13.
An Erratum has been published for this article in Journal of Forecasting 23(6): 461 (2004). This paper examines the problem of intrusion in computer systems that causes major breaches or allows unauthorized information manipulation. A new intrusion‐detection system using Bayesian multivariate regression is proposed to predict such unauthorized invasions before they occur and to take further action. We develop and use a multivariate dynamic linear model based on a unique approach that leaves the unknown observational variance matrix distribution unspecified. The result is simultaneous forecasting, free of the Wishart limitations, that proves faster and more reliable. Our proposed system uses software agent technology. The distributed software agent environment places an agent in each of the computer system's workstations and creates a profile for each user. Every user's profile is monitored by the agent system, and prediction is possible according to our statistical model. Implementation aspects are discussed using real data, and an assessment of the model is provided. Copyright © 2002 John Wiley & Sons, Ltd.
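The paper's distinctive move is leaving the observational variance matrix distribution unspecified; the sketch below does not reproduce that, but shows the standard multivariate dynamic linear model one-step forecasting recursions (Kalman updates with a fixed observational covariance V) that such a system builds on. All matrices and the toy "user profile" data are assumptions for illustration.

```python
import numpy as np

def dlm_one_step_forecasts(y, F, G, W, V, m0, C0):
    """One-step-ahead forecasts from a multivariate dynamic linear model,
    via standard Kalman recursions. Fixing the observational covariance V
    is a simplification relative to the paper's Wishart-free treatment."""
    m, C = m0, C0
    forecasts = []
    for obs in y:
        a = G @ m                       # prior state mean
        R = G @ C @ G.T + W             # prior state covariance
        f = F @ a                       # one-step forecast mean
        Q = F @ R @ F.T + V             # one-step forecast covariance
        forecasts.append(f)
        A = R @ F.T @ np.linalg.inv(Q)  # Kalman gain
        m = a + A @ (obs - f)           # posterior state mean
        C = R - A @ Q @ A.T             # posterior state covariance
    return np.array(forecasts)

# Toy usage: 2-D synthetic "user profile" observations, identity dynamics.
rng = np.random.default_rng(0)
I2 = np.eye(2)
y = rng.normal(size=(20, 2))
forecasts = dlm_one_step_forecasts(y, I2, I2, 0.01 * I2, 0.1 * I2,
                                   np.zeros(2), I2)
```

In an intrusion-detection setting, a large gap between `obs` and the forecast `f` (relative to `Q`) would flag anomalous behaviour in a monitored profile.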

14.
Philosophers continue to debate both the actual and the ideal roles of values in science. Recently, Eric Winsberg has offered a novel, model-based challenge to those who argue that the internal workings of science can and should be kept free from the influence of social values. He contends that model-based assignments of probability to hypotheses about future climate change are unavoidably influenced by social values. I raise two objections to Winsberg’s argument, neither of which can wholly undermine its conclusion but each of which suggests that his argument exaggerates the influence of social values on estimates of uncertainty in climate prediction. I then show how a more traditional challenge to the value-free ideal seems tailor-made for the climate context.

15.
The translation of a mathematical model into a numerical one employs various modifications in order to make the model accessible for computation. Such modifications include discretizations, approximations, heuristic assumptions, and other methods. The paper investigates the divergent styles of mathematical and numerical models in the case of a specific piece of code in a current atmospheric model. Cognizance of these modifications means that the question of the role and function of scientific models has to be reworked. Numerical models are neither pure intermediaries between theory and data nor autonomous tools of inquiry. Instead, theory and data are transformed into a new symbolic form of research because computation has become an essential requirement for every scientific practice. Therefore the question is posed: what do numerical (climate) models really represent?
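A minimal instance of the kind of modification this abstract describes, turning a continuous mathematical model into a computable numerical one, is the forward-Euler discretization of exponential decay. This is a toy stand-in, not code from any atmospheric model; it shows how the discrete scheme both replaces and slightly departs from the exact continuous solution.

```python
import math

# Continuous model: du/dt = -k*u, with exact solution u(t) = exp(-k*t).
# Numerical model: forward-Euler discretization with time step dt.
k, dt, steps = 1.0, 0.01, 100

u = 1.0
for _ in range(steps):
    u = u - dt * k * u  # discrete update replaces the exact flow

exact = math.exp(-k * dt * steps)  # u at t = 1 for the continuous model
error = abs(u - exact)             # discretization error introduced
```

The discrete trajectory tracks the exact one only approximately (here to within about 2e-3), which is precisely the gap between mathematical and numerical model the paper is interested in.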

16.
In this paper, we explore the extent to which issues of simulation model validation take on novel characteristics when the models in question become particularly complex. Our central claim is that complex simulation models in general, and global models of climate in particular, face a form of confirmation holism. This holism, moreover, makes analytic understanding of complex models of climate either extremely difficult or even impossible. We argue that this supports a position we call convergence skepticism: the belief that the existence of a plurality of different models making a plurality of different forecasts of future climate is likely to be a persistent feature of global climate science.

17.
The present paper draws on climate science and the philosophy of science in order to evaluate climate-model-based approaches to assessing climate projections. We analyze the difficulties that arise in such assessment and outline criteria of adequacy for approaches to it. In addition, we offer a critical overview of the approaches used in the IPCC Working Group One Fourth Assessment Report, including the confidence-building, Bayesian and likelihood approaches. Finally, we consider approaches that do not feature in the IPCC reports, including three approaches drawn from the philosophy of science. We find that all available approaches face substantial challenges, with the IPCC approaches' primary source of difficulty being their goal of providing probabilistic assessments.

18.
The paper examines Wesley Salmon’s claim that the primary role of plausibility arguments in the history of science is to impose constraints on the prior probability of hypotheses (in the language of Bayesian confirmation theory). A detailed look at Copernicanism and Darwinism and, more briefly, Rutherford’s discovery of the atomic nucleus reveals a further and arguably more important role of plausibility arguments. It resides in the consideration of likelihoods, which state how likely a given hypothesis makes a given piece of evidence. In each case the likelihoods raise the probability of one of the competing hypotheses and diminish the credibility of its rival, and this may happen either on the basis of ‘old’ or ‘new’ evidence.

19.
The goal of this paper is to provide an interpretation of Feyerabend's metaphysics of science as found in late works like Conquest of Abundance and Tyranny of Science. Feyerabend's late metaphysics consists of an attempt to criticize, and provide a systematic alternative to, traditional scientific realism, a package of views he sometimes referred to as “scientific materialism.” Scientific materialism is objectionable not only on metaphysical grounds, or because it provides a poor basis for understanding science, but because it implies problematic claims about the epistemic and cultural authority of science, claims incompatible with properly situating science in democratic societies. I show how Feyerabend's metaphysical view, which I call “the abundant world” or “abundant realism,” constitutes a sophisticated and challenging form of ontological pluralism that makes interesting connections with contemporary philosophy of science and with the political and policy role of science in a democratic society.

20.
The purpose of this paper is twofold: firstly, to assess the merit of estimating probability density functions, rather than level or classification estimations, in a one‐day‐ahead forecasting task on the EUR/USD time series. This is implemented using a Gaussian mixture model neural network, benchmarking the results against standard forecasting models, namely a naïve model, a moving average convergence divergence technical model (MACD), an autoregressive moving average model (ARMA), a logistic regression model (LOGIT) and a multi‐layer perceptron network (MLP). Secondly, to examine the possibilities of improving the trading performance of those models with confirmation filters and leverage. While the benchmark models perform best without confirmation filters and leverage, the Gaussian mixture model outperforms all of the benchmarks when taking advantage of the possibilities offered by a combination of more sophisticated trading strategies and leverage. This might be due to the ability of the Gaussian mixture model to successfully identify trades with a high Sharpe ratio. Copyright © 2004 John Wiley & Sons, Ltd.
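The confirmation-filter, leverage, and Sharpe-ratio machinery this abstract discusses can be sketched on synthetic data. The rolling-mean forecaster below is a deliberately simple stand-in for the paper's Gaussian mixture model neural network, the returns are simulated rather than real EUR/USD data, and the threshold and leverage values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
r = rng.normal(0.0002, 0.006, size=750)  # synthetic daily returns, not EUR/USD

window = 250
# Stand-in forecaster: rolling mean of the past window. The paper instead
# forecasts a full density with a Gaussian mixture model neural network.
preds = np.array([r[t - window:t].mean() for t in range(window, len(r))])
realized = r[window:]

def annualized_sharpe(x):
    # Simple annualized Sharpe ratio (zero risk-free rate assumed).
    return np.sqrt(252) * x.mean() / x.std()

# Unfiltered strategy: always trade in the forecast's direction.
raw = np.sign(preds) * realized

# Confirmation filter + leverage: trade only when the forecast magnitude
# clears a (hypothetical) threshold, and scale the surviving positions.
threshold, leverage = 1e-4, 2.0
positions = leverage * np.sign(preds) * (np.abs(preds) > threshold)
filtered = positions * realized
```

Comparing `annualized_sharpe(raw)` with `annualized_sharpe(filtered)` is the kind of with/without-filter comparison the paper runs across its benchmark models.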


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)