Similar Documents
 20 similar documents found (search time: 265 ms)
1.
Explanations implicitly end with something that makes sense, and begin with something that does not make sense. A statistical relationship, for example, a numerical fact, does not make sense; an explanation of this relationship adds something, such as causal information, which does make sense, and provides an endpoint for the sense-making process. Does social science differ from natural science in this respect? One difference is that in the natural sciences, models are what need “understanding.” In the social sciences, matters are more complex. There are models, such as causal models, which need to be understood, but also depend on background knowledge that goes beyond the model and the correlations that make it up, which produces a regress. The background knowledge is knowledge of in-filling mechanisms, which are normally made up of elements that involve the direct understanding of the acting and believing subjects themselves. These models, and social science explanations generally, are satisfactory only when they end the regress in this kind of understanding or use direct understanding evidence to decide between alternative mechanism explanations.

2.
It is widely believed that the underlying reality behind statistical mechanics is a deterministic and unitary time evolution of a many-particle wave function, even though this is in conflict with the irreversible, stochastic nature of statistical mechanics. The usual attempts to resolve this conflict, for instance by appealing to decoherence or eigenstate thermalization, are riddled with problems. This paper considers theoretical physics of thermalized systems as it is done in practice and shows that all approaches to thermalized systems presuppose in some form limits to linear superposition and deterministic time evolution. These considerations include, among others, the classical limit, extensivity, the concepts of entropy and equilibrium, and symmetry breaking in phase transitions and quantum measurement. As a conclusion, the paper suggests that the irreversibility and stochasticity of statistical mechanics should be taken as a real property of nature. It follows that a gas of a macroscopic number N of atoms in thermal equilibrium is best represented by a collection of N wave packets of a size of the order of the thermal de Broglie wavelength, which behave quantum mechanically below this scale but classically sufficiently far beyond this scale. In particular, these wave packets must localize again after scattering events, which requires stochasticity and indicates a connection to the measurement process.
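The thermal de Broglie wavelength that sets the quantum-to-classical scale in this abstract is a standard quantity, λ = h / √(2π m k_B T). A minimal sketch of the computation follows; the choice of gas (argon) and temperature (300 K) is an illustrative assumption, not taken from the paper:

```python
import math

# CODATA values of the constants involved.
h = 6.62607015e-34      # Planck constant, J*s
kB = 1.380649e-23       # Boltzmann constant, J/K
u = 1.66053906660e-27   # atomic mass unit, kg

def thermal_de_broglie(mass_kg, temperature_k):
    """Length scale below which gas atoms behave quantum mechanically:
    lambda = h / sqrt(2*pi*m*kB*T)."""
    return h / math.sqrt(2 * math.pi * mass_kg * kB * temperature_k)

# Illustrative case: argon (~39.95 u) at room temperature.
lam = thermal_de_broglie(39.95 * u, 300.0)
print(f"thermal de Broglie wavelength: {lam:.2e} m")
```

For a monatomic gas at room temperature this comes out on the order of 10 picometres, far below the interatomic spacing, which is why such a gas behaves classically at everyday scales.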

3.
I propose a distinct type of robustness, which I suggest can support a confirmatory role in scientific reasoning, contrary to the usual philosophical claims. In model robustness, repeated production of the empirically successful model prediction or retrodiction against a background of independently-supported and varying model constructions, within a group of models containing a shared causal factor, may suggest how confident we can be in the causal factor and predictions/retrodictions, especially once supported by a variety-of-evidence framework. I present climate models of greenhouse gas global warming of the 20th Century as an example, and emphasize climate scientists' discussions of robust models and causal aspects. The account is intended as applicable to a broad array of sciences that use complex modeling techniques.

4.
In this paper, I offer an alternative account of the relationship of Hobbesian geometry to natural philosophy by arguing that mixed mathematics provided Hobbes with a model for thinking about it. In mixed mathematics, one may borrow causal principles from one science and use them in another science without there being a deductive relationship between those two sciences. Natural philosophy for Hobbes is mixed because an explanation may combine observations from experience (the ‘that’) with causal principles from geometry (the ‘why’). My argument shows that Hobbesian natural philosophy relies upon suppositions that bodies plausibly behave according to these borrowed causal principles from geometry, acknowledging that bodies in the world may not actually behave this way. First, I consider Hobbes's relation to Aristotelian mixed mathematics and to Isaac Barrow's broadening of mixed mathematics in Mathematical Lectures (1683). I show that for Hobbes, maker's knowledge from geometry provides the ‘why’ in mixed-mathematical explanations. Next, I examine two explanations from De corpore Part IV: (1) the explanation of sense in De corpore 25.1-2; and (2) the explanation of the swelling of parts of the body when they become warm in De corpore 27.3. In both explanations, I show Hobbes borrowing and citing geometrical principles and mixing these principles with appeals to experience.

5.
Scientists often diverge widely when choosing between research programs. This can seem to be rooted in disagreements about which of several theories, competing to address shared questions or phenomena, is currently the most epistemically or explanatorily valuable—i.e. most successful. But many such cases are actually more directly rooted in differing judgments of pursuit-worthiness, concerning which theory will be best down the line, or which addresses the most significant data or questions. Using case studies from 16th-century astronomy and 20th-century geology and biology, I argue that divergent theory choice is thus often driven by considerations of scientific process, even where direct epistemic or explanatory evaluation of its final products appears more relevant. Broadly following Kuhn's analysis of theoretical virtues, I suggest that widely shared criteria for pursuit-worthiness function as imprecise, mutually-conflicting values. However, even Kuhn and others sensitive to pragmatic dimensions of theory ‘acceptance’, including the virtue of fruitfulness, still commonly understate the role of pursuit-worthiness—especially by exaggerating the impact of more present-oriented virtues, or failing to stress how ‘competing’ theories excel at addressing different questions or data. This framework clarifies the nature of the choice and competition involved in theory choice, and the role of alternative theoretical virtues.

6.
I address questions about values in model-making in engineering, specifically: Might the role of values be attributable solely to interests involved in specifying and using the model? Selected examples illustrate the surprisingly wide variety of things one must take into account in the model-making itself. The notions of system (as used in engineering thermodynamics), and physically similar systems (as used in the physical sciences) are important and powerful in determining what is relevant to an engineering model. Another example (windfarms) illustrates how an idea to completely re-characterize, or reframe, an engineering problem arose during model-making. I employ a qualitative analogue of the notion of physically similar systems. Historical cases can thus be drawn upon; I illustrate with a comparison between a geoengineering proposal to inject, or spray, sulfate aerosols, and two different historical cases involving the spraying of DDT (fire ant eradication; malaria eradication). The current geoengineering proposal is seen to be like the disastrous and counterproductive case, and unlike the successful case, of the spraying of DDT. I conclude by explaining my view that model-making in science is analogous to moral perception in action, drawing on a view in moral theory that has come to be called moral particularism.

7.
This paper analyzes the metaphysical system developed in Cheyne’s Philosophical Principles of Religion. Cheyne was an early proponent of Newtonianism and tackled several philosophical questions raised by Newton’s work. The most pressing of these concerned the causal origin of gravitational attraction. Cheyne rejected the occasionalist explanations offered by several of his contemporaries in favor of a model on which God delegated special causal powers to bodies. Additionally, he developed an innovative approach to divine conservation. This allowed him to argue that Newton’s findings provided evidence for God’s existence and providence without the need for continuous divine intervention in the universe.

8.
In the Second Analogy, Kant argues that every event has a cause. It remains disputed what this conclusion amounts to. Does Kant argue only for the Weak Causal Principle that every event has some cause, or for the Strong Causal Principle that every event is produced according to a universal causal law? Existing interpretations have assumed that, by Kant’s lights, there is a substantive difference between the two. I argue that this is false. Kant holds that the concept of cause contains the notion of lawful connection, so it is analytic that causes operate according to universal laws. He is explicit about this commitment, not least in his derivation of the Categorical Imperative in Groundwork III. Consequently, Kant’s move from causal rules to universal laws is much simpler than previously assumed. Given his commitments, establishing the Strong Causal Principle requires no more argument than establishing the Weak Causal Principle.

9.
In the area of social science, in particular, although we have developed methods for reliably discovering the existence of causal relationships, we are not very good at using these to design effective social policy. Cartwright argues that in order to improve our ability to use causal relationships, it is essential to develop a theory of causation that makes explicit the connections between the nature of causation, our best methods for discovering causal relationships, and the uses to which these are put. I argue that Woodward's interventionist theory of causation is uniquely suited to meet Cartwright's challenge. More specifically, interventionist mechanisms can provide the bridge from ‘hunting causes’ to ‘using them’, if interventionists (i) tell us more about the nature of these mechanisms, and (ii) endorse the claim that it is these mechanisms—or whatever constitutes them—that make causal claims true. I illustrate how having an understanding of interventionist mechanisms can allow us to put causal knowledge to use via a detailed example from organic chemistry.

10.
It has recently been argued that successful evidence-based policy should rely on two kinds of evidence: statistical and mechanistic. The former is held to be evidence that a policy brings about the desired outcome, and the latter concerns how it does so. Although agreeing with the spirit of this proposal, we argue that the underlying conception of mechanistic evidence as evidence that is different in kind from correlational, difference-making or statistical evidence, does not correctly capture the role that information about mechanisms should play in evidence-based policy. We offer an alternative account of mechanistic evidence as information concerning the causal pathway connecting the policy intervention to its outcome. Not only can this be analyzed as evidence of difference-making, it is also to be found at any level and is obtainable by a broad range of methods, both experimental and observational. Using behavioral policy as an illustration, we draw the implications of this revised understanding of mechanistic evidence for debates concerning policy extrapolation, evidence hierarchies, and evidence integration.

11.
In what sense are associations between particular markers and complex behaviors made by genome-wide association studies (GWAS) and related techniques discoveries of, or entries into the study of, the causes of those behaviors? In this paper, we argue that when applied to individuals, the kinds of probabilistic ‘causes’ of complex traits that GWAS-style studies can point towards do not provide the kind of causal information that is useful for generating explanations; they do not, in other words, point towards useful explanations of why particular individuals have the traits that they do. We develop an analogy centered around Galton's “Quincunx” machine; while each pin might be associated with outcomes of a certain sort, in any particular trial, that pin might be entirely bypassed even if the ball eventually comes to rest in the box most strongly associated with that pin. Indeed, in any particular trial, the actual outcome of a ball hitting a pin might be the opposite of what is usually expected. While we might find particular pins associated with outcomes in the aggregate, these associations will not provide causally relevant information for understanding individual outcomes. In a similar way, the complexities of development likely render impossible any moves from population-level statistical associations between genetic markers and complex behaviors to an understanding of the causal processes by which individuals come to have the traits that they in fact have.
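The quincunx analogy can be made concrete with a small simulation. In the sketch below (the board size, the particular pin, and the trial count are arbitrary illustrative choices, not taken from the paper), hitting a given pin shifts the mean final bin in the aggregate, yet individual balls that hit that pin still scatter across a wide range of bins, including ones on the opposite side of the board:

```python
import random

random.seed(0)
ROWS, TRIALS = 10, 20000
PIN = (5, 3)  # a particular pin: (row, position) -- illustrative choice

hit_bins, all_bins = [], []
for _ in range(TRIALS):
    pos, hit = 0, False
    for row in range(ROWS):
        if (row, pos) == PIN:   # did this ball pass through our pin?
            hit = True
        pos += random.choice((0, 1))  # deflect left (0) or right (1)
    all_bins.append(pos)
    if hit:
        hit_bins.append(pos)

mean_all = sum(all_bins) / len(all_bins)
mean_hit = sum(hit_bins) / len(hit_bins)
print(f"share of balls hitting pin {PIN}: {len(hit_bins)/TRIALS:.3f}")
print(f"mean final bin overall: {mean_all:.2f}; given pin hit: {mean_hit:.2f}")
print("pin-hit balls that still landed left of center:",
      sum(b < ROWS / 2 for b in hit_bins))
```

The aggregate association (a shifted conditional mean) is real, but it licenses nothing about any single ball's trajectory, which is the paper's point about moving from GWAS associations to individual-level explanation.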

12.
Thermodynamics has a clear arrow of time, characterized by the irreversible approach to equilibrium. This stands in contrast to the laws of microscopic theories, which are invariant under time-reversal. Foundational discussions of this “problem of irreversibility” often focus on historical considerations, and therefore do not take results of modern physical research on this topic into account. In this article, I will close this gap by studying the implications of dynamical density functional theory (DDFT), a central method of modern nonequilibrium statistical mechanics not previously considered in philosophy of physics, for this debate. For this purpose, the philosophical discussion of irreversibility is structured into five problems, concerned with the source of irreversibility in thermodynamics, the definition of equilibrium and entropy, the justification of coarse-graining, the approach to equilibrium and the arrow of time. For each of these problems, it is shown that DDFT provides novel insights that are of importance for both physicists and philosophers of physics.

13.
Online auctions have become increasingly popular in recent years. There is a growing body of research on this topic, and modeling online auction price curves is one of the most interesting problems. Most research treats price curves as deterministic functions, which ignores the random effects of external and internal factors. To account for the randomness, a more realistic model using stochastic differential equations is proposed in this paper. The online auction price is modeled by a stochastic differential equation in which the deterministic part is equivalent to the second-order differential equation model proposed in Wang et al. (Journal of the American Statistical Association, 2008, 103, 1100–1118). The model also includes a component representing the measurement errors. Explicit expressions for the likelihood function are also obtained, from which statistical inference can be conducted. Forecast accuracy of the proposed model is compared with that of the ODE (ordinary differential equation) approach. Simulation results show that the proposed model performs better.
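The structure described here, a latent price path driven by second-order stochastic dynamics plus additive measurement error, can be sketched with an Euler–Maruyama discretization. The drift form and every parameter value below are illustrative assumptions for the sake of the sketch, not the specification used in the paper or in Wang et al. (2008):

```python
import math
import random

random.seed(1)

# Assumed toy parameters (not from the paper).
ALPHA, BETA, SIGMA = 0.5, -0.3, 0.2   # drift and diffusion of the velocity
MEAS_SD = 0.05                        # std dev of the measurement error
T, n = 7.0, 700                       # 7-day auction, Euler-Maruyama grid
dt = T / n

# Second-order dynamics written as a first-order system:
#   dp = v dt
#   dv = (ALPHA + BETA * v) dt + SIGMA dW   <- stochastic part
p, v = 1.0, 0.0                       # latent price level and its velocity
path = [p]
observed = [p + random.gauss(0, MEAS_SD)]
for _ in range(n):
    v += (ALPHA + BETA * v) * dt + SIGMA * math.sqrt(dt) * random.gauss(0, 1)
    p += v * dt
    path.append(p)
    observed.append(p + random.gauss(0, MEAS_SD))  # latent path + noise

print(f"final latent price: {path[-1]:.3f}, final observed: {observed[-1]:.3f}")
```

Inference in the paper proceeds from an explicit likelihood rather than simulation; the sketch only shows how the stochastic dynamic and the measurement-error layer fit together.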

14.
The sciences are characterized by what is sometimes called a “methodological naturalism,” which disregards talk of divine agency. In response to those who argue that this reflects a dogmatic materialism, a number of philosophers have offered a pragmatic defense. The naturalism of the sciences, they argue, is provisional and defeasible: it is justified by the fact that unsuccessful theistic explanations have been superseded by successful natural ones. But this defense is inconsistent with the history of the sciences. The sciences have always exhibited what we call a domain naturalism. They have never invoked divine agency, but have always focused on the causal structure of the natural world. It is not the case, therefore, that the sciences once employed theistic explanations and then abandoned them. The naturalism of the sciences is as old as science itself.

15.
Experimental modeling is the construction of theoretical models hand in hand with experimental activity. As explained in Section 1, experimental modeling starts with claims about phenomena that use abstract concepts, concepts whose conditions of realization are not yet specified; and it ends with a concrete model of the phenomenon, a model that can be tested against data. This paper argues that this process from abstract concepts to concrete models involves judgments of relevance, which are irreducibly normative. In Section 2, we show, on the basis of several case studies, how these judgments contribute to the determination of the conditions of realization of the abstract concepts and, at the same time, of the quantities that characterize the phenomenon under study. Then, in Section 3, we compare this view on modeling with other approaches that also have acknowledged the role of relevance judgments in science. To conclude, in Section 4, we discuss the possibility of a plurality of relevance judgments and introduce a distinction between locally and generally relevant factors.

16.
In this paper, we shall describe a new account of information in communicational contexts, namely, a causal-deflationary one. Our approach draws from Timpson's deflationary view and supplies the field of philosophy of information with new tools that will help to clarify the underlying structure of communication: information is an abstract entity that must be involved in a causal link in order to achieve communication. In light of our account, communication is not merely the existence of statistical correlations between source and receiver, as usually understood from a purely formal view. Instead, communication is an asymmetric phenomenon involving causal notions: the destination system must be able to be causally manipulated by intervening on the source for successful communication. In a nutshell, we shall support the following lemma: no communication without manipulation.

17.
Constitutive mechanistic explanations are said to refer to mechanisms that constitute the phenomenon-to-be-explained. The most prominent approach to understanding this relation is Carl Craver's mutual manipulability approach (MM) to constitutive relevance. Recently, MM has come under attack (Baumgartner and Casini 2017; Baumgartner and Gebharter 2015; Harinen 2014; Kästner 2017; Leuridan 2012; Romero 2015). It is argued that MM is inconsistent because, roughly, it is spelled out in terms of interventionism (which is an approach to causation), whereas constitutive relevance is said to be a non-causal relation. In this paper, I will discuss a strategy for resolving this inconsistency—so-called fat-handedness approaches (Baumgartner and Casini 2017; Baumgartner and Gebharter 2015; Romero 2015). I will argue that these approaches are problematic. I will present a novel suggestion for how to consistently define constitutive relevance in terms of interventionism. My approach is based on a causal interpretation of manipulability in terms of causal relations between the mechanism's components and what I will call temporal EIO-parts of the phenomenon. Still, this interpretation accounts for the fundamental difference between constitutive relevance and causal relevance.

18.
Simulations have been at the center of an important literature that has debated the extent to which they count as epistemologically on a par with traditional experiments. Critics have raised doubts about simulations being genuine experiments, on the ground that simulations seem to lack a distinctive feature of traditional experiments: i.e., the ability to causally interact with a target system. In this paper, we defend the view that simulations are indeed epistemologically on a par with traditional experiments. We first identify three possible ways of understanding the causal interaction claim. We then focus on the use of simulation in the discovery of the Higgs boson to show that in this paradigmatic case, simulations satisfy all three possible readings of the causal interaction claim.

19.
Both philosophers and scientists have recently promoted transparency as an important element of responsible scientific practice. Philosophers have placed particular emphasis on the ways that transparency can assist with efforts to manage value judgments in science responsibly. This paper examines a potential challenge to this approach, namely, that efforts to promote transparency can themselves be value-laden. This is particularly problematic when transparency incorporates second-order value judgments that are underwritten by the same values at stake in the desire for transparency about the first-order value judgments involved in scientific research. The paper uses a case study involving research on Lyme disease to illustrate this worry, but it responds by elucidating a range of scenarios in which transparency can still play an effective role in managing value judgments responsibly.

20.
In 2006, in a special issue of this journal, several authors explored what they called the dual nature of artefacts. The core idea is simple, but attractive: to make sense of an artefact, one needs to consider both its physical nature—its being a material object—and its intentional nature—its being an entity designed to further human ends and needs. The authors construe the intentional component quite narrowly, though: it just refers to the artefact’s function, its being a means to realize a certain practical end. Although such strong focus on functions is quite natural (and quite common in the analytic literature on artefacts), I argue in this paper that an artefact’s intentional nature is not exhausted by functional considerations. Many non-functional properties of artefacts—such as their marketability and ease of manufacture—testify to the intentions of their users/designers; and I show that if these sorts of considerations are included, one gets much more satisfactory explanations of artefacts, their design, and normativity.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号