Similar Documents
20 similar documents found.
1.
First, I argue that scientific progress is possible in the absence of increasing verisimilitude in science's theories. Second, I argue that increasing theoretical verisimilitude is not the central, or primary, dimension of scientific progress. Third, I defend my previous argument that unjustified changes in scientific belief may be progressive. Fourth, I illustrate how false beliefs can promote scientific progress in ways that cannot be explicated by appeal to verisimilitude.

2.
It is frequently said that belief aims at truth, in an explicitly normative sense—that is, that one ought to believe the proposition that p if, and only if, p is true. This truth norm is frequently invoked to explain why we should seek evidential justification in our beliefs, or why we should try to be rational in our belief formation—it is because we ought to believe the truth that we ought to follow the evidence in belief revision. In this paper, I argue that this view is untenable. The truth norm clashes with plausible evidential norms in a wide range of cases, such as when we have excellent but misleading evidence for a falsehood or no evidence for a truth. I will consider various ways to resolve this conflict and argue that none of them work. However, I will ultimately attempt to vindicate the love of truth, by arguing that knowledge is the proper epistemic goal. The upshot is that we should not aim merely to believe the truth; we should aim to know it.

3.
In his book Thing Knowledge, Davis Baird argues that our accustomed understanding of knowledge as justified true belief is not enough to understand progress in science and technology. More precisely, he argues that scientific instruments are to be seen as a form of “objective knowledge” in the sense of Karl Popper. I want to examine whether this idea is plausible. As a first step, I show that this proposal implies that nearly all man-made artifacts are materialized objective knowledge. I argue that this radical change in our concept of knowledge demands strong reasons and that Baird does not give them. I then take a look at the strongest strand of argument in Baird's book—the arguments from cognitive autonomy—and conclude that they do not suffice to make Baird's view of scientific instruments tenable.

4.
In this paper, I argue that, contrary to the constructive empiricist’s position, observability is not an adequate criterion as a guide to ontological commitment in science. My argument has two parts. First, I argue that the constructive empiricist’s choice of observability as a criterion for ontological commitment is based on the assumption that belief in the existence of unobservable entities is unreasonable because belief in the existence of an entity can only be vindicated by its observation. Second, I argue that the kind of ontological commitment that is under consideration when accepting a scientific theory is commitment to what I call theoretical kinds and that observation can vindicate commitment to kinds only in exceptional cases.

5.
John Venn is known as one of the clearest expounders of the interpretation of probability as the frequency of a particular outcome in a potentially unlimited series of possible events. This view he held to be incompatible with the alternate interpretation of probability as a measure of the degree of belief that would rationally be held about a certain outcome based upon the reliability of testimony and other prior information. This paper explores the reasons why Venn may have been so opposed to the degree-of-belief interpretation and suggests that it may have been a way for him to resolve a conflict in his own mind between his ideas of proper scientific methods of inference and the religious beliefs that he held as a young man.

6.
In 2006, in a special issue of this journal, several authors explored what they called the dual nature of artefacts. The core idea is simple, but attractive: to make sense of an artefact, one needs to consider both its physical nature—its being a material object—and its intentional nature—its being an entity designed to further human ends and needs. The authors construe the intentional component quite narrowly, though: it just refers to the artefact’s function, its being a means to realize a certain practical end. Although such strong focus on functions is quite natural (and quite common in the analytic literature on artefacts), I argue in this paper that an artefact’s intentional nature is not exhausted by functional considerations. Many non-functional properties of artefacts—such as their marketability and ease of manufacture—testify to the intentions of their users/designers; and I show that if these sorts of considerations are included, one gets much more satisfactory explanations of artefacts, their design, and normativity.

7.
Hempel seems to hold the following three views: (H1) Understanding is pragmatic/relativistic: Whether one understands why X happened in terms of Explanation E depends on one's beliefs and cognitive abilities; (H2) Whether a scientific explanation is good, just like whether a mathematical proof is good, is a nonpragmatic and objective issue independent of the beliefs or cognitive abilities of individuals; (H3) The goal of scientific explanation is understanding: A good scientific explanation is one that provides understanding. Apparently, H1, H2, and H3 cannot all be true. Some philosophers think that Hempel is inconsistent, while some others claim that Hempel does not actually hold H3. I argue that Hempel does hold H3 and that he can consistently hold all of H1, H2, and H3 if he endorses what I call the “understanding argument.” I also show how attributing the understanding argument to Hempel can make more sense of his D-N model and his philosophical analysis of the pragmatic aspects of scientific explanation.

8.
In this paper, I consider Kitcher’s (1993) account of reference for the expressions of past science. Kitcher’s case study is of Joseph Priestley and his expression ‘dephlogisticated air’. There is a strong intuitive case that ‘dephlogisticated air’ referred to oxygen, but the term was underpinned by a badly mistaken phlogiston theory, so concluding that dephlogisticated air referred straightforwardly and concluding that it failed to refer both have unpalatable consequences. Kitcher argues that the reference of such terms is best considered relative to each token—some tokens refer, and others do not. His account thus relies crucially on how this distinction between tokens can be made good—a puzzle I call the discrimination problem. I argue that the discrimination problem cannot be solved. On any reading of Kitcher’s defence of the distinction, the grounds provided are either insufficient or illegitimate. On the first reading, Kitcher violates the principle of humanity by making Priestley’s referential success a matter of the mental contents of modern speakers. The second reading sidesteps the problem of beliefs by appealing to mind-independent facts, but I argue that these are insufficient to achieve reference because of the indeterminacy introduced by the qua problem. On the third and final reading, Priestley’s success is given by what he would say in counterfactual circumstances. I argue that even if there are facts about what Priestley would say, and there is reason for doubt, there is no motivation to think that such facts determine how Priestley referred in the actual world.

9.
I argue that the key principle of microgravity is what I have called elsewhere the Lorentzian strategy. This strategy may be seen as either a reverse-engineering approach or a descent-with-modification approach, but however one sees it, the method works neither by attempting to propound a theory that is the quantum version of either an extant or generalized gravitation theory nor by attempting to propound a theory that is the final version of quantum mechanics and finding gravity within it. Instead, the method works by beginning with what we are pretty sure is a good approximation to the low-energy limit of whatever the real microprocesses are that generate what we experience as gravitation. This method is powerful, fruitful, and not committed to principles for which we have, as yet, only scant evidence; the method begins with what we do know and teases out what we can know next. The principle is methodological, not ontological.

10.
I examine two challenges that collaborative research raises for science. First, collaborative research threatens the motivation of scientists. As a result, I argue, collaborative research may have adverse effects on what sorts of things scientists can effectively investigate. Second, collaborative research makes it more difficult to hold scientists accountable. I argue that the authors of multi-authored articles are aptly described as plural subjects, corporate bodies that are more than the sum of the individuals involved. Though journal editors do not currently conceive of the authors of multi-authored articles this way, this conception provides us with the conceptual resources to make sense of how collaborating scientists behave.

11.
Case study research is characterized by the employment of multiple data-gathering methods. In this paper, I examine the concurrent use of participant observation and qualitative interviews. The question I examine is: what is the rationale behind their combination in case study research? In the literature on case study research, the two most common reasons for using multiple methods appeal to comprehensiveness and convergent confirmation respectively. I argue that there is a third significant, yet overlooked, way to motivate the joint use of participant observation and qualitative interviews: the methods may generate complementary evidence, and this puts the researcher in a better position to confirm that her data manifest central epistemic values and so are suitable as a basis for providing an adequate answer to her research question. I refer to this as the rationale of “blended epistemic value validation”.

12.
Existential risks, particularly those arising from emerging technologies, are a complex, obstinate challenge for scientific study. This should motivate studying how the relevant scientific communities might be made more amenable to studying such risks. I offer an account of scientific creativity suitable for thinking about scientific communities, and provide reasons for thinking contemporary science doesn't incentivise creativity in this specified sense. I'll argue that a successful science of existential risk will be creative in my sense. So, if we want to make progress on those questions, we should consider how to shift scientific incentives to encourage creativity. The analysis also has lessons for philosophical approaches to understanding the social structure of science. I introduce the notion of a ‘well-adapted’ science: one in which the incentive structure is tailored to the epistemic situation at hand.

13.
In this paper I argue that the Strong Programme’s aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized—even relativized—they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle’s emphasis on ‘sameness of type’ is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of ‘similarity’ is to have any content, then we are not going to classify beliefs formed in accordance with deeply entrenched epistemic norms as ‘the same’ as beliefs formed without reflection on these norms, or formed in spite of them. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to allow us to delineate the unique impact epistemic norms have on those individuals who subscribe to them.

14.
In publications in 1914 and 1918, Einstein claimed that his new theory of gravity in some sense relativizes the rotation of a body with respect to the distant stars (a stripped-down version of Newton's rotating bucket experiment) and the acceleration of the traveler with respect to the stay-at-home in the twin paradox. What he showed was that phenomena seen as inertial effects in a space-time coordinate system in which the non-accelerating body is at rest can be seen as a combination of inertial and gravitational effects in a (suitably chosen) space-time coordinate system in which the accelerating body is at rest. Two different relativity principles play a role in these accounts: (a) the relativity of non-uniform motion, in the weak sense that the laws of physics are the same in the two space-time coordinate systems involved; (b) what Einstein in 1920 called the relativity of the gravitational field, the notion that there is a unified inertio-gravitational field that splits differently into inertial and gravitational components in different coordinate systems. I provide a detailed reconstruction of Einstein's rather sketchy accounts of the twins and the bucket and examine the role of these two relativity principles. I argue that we can hold on to (b) but that (a) is either false or trivial.

15.
The symmetries of a physical theory are often associated with two things: conservation laws (via e.g. Noether's and Schur's theorems) and representational redundancies (“gauge symmetry”). But how can a physical theory's symmetries give rise to interesting (in the sense of non-trivial) conservation laws, if symmetries are transformations that correspond to no genuine physical difference? In this paper, I argue for a disambiguation in the notion of symmetry. The central distinction is between what I call “analytic” and “synthetic” symmetries, so called because of an analogy with analytic and synthetic propositions. “Analytic” symmetries are the turning of idle wheels in a theory's formalism, and correspond to no physical change; “synthetic” symmetries cover all the rest. I argue that analytic symmetries are distinguished because they act as fixed points or constraints in any interpretation of a theory, and as such are akin to Poincaré's conventions, Reichenbach's ‘axioms of co-ordination’, or ‘relativized constitutive a priori principles’.

16.
In this paper, I introduce a new historical case study into the scientific realism debate. During the late eighteenth century, the Scottish natural philosopher James Hutton made two important successful novel predictions. The first concerned granitic veins intruding from granite masses into strata. The second concerned what geologists now term “angular unconformities”: older sections of strata overlain by younger sections, the two resting at different angles, the former typically more inclined than the latter. These predictions, I argue, are potentially problematic for selective scientific realism in that constituents of Hutton's theory that would not be considered even approximately true today played various roles in generating them. The aim here is not to provide a full philosophical analysis but to introduce the case into the debate by detailing the history and showing why, at least prima facie, it presents a problem for selective realism. First, I explicate Hutton's theory. I then give an account of Hutton's predictions and their confirmations. Next, I explain why these predictions are relevant to the realism debate. Finally, I consider which constituents of Hutton's theory are, according to current beliefs, true (or approximately true), which are not (even approximately) true, and which were responsible for these successes.

17.
In this paper we take a close look at current interdisciplinary modeling practices in the environmental sciences, and suggest that closer attention needs to be paid to the nature of scientific practices when investigating and planning interdisciplinarity. While interdisciplinarity is often portrayed as a medium of novel and transformative methodological work, current modeling strategies in the environmental sciences are conservative, avoiding methodological conflict, while confining interdisciplinary interactions to a relatively small set of pre-existing modeling frameworks and strategies (a process we call crystallization). We argue that such practices can be rationalized as responses in part to cognitive constraints which restrict interdisciplinary work. We identify four salient integrative modeling strategies in environmental sciences, and argue that this crystallization, while contradicting somewhat the novel goals many have for interdisciplinarity, makes sense when considered in the light of common disciplinary practices and cognitive constraints. These results provide cause to rethink in more concrete methodological terms what interdisciplinarity amounts to, and what kinds of interdisciplinarity are obtainable in the environmental sciences and elsewhere.

18.
I argue that some important elements of the current cosmological model are ‘conventionalist’ in the sense defined by Karl Popper. These elements include dark matter and dark energy; both are auxiliary hypotheses that were invoked in response to observations that falsified the standard model as it existed at the time. The use of conventionalist stratagems in response to unexpected observations implies that the field of cosmology is in a state of ‘degenerating problemshift’ in the language of Imre Lakatos. I show that the ‘concordance’ argument, often put forward by cosmologists in support of the current paradigm, is weaker than the convergence arguments that were made in the past in support of the atomic theory of matter or the quantization of energy.

19.
A conventional wisdom about the progress of physics holds that successive theories wholly encompass the domains of their predecessors through a process that is often called “reduction.” While certain influential accounts of inter-theory reduction in physics take reduction to require a single “global” derivation of one theory's laws from those of another, I show that global reductions are not available in all cases where the conventional wisdom requires reduction to hold. However, I argue that a weaker “local” form of reduction, which defines reduction between theories in terms of a more fundamental notion of reduction between models of a single fixed system, is available in such cases and moreover suffices to uphold the conventional wisdom. To illustrate the sort of fixed-system, inter-model reduction that grounds inter-theoretic reduction on this picture, I specialize to a particular class of cases in which both models are dynamical systems. I show that reduction in these cases is underwritten by a mathematical relationship that follows a certain liberalized construal of Nagel/Schaffner reduction, and support this claim with several examples. Moreover, I show that this broadly Nagelian analysis of inter-model reduction encompasses several cases that are sometimes cited as instances of the “physicist's” limit-based notion of reduction.

20.
How are we to understand the use of probability in corroboration functions? Popper says logically, but does not show we could have access to, or even calculate, probability values in a logical sense. This makes the logical interpretation untenable, as Ramsey and van Fraassen have argued. If corroboration functions only make sense when the probabilities employed therein are subjective, however, then what counts as impressive evidence for a theory might be a matter of convention, or even whim. So isn't so-called ‘corroboration’ just a matter of psychology? In this paper, I argue that we can go some way towards addressing this objection by adopting an intersubjective interpretation, of the form advocated by Gillies, with respect to corroboration. I show why intersubjective probabilities are preferable to subjective ones when it comes to decision making in science: why group decisions are liable to be superior to individual ones, given a number of plausible conditions. I then argue that intersubjective corroboration is preferable to intersubjective confirmation of a Bayesian variety, because there is greater opportunity for principled agreement concerning the factors involved in the former.
