Similar Documents
Found 20 similar documents (search time: 126 ms)
1.
A strong version of scientism, such as that of Alex Rosenberg, says, roughly, that natural science reliably delivers rational belief or knowledge, whereas common sense sources of belief, such as moral intuition, memory, and introspection, do not. In this paper I discuss ten reasons that adherents of scientism have put forward, or might put forward, in defence of scientism. The aim is to show which considerations could plausibly count in favour of scientism and what this implies for the way scientism ought to be formulated. I argue that only three out of these ten reasons potentially hold water and that the evidential weight is, therefore, on their shoulders. These three reasons for embracing scientism are, respectively, particular empirical arguments to the effect that there are good debunking explanations for certain common sense beliefs, that there are incoherences and biases in the doxastic outputs of certain common sense sources of belief, and that beliefs that issue from certain common sense doxastic sources are illusory. From what I argue, it follows that only a version of scientism that is significantly weaker than many versions of scientism that we find in the literature is potentially tenable. I conclude the paper by stating what such a significantly weaker version of scientism could amount to.

2.
In this paper I argue that the Strong Programme’s aim to provide robust explanations of belief acquisition is limited by its commitment to the symmetry principle. For Bloor and Barnes, the symmetry principle is intended to drive home the fact that epistemic norms are socially constituted. My argument here is that even if our epistemic standards are fully naturalized—even relativized—they nevertheless can play a pivotal role in why individuals adopt the beliefs that they do. Indeed, sometimes the fact that a belief is locally endorsed as rational is the only reason why an individual holds it. In this way, norms of rationality have a powerful and unique role in belief formation. But if this is true then the symmetry principle’s emphasis on ‘sameness of type’ is misguided. It has the undesirable effect of not just naturalizing our cognitive commitments, but trivializing them. Indeed, if the notion of ‘similarity’ is to have any content, then we are not going to classify as ‘the same’ beliefs that are formed in accordance with deeply entrenched epistemic norms as ones formed without reflection on these norms, or ones formed in spite of these norms. My suggestion here is that we give up the symmetry principle in favor of a more sophisticated principle, one that allows for a taxonomy of causes rich enough to allow us to delineate the unique impact epistemic norms have on those individuals who subscribe to them.

3.
Predictivism is the view that successful predictions of “novel” evidence carry more confirmational weight than accommodations of already known evidence. Novelty, in this context, has traditionally been conceived of as temporal novelty. However temporal predictivism has been criticized for lacking a rationale: why should the time order of theory and evidence matter? Instead, it has been proposed, novelty should be construed in terms of use-novelty, according to which evidence is novel if it was not used in the construction of a theory. Only if evidence is use-novel can it fully support the theory entailing it. As I point out in this paper, the writings of the most influential proponent of use-novelty contain a weaker and a stronger version of use-novelty. However both versions, I argue, are problematic. With regard to the appraisal of Mendeleev’s periodic table, the most contentious historical case in the predictivism debate, I argue that temporal predictivism is indeed supported, although in ways not previously appreciated. On the basis of this case, I argue for a form of so-called symptomatic predictivism according to which temporally novel predictions carry more confirmational weight only insofar as they reveal the theory’s presumed coherence of facts as real.

4.
In this paper, we develop and refine the idea that understanding is a species of explanatory knowledge. Specifically, we defend the idea that S understands why p if and only if S knows that p, and, for some q, S’s true belief that q correctly explains p is produced/maintained by reliable explanatory evaluation. We then show how this model explains the reception of James Bjorken’s explanation of scaling by the broader physics community in the late 1960s and early 1970s. The historical episode is interesting because Bjorken’s explanation initially did not provide understanding to other physicists, but was subsequently deemed intelligible when Feynman provided a physical interpretation that led to experimental tests that vindicated Bjorken’s model. Finally, we argue that other philosophical models of scientific understanding are best construed as limiting cases of our more general model.

5.
Extensional scientific realism is the view that each believable scientific theory is supported by the unique first-order evidence for it and that if we want to believe that it is true, we should rely on its unique first-order evidence. In contrast, intensional scientific realism is the view that all believable scientific theories have a common feature and that we should rely on it to determine whether a theory is believable or not. Fitzpatrick argues that extensional realism is immune, while intensional realism is not, to the pessimistic induction. I reply that if extensional realism overcomes the pessimistic induction at all, that is because it implicitly relies on the theoretical resource of intensional realism. I also argue that extensional realism, by nature, cannot embed a criterion for distinguishing between believable and unbelievable theories.

6.
I revisit an older defense of scientific realism, the methodological defense, a defense developed by both Popper and Feyerabend. The methodological defense of realism concerns the attitude of scientists, not philosophers of science. The methodological defense is as follows: a commitment to realism leads scientists to pursue the truth, which in turn is apt to put them in a better position to get at the truth. In contrast, anti-realists lack the tenacity required to develop a theory to its fullest. As a consequence, they are less likely to get at the truth. My aim is to show that the methodological defense is flawed. I argue that a commitment to realism does not always benefit science, and that there is reason to believe that a research community with both realists and anti-realists in it may be better suited to advancing science. A case study of the Copernican Revolution in astronomy supports this claim.

7.
It is commonly argued that values “fill the logical gap” of underdetermination of theory by evidence, namely, values affect our choice between two or more theories that fit the same evidence. The underdetermination model, however, does not exhaust the roles values play in evidential reasoning. I introduce WAVE – a novel account of the logical relations between values and evidence. WAVE states that values influence evidential reasoning by adjusting evidential weights. I argue that the weight-adjusting role of values is distinct from their underdetermination gap-filling role. Values adjust weights in three ways. First, values affect our trust in the testimony of others. Second, values influence the evidential thresholds required for justified epistemic judgments. Third, values influence the relative weight of a certain type of evidence within a body of multimodal discordant evidence. WAVE explains, from an epistemic rather than a psychological perspective, how smokers, for example, can find the same evidence about the dangers of smoking less persuasive than non-smokers do. WAVE allows for a wider effect of values on our accepted scientific theories and beliefs than the underdetermination model alone allows; therefore, science studies scholars must consider WAVE in their research and analysis of evidential case studies.

8.
It is generally accepted that Popper’s degree of corroboration, though “inductivist” in a very general and weak sense, is not inductivist in a strong sense, i.e. when by ‘inductivism’ we mean the thesis that the right measure of evidential support has a probabilistic character. The aim of this paper is to challenge this common view by arguing that Popper can be regarded as an inductivist, not only in the weak broad sense but also in a narrower, probabilistic sense. In section 2, first, I briefly characterize the relevant notion of inductivism that is at stake here; second, I present and discuss the main Popperian argument against it and show that in the only reading in which the argument is formally valid it is restricted to cases of predicted evidence, and that even when restricted in this way it is nevertheless materially unsound. In section 3, I analyze the desiderata that, according to Popper, any acceptable measure of evidential support must satisfy, clean away their ad hoc components, and show that all the remaining desiderata are satisfied by inductivist-in-the-strict-sense measures. In section 4, I demonstrate that two of these desiderata, accepted by Popper, imply that in cases of predicted evidence any measure that satisfies them is qualitatively indistinguishable from conditional probability. Finally, I argue that this amounts to a kind of strong inductivism that conflicts with Popper’s anti-inductivist arguments and declarations, and that this conflict does not depend on the incremental versus non-incremental distinction for evidential-support measures, making Popper’s position inconsistent on any reading.

9.
I argue that we should consider Norton's material theory of induction as consisting of two largely independent claims. First, there is the claim that material facts license inductions - a claim which I interpret as a type of contextualism about induction. Second, there is the claim that there are no universal rules of induction. While a good case can be made for the first claim, I believe that Norton's arguments for the second claim are lacking. In particular, I spell out Norton's argument against the claim that all induction may be reduced to inference to the best explanation, and argue that it is not persuasive. Rejecting this part of Norton's theory does not however require us to abandon the first claim that material facts license inductions. In this way, I distinguish the parts of the material theory of induction we should happily accept from the parts about which we should be more skeptical.

10.
Many philosophers who do not analyze laws of nature as the axioms and theorems of the best deductive systems nevertheless believe that membership in those systems is evidence for being a law. This raises the question, “If the best systems analysis fails, what explains the fact that being a member of the best systems is evidence for being a law?” In this essay I answer this question on behalf of Leibniz. I argue that although Leibniz’s philosophy of laws is inconsistent with the best systems analysis, his philosophy of nature’s perfection enables him to explain why membership in the best systems is evidence for being a law of nature.

11.
I began this study with Laudan's argument from the pessimistic induction and I promised to show that the caloric theory of heat cannot be used to support the premisses of the meta-induction on past scientific theories. I tried to show that the laws of experimental calorimetry, adiabatic change and Carnot's theory of the motive power of heat were (i) independent of the assumption that heat is a material substance, (ii) approximately true, (iii) deducible and accounted for within thermodynamics.
I stressed that results (i) and (ii) were known to most theorists of the caloric theory and that result (iii) was put forward by the founders of the new thermodynamics. In other words, the truth-content of the caloric theory was located, selected carefully, and preserved by the founders of thermodynamics.
However, the reader might think that even if I have succeeded in showing that Laudan is wrong about the caloric theory, I have not shown how the strategy followed in this paper can be generalised against the pessimistic meta-induction. I think that the general strategy against Laudan's argument suggested in this paper is this: the empirical success of a mature scientific theory suggests that there are respects and degrees in which this theory is true. The difficulty for — and the real challenge to — philosophers of science is to suggest ways in which this truth-content can be located and shown to be preserved — if at all — in subsequent theories. In particular, the empirical success of a theory does not automatically suggest that all theoretical terms of the theory refer. On the contrary, judgments of referential success depend on which theoretical claims are well supported by the evidence. This is a matter of specific investigation. Generally, one would expect that claims about theoretical entities which are not strongly supported by the evidence, or turn out to be independent of the evidence at hand, are not compelling.
Simply put, if the evidence does not make it likely that our beliefs about putative theoretical entities are approximately correct, a belief in those entities would be ill-founded and unjustified. Theoretical extrapolations in science are indispensable, but they are not arbitrary. If the evidence does not warrant them, I do not see why someone should commit herself to them. In a sense, the problem with empiricist philosophers is not that they demand that theoretical beliefs be warranted by evidence. Rather, it is that they claim that no evidence can warrant theoretical beliefs. A realist philosopher of science would not disagree on the first point, but she has good grounds to deny the second.
I argued that claims about theoretical entities which are not strongly supported by the evidence must not be taken as belief-worthy. But can one sustain the more ambitious view that loosely supported parts of a theory tend to be just those that include non-referring terms? There is an obvious risk in such a generalisation. For there are well-known cases in which a theoretical claim was initially weakly supported by the evidence

12.
The most public-facing forms of contemporary Darwinism happily promote its worldview ambitions. Popular works, by the likes of Richard Dawkins, deflect associations with eugenics and social Darwinism, but also extend the reach of Darwinism beyond biology into social policy, politics, and ethics. Critics of the enterprise fall into two categories. Advocates of Intelligent Design and secular philosophers (like Mary Midgley and Thomas Nagel) recognise it as a worldview and argue against its implications. Scholars in the rhetoric of science or science communication, however, typically take the view that Darwinism isn't a worldview, but a scientific theory, which has been improperly embellished by some; they uphold the distinction between is and ought and argue that science is restricted to the former. This prompts an is–ought problem on another level. I catalogue the ways in which Darwinism plainly is a worldview and why commentators' beliefs that it ought not to be distort their analysis. Hence, it is their own worldview that precludes them from accepting Darwinism's worldview implications.

13.
I distinguish between two ways in which Kuhn employs the concept of incommensurability, based on for whom it presents a problem. First, I argue that Kuhn’s early work focuses on the comparison and underdetermination problems scientists encounter during revolutionary periods (actors’ incommensurability), whilst his later work focuses on the translation and interpretation problems analysts face when they engage in the representation of science from earlier periods (analysts’ incommensurability). Secondly, I offer a new interpretation of actors’ incommensurability. I challenge Kuhn’s account of incommensurability, which is based on the compartmentalisation of the problems of both underdetermination and non-additivity to revolutionary periods. Through employing a finitist perspective, I demonstrate that in principle these are also problems scientists face during normal science. I argue that the reason why in certain circumstances scientists have little difficulty in concurring over their judgements of scientific findings and claims while in others they disagree needs to be explained sociologically rather than by reference to underdetermination or non-additivity. Thirdly, I claim that disagreements between scientists should not be couched in terms of translation or linguistic problems (aspects of analysts’ incommensurability), but should be understood as arising out of scientists’ differing judgments about how to take scientific inquiry further.

14.
A theory is usually said to be time reversible if whenever a sequence of states S1(t1), S2(t2), S3(t3) is possible according to that theory, then the reverse sequence of time reversed states S3T(t1), S2T(t2), S1T(t3) is also possible according to that theory; i.e., one normally not only inverts the sequence of states, but also operates on the states with a time reversal operator T. David Albert and Paul Horwich have suggested that one should not allow such time reversal operations T on states. I will argue that time reversal operations on fundamental states should be allowed. I will furthermore argue that the form that time reversal operations take is determined by the type of fundamental geometric quantities that occur in nature and that we have good reason to believe that the fundamental geometric quantities that occur in nature correspond to irreducible representations of the Lorentz transformations. Finally, I will argue that we have good reason to believe that space-time has a temporal orientation.

15.
Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon a correct estimate or not depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the “replicability crisis” in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude by suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures.
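The abstract above only outlines its simulation. As a rough illustration of the mechanism it describes, the sketch below (all parameters, the effect size, and the publication rule are hypothetical choices of mine, not taken from the paper) shows how a meta-analytic aggregate of replications converges on a true effect when every study is pooled, but drifts upward when the community's social structure filters which results get reported:

```python
import random

random.seed(42)

TRUE_EFFECT = 0.2   # hypothetical true effect size
NOISE_SD = 1.0      # per-observation noise
N_PER_STUDY = 30    # observations per replication
N_STUDIES = 500     # replications available for meta-analysis

def run_study():
    # Each study reports the sample mean of N_PER_STUDY noisy observations.
    return sum(random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_PER_STUDY)) / N_PER_STUDY

all_results = [run_study() for _ in range(N_STUDIES)]

# Community A: every replication enters the meta-analysis.
meta_a = sum(all_results) / len(all_results)

# Community B: only "impressive" results (estimate > 0.3) get published and pooled.
published = [r for r in all_results if r > 0.3]
meta_b = sum(published) / len(published)

print(f"true effect:            {TRUE_EFFECT}")
print(f"unbiased meta-estimate: {meta_a:.3f}")
print(f"biased meta-estimate:   {meta_b:.3f}")
```

Under these assumptions the same inference method self-corrects in one community and systematically overestimates in the other, which is the interaction effect the abstract points to.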

16.
This paper revisits the debate between Harry Collins and Allan Franklin concerning the experimenters' regress. Focusing my attention on a case study from recent psychology (regarding experimental evidence for the existence of a Mozart Effect), I argue that Franklin is right to highlight the role of epistemological strategies in scientific practice, but that his account does not sufficiently appreciate Collins's point about the importance of tacit knowledge in experimental practice. In turn, Collins rightly highlights the epistemic uncertainty (and skepticism) surrounding much experimental research. However, I will argue that his analysis of tacit knowledge fails to elucidate the reasons why scientists often are (and should be) skeptical of other researchers' experimental results. I will present an analysis of tacit knowledge in experimental research that not only meets this desideratum, but also shows how such skepticism can in fact be a vital enabling factor for the dynamic processes of experimental knowledge generation.

17.
Historians of science have frequently sought to exclude modern scientific knowledge from their narratives. Part I of this paper, published in the previous issue, cautioned against seeing more than a literary preference at work here. In particular, it was argued—contra advocates of the Sociology of Scientific Knowledge (SSK)—that a commitment to epistemological relativism should not be seen as having straightforward historiographical consequences. Part II considers further SSK-inspired attempts to entangle the currently fashionable historiography with particular positions in the philosophy of science. None, I argue, is promising. David Bloor’s proposed alliance with scientific realism relies upon a mistaken view of contrastive explanation; Andrew Pickering’s appeal to instrumentalism is persuasive for particle physics but much less so for science as a whole; and Bruno Latour’s home-grown metaphysics is so bizarre that its compatibility with SSK is, if anything, a further blow to the latter’s plausibility.

18.
The logical links between the Judaeo-Christian doctrine of creation and the practice of natural philosophy on the one hand, and the rejection of belief in demonic agency on the other, were made explicit in the seventeenth century by, among others, Balthasar Bekker (1634–98), whose ideas I argue to have been not without influence. In section 1, I present the accounts of three historians of the opposition to belief in witchcraft and of the decline of the witch-persecution, Hugh Trevor-Roper, Keith Thomas, and Brian Easlea. In section 2, I maintain that Bekker has been underestimated both by Trevor-Roper and by Easlea. In section 3, I investigate more generally some of the connections between the new natural philosophy and belief in supernatural interventions, cast doubt on the view that rejection of belief in witchcraft and the devil requires rejection of belief in creation, and thus supplement or qualify the accounts of Trevor-Roper, Thomas, and Easlea of why belief in witchcraft faded away.

19.
Richard Arthur (2006) and I (Savitt, 2009) proposed that the present in (time-oriented) Minkowski spacetime should be thought of as a small causal diamond. That is, given two timelike separated events p and q, with p earlier than q, we suggested that the present (relative to those two events) is the set I+(p)∩I-(q). Mauro Dorato (2011) presents three criticisms of this proposal. I rebut all three and then examine two more plausible criticisms of the Arthur/Savitt proposal. I argue that these criticisms also fail.

20.
A common intuition about evidence is that if data x have been used to construct a hypothesis H, then x should not be used again in support of H. It is no surprise that x fits H, if H was deliberately constructed to accord with x. The question of when and why we should avoid such “double-counting” continues to be debated in philosophy and statistics. It arises as a prohibition against data mining, hunting for significance, tuning on the signal, and ad hoc hypotheses, and as a preference for predesignated hypotheses and “surprising” predictions. I have argued that it is the severity or probativeness of the test—or lack of it—that should determine whether a double-use of data is admissible. I examine a number of surprising ambiguities and unexpected facts that continue to bedevil this debate.
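The "hunting for significance" worry this abstract mentions can be made concrete numerically. The toy simulation below (the threshold, sample sizes, and the factor of twenty are my own hypothetical choices, not drawn from the paper) compares the false-positive rate of a single predesignated test under a true null with that of reporting only the best of twenty tries on the same occasion:

```python
import random

random.seed(1)

def sample_mean(n=30):
    # One "study" under a true null hypothesis: pure noise with mean 0.
    return sum(random.gauss(0.0, 1.0) for _ in range(n)) / n

THRESHOLD = 0.3   # hypothetical cutoff for declaring an "effect"
N_TRIALS = 2000   # repetitions used to estimate error rates

# Predesignated: one hypothesis, tested once per occasion.
predesignated_hits = sum(sample_mean() > THRESHOLD for _ in range(N_TRIALS))

# Hunting: try twenty hypotheses per occasion, report only the best result.
hunting_hits = sum(
    max(sample_mean() for _ in range(20)) > THRESHOLD
    for _ in range(N_TRIALS)
)

print(f"false-positive rate, predesignated: {predesignated_hits / N_TRIALS:.3f}")
print(f"false-positive rate, hunting x20:   {hunting_hits / N_TRIALS:.3f}")
```

The hunted "finding" passes the same nominal threshold far more often under a true null, which is one way of cashing out the claim that such a test has little severity: it would very probably produce a passing result even if the hypothesis were false.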
