Similar Literature
20 similar documents found (search time: 31 ms)
1.
In this paper I take a close look at the SI base quantity “amount of substance”, and its unit, the mole. The mole was introduced as a base unit in the SI in 1971, and there is currently a proposal to change its definition. The current definition of the mole shows a certain ambiguity regarding the nature of the quantity “amount of substance”. The proposed new definition removes the ambiguity, but at a cost: it becomes difficult to justify treating amount of substance as having its own dimension, and hence its own unit, the mole. I argue that the difficulties with amount of substance result from its role as a mediator between macroscopic and microscopic scales. To understand why amount of substance might have its own dimension, we need to connect amount of substance to mass, contra current proposals to separate them.

2.
Many consider the apparent disappearance of time and change in quantum gravity the main metaphysical challenge since it seems to lead to a form of Parmenidean view according to which the physical world simply is, nothing changes, moves, becomes, happens. In this paper, I argue that the main metaphysical challenge of Rovelli’s philosophical view of loop quantum gravity is to lead exactly to the opposite view, namely, a form of Heraclitean view, or rather, of radical process metaphysics according to which there is becoming (process, change, event) but not being (substance, stasis, thing). However, this does not entail that time is real. Fundamentally, time does not exist. I show how Rovelli’s understanding of loop quantum gravity supports the view that there is change without time, so that the physical world can be timeless yet ever-changing. I conclude by arguing that it is such a process-oriented conception that constitutes the revolutionary metaphysical challenge and philosophical significance of loop quantum gravity, while the alleged Parmenidean view turns out to be nothing but the endpoint of a long-standing metaphysical orthodoxy.

3.
The Wigner–Eckart theorem is central to the application of symmetry principles throughout atomic, molecular, and nuclear physics. Nevertheless, the theorem has a puzzling feature: it is dispensable for solving problems within these domains, since elementary methods suffice. To account for the significance of the theorem, I first contrast it with an elementary approach to calculating matrix elements. Next, I consider three broad strategies for interpreting the theorem: conventionalism, fundamentalism, and conceptualism. I argue that the conventionalist framework is unnecessarily pragmatic, while the fundamentalist framework requires more ontological commitments than necessary. Conceptualism avoids both defects, accounting for the theorem’s significance in terms of how it epistemically restructures the calculation of matrix elements. Specifically, the Wigner–Eckart theorem modularizes and unifies matrix element problems, thereby changing what we need to know to solve them.

4.
Peter Lipton argues that inference to the best explanation (IBE) involves the selection of a hypothesis on the basis of its loveliness. I argue that in optimal cases of IBE we may be able to eliminate all but one of the hypotheses. In such cases a form of eliminative induction takes place, which I call ‘Holmesian inference’. I argue that Lipton’s example in which Ignaz Semmelweis identified a cause of puerperal fever better illustrates Holmesian inference than Liptonian IBE. I consider in detail the conditions under which Holmesian inference is possible and conclude by considering the epistemological relations between Holmesian inference and Liptonian IBE.

5.
In this paper, I argue for a distinction between two scales of coordination in scientific inquiry, through which I reassess Georg Simon Ohm's work on conductivity and resistance. Firstly, I propose to distinguish between measurement coordination, which refers to the specific problem of how to justify the attribution of values to a quantity by using a certain measurement procedure, and general coordination, which refers to the broader issue of justifying the representation of an empirical regularity by means of abstract mathematical tools. Secondly, I argue that the development of Ohm's measurement practice between the first and the second experimental phase of his work involved a change in the measurement coordination on which he relied to express his empirical results. By showing how Ohm relied on different calibration assumptions and practices across the two phases, I demonstrate that the concurrent change of both Ohm's experimental apparatus and the variable that Ohm measured should be understood in terms of the different forms of measurement coordination involved. Finally, I argue that Ohm's assumption that tension is equally distributed in the circuit is best understood as part of the general coordination between Ohm's law and the empirical regularity that it expresses, rather than measurement coordination.

6.
Bogen and Woodward's distinction between data and phenomena raises the need to understand the structure of the data-to-phenomena and theory-to-phenomena inferences. I suggest that one way to study the structure of these inferences is to analyze the role of the assumptions involved in the inferences: What kind of assumptions are they? How do these assumptions contribute to the practice of identifying phenomena? In this paper, using examples from atmospheric dynamics, I develop an account of the practice of identifying the target in the data-to-phenomena and theory-to-phenomena inferences in which assumptions about spatiotemporal scales play a central role in the identification of parameters that describe the target system. I also argue that these assumptions are not only empirical but also idealizing and abstracting. I conclude the paper with a reflection on the role of idealizations in modeling.

7.
There is a long-standing debate in the philosophy of mind and philosophy of science regarding how best to interpret the relationship between neuroscience and psychology. It has traditionally been argued that either the two domains will evolve and change over time until they converge on a single unified account of human behaviour, or else that they will continue to work in isolation given that they identify properties and states that exist autonomously from one another (due to the multiple-realizability of psychological states). In this paper, I argue that progress in psychology and neuroscience is contingent on the fact that both of these positions are false. Contra the convergence position, I argue that the theories of psychology and the theories of neuroscience are scientifically valuable as representational tools precisely because they cannot be integrated into a single account. However, contra the autonomy position, I propose that the theories of psychology and neuroscience are deeply dependent on one another for further refinement and improvement. In this respect, there is an irreconcilable codependence between psychology and neuroscience that is necessary for both domains to improve and progress. The two domains are forever linked while simultaneously being unable to integrate.

8.
Summary A relationship is observed between the action of certain solvents on cancerogenesis and the affinity of histamine for cancer-producing substances. Tw-type solvents, which increase the tumoral incidence brought about by a chemical substance, intensify the reaction of histamine with this substance; while PEG-type solvents, which retard tumorigenesis, inhibit the reaction.

9.
The recent discovery of the Higgs at 125 GeV by the ATLAS and CMS experiments at the LHC has put significant pressure on a principle which has guided much theorizing in high energy physics over the last 40 years, the principle of naturalness. In this paper, I provide an explication of the conceptual foundations and physical significance of the naturalness principle. I argue that the naturalness principle is well-grounded both empirically and in the theoretical structure of effective field theories, and that it was reasonable for physicists to endorse it. Its possible failure to be realized in nature, as suggested by recent LHC data, thus represents an empirical challenge to certain foundational aspects of our understanding of QFT. In particular, I argue that its failure would undermine one class of recent proposals which claim that QFT provides us with a picture of the world as being structured into quasi-autonomous physical domains.

10.
In this second paper, I continue my discussion of the problem of reference for scientific realism. First, I consider a final objection to Kitcher’s account of reference, which I generalise to other accounts of reference. Such accounts make attributions of reference by appeal to our pretheoretical intuitions about how true statements ought to be distributed among the scientific utterances of the past. I argue that in the cases that merit discussion, this strategy fails because our intuitions are unstable. The interesting cases are importantly borderline—it really isn’t clear what we ought to say about how those terms referred. I conclude that in many relevant cases, our grounds for thinking that the theoretical terms of the past referred are matched by our grounds for thinking that they failed to refer, in such a way that deciding on either result is arbitrary and bad news for the realist. In response to this problem, in the second part of the paper I expand upon Field’s (1973) account of partial reference to sketch a new way of thinking about the theoretical terms of the past—that they partially referred and partially failed to refer.

11.
This paper considers Newton’s position on gravity’s cause, both conceptually and historically. With respect to the historical question, I argue that while Newton entertained various hypotheses about gravity’s cause, he never endorsed any of them, and in particular, his lack of confidence in the hypothesis of robust and unmediated distant action by matter is explained by an inclination toward certain metaphysical principles. The conceptual problem about gravity’s cause, which I identified earlier along with a deeper problem about individuating substances, is that a decisive conclusion is impossible unless certain speculative aspects of his empiricism are abandoned. In this paper, I situate those conceptual problems in Newton’s natural philosophy. They arise from ideas that push empiricism to potentially self-defeating limits, revealing the danger of allowing immaterial spirits any place in natural philosophy, especially spatially extended spirits supposed capable of co-occupying place with material bodies. Yet because their source ideas are speculative, Newton’s method ensures that these problems pose no threat to his rational mechanics or the profitable core of his empiricism. They are easily avoided by avoiding their source ideas, and when science emerges from natural philosophy, it does so with an ontology unencumbered by immaterial spirits.

12.
I distinguish between two ways in which Kuhn employs the concept of incommensurability based on for whom it presents a problem. First, I argue that Kuhn’s early work focuses on the comparison and underdetermination problems scientists encounter during revolutionary periods (actors’ incommensurability) whilst his later work focuses on the translation and interpretation problems analysts face when they engage in the representation of science from earlier periods (analysts’ incommensurability). Secondly, I offer a new interpretation of actors’ incommensurability. I challenge Kuhn’s account of incommensurability which is based on the compartmentalisation of the problems of both underdetermination and non-additivity to revolutionary periods. Through employing a finitist perspective, I demonstrate that in principle these are also problems scientists face during normal science. I argue that the reason why in certain circumstances scientists have little difficulty in concurring over their judgements of scientific findings and claims while in others they disagree needs to be explained sociologically rather than by reference to underdetermination or non-additivity. Thirdly, I claim that disagreements between scientists should not be couched in terms of translation or linguistic problems (aspects of analysts’ incommensurability), but should be understood as arising out of scientists’ differing judgments about how to take scientific inquiry further.

13.
It is frequently said that belief aims at truth, in an explicitly normative sense—that is, that one ought to believe the proposition that p if, and only if, p is true. This truth norm is frequently invoked to explain why we should seek evidential justification in our beliefs, or why we should try to be rational in our belief formation—it is because we ought to believe the truth that we ought to follow the evidence in belief revision. In this paper, I argue that this view is untenable. The truth norm clashes with plausible evidential norms in a wide range of cases, such as when we have excellent but misleading evidence for a falsehood or no evidence for a truth. I will consider various ways to resolve this conflict and argue that none of them work. However, I will ultimately attempt to vindicate the love of truth, by arguing that knowledge is the proper epistemic goal. The upshot is that we should not aim merely to believe the truth; we should aim to know it.

14.
Summary Electrophoretic analysis of bacterial substance shows the departure, from UV-irradiated microbes screened from light for a certain time, of substances which were responsible for increasing the charge density per surface unit during the first moments following the irradiation by UV rays. Spectrophotometric analysis of liquids that contained these bacilli concomitantly demonstrates the diffusion of those substances from the bacterial substance into the aqueous phase and reveals their nature. The photoreactivity that is possible during the first moments following the UV irradiation of bacilli, due to the presence of depolymerisation fragments of the nucleic acid macromolecule in the bacterial substance, becomes impossible after the departure (diffusion) of those fragments into the aqueous phase. These observations seem to show that the phenomenon of photoreactivity is connected with the repolymerisation of the initially fragmented deoxyribonucleic acid macromolecule.

15.
Some conceptual issues in the foundations of classical electrodynamics concerning the interaction between particles and fields have recently received increased attention among philosophers of physics. After a brief review of the debate, I argue that there are essentially two incompatible solutions to these issues corresponding to F.A. Muller's distinction between the extension and the renormalization program. Neither of these solutions comes free of cost: the extension program is plagued with all problems related to extended elementary charges, the renormalization program works with point charges but trades in the notorious divergences of the field energies. The aim of this paper is to bring back into the discussion a third alternative, the action-at-a-distance program, which avoids both the riddles of extended elementary charges as well as the divergences, although it admittedly has other problems. I will discuss why action-at-a-distance theories are actually not a far cry from particle–field theories, and I will argue that the main reasons for rejecting action-at-a-distance theories originate in certain metaphysical prejudices about locality and energy conservation. I will broadly suggest how these concepts could be adapted in order to allow for action at a distance.

16.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling given the focus on the relationship between a single model and its supposed target system, and the neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.

17.
The neural vehicles of mental representation play an explanatory role in cognitive psychology that their realizers do not. Cognitive psychology individuates neural structures as representational vehicles in terms of the specific causal properties to which cognitive mechanisms are sensitive. Explanations that appeal to properties of vehicles can capture generalisations which are not available at the level of their neural realizers. In this paper, I argue that the individuation of realizers as vehicles restricts the sorts of explanations in which they can participate. I illustrate this with reference to Rupert’s (2011) claim that representational vehicles can play an explanatory role in psychology in virtue of their quantity or proportion. I propose that such quantity-based explanatory claims can apply only to realizers and not to vehicles, in virtue of the particular causal role that vehicles play in psychological explanations.

18.
The analytical notions of ‘thought style’, ‘paradigm’, ‘episteme’ and ‘style of reasoning’ are some of the most popular frameworks in the history and philosophy of science. Although their proponents, Ludwik Fleck, Thomas Kuhn, Michel Foucault, and Ian Hacking, are all part of the same philosophical tradition that closely connects history and philosophy, the extent to which they share similar assumptions and objectives is still under debate. In the first part of the paper, I shall argue that, despite the fact that these four thinkers disagree on certain assumptions, their frameworks have the same explanatory goal: to understand how objectivity is possible. I shall present this goal as a necessary element of a common project, that of historicising Kant's a priori. In the second part of the paper, I shall make an instrumental use of the insights of these four thinkers to form a new model for studying objectivity. I shall also propose a layered diagram that allows the differences between the frameworks to be mapped, while acknowledging their similarities. This diagram will show that the frameworks of style of reasoning and episteme illuminate conditions of possibility that lie at a deeper level than those considered by thought styles and paradigms.

19.
In 2006, in a special issue of this journal, several authors explored what they called the dual nature of artefacts. The core idea is simple, but attractive: to make sense of an artefact, one needs to consider both its physical nature—its being a material object—and its intentional nature—its being an entity designed to further human ends and needs. The authors construe the intentional component quite narrowly, though: it just refers to the artefact’s function, its being a means to realize a certain practical end. Although such strong focus on functions is quite natural (and quite common in the analytic literature on artefacts), I argue in this paper that an artefact’s intentional nature is not exhausted by functional considerations. Many non-functional properties of artefacts—such as their marketability and ease of manufacture—testify to the intentions of their users/designers; and I show that if these sorts of considerations are included, one gets much more satisfactory explanations of artefacts, their design, and normativity.

20.
I argue that the key principle of microgravity is what I have called elsewhere the Lorentzian strategy. This strategy may be seen as either a reverse-engineering approach or a descent with modification approach, but however one sees it, the method works neither by attempting to propound a theory that is the quantum version of either an extant or generalized gravitation theory nor by attempting to propound a theory that is the final version of quantum mechanics and finding gravity within it. Instead the method works by beginning with what we are pretty sure is a good approximation to the low-energy limit of whatever the real microprocesses are that generate what we experience as gravitation. This method is powerful, fruitful, and not committed to principles for which we have, as yet, only scant evidence; the method begins with what we do know and teases out what we can know next. The principle is methodological, not ontological.
