Similar Literature (20 records)
1.
Calibration procedures establish a reliable relation between the final states (‘indications’) of a measurement process and features of the objects being measured (‘outcomes’). This article analyzes the inferential structure of calibration procedures. I show that calibration is a modelling activity, namely the activity of constructing, deriving predictions from, and testing theoretical and statistical models of a measurement process. Measurement outcomes are parameter value ranges that maximize the predictive accuracy and mutual coherence of such models, among other desiderata. This model-based view of calibration clarifies the source of objectivity of measurement outcomes, the nature of measurement accuracy, and the close relationship between measurement and prediction. Contrary to commonly held views, I argue that measurement standards are not necessary for calibration, although they are useful in maintaining coherence across large networks of measurement procedures.
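To make the inferential structure concrete, the sketch below shows the most familiar special case of calibration: fitting a simple statistical model of the measurement process to known reference points and inverting it to obtain outcomes from indications. It is an illustration under an assumed linear model, not the paper's general account, and all numerical values are invented.

```python
import numpy as np

# Hypothetical calibration data: known reference quantities ('outcomes')
# and the instrument's raw readings ('indications') at each reference point.
reference_values = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
indications = np.array([0.12, 24.6, 50.3, 74.9, 99.5])

# Model the measurement process as linear and estimate its parameters.
slope, intercept = np.polyfit(reference_values, indications, deg=1)

# Invert the fitted model: map a new indication to a measurement outcome.
new_indication = 62.0
outcome = (new_indication - intercept) / slope
print(f"indication {new_indication} -> outcome {outcome:.2f}")
```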

2.
In the last decade much has been made of the role that models play in the epistemology of measurement. Specifically, philosophers have been interested in the role of models in producing measurement outcomes. This discussion has proceeded largely within the context of the physical sciences, with notable exceptions considering measurement in economics. However, models also play a central role in the methods used to develop instruments that purport to quantify psychological phenomena. These methods fall under the umbrella term ‘psychometrics’. In this paper, we focus on Clinical Outcome Assessments (COAs) and discuss two measurement theories and their associated models: Classical Test Theory (CTT) and Rasch Measurement Theory. We argue that models have an important role to play in coordinating theoretical terms with empirical content, but to do so they must serve: 1) as a representation of the measurement interaction; and 2) in conjunction with a theory of the attribute in which we are interested. We conclude that Rasch Measurement Theory is a more promising approach than CTT in these regards despite the latter's popularity with health outcomes researchers.
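For readers unfamiliar with the two frameworks: CTT decomposes an observed score as true score plus error, X = T + E, while the dichotomous Rasch model makes the probability of endorsing an item a logistic function of the gap between person ability and item difficulty. The sketch below illustrates the Rasch item response function; the parameter values are invented, not taken from the paper.

```python
import numpy as np

def rasch_probability(theta, b):
    """Dichotomous Rasch model:
    P(X = 1 | theta, b) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = 0.5                                    # person ability (logits)
item_difficulties = np.array([-1.0, 0.0, 1.0, 2.0])

# Endorsement probabilities fall monotonically as items get harder:
print(rasch_probability(theta, item_difficulties))
# -> approximately [0.82, 0.62, 0.38, 0.18]
```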

3.
Measurement is widely applied because its results are assumed to be more reliable than opinions and guesses, but this reliability is sometimes justified in a stereotyped way. After a critical analysis of such stereotypes, a structural characterization of measurement is proposed, as a partly empirical and partly theoretical process, by showing that it is in fact the structure of the process that guarantees the reliability of its results. On this basis the role and the structure of background knowledge in measurement and the justification of the conditions of object-relatedness (“objectivity”) and subject-independence (“intersubjectivity”) of measurement are specifically discussed.

4.
Psychologists debate whether mental attributes can be quantified or whether they admit only qualitative comparisons of more and less. Their disagreement is not merely terminological, for it bears upon the permissibility of various statistical techniques. This article contributes to the discussion in two stages. First it explains how temperature, which was originally a qualitative concept, came to occupy its position as an unquestionably quantitative concept (§§1–4). Specifically, it lays out the circumstances in which thermometers, which register quantitative (or cardinal) differences, became distinguishable from thermoscopes, which register merely qualitative (or ordinal) differences. I argue that this distinction became possible thanks to the work of Joseph Black, ca. 1760. Second, the article contends that the model implicit in temperature’s quantitative status offers a better way of thinking about the quantitative status of mental attributes than models from measurement theory (§§5–6).
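The thermoscope/thermometer contrast can be put formally: ordinal (thermoscope-like) information survives any strictly increasing transformation of the scale, whereas cardinal (thermometer-like) information about ratios of differences does not. A minimal sketch with invented readings:

```python
import numpy as np

readings = np.array([10.0, 20.0, 40.0])   # hypothetical instrument readings

def monotone(x):
    """A strictly increasing transformation (on positive readings)."""
    return x ** 2

transformed = monotone(readings)

# Order, which is all a thermoscope reports, survives the transformation:
print(np.all(np.diff(readings) > 0), np.all(np.diff(transformed) > 0))  # True True

# Ratios of differences, which a cardinal thermometer scale fixes, do not:
print((readings[2] - readings[1]) / (readings[1] - readings[0]))              # 2.0
print((transformed[2] - transformed[1]) / (transformed[1] - transformed[0]))  # 4.0
```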

5.
In this paper, I argue for a distinction between two scales of coordination in scientific inquiry, through which I reassess Georg Simon Ohm's work on conductivity and resistance. Firstly, I propose to distinguish between measurement coordination, which refers to the specific problem of how to justify the attribution of values to a quantity by using a certain measurement procedure, and general coordination, which refers to the broader issue of justifying the representation of an empirical regularity by means of abstract mathematical tools. Secondly, I argue that the development of Ohm's measurement practice between the first and the second experimental phase of his work involved a change in the measurement coordination on which he relied to express his empirical results. By showing how Ohm relied on different calibration assumptions and practices across the two phases, I demonstrate that the concurrent change of both Ohm's experimental apparatus and the variable that Ohm measured should be understood in light of this change in measurement coordination. Finally, I argue that Ohm's assumption that tension is equally distributed in the circuit is best understood as part of the general coordination between Ohm's law and the empirical regularity that it expresses, rather than of measurement coordination.
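For context, the empirical regularity at issue is what is now written as Ohm's law. Ohm himself reported his torsion-balance results in a different algebraic form; the second expression below paraphrases that form in standard modern symbols, which are an editorial gloss rather than Ohm's own notation.

```latex
% Modern statement of Ohm's law:
V = IR \quad\Longleftrightarrow\quad I = \frac{V}{R}

% The form in which Ohm expressed his 1826 results: X is the magnetic
% action of the current, a the exciting force of the circuit, b its
% internal resistance, and x the length of the test conductor:
X = \frac{a}{b + x}
```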

6.
Empirical success is a central criterion for scientific decision-making. Yet its understanding in philosophical studies of science deserves renewed attention: Should philosophers think differently about the advancement of science when they deal with the uncertainty of outcome in ongoing research in comparison with historical episodes? This paper argues that normative appeals to empirical success in the evaluation of competing scientific explanations can result in unreliable conclusions, especially when we are looking at the changeability of direction in unsettled investigations. The challenges we encounter arise from the inherent dynamics of disciplinary and experimental objectives in research practice. In this paper we discuss how these dynamics inform the evaluation of empirical success by analyzing three of its requirements: data accommodation, instrumental reliability, and predictive power. We conclude that the assessment of empirical success in developing inquiry is set against the background of a model's interactive success and prospective value in an experimental context. Our argument is exemplified by the analysis of an apparent controversy surrounding the model of a quantum nose in research on olfaction. Notably, the public narrative of this controversy rests on a distorted perspective on measures of empirical success.

7.
The paper deals with the confusion that has arisen in studying the revival of geodesy in Paris in the 1730s. The episode highlights the vast qualitative differences in science-reporting to be found in periodicals of the early eighteenth century, and the actual roles that certain better-known journals played in the genesis of what became a trademark for eighteenth-century Parisian science.

8.
During the 1930s and 1940s, American physical organic chemists employed electronic theories of reaction mechanisms to construct models offering explanations of organic reactions. But two molecular rearrangements presented enormous challenges to model construction. The Claisen and Cope rearrangements were predominantly inaccessible to experimental investigation and they confounded explanation in theoretical terms. Drawing on the idea that models can be autonomous agents in the production of scientific knowledge, I argue that one group of models in particular was functionally autonomous from the Hughes–Ingold theory. Cope and Hardy's models of the Claisen and Cope rearrangements were resources for the exploration of the Hughes–Ingold theory, which otherwise lacked explanatory power. By generating ‘how-possibly’ explanations, these models explained how these rearrangements could happen rather than why they did happen. Furthermore, although these models were apparently closely connected to theory in terms of their construction, I argue that their partial autonomy stemmed from extra-logical factors concerning the attitudes of American chemists to the Hughes–Ingold theory. And in the absence of a complete theoretical hegemony, a degree of consensus was reached concerning modelling the Claisen rearrangement mechanism.

9.
I consider the way Wittgenstein employed some kinds of sound recordings (but not others) in discussing logical form in the Tractatus logico-philosophicus. The year that Ludwig Wittgenstein was born in Vienna, 1889, nearby developments already underway portended two major changes of the coming century: the advent of controlled heavier-than-air flight and the mass production of musical sound recordings. Before they brought about major social changes, though, these innovations appeared in Europe in the form of children’s toys. Wittgenstein uses the fact that a symphony performance can be constructed from both a written musical score and the grooves of a gramophone record to explain what logical form is. His characterization of logical form in the Tractatus in terms of intertranslatability rather than in terms of interpretability is highlighted by reflecting on the kinds of examples of sound recordings that he did not use to illustrate the notion of logical form. There were other well-known technologies for making visual records of sound at the time, but these did not serve to illustrate logical form.

10.
The considerations set out in the paper are intended to suggest that in practical contexts predictive power does not play the outstanding roles sometimes credited to it in an epistemic framework. Rather, predictive power is part of a network of other merits and achievements, and it needs to be judged differently according to the specific conditions that apply. First, predictions need to be part of an explanatory framework if they are supposed to guide actions reliably. Second, in scientific expertise, the demand for accurate predictions is replaced with the objective of specifying a robust corridor of estimates. Finally, predicting the success of research projects is highly uncertain. The overall purpose of the paper is to enlarge the debate about predictions by addressing specifically the roles of predictions in application-oriented research.

11.
Psychophysics measures the attributes of perceptual experience. The question of whether some of these attributes should be interpreted as more fundamental, or “real,” than others has been answered differently throughout its history. The operationism of Stevens and Boring answers “no,” reacting to the perceived vacuity of earlier debates about fundamentality. The subsequent rise of multidimensional scaling (MDS) implicitly answers “yes” in its insistence that psychophysical data be represented in spaces of low dimensionality. I argue that the return of fundamentality follows from a trend toward increasing epistemic humility. Operationism exhibited a kind of hubris in the constitutive role it assigned to the experimenter's presuppositions, a role abandoned by the algorithmic methods of MDS. This broad epistemic trend is illustrated by following the trajectory of research on a particular candidate attribute: tonal volume.
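As an illustration of the algorithmic turn described here, multidimensional scaling takes a matrix of pairwise dissimilarity judgments (for instance, judged differences between tones) and embeds the stimuli in a space of low dimensionality chosen in advance. A minimal sketch using scikit-learn; the dissimilarity values are invented:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical pairwise dissimilarity judgments among four tones
# (symmetric, zero diagonal), as a psychophysicist might collect them.
dissimilarities = np.array([
    [0.0, 1.0, 2.0, 3.0],
    [1.0, 0.0, 1.0, 2.0],
    [2.0, 1.0, 0.0, 1.0],
    [3.0, 2.0, 1.0, 0.0],
])

# Embed the tones on a single latent dimension; the algorithm, not the
# experimenter's presuppositions, fixes the configuration.
mds = MDS(n_components=1, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarities)
print(coords.ravel())  # tones ordered along one recovered dimension
```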

12.
This article explores the impact of 16th- and 17th-century developments in micrometry on the methods Antoni van Leeuwenhoek employed to measure the microscopic creatures he discovered in various samples collected from his acquaintances and from local water sources. While other publications have presented Leeuwenhoek's measurement methods, an examination of the context of his techniques has been missing. These earlier measurement methods, driven by the need to improve navigation, surveying, astronomy, and ballistics, may have had an impact on Leeuwenhoek's own. Leeuwenhoek was educated principally in the mercantile guild system in Amsterdam and Delft, and rose to positions of responsibility within Delft municipal government. These were the years that led up to his first investigations using the single-lens microscopes he became expert at creating, and to his first letter to the Royal Society in 1673. He also trained in the surveying and liquid-assaying practices of his time, disciplines influenced by Pedro Nunes, Pierre Vernier, René Descartes, and others. While we may never know what inspired Leeuwenhoek's methods, the argument is presented that there were sufficient influences in his life to shape his approach to measuring the invisible.

13.
In this paper, I introduce a new historical case study into the scientific realism debate. During the late eighteenth century, the Scottish natural philosopher James Hutton made two important successful novel predictions. The first concerned granitic veins intruding from granite masses into strata. The second concerned what geologists now term “angular unconformities”: older sections of strata overlain by younger sections, the two resting at different angles, the former typically more inclined than the latter. These predictions, I argue, are potentially problematic for selective scientific realism in that constituents of Hutton's theory that would not be considered even approximately true today played various roles in generating them. The aim here is not to provide a full philosophical analysis but to introduce the case into the debate by detailing the history and showing why, at least prima facie, it presents a problem for selective realism. First, I explicate Hutton's theory. I then give an account of Hutton's predictions and their confirmations. Next, I explain why these predictions are relevant to the realism debate. Finally, I consider which constituents of Hutton's theory are, according to current beliefs, true (or approximately true), which are not (even approximately) true, and which were responsible for these successes.

14.
Although contemporary sociologists of science have sometimes claimed Max Weber as a methodological precursor, they have not examined Weber's own writings about science. Between 1908 and 1912 Weber published a series of critical studies of the extension of scientific authority into public life. The most notable of these concerned attempts to implement the experimental psychology or psycho-physics laboratory in factories and other real-world settings. Weber's critique centered on the problem of social measurement. He emphasized the discontinuities between the space of the laboratory and that of the factory, showing how several qualitative and historically conditioned differences between the two settings rendered the transfer of instruments and methods between them highly problematic. Weber's critical arguments prepared the ground for his greatest foray into empirical sociology, a survey he directed for the Verein für Sozialpolitik investigating the conditions and attitudes affecting the lives and performance of industrial workers. Using a different measuring instrument — the questionnaire — Weber tried to implement a concept of social measurement which implied a different ontology, drawn not from natural sciences but from the historical sciences.

15.
The epistemic problem of assessing the support that some evidence confers on a hypothesis is considered using an extended example from the history of meteorology. In this case, and presumably in others, the problem is to develop techniques of data analysis that will link the sort of evidence that can be collected to hypotheses of interest. This problem is solved by applying mathematical tools to structure the data and connect them to the competing hypotheses. I conclude that mathematical innovations provide crucial epistemic links between evidence and theories precisely because the evidence and theories are mathematically described.

16.
Among the alternatives to non-relativistic quantum mechanics (NRQM) there are those that give different predictions than quantum mechanics in yet-untested circumstances, while remaining compatible with current empirical findings. In order to test these predictions, one must isolate one's system from environmentally induced decoherence, which, on the standard view of NRQM, is the dynamical mechanism responsible for the ‘apparent’ collapse in open quantum systems. But while recent advances in condensed-matter physics may lead in the near future to experimental setups that will allow one to test the two hypotheses, namely genuine collapse vs. decoherence, and hence make progress toward a solution to the quantum measurement problem, those philosophers and physicists who advocate an information-theoretic approach to the foundations of quantum mechanics are still unwilling to acknowledge the empirical character of the issue at stake. Here I argue that in doing so they are displaying an unwarranted double standard.
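The standard picture the abstract alludes to can be stated compactly: entangling a superposed system with distinguishable environment states suppresses the interference terms of the system's reduced density matrix, leaving something observationally indistinguishable from a classical mixture without any genuine collapse. The following is the textbook schematic, not notation from the paper:

```latex
% System-environment entanglement:
\Big(\sum_i c_i \, |s_i\rangle\Big) |E_0\rangle
  \;\longrightarrow\; \sum_i c_i \, |s_i\rangle |E_i\rangle

% Reduced density matrix of the system (environment traced out):
\rho_S = \sum_{i,j} c_i c_j^* \, \langle E_j | E_i \rangle \, |s_i\rangle\langle s_j|

% As the environment records the system, \langle E_j | E_i \rangle \to \delta_{ij},
% so the off-diagonal (interference) terms vanish:
\rho_S \;\longrightarrow\; \sum_i |c_i|^2 \, |s_i\rangle\langle s_i|
```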

17.
In the received version of the development of science, natural kinds are established in the preliminary stages (natural history) and made more precise by measurement (exact science). By examining the move from nineteenth- to twentieth-century biology, this paper unpacks the notion of species as ‘natural kinds’ and grounds for discourse, questioning received notions about both kinds and species. Life sciences in the nineteenth century established several ‘monster-barring’ techniques to block disputes about the precise definition of species. Counterintuitively, precision and definition brought dispute and disrupted exchange. Thus, any attempt to add precision was doomed to failure. By intervening and measuring, the new experimental biology dislocated the established links between natural kinds and kinds of people and institutions. New kinds were built in new places. They were made to measure from the very start. This paper ends by claiming that there was no long-standing ‘species problem’ in the history of biology. That problem is a later construction of the ‘modern synthesis’, well after the disruption of ‘kinds’ and kinds of people. Only then would definitions and precision matter. A new, non-linguistic, take on the incommensurability thesis is hinted at.

18.
Scenario-planning academics and practitioners have observed for more than three decades the importance of this method in dealing with environmental uncertainty. However, there has been no valid scale to help organizational leaders put it into practice. Our review of prior studies identifies problems related to the conceptualization, reliability, and validity of this construct. We address these concerns by developing and validating a measure of scenario planning based on Churchill's paradigm (Journal of Marketing Research, 1979, 16, 64–73). Our data analysis draws on a sample of 133 managers operating in the healthcare field in France. To validate our scale, we used three approaches: first, an exploratory factor analysis; second, an examination of the psychometric properties of all dimensions; and third, a confirmatory factor analysis. The results of this study indicate that scenario planning is a multidimensional construct composed of three dimensions: information acquisition, knowledge dissemination, and scenario development and strategic choices.
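Churchill's paradigm, which the authors follow, pairs item purification with a reliability check, conventionally Cronbach's alpha. The sketch below shows only that reliability step on invented Likert-type responses; the paper's actual analysis additionally involves exploratory and confirmatory factor analysis.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k / (k - 1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical responses: 5 managers x 4 items of a single dimension.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 3, 2],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(scores), 2))  # above ~0.7 is conventionally acceptable
```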

19.
20.
Summary: A comparison of respirometrically and calorimetrically determined data on energy consumption during the pupation of Tenebrio molitor shows that the latter method is the equal of the former, and for moisture-loving species should even prove superior.

I am most grateful to LKB-Produkter AB., S-161125 Bromma 1, Sweden and their staff for the extended loan of a modified Batch Microcalorimeter.
