Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
James McAllister’s 2003 article, ‘Algorithmic randomness in empirical data’, claims that empirical data sets are algorithmically random, and hence incompressible. We show that this claim is mistaken. We present theoretical arguments and empirical evidence for compressibility, and discuss the matter in the framework of Minimum Message Length (MML) inference.

2.
According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, on algorithmic information theory, a string is algorithmically random just if it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain amount of noise.
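A quick way to see what is empirically at stake between entries 1 and 2: a general-purpose compressor only upper-bounds algorithmic complexity, but if a patterned-yet-noisy data set compresses well below its raw size while uniform random bytes do not, that data set is not algorithmically random. The sketch below is my illustration (Python standard library), not code from either paper:

    import math
    import random
    import zlib

    random.seed(0)

    # "Empirical" data: a smooth signal plus small measurement noise,
    # quantized to one byte per sample.
    signal = bytes(
        max(0, min(255, int(128 + 100 * math.sin(i / 20) + random.gauss(0, 3))))
        for i in range(10_000)
    )

    # Algorithmically random data: independent, uniform bytes.
    noise = bytes(random.randrange(256) for _ in range(10_000))

    for name, data in [("noisy signal", signal), ("uniform bytes", noise)]:
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name}: compressed to {ratio:.0%} of original size")

On a typical run the noisy signal compresses markedly while the uniform bytes stay near 100% of their original size; a large gap of this kind is the sort of evidence for compressibility that entry 1 appeals to.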

3.
Summary: Studies were performed on 203 pairs of dog carotid arteries subjected to unidirectional radial compression. Treatment with 80 U/ml purified elastase for 90 min decreased radial stress, but treatment with 640 U/ml collagenase for 90 min did not. These data suggest that elastin, but not collagen, contributes to wall resistance to radial compression.

4.
Causal set theory and the theory of linear structures (which has recently been developed by Tim Maudlin as an alternative to standard topology) share some of their main motivations. In view of that, I raise and answer the question how these two theories are related to each other and to standard topology. I show that causal set theory can be embedded into Maudlin’s more general framework and I characterise what Maudlin’s topological concepts boil down to when applied to discrete linear structures that correspond to causal sets. Moreover, I show that all topological aspects of causal sets that can be described in Maudlin’s theory can also be described in the framework of standard topology. Finally, I discuss why these results are relevant for evaluating Maudlin’s theory. The value of this theory depends crucially on whether it is true that (a) its conceptual framework is as expressive as that of standard topology when it comes to describing well-known continuous as well as discrete models of spacetime and (b) it is even more expressive or fruitful when it comes to analysing topological aspects of discrete structures that are intended as models of spacetime. On one hand, my theorems support (a). The theory is rich enough to incorporate causal set theory and its definitions of topological notions yield a plausible outcome in the case of causal sets. On the other hand, the results undermine (b). Standard topology, too, has the conceptual resources to capture those topological aspects of causal sets that are analysable within Maudlin’s framework. This fact poses a challenge for the proponents of Maudlin’s theory to prove it fruitful.

5.
In this paper we investigate the feasibility of algorithmically deriving precise probability forecasts from imprecise forecasts. We provide an empirical evaluation of precise probabilities that have been derived from two types of imprecise probability forecasts: probability intervals and probability intervals with second-order probability distributions. The minimum cross-entropy (MCE) principle is applied to the former to derive precise (i.e. additive) probabilities; expectation (EX) is used to derive precise probabilities in the latter case. Probability intervals that were constructed without second-order probabilities tended to be narrower than and contained in those that were amplified by second-order probabilities. Evidence that this narrowness is due to motivational bias is presented. Analysis of forecasters' mean Probability Scores for the derived precise probabilities indicates that it is possible to derive precise forecasts whose external correspondence is as good as directly assessed precise probability forecasts. The forecasts of the EX method, however, are more like the directly assessed precise forecasts than those of the MCE method.
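To make the MCE step concrete, here is a minimal sketch (mine, not the authors' code; the interval bounds are hypothetical): among all additive probability vectors lying inside the assessed intervals, choose the one with minimum cross-entropy relative to a uniform reference distribution.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical interval forecasts [lower, upper] for three mutually
    # exclusive outcomes.
    lower = np.array([0.2, 0.1, 0.3])
    upper = np.array([0.5, 0.4, 0.6])
    q = np.full(3, 1 / 3)  # uniform reference distribution

    def cross_entropy(p):
        # KL divergence of p from q; minimizing it yields the additive
        # distribution closest to q that still respects the intervals.
        return float(np.sum(p * np.log(p / q)))

    midpoints = (lower + upper) / 2
    res = minimize(
        cross_entropy,
        x0=midpoints / midpoints.sum(),  # normalized midpoints as a start
        method="SLSQP",
        bounds=list(zip(lower, upper)),
        constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
    )
    print("derived precise forecast:", np.round(res.x, 3))

With a uniform reference distribution, minimizing cross-entropy coincides with picking the maximum-entropy distribution inside the intervals; the EX method instead takes expectations over the forecaster's second-order distribution.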

6.
7.
Standard objections to the notion of a hedged, or ceteris paribus, law of nature usually boil down to the claim that such laws would be either (1) irredeemably vague, (2) untestable, (3) vacuous, (4) false, or a combination thereof. Using epidemiological studies in nutrition science as an example, I show that this is not true of the hedged law-like generalizations derived from data models used to interpret large and varied sets of empirical observations. Although it may be ‘in principle impossible’ to construct models that explicitly identify all potential causal interferers with the relevant generalization, the view that our failure to do so is fatal to the very notion of a cp-law is plausible only if one illicitly infers metaphysical impossibility from epistemic impossibility. I close with the suggestion that a model-theoretic approach to cp-laws poses a problem for recent attempts to formulate a Mill–Ramsey–Lewis theory of cp-laws.

8.
The recent discovery of the Higgs at 125 GeV by the ATLAS and CMS experiments at the LHC has put significant pressure on a principle which has guided much theorizing in high energy physics over the last 40 years, the principle of naturalness. In this paper, I provide an explication of the conceptual foundations and physical significance of the naturalness principle. I argue that the naturalness principle is well-grounded both empirically and in the theoretical structure of effective field theories, and that it was reasonable for physicists to endorse it. Its possible failure to be realized in nature, as suggested by recent LHC data, thus represents an empirical challenge to certain foundational aspects of our understanding of QFT. In particular, I argue that its failure would undermine one class of recent proposals which claim that QFT provides us with a picture of the world as being structured into quasi-autonomous physical domains.

9.
In this paper, I compare theory-laden perceptions with imputed data sets. The similarities between the two allow me to show how the phenomenon of theory-ladenness can manifest itself in statistical analyses. More importantly, elucidating the differences between them will allow me to broaden the focus of the existing literature on theory-ladenness and to introduce some much-needed nuances. The topic of statistical imputation has received no attention in philosophy of science. Yet, imputed data sets are very similar to theory-laden perceptions, and they are now an integral part of many scientific inferences. Unlike the existence of theory-laden perceptions, that of imputed data sets cannot be challenged or reduced to a manageable source of error. In fact, imputed data sets are created purposefully in order to improve the quality of our inferences. They do not undermine the possibility of scientific knowledge; on the contrary, they are epistemically desirable.
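For readers unfamiliar with the practice, a toy illustration of imputation (mine; the entry discusses imputed data sets in general, not any particular method):

    import numpy as np
    from sklearn.impute import SimpleImputer

    # A tiny data set with missing entries (np.nan).
    X = np.array([[1.0, 2.0],
                  [np.nan, 3.0],
                  [7.0, np.nan],
                  [4.0, 5.0]])

    # Replace each missing value with its column mean -- the simplest of
    # the model-based strategies for creating an imputed data set.
    X_imputed = SimpleImputer(strategy="mean").fit_transform(X)
    print(X_imputed)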

10.
In this paper I draw the distinction between intuitive and theory-relative accounts of the time reversal symmetry and identify problems with each. I then propose an alternative to these two types of accounts that steers a middle course between them and minimizes each account’s problems. This new account of time reversal requires that, when dealing with sets of physical theories that satisfy certain constraints, we determine all of the discrete symmetries of the physical laws we are interested in and look for involutions that leave spatial coordinates unaffected and that act consistently across our physical laws. This new account of time reversal has the interesting feature that it makes the nature of the time reversal symmetry an empirical feature of the world without requiring us to assume that any particular physical theory is time reversal invariant from the start. Finally, I provide an analysis of several toy cases that reveals differences between my new account of time reversal and its competitors.

11.
Composite cylindrical tubes were fabricated from T700/QY8911 prepreg by hot-press molding, and their energy-absorption behaviour under axial compression was tested. The tests yielded the principal energy-absorption parameters: peak load, mean crushing load and specific energy absorption. The results show that energy-absorption performance and the crushing failure mode depend not only on the properties of the carbon-fibre material but also on the molding process and on the deliberately thinned (chamfered) tube end that acts as a weak trigger section: thinning the tube end effectively lowers the peak load during crushing and initiates progressive crushing failure of the tube. A finite element model of progressive damage of the composite tube under axial compression was established, and the main energy-absorption parameters computed from it agree well with the test results, which shows that the simulation approach is feasible.
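The three parameters named above are standard crashworthiness quantities, computed from the load-displacement record of the crush test. A sketch with synthetic data (not the paper's measurements; the curve shape and crushed mass are assumptions for illustration):

    import numpy as np

    # Hypothetical load-displacement record from an axial crush test.
    disp = np.linspace(0.0, 0.06, 200)                     # crushed length (m)
    load = (30e3 * np.exp(-((disp - 0.005) / 0.004) ** 2)  # initial peak (N)
            + 18e3 * (disp > 0.01))                        # crush plateau (N)
    crushed_mass = 0.05                                    # crushed material (kg)

    peak_load = load.max()
    energy = np.sum(0.5 * (load[1:] + load[:-1]) * np.diff(disp))  # trapezoid rule
    mean_crush_load = energy / disp[-1]
    sea = energy / crushed_mass            # specific energy absorption (J/kg)

    print(f"peak load       = {peak_load / 1e3:.1f} kN")
    print(f"mean crush load = {mean_crush_load / 1e3:.1f} kN")
    print(f"SEA             = {sea / 1e3:.1f} kJ/kg")

An effective end-thinning trigger shows up in such a record as a lower initial peak relative to the plateau, which is exactly the effect the tests above report.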

12.
This paper provides an account of mid-level models which calibrate highly theoretical agent-based models of scientific communities by incorporating empirical information from real-world systems. As a result, these models more closely correspond with real-world communities, and are better suited for informing policy decisions than extant how-possibly models. I provide an exemplar of a mid-level model of science funding allocation that incorporates bibliometric data from scientific publications and data generated from empirical studies of peer review into an epistemic landscape model. The results of my model show that on a dynamic epistemic landscape, allocating funding by modified and pure lottery strategies performs comparably to a perfect selection funding allocation strategy. These results support the idea that introducing randomness into a funding allocation process may be a tractable policy worth exploring further through pilot studies. My exemplar shows that agent-based models need not be restricted to the abstract and the a priori; they can also be informed by empirical data.
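A radically simplified sketch of the three allocation strategies compared (mine; it drops the epistemic landscape dynamics and the bibliometric calibration that are the paper's actual contribution):

    import random

    random.seed(1)
    N_PROJECTS, N_GRANTS, ROUNDS = 100, 10, 200

    def run(strategy):
        total = 0.0
        for _ in range(ROUNDS):
            merit = [random.random() for _ in range(N_PROJECTS)]  # true promise
            noisy = [m + random.gauss(0, 0.3) for m in merit]     # review signal
            if strategy == "perfect":      # fund the truly best projects
                funded = sorted(range(N_PROJECTS), key=lambda i: merit[i])[-N_GRANTS:]
            elif strategy == "lottery":    # pure lottery
                funded = random.sample(range(N_PROJECTS), N_GRANTS)
            else:                          # modified lottery: screen, then draw
                shortlist = sorted(range(N_PROJECTS), key=lambda i: noisy[i])[-3 * N_GRANTS:]
                funded = random.sample(shortlist, N_GRANTS)
            total += sum(merit[i] for i in funded)
        return total / ROUNDS

    for s in ("perfect", "modified", "lottery"):
        print(f"{s:>8}: mean funded merit per round = {run(s):.2f}")

The toy only shows the mechanics; the paper's finding, that lottery strategies perform comparably to perfect selection on a dynamic landscape, comes from the calibrated model, not from anything this simple.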

13.
In this paper we focus on the effect of (i) deleting, (ii) restricting or (iii) not restricting seasonal intercept terms on forecasting sets of seasonally cointegrated macroeconomic time series for Austria, Germany and the UK. A first empirical result is that the number of cointegrating vectors as well as the relevant estimated parameter values vary across the three models. A second result is that the quality of out-of-sample forecasts critically depends on the way seasonal constants are treated. In most cases, predictive performance can be improved by restricting the effects of seasonal constants. However, we find that the relative advantages and disadvantages of each of the three methods vary across the data sets and may depend on sample-specific features. © 1998 John Wiley & Sons, Ltd.
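A minimal sketch of the kind of comparison run here (not the paper's specification; it uses the VECM class from statsmodels, with its quarterly dummy terms standing in for the paper's seasonal constants, so treat the API details as assumptions to verify):

    import numpy as np
    from statsmodels.tsa.vector_ar.vecm import VECM

    rng = np.random.default_rng(0)
    T = 120  # thirty years of hypothetical quarterly data
    trend = np.cumsum(rng.normal(0, 1, T))              # shared stochastic trend
    season = np.tile([2.0, -1.0, 0.5, -1.5], T // 4)    # quarterly pattern
    y = np.column_stack([
        trend + season + rng.normal(0, 0.5, T),
        0.8 * trend + 0.5 * season + rng.normal(0, 0.5, T),
    ])
    train, test = y[:-8], y[-8:]

    for seasons in (0, 4):  # without vs. with seasonal dummy terms
        res = VECM(train, k_ar_diff=4, coint_rank=1,
                   deterministic="co", seasons=seasons).fit()
        rmse = float(np.sqrt(np.mean((res.predict(steps=8) - test) ** 2)))
        print(f"seasons={seasons}: out-of-sample RMSE = {rmse:.3f}")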

14.
In this paper I argue that the case of Einstein’s special relativity vs. Hendrik Lorentz’s ether theory can be decided in terms of empirical evidence, in spite of the predictive equivalence between the theories. In the historical and philosophical literature this case has been typically addressed focusing on non-empirical features (non-empirical virtues in special relativity and/or non-empirical flaws in the ether theory). I claim that non-empirical features are not enough to provide a fully objective and uniquely determined choice in instances of empirical equivalence. However, I argue that if we consider arguments proposed by Richard Boyd, and by Larry Laudan and Jarrett Leplin, a choice based on non-entailed empirical evidence favoring Einstein’s theory can be made.

15.
Based on tests of Q460 high-strength steel angles compressed eccentrically at both ends, the residual-stress distribution over the cross-section and the initial geometric imperfections of the high-strength angles are reported. The test results are analysed, discussed and compared against several design methods, and the overall stability capacity of high-strength angle members under eccentric compression at both ends is studied. The results show that current transmission-tower design following the American guide ASCE 10-1997, the Chinese regulation DL/T 5154-2002 (Technical Regulation of Design for Tower and Pole Structures of Overhead Transmission Line) and the Chinese code GB 50017-2003 (Code for Design of Steel Structures) is conservative. A practical design formula is therefore proposed in which the effective slenderness ratio is calculated about the axis parallel to the angle leg, the member is treated as an axially compressed column, and column curve a is adopted; the proposed formula agrees well with the test results and can serve as a reference for engineering design.
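For orientation only, the "column curve a" reduction works as a Perry-Robertson-type formula. The sketch below uses the curve-a coefficients as I recall them from GB 50017-2003 (alpha1 = 0.41, alpha2 = 0.986, alpha3 = 0.152); treat them as an assumption to verify against the code, and note that the paper's own proposed formula is not reproduced here:

    import math

    def phi_curve_a(lam, fy=460.0, E=206000.0):
        # Stability factor for column curve a; lam is the effective
        # slenderness ratio (about the axis parallel to the angle leg,
        # per the proposal above). Units: fy and E in N/mm^2.
        lam_n = (lam / math.pi) * math.sqrt(fy / E)  # normalized slenderness
        if lam_n <= 0.215:
            return 1.0 - 0.41 * lam_n ** 2
        a = 0.986 + 0.152 * lam_n + lam_n ** 2
        return (a - math.sqrt(a ** 2 - 4.0 * lam_n ** 2)) / (2.0 * lam_n ** 2)

    # Hypothetical member: effective slenderness 60, Q460 steel.
    print(f"phi = {phi_curve_a(60.0):.3f}")  # design capacity = phi * A * f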

16.
This paper examines the history of two related problems concerning earthquakes, and the way in which a theoretical advance was involved in their resolution. The first problem is the development of a physical, as opposed to empirical, scale for measuring the size of earthquakes. The second problem is that of understanding what happens at the source of an earthquake. There was a controversy about what the proper model for the seismic source mechanism is, which was finally resolved through advances in the theory of elastic dislocations. These two problems are linked, because the development of a physically-based magnitude scale requires an understanding of what goes on at the seismic source. I will show how the theoretical advances allowed seismologists to re-frame the questions they were trying to answer, so that the data they gathered could be brought to bear on the problem of seismic sources in new ways.

17.
In an attempt to determine the epistemic status of computer simulation results, philosophers of science have recently explored the similarities and differences between computer simulations and experiments. One question that arises is whether and, if so, when, simulation results constitute novel empirical data. It is often supposed that computer simulation results could never be empirical or novel because simulations never interact with their targets, and cannot go beyond their programming. This paper argues against this position by examining whether, and under what conditions, the features of empiricality and novelty could be displayed by computer simulation data. I show that, to the extent that certain familiar measurement results have these features, so can some computer simulation results.

18.
More and more ensemble models are used to forecast business failure. It is generally known that the performance of an ensemble relies heavily on the diversity between its base classifiers. To achieve diversity, this study uses kernel‐based fuzzy c‐means (KFCM) to organize firm samples and designs a hierarchical selective ensemble model for business failure prediction (BFP). First, three KFCM methods—Gaussian KFCM (GFCM), polynomial KFCM (PFCM), and hyper‐tangent KFCM (HFCM)—are employed to partition the financial data set into three data sets. A neural network (NN) is then adopted as the base classifier for BFP, and the three data sets derived from the three KFCM methods are used to build three classifier pools. Next, classifiers are fused by the two‐layer hierarchical selective ensemble method. In the first layer, classifiers are ranked by their prediction accuracy, and the stepwise forward selection method is employed to integrate classifiers selectively according to that accuracy. In the second layer, the three selective ensembles from the first layer are integrated again to produce the final verdict. The empirical study uses financial data from Chinese listed companies and compares the model with other ensemble models and with all of its component models. The conclusion is that the two‐layer hierarchical selective ensemble is good at forecasting business failure.
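A compressed sketch (mine, far simpler than the authors' pipeline) of the first-layer selective step described above: rank a pool of NN base classifiers by validation accuracy, then add them greedily while the voting ensemble's accuracy does not drop. The data set and pool sizes are placeholders.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

    # A small pool of NN base classifiers (the paper builds three such
    # pools from its GFCM, PFCM and HFCM partitions of the data).
    pool = [MLPClassifier(hidden_layer_sizes=(h,), max_iter=1000,
                          random_state=s).fit(X_tr, y_tr)
            for h, s in [(5, 0), (10, 1), (20, 2), (10, 3), (5, 4)]]
    pool.sort(key=lambda c: c.score(X_val, y_val), reverse=True)  # rank by accuracy

    def vote_accuracy(members):
        votes = np.mean([c.predict(X_val) for c in members], axis=0) >= 0.5
        return float(np.mean(votes == y_val))

    selected = [pool[0]]
    for clf in pool[1:]:  # stepwise forward selection
        if vote_accuracy(selected + [clf]) >= vote_accuracy(selected):
            selected.append(clf)

    print(f"kept {len(selected)} of {len(pool)} classifiers; "
          f"ensemble accuracy = {vote_accuracy(selected):.3f}")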

19.
Bogen and Woodward's distinction between data and phenomena raises the need to understand the structure of the data-to-phenomena and theory-to-phenomena inferences. I suggest that one way to study the structure of these inferences is to analyze the role of the assumptions involved in them: What kind of assumptions are they? How do these assumptions contribute to the practice of identifying phenomena? In this paper, using examples from atmospheric dynamics, I develop an account of the practice of identifying the target in the data-to-phenomena and theory-to-phenomena inferences in which assumptions about spatiotemporal scales play a central role in the identification of parameters that describe the target system. I also argue that these assumptions are not only empirical but also idealizing and abstracting. I conclude the paper with a reflection on the role of idealizations in modeling.

20.
Bruno Latour claims to have shown that a Kantian model of knowledge, which he describes as seeking to unite a disembodied transcendental subject with an inaccessible thing-in-itself, is dramatically falsified by empirical studies of science in action. Instead, Latour puts central emphasis on scientific practice, and replaces this Kantian model with a model of “circulating reference.” Unfortunately, Latour's alternative schematic leaves out the scientific subject. I repair this oversight through a simple mechanical procedure. By putting a slight spin on Latour's diagrammatic representation of his theory, I discover a new space for a post-Kantian scientific subject, a subject brilliantly described by Ludwik Fleck. The neglected subjectivities and ceaseless practices of science are thus re-united.
