Similar Documents
20 similar documents found (search time: 15 ms)
1.
2.
Chomsky’s principle of epistemological tolerance says that in theoretical linguistics contradictions between the data and the hypotheses may be temporarily tolerated in order to protect the explanatory power of the theory. The paper raises the following problem: what kinds of contradictions may be tolerated between the data and the hypotheses in theoretical linguistics? First, a model of paraconsistent logic is introduced which differentiates between weak and strong contradiction. As a second step, a case study is carried out which shows that the principle of epistemological tolerance may be interpreted as the tolerance of weak contradiction. The third step of the argumentation focuses on another case study, which shows that the principle of epistemological tolerance must not be interpreted as the tolerance of strong contradiction. The reason for the latter insight is the unreliability and uncertainty of introspective data. From this finding the author concludes that integrating different data types may lead to the improvement of current theoretical linguistics, and that such integration requires a novel methodology which, for the time being, is not available.

3.
Abductive reasoning takes place in forming “hypotheses” in order to explain “facts.” The concept of abduction thus promises an understanding of creativity in science and learning. It also raises, however, a number of problems, some of which will be discussed in this paper. After analyzing the difference between induction and abduction (1), I shall discuss Peirce's claim that there is a “logic” of abduction (2). The thesis is that this claim can be understood if we make a clear distinction between the inferential and the perceptive elements of abductive reasoning. For Peirce, the creative act of forming explanatory hypotheses and the emergence of “new ideas” belongs exclusively to the perceptive side of abduction. It is therefore necessary to study the role of perception in abductive reasoning (3). A further problem is the question whether there is a relationship between abduction and Peirce's concept of “theorematic reasoning” in mathematics (4). Both forms of reasoning could be connected, because both are based on perception. The last problem concerns the role of instincts in explaining the success of abductive reasoning in science, and the question whether the concept of instinct might be replaced by methods of inquiry (5).

4.
This paper was written with two aims in mind. A large part of it is just an exposition of Tarski's theory of truth. Philosophers do not agree on how Tarski's theory is related to their investigations. Some of them doubt whether that theory has any relevance to philosophical issues, and in particular whether it can be applied in dealing with the problems of philosophy (theory) of science. In this paper I argue that Tarski's chief concern was the following question. Suppose a language L belongs to the class of languages for which, in full accordance with some formal conditions set in advance, we are able to define the class of all the semantic interpretations the language may acquire. Every interpretation of L can be viewed as a certain structure to which the expressions of the language may refer. Suppose that a specific interpretation of the language L was singled out as the intended one. Suppose, moreover, that the intended interpretation can be characterized in a metalanguage L⁺. If the above assumptions are satisfied, can the notion of truth for L be defined in the metalanguage L⁺ and, if it can, how can this be done?

5.
In recent controversies about Intelligent Design Creationism (IDC), the principle of methodological naturalism (MN) has played an important role. In this paper, an often neglected distinction is made between two different conceptions of MN, each with its respective rationale and with a different view on the proper role of MN in science. According to one popular conception, MN is a self-imposed or intrinsic limitation of science, which means that science is simply not equipped to deal with claims of the supernatural (Intrinsic MN or IMN). Alternatively, we will defend MN as a provisory and empirically grounded attitude of scientists, which is justified in virtue of the consistent success of naturalistic explanations and the lack of success of supernatural explanations in the history of science (Provisory MN or PMN). Science does have a bearing on supernatural hypotheses, and its verdict is uniformly negative. We will discuss five arguments that have been proposed in support of IMN: the argument from the definition of science, the argument from lawful regularity, the science stopper argument, the argument from procedural necessity, and the testability argument. We conclude that IMN, because of its philosophical flaws, proves to be an ill-advised strategy to counter the claims of IDC. Evolutionary scientists are on firmer ground if they discard supernatural explanations on purely evidential grounds, instead of ruling them out by philosophical fiat.

6.
We put forward a possible new interpretation and explanatory framework for quantum theory. The basic hypothesis underlying this new framework is that quantum particles are conceptual entities. More concretely, we propose that quantum particles interact with ordinary matter (nuclei, atoms, molecules, macroscopic material entities, measuring apparatuses) in a similar way to how human concepts interact with memory structures, human minds or artificial memories. We analyze the most characteristic aspects of quantum theory, i.e. entanglement and non-locality, interference and superposition, identity and individuality, in the light of this new interpretation, and we put forward a specific explanation and understanding of these aspects. The basic hypothesis of our framework gives rise in a natural way to a Heisenberg uncertainty principle which introduces an understanding of the general situation of ‘the one and the many’ in quantum physics. A specific view on macro and micro, different from the common one, follows from the basic hypothesis and leads to an analysis of Schrödinger’s Cat paradox and the measurement problem different from the existing ones. We reflect on the influence of this new quantum interpretation and explanatory framework on the global nature and evolutionary aspects of the world and human worldviews, and point out potential explanations for specific situations, such as the generation problem in particle physics, the confinement of quarks and the existence of dark matter.

7.
There has been little research into the weak kinds of negating hypotheses. Hypotheses may be unfalsifiable: in this case it is impossible to find a contradiction in some area of the conceptual systems in which they are incorporated. Notwithstanding this fact, it is sometimes necessary to construct ways of rejecting the unfalsifiable hypothesis at hand by resorting to some external forms of negation, external because we want to avoid any arbitrary and subjective elimination, which would be rationally or epistemologically unjustified. I will consider a kind of “weak” (unfalsifiable) hypothesis that is hard to negate, and ways of making its negation easier. In these cases the subject can “rationally” decide to withdraw his hypotheses even in contexts where it is “impossible” to find “explicit” contradictions: the use of negation as failure (an interesting technique for negating hypotheses and accessing new ones, suggested by artificial intelligence) is illuminating. I plan to explore whether this kind of negation can be employed to model hypothesis withdrawal in Poincaré's conventionalism of the principles of physics and in Freudian analytic reasoning.

8.
9.
The set of k points that optimally represents a distribution in terms of mean squared error has been called the set of principal points (Flury 1990). Principal points are a special case of self-consistent points. Any given set of k distinct points in R^p induces a partition of R^p into Voronoi regions, or domains of attraction, according to minimal distance. A set of k points is called self-consistent for a distribution if each point equals the conditional mean of the distribution over its respective Voronoi region. For symmetric multivariate distributions, sets of self-consistent points typically form symmetric patterns. This paper investigates the optimality of different symmetric patterns of self-consistent points for symmetric multivariate distributions, and in particular for the bivariate normal distribution. These results are applied to the problem of estimating principal points.
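The self-consistency property described above lends itself to a simple fixed-point iteration: partition a large sample into the Voronoi cells of the current points, then move each point to the mean of its cell. The sketch below is our illustration of that idea (a plain Lloyd-style iteration on a univariate normal sample, not the estimation method of the paper); the function name and sample sizes are ours.

```python
import random

def lloyd_1d(sample, centers, iters=50):
    # Alternate the two halves of the self-consistency condition:
    # (1) assign each observation to the nearest center (its Voronoi cell),
    # (2) replace each center by the mean of its cell.
    for _ in range(iters):
        cells = [[] for _ in centers]
        for x in sample:
            j = min(range(len(centers)), key=lambda i: abs(x - centers[i]))
            cells[j].append(x)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(cells)]
    return sorted(centers)

random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]
pts = lloyd_1d(sample, [-1.0, 1.0])
# For k = 2 and the standard normal, the principal points are
# +/- sqrt(2/pi) ~ +/- 0.798; the iteration approximates them.
```

For k = 2 and a symmetric distribution the fixed point is itself symmetric, which is the pattern phenomenon the abstract refers to.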

10.
In spite of its success, Neo-Darwinism is faced with major conceptual barriers to further progress, deriving directly from its metaphysical foundations. Most importantly, neo-Darwinism fails to recognize a fundamental cause of evolutionary change, “niche construction”. This failure restricts the generality of evolutionary theory, and introduces inaccuracies. It also hinders the integration of evolutionary biology with neighbouring disciplines, including ecosystem ecology, developmental biology, and the human sciences. Ecology is forced to become a divided discipline, developmental biology is stubbornly difficult to reconcile with evolutionary theory, and the majority of biologists and social scientists are still unhappy with evolutionary accounts of human behaviour. The incorporation of niche construction as both a cause and a product of evolution removes these disciplinary boundaries while greatly generalizing the explanatory power of evolutionary theory.

11.
Given k rooted binary trees A_1, A_2, ..., A_k, with labeled leaves, we generate C, a unique system of lineage constraints on common ancestors. We then present an algorithm for constructing the set of rooted binary trees B compatible with all of A_1, A_2, ..., A_k. The running time to obtain one such supertree is O(k^2 n^2), where n is the number of distinct leaves in all of the trees A_1, A_2, ..., A_k.

12.
Efficient algorithms for agglomerative hierarchical clustering methods
Whenever n objects are characterized by a matrix of pairwise dissimilarities, they may be clustered by any of a number of sequential, agglomerative, hierarchical, nonoverlapping (SAHN) clustering methods. These SAHN clustering methods are defined by a paradigmatic algorithm that usually requires O(n^3) time, in the worst case, to cluster the objects. An improved algorithm (Anderberg 1973), while still requiring O(n^3) worst-case time, can reasonably be expected to exhibit O(n^2) expected behavior. By contrast, we describe a SAHN clustering algorithm that requires O(n^2 log n) time in the worst case. When SAHN clustering methods exhibit reasonable space distortion properties, further improvements are possible. We adapt a SAHN clustering algorithm, based on the efficient construction of nearest neighbor chains, to obtain a reasonably general SAHN clustering algorithm that requires in the worst case O(n^2) time and space.

Whenever n objects are characterized by k-tuples of real numbers, they may be clustered by any of a family of centroid SAHN clustering methods. These methods are based on a geometric model in which clusters are represented by points in k-dimensional real space and points being agglomerated are replaced by a single (centroid) point. For this model, we have solved a class of special packing problems involving point-symmetric convex objects and have exploited it to design an efficient centroid clustering algorithm. Specifically, we describe a centroid SAHN clustering algorithm that requires O(n^2) time, in the worst case, for fixed k and for a family of dissimilarity measures including the Manhattan, Euclidean, Chebychev and all other Minkowski metrics.

This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung.
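The nearest-neighbor-chain idea can be sketched compactly: grow a chain of nearest neighbors until two clusters are each other's nearest neighbor, merge that reciprocal pair, and continue from the shortened chain. The toy implementation below is our own illustration for 1-D points with complete linkage via the Lance-Williams max update; it is not the paper's O(n^2) algorithm with its packing machinery, and all names are ours.

```python
from itertools import combinations

def nn_chain_complete(points):
    """Agglomerate singletons with complete linkage using a
    nearest-neighbor chain; returns merges as (id_a, id_b, height)."""
    # Pairwise dissimilarities among active cluster ids (here: |x - y|).
    d = {frozenset(p): abs(points[p[0]] - points[p[1]])
         for p in combinations(range(len(points)), 2)}
    active = set(range(len(points)))
    nxt = len(points)          # id for the next merged cluster
    merges, chain = [], []
    while len(active) > 1:
        if not chain:
            chain.append(next(iter(active)))
        a = chain[-1]
        # Nearest active neighbor of the chain tip.
        b = min((c for c in active if c != a),
                key=lambda c: d[frozenset((a, c))])
        if len(chain) > 1 and chain[-2] == b:
            # Reciprocal nearest neighbors: merge them.
            chain.pop(); chain.pop()
            merges.append((a, b, d[frozenset((a, b))]))
            active -= {a, b}
            # Lance-Williams update for complete linkage: take the max.
            for c in active:
                d[frozenset((nxt, c))] = max(d[frozenset((a, c))],
                                             d[frozenset((b, c))])
            active.add(nxt)
            nxt += 1
        else:
            chain.append(b)
    return merges

merges = nn_chain_complete([0.0, 0.1, 5.0, 5.2])
# Three merges; the final complete-linkage merge is at height 5.2.
```

Because complete linkage is reducible, merging a reciprocal pair never invalidates the earlier part of the chain, which is what saves rescanning all pairs after each merge.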

13.
Proportional link linkage (PLL) clustering methods are a parametric family of monotone invariant agglomerative hierarchical clustering methods. This family includes the single, minimedian, and complete linkage clustering methods as special cases; its members are used in psychological and ecological applications. Since the literature on clustering space distortion is oriented to quantitative input data, we adapt its basic concepts to input data with only ordinal significance and analyze the space distortion properties of PLL methods. To enable PLL methods to be used when the number n of objects being clustered is large, we describe an efficient PLL algorithm that operates in O(n^2 log n) time and O(n^2) space.

This work was partially supported by the Natural Sciences and Engineering Research Council of Canada and by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung.

14.
Several methods have recently been introduced for investigating relations between three interpoint proximity matrices A, B, C, each of which furnishes a different type of distance between the same objects. Smouse, Long, and Sokal (1986) investigate the partial correlation between A and B conditional on C. Dow and Cheverud (1985) ask whether corr(A, C) equals corr(B, C). Manly (1986) investigates regression-like models for predicting one matrix as a function of others. We have investigated rejection rates of these methods when their null hypotheses are true but the data are spatially autocorrelated (SA). That is, A and B are distance matrices from independent realizations of the same SA generating process, and C is a matrix of geographic connections. SA causes all the models to be liberal, because the hypothesis of equally likely row/column permutations invoked by all these methods is untrue when data are SA. Consequently, we cannot unreservedly recommend the use of any of these methods with SA data. However, if SA is weak, the Smouse-Long-Sokal method, used with a conservative critical value, is unlikely to reject falsely.
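The row/column permutation hypothesis these methods share can be made concrete with a small Mantel-style test: correlate the off-diagonal entries of A and B, then recompute that correlation under random simultaneous row/column permutations of B. The sketch below is our illustration of the basic Mantel test (not the partial-correlation variant discussed in the abstract); the function name and toy data are ours.

```python
import random

def mantel_test(A, B, n_perm=999, seed=1):
    """Permutation test of association between two n x n distance
    matrices; returns (observed correlation, one-sided p-value)."""
    n = len(A)
    idx = [(i, j) for i in range(n) for j in range(i + 1, n)]

    def corr(perm):
        # Pearson correlation of upper-triangle entries, with the
        # same permutation applied to B's rows and columns.
        xs = [A[i][j] for i, j in idx]
        ys = [B[perm[i]][perm[j]] for i, j in idx]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        return sxy / (sxx * syy) ** 0.5

    rng = random.Random(seed)
    observed = corr(list(range(n)))
    count = sum(corr(rng.sample(range(n), n)) >= observed
                for _ in range(n_perm))
    return observed, (count + 1) / (n_perm + 1)

# Two nearly identical point configurations on a line (toy data).
p = [0.0, 1.0, 2.0, 3.0, 4.0, 10.0]
q = [0.0, 1.1, 1.9, 3.2, 4.1, 9.5]
A = [[abs(a - b) for b in p] for a in p]
B = [[abs(a - b) for b in q] for a in q]
obs, pval = mantel_test(A, B)
```

Spatial autocorrelation breaks exactly the exchangeability that `rng.sample` assumes here: permuted configurations are no longer equally likely under the null, which is why the tests become liberal.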

15.
With scale relativity theory, Laurent Nottale has provided a powerful conceptual and mathematical framework, with numerous validated predictions, that has fundamental implications and applications for all sciences. We discuss how this extended framework, reviewed in Nottale (Found Sci 152 (3):101–152, 2010a), may help facilitate integration across multiple size and time frames in systems biology, and the development of a scale relative biology with increased explanatory power.

16.
What I call theoretical abduction (sentential and model-based) certainly illustrates much of what is important in abductive reasoning, especially the objective of selecting and creating a set of hypotheses that are able to dispense good (preferred) explanations of data, but it fails to account for many cases of explanation occurring in science or in everyday reasoning when the exploitation of the environment is crucial. The concept of manipulative abduction is devoted to capturing the role of action in many interesting situations: action provides otherwise unavailable information that enables the agent to solve problems by starting and performing a suitable abductive process of generation or selection of hypotheses. Many external things, usually inert from the epistemological point of view, can be transformed into what I call epistemic mediators, which are illustrated in the last part of the paper, together with an analysis of the related notions of “perceptual and inceptual rehearsal” and of “external representation”.

17.
Much of the focus on Poincaré’s philosophy of science has been on the notion of convention, a crucial concept that has become distinctive of his position. Other notions, however, have received much less attention. That is the case of verifiable hypotheses. This kind of hypothesis seems to be constituted from the generalization of several observable facts, so in order to understand what these hypotheses are, we need to know what a fact is for Poincaré. He divides facts into brute facts and scientific facts. The characterization of this duality is not trivial at all, and it leads us to the following questions, which we will discuss in this paper: (1) which part of a scientific fact is construction and which part is translation, that is, what remains of the brute fact in the scientific one? and (2) when we conceive a generalized hypothesis, are we supposed to do so from scientific facts or from brute facts? Clarifying these questions could allow us to distinguish the part of construction from the part of translation in the first steps of science, which is essential for a better understanding of Poincaré’s conception of science.

18.
In the philosophy of mind, physicalism has long been committed to the epistemological principle that physical knowledge is complete (the completeness principle). In recent years, anti-physicalists have tried to attack the completeness principle by means of the explanatory gap problem, and the most recent physicalist response has been to adopt a phenomenal concept strategy. This paper presents a minimal physicalist proposal. I argue that although different physicalists understand the phenomenal concept strategy differently, the minimal physicalist proposal can not only win general acceptance among physicalists who uphold the phenomenal concept strategy, but can also effectively defuse the anti-physicalist attack on the completeness principle.

19.
Clustering with a criterion which minimizes the sum of squared distances to cluster centroids is usually done in a heuristic way. An exact polynomial algorithm, with a complexity in O(N^(p+1) log N), is proposed for minimum sum of squares hierarchical divisive clustering of points in a p-dimensional space with small p. Empirical complexity is one order of magnitude lower. Data sets with N = 20000 for p = 2, N = 1000 for p = 3, and N = 200 for p = 4 are clustered in a reasonable computing time.
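To make the sum-of-squares criterion concrete, here is the simplest exact special case, our own illustration rather than the paper's algorithm: for univariate data an optimal two-cluster partition must split the sorted sequence, so scanning the n − 1 split points finds the minimum-SSQ bipartition exactly.

```python
def best_binary_split(xs):
    """Exact minimum sum-of-squares bipartition of 1-D data:
    an optimal two-cluster partition of points on a line splits the
    sorted sequence, so only n - 1 candidate splits need checking."""
    xs = sorted(xs)

    def ssq(seg):
        # Sum of squared distances to the segment's centroid.
        m = sum(seg) / len(seg)
        return sum((x - m) ** 2 for x in seg)

    best = min(range(1, len(xs)),
               key=lambda i: ssq(xs[:i]) + ssq(xs[i:]))
    return xs[:best], xs[best:]

left, right = best_binary_split([10, 0, 1, 11, 0.5, 10.5])
# Splits the two obvious groups: [0, 0.5, 1] and [10, 10.5, 11].
```

Applying such an exact split recursively gives a divisive hierarchy; the paper's contribution is doing this optimally in p dimensions within polynomial time.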

20.
Data holders, such as statistical institutions and financial organizations, face a very serious and demanding task when producing data for official and public use: controlling the risk of identity disclosure and protecting sensitive information when they communicate data sets among themselves, to governmental agencies and to the public. One of the techniques applied is micro-aggregation. In a Bayesian setting, micro-aggregation can be viewed as the optimal partitioning of the original data set based on the minimization of an appropriate measure of discrepancy, or distance, between two posterior distributions, one conditional on the original data set and the other conditional on the aggregated data set. Assuming d-variate normal data sets and using several measures of discrepancy, it is shown that the asymptotically optimal equal-probability m-partition of R^d, with m^(1/d) an integer, is the convex one provided by hypercubes whose sides are formed by hyperplanes perpendicular to the canonical axes, no matter which discrepancy measure has been used. On the basis of this result, a method that produces a sub-optimal partition with a very small computational cost is presented.
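The basic micro-aggregation step is easy to state outside the Bayesian framing: sort the records, cut them into small groups of at least k similar records, and release each record as its group mean. The minimal univariate sketch below is our illustration of that generic technique, not the partitioning method derived in the paper; the function name and group rule are ours.

```python
def microaggregate(values, k=3):
    """Univariate fixed-size micro-aggregation: after sorting (so
    groups are homogeneous), replace each record by its group mean.
    The last group absorbs the remainder so every released group
    has at least k records, the disclosure-control requirement."""
    xs = sorted(values)
    out, i = [], 0
    while i < len(xs):
        # Take k records, unless fewer than 2k remain (then take all).
        group = xs[i:i + k] if len(xs) - i >= 2 * k else xs[i:]
        m = sum(group) / len(group)
        out.extend([m] * len(group))
        i += len(group)
    return out

released = microaggregate([1, 2, 3, 10, 11, 12], k=3)
# Each group of three similar values collapses to its mean:
# [2.0, 2.0, 2.0, 11.0, 11.0, 11.0]
```

The trade-off the paper formalizes is visible even here: larger groups lower disclosure risk but push the released data further from the original, which is exactly the discrepancy being minimized.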
