Similar Documents
 20 similar documents were retrieved.
1.
2.
Challenges Facing Machine Learning   (Cited by: 1; self-citations: 0; citations by others: 1)
This paper discusses several challenges currently facing machine learning, including high-dimensional feature spaces and large data volumes, the computational difficulty posed by massive data sets, the difficulty of finding optimal solutions, and poor interpretability. It then analyses several important issues of wide current concern, such as big data, deep learning, and probabilistic graphical models, with the aim of prompting deeper reflection.

3.
The traditional use of ergodic theory in the foundations of equilibrium statistical mechanics is that it provides a link between thermodynamic observables and microcanonical probabilities. First of all, the ergodic theorem demonstrates the equality of microcanonical phase averages and infinite time averages (albeit for a special class of systems, and up to a measure zero set of exceptions). Secondly, one argues that actual measurements of thermodynamic quantities yield time averaged quantities, since measurements take a long time. The combination of these two points is held to be an explanation why calculating microcanonical phase averages is a successful algorithm for predicting the values of thermodynamic observables. It is also well known that this account is problematic. This survey intends to show that ergodic theory nevertheless may have important roles to play, and it explores three other uses of ergodic theory. Particular attention is paid, firstly, to the relevance of specific interpretations of probability, and secondly, to the way in which the concern with systems in thermal equilibrium is translated into probabilistic language. With respect to the latter point, it is argued that equilibrium should not be represented as a stationary probability distribution as is standardly done; instead, a weaker definition is presented.

4.
When considering controversial thermodynamic scenarios such as Maxwell's demon, it is often necessary to consider probabilistic mixtures of macrostates. This raises the question of how, if at all, to assign entropy to them. The information-theoretic entropy is often used in such cases; however, no general proof of the soundness of doing so has been given, and indeed some arguments against doing so have been presented. We offer a general proof of the applicability of the information-theoretic entropy to probabilistic mixtures of macrostates that is based upon a probabilistic generalisation of the Kelvin statement of the second law. We defend the latter and make clear the other assumptions on which our main result depends. We also briefly discuss the interpretation of our result.

5.
We start by reviewing the complicated situation in methods of scientific attribution of climate change to extreme weather events. We emphasize the social values involved in using both so-called "storyline" and ordinary probabilistic or "risk-based" methods, noting that one important virtue claimed by the storyline approach is that it features a reduction in false negative results, which has much social and ethical merit, according to its advocates. This merit is critiqued by the probabilistic, risk-based opponents, who claim the high ground; the usual probabilistic approach is claimed to be more objective and more "scientific", on the grounds that it reduces false positive error. We examine this mostly implicit debate about error, which apparently mirrors the old Jeffrey-Rudner debate. We also argue that there is an overlooked component to the role of values in science: that of second-order inductive risk, and that it makes the relative role of values in the two methods different from what it first appears to be. In fact, neither method helps us to escape social values and be more scientifically "objective" in the sense of being removed or detached from human values and interests. The probabilistic approach does not succeed in doing so, contrary to the claims of its proponents. This is important to understand, because neither method is, fundamentally, a successful strategy for climate scientists to avoid making value judgments.

6.
Everettian accounts of quantum mechanics entail that people branch; every possible result of a measurement actually occurs, and I have one successor for each result. Is there room for probability in such an account? The prima facie answer is no; there are no ontic chances here, and no ignorance about what will happen. But since any adequate quantum mechanical theory must make probabilistic predictions, much recent philosophical labor has gone into trying to construct an account of probability for branching selves. One popular strategy involves arguing that branching selves introduce a new kind of subjective uncertainty. I argue here that the variants of this strategy in the literature all fail, either because the uncertainty is spurious, or because it is in the wrong place to yield probabilistic predictions. I conclude that uncertainty cannot be the ground for probability in Everettian quantum mechanics.

7.
The difficulty in modelling inflation and the significance of discovering the underlying data-generating process of inflation is expressed in an extensive literature regarding inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies in forecasting US inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric least absolute shrinkage and selection operator (LASSO) and the machine-learning support vector regression (SVR) method. SVR has not previously been used in inflation forecasting with the term spread as a regressor. In doing so, we use a long monthly dataset spanning the period 1871:1–2015:3 that covers the entire history of inflation in the US economy. For comparison purposes we also use ordinary least squares regression models as a benchmark. In order to evaluate the contribution of the term spread to inflation forecasting in different time periods, we measure the out-of-sample forecasting performance of all models using rolling window regressions. Considering various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the estimation method. Thus we conclude that the term spread models are not more accurate than autoregressive models in inflation forecasting. Copyright © 2016 John Wiley & Sons, Ltd.
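As a concrete illustration of the rolling-window, out-of-sample exercise described above, the following sketch fits a support vector regression on lagged inflation and the term spread and produces one-step-ahead forecasts. It is a minimal stand-in, not the paper's specification: the simulated data, window length, kernel and hyperparameters are all assumptions.

```python
# Minimal sketch of a rolling-window, one-step-ahead SVR forecast of inflation
# using lagged inflation and the term spread as regressors. Window length,
# kernel, and hyperparameters are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
T = 600
spread = rng.normal(0.0, 1.0, T)                       # stand-in for the term spread
infl = np.zeros(T)
for t in range(1, T):                                  # toy inflation process
    infl[t] = 0.6 * infl[t - 1] + 0.2 * spread[t - 1] + rng.normal(0.0, 0.5)

window = 240                                           # length of the rolling window
forecasts, actuals = [], []
for t in range(window, T - 1):
    # Train on the most recent `window` observations only.
    X_train = np.column_stack([infl[t - window:t], spread[t - window:t]])
    y_train = infl[t - window + 1:t + 1]               # one-step-ahead targets
    model = SVR(kernel="rbf", C=1.0, epsilon=0.1).fit(X_train, y_train)
    X_next = np.array([[infl[t], spread[t]]])
    forecasts.append(model.predict(X_next)[0])
    actuals.append(infl[t + 1])

rmse = np.sqrt(np.mean((np.array(forecasts) - np.array(actuals)) ** 2))
print(f"out-of-sample RMSE over {len(forecasts)} rolling forecasts: {rmse:.3f}")
```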

8.
One finds, in Maxwell's writings on thermodynamics and statistical physics, a conception of the nature of these subjects that differs in interesting ways from the way they are usually conceived. In particular, though—in agreement with the currently accepted view—Maxwell maintains that the second law of thermodynamics, as originally conceived, cannot be strictly true, the replacement he proposes is different from the version accepted by most physicists today. The modification of the second law accepted by most physicists is a probabilistic one: although statistical fluctuations will result in occasional spontaneous differences in temperature or pressure, there is no way to predictably and reliably harness these to produce large violations of the original version of the second law. Maxwell advocates a version of the second law that is strictly weaker; the validity of even this probabilistic version is of limited scope, limited to situations in which we are dealing with large numbers of molecules en masse and have no ability to manipulate individual molecules. Connected with this is his conception of the thermodynamic concepts of heat, work, and entropy; on the Maxwellian view, these are concepts that must be relativized to the means we have available for gathering information about and manipulating physical systems. The Maxwellian view is one that deserves serious consideration in discussions of the foundations of statistical mechanics. It has relevance for the project of recovering thermodynamics from statistical mechanics because, in such a project, it matters which version of the second law we are trying to recover.

9.
The long history of ergodic and quasi-ergodic hypotheses provides the best example of the attempt to supply non-probabilistic justifications for the use of statistical mechanics in describing mechanical systems. In this paper we reverse the terms of the problem. We aim to show that accepting a probabilistic foundation of elementary particle statistics dispenses with the need to resort to ambiguous non-probabilistic notions like that of (in)distinguishability. In the quantum case, starting from suitable probability conditions, it is possible to deduce elementary particle statistics in a unified way. Following our approach Maxwell-Boltzmann statistics can also be deduced, and this deduction clarifies its status. Thus our primary aim in this paper is to give a mathematically rigorous deduction of the probability of a state with given energy for a perfect gas in statistical equilibrium; that is, a deduction of the equilibrium distribution for a perfect gas. A crucial step in this deduction is the statement of a unified statistical theory based on clearly formulated probability conditions from which the particle statistics follows. We believe that such a deduction represents an important improvement in elementary particle statistics, and a step towards a probabilistic foundation of statistical mechanics. In this Part I we first present some history: we recall some results of Boltzmann and Brillouin that go in the direction we will follow. Then we present a number of probability results we shall use in Part II. Finally, we state a notion of entropy referring to probability distributions, and give a natural solution to Gibbs' paradox.

10.
With the development of artificial intelligence, deep learning is widely used in nonlinear time series forecasting, and in practice deep learning models have achieved higher forecasting accuracy than traditional linear econometric models and machine learning models. To further improve the forecasting accuracy of financial time series, we propose the WT-FCD-MLGRU model, which combines the wavelet transform, filter cycle decomposition and multilag neural networks. Four major stock indices are chosen to compare the forecasting performance of a traditional econometric model, a machine learning model and deep learning models. According to the results of the empirical analysis, the deep learning models perform better than the traditional econometric model (autoregressive integrated moving average) and the improved machine learning model (SVR). In addition, our proposed model has the minimum forecasting error in stock index prediction.
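The sketch below illustrates only the first stage of the WT-FCD-MLGRU idea: denoise a series with a discrete wavelet transform, then fit a simple multi-lag forecaster on the smoothed series. The linear lag regression stands in for the paper's multilag GRU, the filter cycle decomposition step is omitted, and the wavelet family, decomposition level and lag count are assumptions.

```python
# Sketch of the "wavelet transform then multi-lag forecast" idea: smooth a noisy
# series with a discrete wavelet transform, then fit a multi-lag linear model on
# the smoothed series as a stand-in for the paper's GRU stage. Wavelet family,
# decomposition level, and number of lags are assumptions.
import numpy as np
import pywt

rng = np.random.default_rng(1)
T = 512
trend = np.cumsum(rng.normal(0.05, 1.0, T))            # toy stock-index-like series
series = trend + rng.normal(0.0, 2.0, T)

# Wavelet decomposition and soft-thresholding of the detail coefficients.
coeffs = pywt.wavedec(series, "db4", level=3)
threshold = np.std(coeffs[-1]) * np.sqrt(2 * np.log(T))
coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
smooth = pywt.waverec(coeffs, "db4")[:T]

# Multi-lag one-step-ahead forecaster fitted by least squares.
p = 5                                                   # number of lags
X = np.column_stack([smooth[i:T - p + i] for i in range(p)])
y = smooth[p:]
beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(X)), X]), y, rcond=None)
next_val = beta[0] + beta[1:] @ smooth[-p:]
print(f"one-step-ahead forecast of the smoothed series: {next_val:.2f}")
```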

11.
In this paper, we forecast EU area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: (i) time varying; (ii) constant over time; or (iii) shrunk to zero. The econometric methodology decides automatically to which category each coefficient belongs. Our empirical results indicate the benefits of such an approach. Copyright © 2013 John Wiley & Sons, Ltd.
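For readers unfamiliar with the shrinkage mechanism, the sketch below implements a constant-coefficient Bayesian Lasso Gibbs sampler in the style of Park and Casella; the paper's time-varying-parameter extension, and its automatic classification of coefficients as time-varying, constant or zero, are not reproduced. The fixed penalty and the prior settings are illustrative assumptions.

```python
# Minimal sketch of a constant-coefficient Bayesian Lasso Gibbs sampler, shown
# only to illustrate how coefficients get shrunk toward zero; the paper's
# time-varying-parameter extension is not reproduced here. The fixed penalty
# `lam` and the prior settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n, p = 120, 8
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=1.0, size=n)

lam = 1.0                                  # lasso penalty (fixed, not sampled)
beta, sigma2, tau2 = np.zeros(p), 1.0, np.ones(p)
draws = []

for it in range(3000):
    # beta | rest ~ N(A^{-1} X'y, sigma2 * A^{-1}),  A = X'X + diag(1/tau2)
    A_inv = np.linalg.inv(X.T @ X + np.diag(1.0 / tau2))
    cov = sigma2 * (A_inv + A_inv.T) / 2.0             # symmetrise for stability
    beta = rng.multivariate_normal(A_inv @ X.T @ y, cov)
    # sigma2 | rest ~ Inverse-Gamma
    resid = y - X @ beta
    shape = (n - 1 + p) / 2.0
    scale = (resid @ resid + beta @ (beta / tau2)) / 2.0
    sigma2 = 1.0 / rng.gamma(shape, 1.0 / scale)
    # 1/tau2_j | rest ~ Inverse-Gaussian(sqrt(lam^2 sigma2 / beta_j^2), lam^2)
    mu = np.sqrt(lam**2 * sigma2 / np.maximum(beta**2, 1e-12))
    tau2 = 1.0 / rng.wald(mu, lam**2)
    if it >= 1000:                                     # discard burn-in draws
        draws.append(beta.copy())

post_mean = np.mean(draws, axis=0)
print("posterior means (true zeros are shrunk toward 0):", np.round(post_mean, 2))
```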

12.
In a cloud environment, virtual machines are created for different purposes, such as providing users with computers or handling web traffic. A virtual machine is created in such a way that a user will not notice any difference from working on a physical computer. A challenging problem in cloud computing is how to distribute the virtual machines on a set of physical servers. An optimal solution will provide each virtual machine with enough resources while not using more physical resources (energy/electricity) than necessary to achieve this. In this paper we investigate how forecasting of future resource requirements (CPU consumption) for each virtual machine can be used to improve the virtual machine placement on the physical servers. We demonstrate that a time-dependent hidden Markov model with an autoregressive observation process replicates the properties of the CPU consumption data in a realistic way and forecasts future CPU consumption efficiently. Copyright © 2016 John Wiley & Sons, Ltd.
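A drastically simplified version of this idea is sketched below: a two-state Markov-switching AR(1) with a "quiet" and a "busy" regime, simulated and then forecast by Monte Carlo. Unlike the paper's model, the transition probabilities here are constant rather than time-dependent, and all parameter values are assumptions.

```python
# Toy two-state Markov-switching AR(1) for CPU utilisation: a "quiet" and a
# "busy" regime, each with its own AR dynamics. This simplifies the paper's
# time-dependent HMM (transition probabilities here are constant), and all
# parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
P = np.array([[0.95, 0.05],                    # transition matrix: row = current state
              [0.10, 0.90]])
mu, phi, sd = np.array([10.0, 60.0]), np.array([0.7, 0.8]), np.array([2.0, 8.0])

def simulate(T, x0=10.0, s0=0):
    """Simulate T steps of regime states and CPU utilisation."""
    x, s = np.empty(T), np.empty(T, dtype=int)
    x_prev, s_prev = x0, s0
    for t in range(T):
        s[t] = rng.choice(2, p=P[s_prev])
        x[t] = mu[s[t]] * (1 - phi[s[t]]) + phi[s[t]] * x_prev + rng.normal(0, sd[s[t]])
        x_prev, s_prev = x[t], s[t]
    return x, s

history, states = simulate(500)

def forecast(x_last, s_last, h=12, n_paths=2000):
    """Monte Carlo distribution of CPU utilisation h steps ahead."""
    finals = np.empty(n_paths)
    for i in range(n_paths):
        x_prev, s_prev = x_last, s_last
        for _ in range(h):
            s_prev = rng.choice(2, p=P[s_prev])
            x_prev = mu[s_prev] * (1 - phi[s_prev]) + phi[s_prev] * x_prev + rng.normal(0, sd[s_prev])
        finals[i] = x_prev
    return finals

paths = forecast(history[-1], states[-1])
print(f"12-step-ahead mean {paths.mean():.1f}%, 90% interval "
      f"[{np.percentile(paths, 5):.1f}, {np.percentile(paths, 95):.1f}]")
```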

13.
In this paper we will try to explain how Leibniz justified the idea of an exact arithmetical quadrature. We will do this by comparing Leibniz's exposition with that of John Wallis. In short, we will show that the idea of exactitude in matters of quadratures relies on two fundamental requisites that, according to Leibniz, the infinite series have, namely, that of regularity and that of completeness. In the first part of this paper, we will go deeper into three main features of Leibniz's method, that is: it is an infinitesimal method, it looks for an arithmetical quadrature and it proposes a result that is not approximate, but exact. After that, we will deal with the requisite of the regularity of the series, pointing out that, unlike the inductive method proposed by Wallis, Leibniz propounded some sort of intellectual recognition of what is invariant in the series. Finally, we will consider the requisite of completeness of the series. We will see that, although both Wallis and Leibniz introduced the supposition of completeness, the German thinker went beyond the English mathematician, since he recognized that it is not necessary to look for a number for the quadrature of the circle, given that we have a series that is equal to the area of that curvilinear figure.
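For reference, the arithmetical quadrature of the circle at issue is standardly identified with Leibniz's alternating series, which equates the area of the quarter unit circle to an infinite sum of rational terms:

```latex
% Leibniz's arithmetical quadrature of the circle: the area of the quarter of
% the unit circle (pi/4) expressed exactly by an infinite series of rational terms.
\[
  \frac{\pi}{4} \;=\; 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
  \;=\; \sum_{k=0}^{\infty} \frac{(-1)^{k}}{2k+1}.
\]
```

The exactness claim is that the series itself, rather than any finite decimal approximation, is equal to the area.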

14.
We often rely on symmetries to infer outcomes’ probabilities, as when we infer that each side of a fair coin is equally likely to come up on a given toss. Why are these inferences successful? I argue against answering this question with an a priori indifference principle. Reasons to reject such a principle are familiar, yet instructive. They point to a new, empirical explanation for the success of our probabilistic predictions. This has implications for indifference reasoning generally. I argue that a priori symmetries need never constrain our probability attributions, even for initial credences.

15.
We discuss some aspects of the relation between dualities and gauge symmetries. Both of these ideas are of course multi-faceted, and we confine ourselves to making two points. Both points are about dualities in string theory, and both have the ‘flavour’ that two dual theories are ‘closer in content’ than you might think. For both points, we adopt a simple conception of a duality as an ‘isomorphism’ between theories: more precisely, as appropriate bijections between the two theories’ sets of states and sets of quantities. The first point (Section 3) is that this conception of duality meshes with two dual theories being ‘gauge related’ in the general philosophical sense of being physically equivalent. For a string duality, such as T-duality and gauge/gravity duality, this means taking such features as the radius of a compact dimension, and the dimensionality of spacetime, to be ‘gauge’. The second point (Sections 4–6, on gauge/gravity duality, some complications for gauge invariance, and Galileo's ship) is much more specific. We give a result about gauge/gravity duality that shows its relation to gauge symmetries (in the physical sense of symmetry transformations that are spacetime-dependent) to be subtler than you might expect. For gauge theories, you might expect that the duality bijections relate only gauge-invariant quantities and states, in the sense that gauge symmetries in one theory will be unrelated to any symmetries in the other theory. This may be so in general; and indeed, it is suggested by discussions of Polchinski and Horowitz. But we show that in gauge/gravity duality, each of a certain class of gauge symmetries in the gravity/bulk theory, viz. diffeomorphisms, is related by the duality to a position-dependent symmetry of the gauge/boundary theory.

16.
According to a traditional view, scientific laws and theories constitute algorithmic compressions of empirical data sets collected from observations and measurements. This article defends the thesis that, to the contrary, empirical data sets are algorithmically incompressible. The reason is that individual data points are determined partly by perturbations, or causal factors that cannot be reduced to any pattern. If empirical data sets are incompressible, then they exhibit maximal algorithmic complexity, maximal entropy and zero redundancy. They are therefore maximally efficient carriers of information about the world. Since, on algorithmic information theory, a string is algorithmically random just if it is incompressible, the thesis entails that empirical data sets consist of algorithmically random strings of digits. Rather than constituting compressions of empirical data, scientific laws and theories pick out patterns that data sets exhibit with a certain noise.
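A toy illustration of the compressibility point (not from the article): a purely law-like data set compresses substantially, while the same pattern perturbed by noise compresses far less.

```python
# Toy illustration (not from the article): a noiseless, law-like data set is
# highly compressible, while the same data perturbed by noise compresses far less.
import zlib
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(10_000)
pattern = np.round(np.sin(0.01 * t) * 100).astype(np.int16)                 # pure law
noisy = np.round(np.sin(0.01 * t) * 100 + rng.normal(0, 5, t.size)).astype(np.int16)

for name, arr in [("pattern only", pattern), ("pattern + noise", noisy)]:
    raw = arr.tobytes()
    ratio = len(zlib.compress(raw, level=9)) / len(raw)
    print(f"{name:16s} compressed/original = {ratio:.2f}")
```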

17.
Given a nonlinear model, a probabilistic forecast may be obtained by Monte Carlo simulations. At a given forecast horizon, Monte Carlo simulations yield sets of discrete forecasts, which can be converted to density forecasts. The resulting density forecasts will inevitably be downgraded by model misspecification. In order to enhance the quality of the density forecasts, one can mix them with the unconditional density. This paper examines the value of combining conditional density forecasts with the unconditional density. The findings have positive implications for issuing early warnings in different disciplines including economics and meteorology, but UK inflation forecasts are considered as an example. Copyright © 2012 John Wiley & Sons, Ltd.
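A minimal sketch of the combination step is given below: the conditional density is a kernel-smoothed set of Monte Carlo one-step-ahead simulations from a fitted model, and it is mixed with the unconditional density of the observed history. The toy AR(1) data and the fixed mixture weight are assumptions, not the paper's calibration.

```python
# Minimal sketch of combining a conditional density forecast (kernel-smoothed
# Monte Carlo simulations from a possibly misspecified model) with the
# unconditional density of the series. The fixed mixture weight w and the toy
# AR(1) data-generating process are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(5)
T = 400
y = np.zeros(T)
for t in range(1, T):                        # toy "inflation" series
    y[t] = 0.8 * y[t - 1] + rng.normal(0, 0.5)

# Conditional density: Monte Carlo one-step-ahead simulations from a fitted AR(1).
phi_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
resid_sd = np.std(y[1:] - phi_hat * y[:-1])
sims = phi_hat * y[-1] + rng.normal(0, resid_sd, 5000)
cond = gaussian_kde(sims)

# Unconditional density estimated from the full history.
uncond = gaussian_kde(y)

w = 0.8                                      # weight on the conditional forecast
grid = np.linspace(y.min() - 1, y.max() + 1, 200)
combined = w * cond(grid) + (1 - w) * uncond(grid)
mass = np.sum(combined) * (grid[1] - grid[0])
print(f"combined density mass over the plotted grid: {mass:.3f}")
```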

18.
The Copenhagen interpretation of quantum mechanics is the dominant view of the theory among working physicists, if not philosophers. There are, however, several strains of Copenhagenism extant, each largely accepting Born's assessment of the wave function as the most complete possible specification of a system and the notion of collapse as a completely random event. This paper outlines three of these sub-interpretations, typing them by what the author of each names as the trigger of quantum-mechanical collapse. Visions of the theory from von Neumann, Heisenberg, and Wheeler offer different mechanisms to break the continuous, deterministic, superposition-laden quantum chain and yield discrete, probabilistic, classical results in response to von Neumann's catastrophe of infinite regress.

19.
In what sense are associations between particular markers and complex behaviors made by genome-wide association studies (GWAS) and related techniques discoveries of, or entries into the study of, the causes of those behaviors? In this paper, we argue that when applied to individuals, the kinds of probabilistic ‘causes’ of complex traits that GWAS-style studies can point towards do not provide the kind of causal information that is useful for generating explanations; they do not, in other words, point towards useful explanations of why particular individuals have the traits that they do. We develop an analogy centered around Galton's “Quincunx” machine; while each pin might be associated with outcomes of a certain sort, in any particular trial, that pin might be entirely bypassed even if the ball eventually comes to rest in the box most strongly associated with that pin. Indeed, in any particular trial, the actual outcome of a ball hitting a pin might be the opposite of what is usually expected. While we might find particular pins associated with outcomes in the aggregate, these associations will not provide causally relevant information for understanding individual outcomes. In a similar way, the complexities of development likely render impossible any moves from population-level statistical associations between genetic markers and complex behaviors to an understanding of the causal processes by which individuals come to have the traits that they in fact have.

20.
This paper provides an account of mid-level models which calibrate highly theoretical agent-based models of scientific communities by incorporating empirical information from real-world systems. As a result, these models more closely correspond with real-world communities, and are better suited for informing policy decisions than extant how-possibly models. I provide an exemplar of a mid-level model of science funding allocation that incorporates bibliometric data from scientific publications and data generated from empirical studies of peer review into an epistemic landscape model. The results of my model show that on a dynamic epistemic landscape, allocating funding by modified and pure lottery strategies performs comparably to a perfect selection funding allocation strategy. These results support the idea that introducing randomness into a funding allocation process may be a tractable policy worth exploring further through pilot studies. My exemplar shows that agent-based models need not be restricted to the abstract and the a priori; they can also be informed by empirical data.
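The sketch below is a deliberately stripped-down epistemic landscape model of the kind described, comparing a pure lottery with a "perfect selection" funding strategy. It uses a static toy landscape, contains none of the paper's bibliometric or peer-review calibration, and its parameters are all assumptions; it illustrates the model structure only, not the paper's results.

```python
# Stripped-down epistemic landscape model comparing funding allocation by pure
# lottery with "perfect selection" (fund the agents currently on the most
# significant patches). The static landscape, grid size, and all parameters are
# illustrative assumptions; none of the paper's empirical calibration is used.
import numpy as np

rng = np.random.default_rng(6)
SIZE, N_AGENTS, N_FUNDED, STEPS = 50, 60, 15, 200

def make_landscape():
    """Epistemic significance as a sum of a few Gaussian peaks on a grid."""
    xx, yy = np.meshgrid(np.arange(SIZE), np.arange(SIZE), indexing="ij")
    land = np.zeros((SIZE, SIZE))
    for _ in range(4):
        cx, cy = rng.integers(0, SIZE, 2)
        land += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * 6.0 ** 2))
    return land

def run(strategy):
    land = make_landscape()
    pos = rng.integers(0, SIZE, size=(N_AGENTS, 2))
    payoff = 0.0
    for _ in range(STEPS):
        values = land[pos[:, 0], pos[:, 1]]
        if strategy == "lottery":
            funded = rng.choice(N_AGENTS, N_FUNDED, replace=False)
        else:                                   # "perfect" selection by current value
            funded = np.argsort(values)[-N_FUNDED:]
        for i in funded:                        # funded agents hill-climb locally
            x, y = pos[i]
            nbrs = [(min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
            pos[i] = max(nbrs, key=lambda q: land[q])
        payoff += land[pos[:, 0], pos[:, 1]].sum()
    return payoff

for strategy in ("lottery", "perfect"):
    results = [run(strategy) for _ in range(20)]
    print(f"{strategy:8s} mean epistemic payoff: {np.mean(results):.1f}")
```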
