Similar documents
1.
In 1918, H. Weyl proposed a unified theory of gravity and electromagnetism based on a generalization of Riemannian geometry. In spite of its elegance and beauty, a serious objection was raised by Einstein, who argued that Weyl’s theory was not suitable as a physical theory. According to Einstein, the theory led to the prediction of a “second clock effect”, which has never been observed experimentally. We briefly revisit this point and argue that a preliminary discussion of the very notion of proper time is needed before Einstein’s critical point of view can be assessed. We also point out that Weyl’s theory is basically incomplete in its original version, and that its completion may lead to a rich and interesting new approach to gravity.
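To make the objection concrete, here is a standard sketch of where the second clock effect comes from (textbook material, with sign and factor conventions varying across the literature; it is not reproduced from the paper itself). In Weyl geometry the metric is preserved under parallel transport only up to a gauge 1-form σ:

```latex
% Weyl non-metricity and the path dependence of transported lengths
% (conventions vary; some authors use the opposite sign of sigma):
\nabla_{\mu} g_{\alpha\beta} = \sigma_{\mu}\, g_{\alpha\beta},
\qquad
\ell(\gamma) = \ell_0 \exp\!\left( \tfrac{1}{2} \int_{\gamma} \sigma_{\mu}\, dx^{\mu} \right)
```

The length ℓ of a transported vector, and with it the rate of a clock, thus depends on the path γ. Two identical clocks carried along different paths between the same events would then tick at different rates upon reunion, which is the effect Einstein held against the theory.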

2.
Dark matter (DM) is an essential ingredient of the present Standard Cosmological Model, according to which only 5% of the mass/energy content of our universe is made of ordinary matter. In recent times, it has been argued that certain cases of gravitational lensing represent a new type of evidence for the existence of DM. In a recent paper, Peter Kosso attempts to substantiate that claim. His argument is that, although in such cases DM is only detected by its gravitational effects, gravitational lensing is a direct consequence of Einstein's Equivalence Principle (EEP), and therefore the complete gravitational theory is not needed in order to derive such lensing effects. In this paper I critically examine Kosso's argument: I confront the notion of empirical evidence involved in the discussion and argue that EEP is not by itself strong enough to sustain the claim that gravitational lensing in the Bullet Cluster constitutes evidence for the DM Hypothesis. As a consequence, it is necessary to examine the details of alternative theories of gravity to decide whether certain empirical situations are indeed evidence for the existence of DM. It may well be correct that gravitational lensing does constitute evidence for the DM Hypothesis—at present it is controversial whether the proposed modified theories of gravitation all need DM to account for gravitational lensing and, if so, of which kind—but this will not be a direct consequence of EEP.

3.
A forecasting model based on high-frequency market makers' quotes of financial instruments is presented. The statistical behaviour of these time series leads to a discussion of the appropriate time scale for forecasting. We introduce variable time scales in a general way and define the new concept of intrinsic time, which better reflects actual trading activity. Changing the time scale means forecasting in two steps: first an intrinsic time forecast against physical time, then a price forecast against intrinsic time. For both steps, the forecasting model consists of a linear combination of non-linear price-based indicators. The indicator weights are continuously re-optimized through a modified linear regression on a moving sample of past prices. The out-of-sample performance of this algorithm is reported for a set of important FX rates and interest rates over many years, and it is remarkably consistent. Results for short horizons as well as techniques to measure this performance are discussed.
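A minimal sketch of the rolling re-optimization step described above, in Python. The indicator definition, the squashing non-linearity, and the window length are illustrative assumptions, not details taken from the paper:

```python
# Sketch of the rolling re-optimization: forecasts are a linear
# combination of non-linear price-based indicators whose weights are
# refit by least squares on a moving sample of past prices.
import numpy as np

def momentum_indicator(prices, lag):
    """A non-linear price-based indicator: squashed lagged log-return."""
    r = np.zeros(len(prices))
    r[lag:] = np.log(prices[lag:] / prices[:-lag])
    return np.tanh(r)

def rolling_combination_forecast(prices, lags=(1, 5, 20), window=250):
    X = np.column_stack([momentum_indicator(prices, l) for l in lags])
    y = np.log(prices[1:] / prices[:-1])            # next-step log-returns
    out = np.full(len(prices), np.nan)
    for t in range(window + max(lags), len(prices) - 1):
        # re-optimize weights on the moving sample ending at t
        w, *_ = np.linalg.lstsq(X[t - window:t], y[t - window:t], rcond=None)
        out[t + 1] = X[t] @ w                       # forecast of return t -> t+1
    return out
```

The same two-step structure would apply after resampling the series in intrinsic rather than physical time.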

4.
The availability of numerous modeling approaches for volatility forecasting leads to model uncertainty for both researchers and practitioners. A large number of studies provide evidence in favor of combination methods for forecasting a variety of financial variables, but most of them are implemented on returns forecasting and evaluate their performance based solely on statistical evaluation criteria. In this paper, we combine various volatility forecasts based on different combination schemes and evaluate their performance in forecasting the volatility of the S&P 500 index. We use an exhaustive variety of combination methods, ranging from simple averages to time-varying schemes based on the past performance of the single models, and regression-based techniques. We then evaluate the forecasting performance of single and combination volatility forecasts based on both statistical and economic loss functions. The empirical analysis in this paper yields an important conclusion. Although combination forecasts based on more complex methods perform better than the simple combinations and single models, there is no dominant combination technique that outperforms the rest in both statistical and economic terms.
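As an illustration of the kinds of schemes compared, here is a minimal sketch of two of the simpler ones: an equal-weighted average and time-varying inverse-MSE weights based on past performance. The window length and array layout are assumptions for the example, not the paper's settings:

```python
# Sketch of two combination schemes: equal weights and time-varying
# inverse-MSE weights based on each model's recent forecast errors.
import numpy as np

def combine_equal(forecasts):
    """forecasts: (n_models, T) array of volatility forecasts."""
    return forecasts.mean(axis=0)

def combine_inverse_mse(forecasts, realized, window=60):
    """Weight each model by the inverse of its MSE on a trailing window."""
    n_models, T = forecasts.shape
    combo = np.full(T, np.nan)
    for t in range(window, T):
        err = forecasts[:, t - window:t] - realized[t - window:t]
        inv_mse = 1.0 / (err ** 2).mean(axis=1)
        w = inv_mse / inv_mse.sum()                 # time-varying weights
        combo[t] = w @ forecasts[:, t]
    return combo
```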

5.
In econometrics, as a rule, the same data set is used to select the model and, conditional on the selected model, to forecast. However, one typically reports the properties of the (conditional) forecast, ignoring the fact that its properties are affected by the model selection (pretesting). This is wrong, and in this paper we show that the error can be substantial. We obtain explicit expressions for this error. To illustrate the theory we consider a regression approach to stock market forecasting, and show that the standard predictions ignoring pretesting are much less robust than naive econometrics might suggest. We also propose a forecast procedure based on the ‘neutral Laplace estimator’, which leads to an improvement over standard model selection procedures.
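The mechanism is easy to see in a toy Monte Carlo: select the regressor by a pretest, forecast conditional on the outcome, and the distribution of the resulting forecasts differs from what the reported conditional standard errors suggest. Everything below (sample size, cutoff, forecast point) is an illustrative assumption; the paper's neutral Laplace estimator is not implemented here:

```python
# Toy Monte Carlo of the pretesting problem: pick the model by a t-test,
# forecast conditional on the choice, and look at the dispersion of the
# resulting forecasts across replications.
import numpy as np

rng = np.random.default_rng(0)
n, beta, reps = 50, 0.25, 5000
post_selection_forecasts = []
for _ in range(reps):
    x = rng.standard_normal(n)
    y = beta * x + rng.standard_normal(n)
    b = (x @ y) / (x @ x)                           # OLS slope
    se = np.sqrt(((y - b * x) ** 2).mean() / (x @ x))
    b_used = b if abs(b / se) > 1.96 else 0.0       # pretest: keep x or drop it
    post_selection_forecasts.append(b_used * 1.0)   # forecast at x_new = 1
print("dispersion of post-selection forecasts:", np.std(post_selection_forecasts))
```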

6.
Modeling online auction prices is a popular research topic among statisticians and marketing analysts. Recent research mainly focuses on two directions: one is the functional data analysis (FDA) approach, in which the price–time relationship is modeled by a smooth curve, and the other is the point process approach, which directly models the arrival process of bidders and bids. In this paper, a novel model for the bid arrival process using a self-exciting point process (SEPP) is proposed and applied to forecast auction prices. The FDA and point process approaches are linked by using functional data analysis techniques to describe the intensity of the bid arrival point process. Using the SEPP to model the bid arrival process, many stylized facts in online auction data can be captured. We also develop a simulation-based forecasting procedure using the estimated SEPP intensity and historical bidding increments. In particular, prediction intervals for the terminal price of merchandise can be constructed. Applications to eBay auction data of Harry Potter books and Microsoft Xbox show that the SEPP model provides more accurate and more informative forecasting results than traditional methods.
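To fix ideas, a self-exciting intensity has the generic form below: each past bid temporarily raises the arrival rate of further bids. The exponential kernel and parameter values are assumptions chosen for a minimal sketch; the paper instead describes the intensity with FDA techniques:

```python
# Sketch of a self-exciting (Hawkes-type) bid-arrival intensity: each
# past bid raises the arrival rate, and the boost decays over time.
import numpy as np

def bid_intensity(t, bid_times, mu=0.1, alpha=0.5, beta=1.0):
    """lambda(t) = mu + alpha * sum_i exp(-beta * (t - t_i)) over past bids."""
    past = bid_times[bid_times < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

bids = np.array([0.5, 0.8, 2.1, 2.2, 2.3])   # hypothetical bid times (days)
print(bid_intensity(2.4, bids))              # rate is elevated after a cluster
```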

7.
This paper criticizes the traditional philosophical account of the quantization of gauge theories and offers an alternative. On the received view, gauge theories resist quantization because they feature distinct mathematical representatives of the same physical state of affairs. This resistance is overcome by a sequence of ad hoc modifications, justified in part by reference to semiclassical electrodynamics. Among other things, these modifications introduce “ghosts”: particles with unphysical properties which do not appear in asymptotic states and which are said to be purely a notational convenience. I argue that this sequence of modifications is unjustified and inadequate, making it a poor basis for the interpretation of ghosts. I then argue that gauge theories can be quantized by the same method as any other theory. On this account, ghosts are not purely notation: they are coordinates on the classical configuration space of the theory—specifically, on its gauge structure. This interpretation does not fall prey to the standard philosophical arguments against the significance of ghosts, due to Weingard. Weingard’s argumentative strategy, properly applied, in fact tells in favor of ghosts’ physical significance.
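For reference, the standard construction at issue (a textbook formula, up to sign and normalization conventions; not the paper's own proposal): Faddeev-Popov quantization of Yang-Mills theory in Lorenz gauge adds to the Lagrangian a ghost term

```latex
% Faddeev-Popov ghost term for Yang-Mills theory in Lorenz gauge
% (textbook form, up to sign and normalization conventions):
\mathcal{L}_{\text{ghost}} = \bar{c}^{\,a} \left( -\partial^{\mu} D_{\mu}^{ab} \right) c^{b}
```

where the ghost fields c and c̄ are anticommuting scalars, violating the spin-statistics connection; this is the sense in which they are called unphysical and excluded from asymptotic states.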

8.
A comparative review of the different systems of units that are most usual in electromagnetism leads to the proposal of a new system of units. In this system, the gravitational constant acquires the role of an interaction constant, both for gravitational and electromagnetic interaction, as a result of a redefinition of electric charge. In this way, the new system of units extends in a natural manner to mechanics. The comparison between the gravitational and electromagnetic interactions is of particular relevance.
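One way to see how a redefinition of charge can give the gravitational constant this double role (a sketch of the general idea in SI-like notation, not necessarily the specific system proposed): rescale the charge so that Coulomb's law takes the same form as Newton's law,

```latex
% Rescaled charge (illustrative): Coulomb's law takes Newton's form.
F_{\text{grav}} = \frac{G\, m_1 m_2}{r^2},
\qquad
F_{\text{em}} = \frac{q_1 q_2}{4\pi\varepsilon_0 r^2}
              = \frac{G\, \tilde{q}_1 \tilde{q}_2}{r^2},
\qquad
\tilde{q} \equiv \frac{q}{\sqrt{4\pi\varepsilon_0 G}}
```

The rescaled charge q̃ carries the dimension of mass, which is what allows such a system of units to extend naturally to mechanics and makes the comparison of the two interactions' strengths immediate.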

9.
According to the algebraic approach to spacetime, a thoroughgoing dynamicism, physical fields exist without an underlying manifold. This view is usually implemented by postulating an algebraic structure (e.g., a commutative ring) of scalar-valued functions, which can be interpreted as representing a scalar field, and deriving other structures from it. In this work, we point out that this leads to the unjustified primacy of an undetermined scalar field. Instead, we propose to consider algebraic structures in which all (and only) physical fields are primitive. We explain how the theory of natural operations in differential geometry—the modern formalism behind classifying diffeomorphism-invariant constructions—can be used to obtain concrete implementations of this idea for any given collection of fields. As concrete examples, we illustrate how our approach applies to a number of particular physical fields, including electrodynamics coupled to a Weyl spinor.

10.
The debate between ΛCDM and MOND is often cast in terms of competing gravitational theories. However, recent philosophical discussion suggests that the ΛCDM–MOND debate demonstrates the challenges of multiscale modeling in the context of cosmological scales. I extend this discussion and explore what happens when the debate is recast as one about modeling rather than about theory, offering a model-focused interpretation of the ΛCDM–MOND debate. This analysis shows how a model-focused interpretation provides a better understanding of the challenges associated with extending a model to a different scale or domain, which are tied to commitments about explanatory fit.

11.
The paper considers the use of information by a panel of expert industry forecasters, focusing on their information-processing biases. The panel forecasts construction output by sector up to three years ahead. It is found that the biases observed in laboratory experiments, particularly ‘anchoring’, are observable here as well. Expectations are formed by adjusting the previous forecast to take new information into account. By analysing forecast errors it is concluded that the panel overweights recently released information and does not understand the dynamics of the industry. However, the panel's forecasts, both short and long term, are better than those of an alternative econometric model, and combining the two sources of forecasts leads to a deterioration in forecast accuracy. The expert forecasts can be ‘de-biased’, and this leads to the conclusion that it is better to optimally process information sources than to combine (optimally) alternative forecasts.
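The anchor-and-adjust expectation formation described above has a simple generic form, sketched here; the adjustment weight k is an illustrative assumption (anchoring corresponds to adjusting too little, i.e. k well below its optimal value):

```python
# Generic anchor-and-adjust updating: the new forecast moves only
# part-way from the previous forecast (the anchor) toward the newly
# released information. The weight k is an illustrative assumption.
def updated_forecast(prev_forecast, new_info, k=0.5):
    """k = 1 adopts the news fully; small k means the anchor dominates."""
    return prev_forecast + k * (new_info - prev_forecast)
```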

12.
The question of the existence of gravitational stress-energy in general relativity has exercised investigators in the field since the inception of the theory. Folklore has it that no adequate definition of a localized gravitational stress-energetic quantity can be given. Most arguments to that effect invoke one version or another of the Principle of Equivalence. I argue that not only are such arguments of necessity vague and hand-waving but, worse, they are beside the point and do not address the heart of the issue. Based on a novel analysis of what it may mean for one tensor to depend in the proper way on another, which, en passant, provides a precise characterization of the idea of a “geometric object”, I prove that, under certain natural conditions, there can be no tensor whose interpretation could be that it represents gravitational stress-energy in general relativity. It follows that gravitational energy, such as it is in general relativity, is necessarily non-local. Along the way, I prove a result of some interest in its own right about the structure of the associated jet bundles of the bundle of Lorentz metrics over spacetime. I conclude by showing that my results also imply that, under a few natural conditions, the Einstein field equation is the unique equation relating gravitational phenomena to spatiotemporal structure, and discuss how this relates to the non-localizability of gravitational stress-energy. The main theorem proved, which underlies all the arguments, is considerably stronger than the standard result in the literature used for the same purposes (Lovelock's theorem of 1972): it holds in all dimensions (not only in four); it does not require an assumption about the differential order of the desired concomitant of the metric; and it has a more natural physical interpretation.
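For context, the standard result the paper strengthens is Lovelock's theorem (1972): in four dimensions, any symmetric, divergence-free tensor constructed from the metric and its first two derivatives must take the form below (a textbook statement, not the paper's stronger theorem):

```latex
% Lovelock's theorem (1972): in four dimensions, a symmetric,
% divergence-free tensor concomitant of the metric and its first two
% derivatives is a combination of the Einstein tensor and the metric.
A_{ab} = \alpha\, G_{ab} + \Lambda\, g_{ab},
\qquad
\nabla^{a} A_{ab} = 0
```

with constants α and Λ, where G_{ab} is the Einstein tensor; this is what singles out the Einstein field equation in the standard treatment.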

13.
14.
We propose an economically motivated forecast combination strategy in which model weights are related to portfolio returns obtained by a given forecast model. An empirical application based on an optimal mean-variance bond portfolio problem is used to highlight the advantages of the proposed approach with respect to combination methods based on statistical measures of forecast accuracy. We compute average net excess returns, standard deviation, and the Sharpe ratio of bond portfolios obtained with nine alternative yield curve specifications, as well as with 12 different forecast combination strategies. Return-based forecast combination schemes clearly outperformed approaches based on statistical measures of forecast accuracy in terms of economic criteria. Moreover, return-based approaches that dynamically select only the model with highest weight each period and discard all other models delivered even better results, evidencing not only the advantages of trimming forecast combinations but also the ability of the proposed approach to detect best-performing models. To analyze the robustness of our results, different levels of risk aversion and a different dataset are considered.
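A minimal sketch of a return-based scheme and its trimmed variant; the clipping of loss-making models and the equal-weight fallback are illustrative assumptions rather than the paper's exact rules:

```python
# Sketch of return-based combination weights and the trimmed variant
# that keeps only the best-performing model each period.
import numpy as np

def return_based_weights(past_returns):
    """past_returns: (n_models,) average past portfolio return per model."""
    score = np.clip(past_returns, 0.0, None)        # ignore loss-making models
    if score.sum() == 0.0:                          # fallback: equal weights
        return np.full(len(past_returns), 1.0 / len(past_returns))
    return score / score.sum()

def select_best(model_forecasts, past_returns):
    """Trimmed variant: discard all models but the one with highest weight."""
    return model_forecasts[np.argmax(return_based_weights(past_returns))]
```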

15.
Except for a few brief periods, Einstein was uninterested in analysing the nature of the spacetime singularities that appeared in solutions to his gravitational field equations for general relativity. The existence of such monstrosities reinforced his conviction that general relativity was an incomplete theory which would be superseded by a singularity-free unified field theory. Nevertheless, on a number of occasions between 1916 and the end of his life, Einstein was forced to confront singularities. His reactions show a strange asymmetry: he tended to be more disturbed by (what today we would call) merely apparent singularities and less disturbed by (what we would call) real singularities. Einstein had strong a priori ideas about what results a correct physical theory should deliver. In the process of searching through theoretical possibilities, he tended to push aside technical problems and jump over essential difficulties. Sometimes this method of working produced brilliant new ideas—such as the Einstein–Rosen bridge—and sometimes it led him to miss important implications of his theory of gravity—such as gravitational collapse.

16.
In Boltzmannian statistical mechanics macro-states supervene on micro-states. This leads to a partitioning of the state space of a system into regions of macroscopically indistinguishable micro-states. The largest of these regions is singled out as the equilibrium region of the system. What justifies this association? We review currently available answers to this question and find them wanting both for conceptual and for technical reasons. We propose a new conception of equilibrium and prove a mathematical theorem which establishes in full generality – i.e. without making any assumptions about the system's dynamics or the nature of the interactions between its components – that the equilibrium macro-region is the largest macro-region. We then turn to the question of the approach to equilibrium, of which there exists no satisfactory general answer so far. In our account, this question is replaced by the question when an equilibrium state exists. We prove another – again fully general – theorem providing necessary and sufficient conditions for the existence of an equilibrium state. This theorem changes the way in which the question of the approach to equilibrium should be discussed: rather than launching a search for a crucial factor (such as ergodicity or typicality), the focus should be on finding triplets of macro-variables, dynamical conditions, and effective state spaces that satisfy the conditions of the theorem.
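In symbols (notation introduced here for illustration, not the authors'): let the state space X carry a measure μ and be partitioned into macro-regions X_M of macroscopically indistinguishable micro-states; the first theorem then establishes, without dynamical assumptions, that the equilibrium macro-state M_eq satisfies

```latex
% Illustrative notation: the equilibrium macro-region is the largest.
\mu\!\left( X_{M_{\mathrm{eq}}} \right) \;\geq\; \mu\!\left( X_{M} \right)
\quad \text{for every macro-state } M
```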

17.
This paper systematically compares two frameworks for analysing technical artefacts: the Dual-Nature approach, exemplified by the contributions to Kroes and Meijers (2006), and the collectivist approach advocated by Schyfter (2009), following Kusch (1999). After describing the main tenets of both approaches, we show that there is significant overlap between them: both frameworks analyse the most typical cases of artefact use, albeit in different terms, but to largely the same extent. Then, we describe several kinds of cases for which the frameworks yield different analyses. For these cases, which include one-of-a-kind artefacts and defect types, the Dual-Nature framework leads to a more attractive analysis. Our comparison also gives us the opportunity to respond to Vaesen’s (2010, this issue) critical paper. We do so by distinguishing two readings of the Dual-Nature framework and pointing out that on the sustainable, weaker reading, Vaesen’s considerations supplement the framework rather than offering an alternative to it.

18.
For more than three decades, there has been significant debate about the relation between Feyerabend and Popper. The discussion has been nurtured and complicated by the rift that opened up between the two and by the later Feyerabend's controversial portrayal of his earlier self. The first part of the paper provides an overview of the accounts of the relation that have been proposed over the years, disentangles the problems they deal with, and analyses the evidence supporting their conclusions as well as the methodological approaches used to process that evidence. Rather than advancing a further speculative account of the relation based on Feyerabend's philosophical work or autobiographical recollections, the second part of the paper strives to clarify the problems at issue by making use of a wider range of evidence. It outlines a historical reconstruction of the social context within which Feyerabend's intellectual trajectory developed, putting a special emphasis on the interplay between the perceived intellectual identity of Feyerabend, Feyerabend's own intellectual self-concept, and the peculiar features of the evolving Popperian research group.

19.
The three basic modelling approaches used to explain forest fire behaviour are theoretically, laboratory- or empirically based. Results of all three approaches are reviewed, but it is noted that only the laboratory- and empirically based models have led to forecasting techniques that are in widespread use. These are the Rothermel model and the McArthur meters, respectively. Field tests designed to test the performance of these operational models were carried out in tropical grasslands. A preliminary analysis indicated that the Rothermel model overpredicted spread rates while the McArthur model underpredicted them. To improve the forecast of bushfire rate of spread available to operational firefighting crews, it is suggested that a time-variable parameter (TVP) recursive least squares algorithm can be used to assign weights to the respective models, with the weights recursively updated as information on fire-front location becomes available. Results of this methodology when applied to US Grasslands fire experiment data indicate that the quality of the input, combined with a priori knowledge of the performance of the candidate models, plays an important role in the performance of the TVP algorithm. With high-quality input data, the Rothermel model on its own outperformed the TVP algorithm, but with slightly inferior data both approaches were comparable. Though the use of all available data in a multiple linear regression produces a lower sum of squared errors than the recursive, time-variable weighting approach, or than any single model, the uncertainties of data input and consequent changes in weighting coefficients under operational conditions suggest the use of the TVP algorithm approach.
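A minimal sketch of such a recursive least squares weighting with a forgetting factor, assuming two candidate model predictions per period; the forgetting factor, initialization, and variable names are assumptions for illustration:

```python
# Recursive least squares with a forgetting factor, re-weighting
# candidate model predictions as new fire-front observations arrive.
import numpy as np

def tvp_rls(preds, observed, lam=0.95, delta=100.0):
    """preds: (T, n) model predictions; observed: (T,) observed spread rates.
    Returns the (T, n) path of recursively updated model weights."""
    n = preds.shape[1]
    w = np.full(n, 1.0 / n)                 # start from equal weights
    P = delta * np.eye(n)                   # large P = diffuse initial belief
    path = []
    for x, y in zip(preds, observed):
        k = P @ x / (lam + x @ P @ x)       # gain vector
        w = w + k * (y - x @ w)             # correct weights by forecast error
        P = (P - np.outer(k, x) @ P) / lam  # discount old information
        path.append(w.copy())
    return np.array(path)
```

The combined forecast at each step is x @ w; with lam = 1 this reduces to ordinary recursive least squares, while lam < 1 lets the weights track time variation.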

20.
According to Zurek, decoherence is a process resulting from the interaction between a quantum system and its environment; this process singles out a preferred set of states, usually called the “pointer basis”, that determines which observables will receive definite values. This means that decoherence leads to a sort of selection which precludes all except a small subset of the states in the Hilbert space of the system from behaving in a classical manner: environment-induced superselection—einselection—is a consequence of the process of decoherence. The aim of this paper is to present a new approach to decoherence, different from the mainstream approach of Zurek and his collaborators. We will argue that this approach offers conceptual advantages over the traditional one when problems of foundations are considered; in particular, from the new perspective, decoherence in closed quantum systems becomes possible and the preferred basis acquires a well-founded definition.
