Similar Documents (20 results)
1.
Three-Dimensional Modeling Technology for Whole-Vehicle Dynamic Characteristics of Launch Vehicles   (Cited: 5 total; 0 self-citations, 5 by others)
At present, dynamic-characteristic modeling of launch vehicles in China relies mainly on beam models, but a beam model is a simplification of the launch vehicle's three-dimensional structure and has several drawbacks, which a three-dimensional modeling approach can overcome. This paper summarizes the authors' research on three-dimensional modeling of launch vehicles in recent years and proposes a three-dimensional modeling method for launch vehicles, covering monocoque and semi-monocoque structures, sandwich structures, engine frames and engines, liquid propellant, non-structural mass, and connections. For liquid modeling in particular, a rod-element method for modeling the liquid propellant is proposed, which greatly simplifies the liquid modeling work. Application to engineering examples and comparison with test results demonstrate the applicability of the modeling methods. Uncertain factors in launch vehicle modeling are also identified.

2.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper is to compare the predictive ability of univariate and bivariate models in forecasting US gross national product (GNP) growth at different horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period 1919:Q2 to 2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits forecasting ability similar to the term spread and outperforms other uncertainty measures, such as the volatility index and geopolitical risk, in predicting US recessions. While the Markov switching time-varying parameter vector autoregressive model yields the lowest root mean squared error in most cases, we observe relatively low values of the log predictive density score when using the Bayesian vector autoregressive model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.
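As an illustration of the equal predictive ability testing mentioned above, the following is a minimal sketch of a Diebold-Mariano style comparison of two forecast error series (an assumed simplification; the paper's SPA tests and density-forecast scores are not reproduced here):

```python
# Minimal Diebold-Mariano style equal predictive ability (EPA) check on two
# competing forecast error series, using squared-error loss (illustrative sketch).
import numpy as np
from scipy.stats import norm

def diebold_mariano(e1, e2):
    d = np.asarray(e1) ** 2 - np.asarray(e2) ** 2      # loss differential
    T = len(d)
    dm = d.mean() / np.sqrt(d.var(ddof=1) / T)         # simple variance; a HAC estimator is typical in practice
    return dm, 2 * (1 - norm.cdf(abs(dm)))             # two-sided p-value

# Example: hypothetical errors from a univariate model vs. a bivariate model with EPU
rng = np.random.default_rng(0)
stat, pval = diebold_mariano(rng.normal(size=120) * 1.2, rng.normal(size=120))
```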

3.
S-shaped growth curves such as the Gompertz, logistic, normal and Weibull are widely used for forecasting technological substitutions. A family of data-based transformed (DBT) models, which are linear in the regression parameters and include the above-mentioned four models as special cases, has been shown to be quite useful for short-term forecasts. This paper explores modeling technology penetration data directly with assumed S-shaped growth curves. The resulting models, which are nonlinear in the regression parameters, also incorporate a proper dependence structure and a power transformation. Nonlinear modeling appears to be a viable alternative to the DBT and other conventional forecasting models for forecasting technological substitutions. Hence, an appropriate strategy is to consider the nonlinear modeling approaches as possible alternatives and use the data at hand to select, via pseudo-cross-validation, the best model for forecasting purposes.
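As a rough sketch of fitting such S-shaped curves directly to penetration data by nonlinear least squares, with a simple holdout comparison standing in for pseudo-cross-validation (the synthetic data, starting values and holdout length below are placeholders, not taken from the paper):

```python
# Fit logistic and Gompertz growth curves to a synthetic penetration series and
# pick a model by a simple holdout comparison (illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))            # K: saturation, t0: inflection time

def gompertz(t, K, b, c):
    return K * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(1)
t = np.arange(20.0)
y = logistic(t, 0.9, 0.5, 10.0) + rng.normal(0, 0.02, t.size)   # synthetic penetration data

candidates = {"logistic": (logistic, [1.0, 0.3, 8.0]),
              "gompertz": (gompertz, [1.0, 5.0, 0.3])}
holdout = 4
errors = {}
for name, (f, p0) in candidates.items():
    params, _ = curve_fit(f, t[:-holdout], y[:-holdout], p0=p0, maxfev=10000)
    errors[name] = np.mean((y[-holdout:] - f(t[-holdout:], *params)) ** 2)
best = min(errors, key=errors.get)                        # curve with smallest holdout error
```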

4.
This paper investigates the time-varying volatility patterns of some major commodities as well as the potential factors that drive their long-term volatility component. For this purpose, we make use of a recently proposed generalized autoregressive conditional heteroskedasticity-mixed data sampling (GARCH-MIDAS) approach, which allows us to examine the role of economic and financial variables observed at different frequencies. Using commodity futures for crude oil (WTI and Brent), gold, silver and platinum, as well as a commodity index, our results show the need to disentangle the short-term and long-term components in modeling and forecasting commodity volatility. They also indicate that the long-term volatility of most commodity futures is significantly driven by the level of global real economic activity as well as by changes in consumer sentiment, industrial production, and economic policy uncertainty. However, the forecasting results differ across commodity futures, as no single model fits all commodities.
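A stylized sketch of the two-component idea behind a GARCH-MIDAS specification is given below; the exponential long-run specification, Beta lag weights and parameter values are placeholder assumptions for illustration, not the paper's estimates:

```python
# Stylized GARCH-MIDAS variance filter: total variance = tau (slow, macro-driven
# long-run component) * g (daily unit-GARCH short-run component). Illustration only.
import numpy as np

def beta_weights(K, omega=3.0):
    k = np.arange(1, K + 1)
    w = (1 - k / K) ** (omega - 1)
    return w / w.sum()

def long_run_component(macro_lags, m=0.1, theta=0.3, omega=3.0):
    # macro_lags: the most recent K lags of, e.g., real activity or EPU (newest first)
    return np.exp(m + theta * np.dot(beta_weights(len(macro_lags), omega), macro_lags))

def garch_midas_variance(daily_returns, tau, mu=0.0, alpha=0.07, beta=0.9):
    g = np.ones_like(daily_returns)
    for i in range(1, len(daily_returns)):
        g[i] = (1 - alpha - beta) + alpha * (daily_returns[i - 1] - mu) ** 2 / tau + beta * g[i - 1]
    return tau * g                                     # conditional variance path within the period

# Example: one "month" of 22 daily returns, with 12 macro lags driving tau
rng = np.random.default_rng(0)
tau = long_run_component(rng.normal(size=12))
sigma2 = garch_midas_variance(rng.normal(scale=0.01, size=22), tau)
```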

5.
Effectively explaining and accurately forecasting industrial stock volatility can provide crucial guidance for developing investment strategies, preventing market risk and maintaining the smooth running of the national economy. This paper discusses the roles of industry-level indicators in industrial stock volatility. Selecting the Chinese manufacturing purchasing managers' index (PMI) and its five component PMIs as proxies for industry-level indicators, we analyze the contribution of PMI to industrial stock volatility and further compare the volatility forecasting performance of PMI, macroeconomic fundamentals and economic policy uncertainty (EPU) by constructing individual and combination GARCH-MIDAS models. The empirical results show that, first, most of the PMI measures have significant negative effects on industrial stock volatility. Second, PMI, which focuses on the industrial sector itself, is more helpful for forecasting industrial stock volatility than the commonly used macroeconomic fundamentals and economic policy uncertainty. Finally, the combination GARCH-MIDAS approaches based on the dynamic model averaging (DMA) technique demonstrate better predictive ability than the individual GARCH-MIDAS models. Our main conclusions are robust to various robustness checks.
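The combination step can be illustrated with a small sketch of dynamic model averaging (DMA) style weights, in which each model's weight reflects its discounted predictive record; the forgetting factor and Gaussian predictive densities below are assumptions for illustration and not the paper's exact setup:

```python
# Sketch of DMA-style forecast combination weights updated with a forgetting factor
# and one-step-ahead predictive likelihoods (illustrative only).
import numpy as np
from scipy.stats import norm

def dma_weights(pred_means, pred_stds, realized, alpha=0.99):
    # pred_means / pred_stds: (T x k) one-step-ahead predictive moments of k models
    T, k = pred_means.shape
    w = np.full(k, 1.0 / k)
    weights = np.zeros((T, k))
    for t in range(T):
        w_pred = w ** alpha
        w_pred /= w_pred.sum()                         # forgetting-factor prediction step
        weights[t] = w_pred                            # weights used to combine forecasts at time t
        lik = norm.pdf(realized[t], pred_means[t], pred_stds[t])
        w = np.maximum(w_pred * lik, 1e-12)            # update with predictive likelihood (floored)
        w /= w.sum()
    return weights
```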

6.
In this paper, we assess the predictive content of latent economic policy uncertainty and data surprise factors for forecasting and nowcasting gross domestic product (GDP) using factor-type econometric models. Our analysis focuses on five emerging market economies: Brazil, Indonesia, Mexico, South Africa, and Turkey; and we carry out a forecasting horse race in which predictions from various different models are compared. These models may (or may not) contain latent uncertainty and surprise factors constructed using both local and global economic datasets. The set of models that we examine in our experiments includes both simple benchmark linear econometric models as well as dynamic factor models that are estimated using a variety of frequentist and Bayesian data shrinkage methods based on the least absolute shrinkage and selection operator (LASSO). We find that the inclusion of our new uncertainty and surprise factors leads to superior predictions of GDP growth, particularly when these latent factors are constructed using Bayesian variants of the LASSO. Overall, our findings point to the importance of spillover effects from global uncertainty and data surprises, when predicting GDP growth in emerging market economies.
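A minimal sketch of the general idea of shrinking a factor-augmented predictive regression with the LASSO is given below; the synthetic data, factor count and use of sklearn's LassoCV are placeholder assumptions, and the paper's uncertainty/surprise factors and Bayesian LASSO variants are not reproduced:

```python
# Toy factor-augmented nowcasting regression with LASSO shrinkage (illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 40))           # stand-in for a large predictor dataset
y = X[:, :3] @ np.array([0.5, -0.3, 0.2]) + rng.normal(scale=0.5, size=80)   # stand-in for GDP growth

factors = PCA(n_components=5).fit_transform(X)        # latent factors extracted from the panel
model = LassoCV(cv=5).fit(factors[:-1], y[1:])        # shrink/select factors for one-step prediction
nowcast = model.predict(factors[[-1]])                # prediction from the latest factor estimates
```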

7.
Difficulties over probability have often been considered fatal to the Everett interpretation of quantum mechanics. Here I argue that the Everettian can have everything she needs from ‘probability’ without recourse to indeterminism, ignorance, primitive identity over time or subjective uncertainty: all she needs is a particular rationality principle. The decision-theoretic approach recently developed by Deutsch and Wallace claims to provide just such a principle. But, according to Wallace, decision theory is itself applicable only if the correct attitude to a future Everettian measurement outcome is subjective uncertainty. I argue that subjective uncertainty is not available to the Everettian, but I offer an alternative: we can justify the Everettian application of decision theory on the basis that an Everettian should care about all her future branches. The probabilities appearing in the decision-theoretic representation theorem can then be interpreted as the degrees to which the rational agent cares about each future branch. This reinterpretation, however, reduces the intuitive plausibility of one of the Deutsch–Wallace axioms (measurement neutrality).

8.
Compared with point forecasting, interval forecasting is believed to be more effective and helpful in decision making, as it provides more information about the data generation process. Based on the well-established “linear and nonlinear” modeling framework, a hybrid model is proposed by coupling the vector error correction model (VECM) with artificial intelligence models which consider the cointegration relationship between the lower and upper bounds (Coin-AIs). VECM is first employed to fit the original time series with the residual error series modeled by Coin-AIs. Using pork price as a research sample, the empirical results statistically confirm the superiority of the proposed VECM-CoinAIs over other competing models, which include six single models and six hybrid models. This result suggests that considering the cointegration relationship is a workable direction for improving the forecast performance of the interval-valued time series. Moreover, with a reasonable data transformation process, interval forecasting is proven to be more accurate than point forecasting.

9.
In this paper we aim to improve existing empirical exchange rate models by accounting for uncertainty with respect to the underlying structural representation. Within a flexible Bayesian framework, our modeling approach assumes that different regimes are characterized by commonly used structural exchange rate models, with transitions across regimes being driven by a Markov process. We assume a time-varying transition probability matrix with transition probabilities depending on a measure of the monetary policy stance of the central bank at home and in the USA. We apply this model to a set of eight exchange rates against the US dollar. In a forecasting exercise, we show that model evidence varies over time, and a model approach that takes this empirical evidence seriously yields more accurate density forecasts for most currency pairs considered.

10.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy-to-compute daily return intervals during the modeling, estimation and forecasting stages. They have to stand up to comparable point-data models of the well-known capital asset pricing model type, which employ single daily returns based on successive closing prices and might allow for GARCH effects, in a comprehensive out-of-sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point-data competitors in the great majority of cases. This result holds not only when one-day-ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point-data volatility forecasting tools.
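One simple way to set up a linear regression for interval-valued returns is the center-and-range idea sketched below; this is an illustrative toy AR(1) on interval centers and half-ranges, not the authors' exact specification:

```python
# Toy linear model for interval-valued daily returns: regress today's interval
# center and half-range on yesterday's (illustrative sketch).
import numpy as np

def fit_interval_ar1(lower, upper):
    center, half = (lower + upper) / 2, (upper - lower) / 2
    X_c = np.column_stack([np.ones(len(center) - 1), center[:-1]])
    X_r = np.column_stack([np.ones(len(half) - 1), half[:-1]])
    beta_c, *_ = np.linalg.lstsq(X_c, center[1:], rcond=None)   # center equation
    beta_r, *_ = np.linalg.lstsq(X_r, half[1:], rcond=None)     # half-range equation
    return beta_c, beta_r

def forecast_next_interval(beta_c, beta_r, last_lower, last_upper):
    c = beta_c[0] + beta_c[1] * (last_lower + last_upper) / 2
    r = max(beta_r[0] + beta_r[1] * (last_upper - last_lower) / 2, 0.0)  # keep the width non-negative
    return c - r, c + r
```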

11.
This paper provides an account of mid-level models which calibrate highly theoretical agent-based models of scientific communities by incorporating empirical information from real-world systems. As a result, these models more closely correspond with real-world communities, and are better suited for informing policy decisions than extant how-possibly models. I provide an exemplar of a mid-level model of science funding allocation that incorporates bibliometric data from scientific publications and data generated from empirical studies of peer review into an epistemic landscape model. The results of my model show that on a dynamic epistemic landscape, allocating funding by modified and pure lottery strategies performs comparably to a perfect selection funding allocation strategy. These results support the idea that introducing randomness into a funding allocation process may be a tractable policy worth exploring further through pilot studies. My exemplar shows that agent-based models need not be restricted to the abstract and the a priori; they can also be informed by empirical data.

12.
Value-at-Risk (VaR) is widely used as a tool for measuring the market risk of asset portfolios. However, alternative VaR implementations are known to yield fairly different VaR forecasts. Hence, every use of VaR requires choosing among alternative forecasting models. This paper undertakes two case studies in model selection, for the S&P 500 index and India's NSE-50 index, at the 95% and 99% levels. We employ a two-stage model selection procedure. In the first stage we test a class of models for statistical accuracy. If multiple models survive rejection with the tests, we perform a second stage filtering of the surviving models using subjective loss functions. This two-stage model selection procedure does prove to be useful in choosing a VaR model, while only incompletely addressing the problem. These case studies give us some evidence about the strengths and limitations of present knowledge on estimation and testing for VaR.
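For the first-stage statistical accuracy screening, one standard ingredient is an unconditional coverage test on VaR violations; a minimal sketch in the spirit of Kupiec's proportion-of-failures test follows (an illustrative implementation, not necessarily the exact tests used in the paper):

```python
# Kupiec proportion-of-failures (POF) test: do VaR violations occur at the nominal rate?
import numpy as np
from scipy.stats import chi2

def kupiec_pof(returns, var_forecasts, coverage=0.99):
    # var_forecasts: left-tail VaR levels (negative returns); a violation is a loss beyond VaR
    viol = (np.asarray(returns) < np.asarray(var_forecasts)).astype(int)
    n, x = len(viol), viol.sum()
    p = 1.0 - coverage
    phat = min(max(x, 0.5), n - 0.5) / n               # guard against log(0) at the boundaries
    lr = -2 * ((n - x) * np.log(1 - p) + x * np.log(p)
               - (n - x) * np.log(1 - phat) - x * np.log(phat))
    return lr, 1 - chi2.cdf(lr, df=1)                  # small p-value: reject the VaR model
```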

13.
Following demands to regulate biomedicine in the post-war period, Sweden saw several political debates about research ethics in the 1970s. Many of the debates centered on fetal research and animal experiments. At stake were questions of moral permissibility, public transparency, and scientific freedom. However, these debates did not only reveal ethical disagreement; they also contributed to constructing new boundaries between life-forms. Taking a post-Marxist approach to discursive policy analysis, we argue that the meaning of both the “human” and the “animal” in these debates was shaped by a need to manage a legitimacy crisis for medical science. By analyzing Swedish government bills, motions, parliamentary debates, and committee memorials from the 1970s, we map out how fetal and animal research were constituted as policy problems. We place particular emphasis on the problematization of fetal and animal vulnerability. By comparing the debates, we trace out how a particular vision of the ideal life defined the human-animal distinction.

14.
Climate modeling is closely tied, through its institutions and practices, to observations from satellites and to the field sciences. The validity, quality and scientific credibility of models are based on interaction between models and observation data. In the case of numerical modeling of climate and climate change, validation is not solely a scientific interest: the legitimacy of computer modeling, as a tool of knowledge, has been called into question in order to deny the reality of any anthropogenic climate change; model validations thereby bring political issues into play as well. There is no systematic protocol of validation: one never validates a model in general, but the capacity of a model to account for a defined climatic phenomenon or characteristic. From practices observed in the two research centers developing and using a climate model in France, this paper reviews different ways in which the researchers establish links between models and empirical data (which are not reduced to the latter validating the former) and convince themselves that their models are valid. The analysis of validation practices (relating to parametrization, modes of variability, climatic phenomena, etc.) allows us to highlight some elements of the epistemology of modeling.

15.
Success in forecasting using mathematical/statistical models requires that the models be open to intervention by the user. In practice, a model is only one component of a forecasting system, which also includes the users/forecasters as integral components. Interaction between the user and the model is necessary to adequately cater for events and changes that go beyond the existing form of the model. In this paper we consider Bayesian forecasting models open to interventions, of essentially any form, to incorporate subjective information made available to the user. We discuss principles of intervention and derive theoretical results that provide the means to formally incorporate feedforward interventions into Bayesian models. Two example time series are considered to illustrate why and when such interventions may be necessary to sustain predictive performance.
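The feedforward idea can be illustrated with a toy local-level dynamic linear model in which, at a known intervention time, the prior variance is inflated so the model can absorb an anticipated change; this is a sketch in the spirit of that framework, with placeholder variances and inflation amount:

```python
# Local-level DLM filter with a simple feedforward intervention: extra prior
# uncertainty is admitted at user-specified times (illustrative sketch).
import numpy as np

def local_level_filter(y, V=1.0, W=0.1, intervention_times=(), inflate=25.0):
    m, C = 0.0, 1e6                       # diffuse initial prior for the level
    forecasts = []
    for t, obs in enumerate(y):
        R = C + W
        if t in intervention_times:       # feedforward intervention: inflate prior variance
            R += inflate
        f, Q = m, R + V                   # one-step forecast and its variance
        forecasts.append((f, Q))
        A = R / Q                         # adaptive coefficient (Kalman gain)
        m, C = m + A * (obs - f), A * V   # posterior mean and variance of the level
    return forecasts
```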

16.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: “How can ensemble studies be designed so that they probe uncertainty in desired ways?” This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by “plugging and playing” with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they both could be used in a complementary manner in order to explore ways of designing better MMEs.

17.
Experimental modeling is the construction of theoretical models hand in hand with experimental activity. As explained in Section 1, experimental modeling starts with claims about phenomena that use abstract concepts, concepts whose conditions of realization are not yet specified; and it ends with a concrete model of the phenomenon, a model that can be tested against data. This paper argues that this process from abstract concepts to concrete models involves judgments of relevance, which are irreducibly normative. In Section 2, we show, on the basis of several case studies, how these judgments contribute to the determination of the conditions of realization of the abstract concepts and, at the same time, of the quantities that characterize the phenomenon under study. Then, in Section 3, we compare this view on modeling with other approaches that also have acknowledged the role of relevance judgments in science. To conclude, in Section 4, we discuss the possibility of a plurality of relevance judgments and introduce a distinction between locally and generally relevant factors.

18.
A smart, automated forecasting system is a kind of expert system for generating forecasts wholly or partly without human intervention. A pilot-scale system is reported in this paper. The interventions are made by a rulebase which describes the actual intervention procedures of the London Business School Centre for Economic Forecasting on one of the major macroeconometric forecasting models for the U.K. economy. The rulebase is sufficiently general to be applicable to any forecasting model. One way in which the pilot model reported in this paper differs from conventional models is that policy behaviour is described entirely by rules rather than by equations. This allows the use of thresholds, floors, ceilings, and discontinuities.

19.
This paper addresses issues such as: Does it always pay to combine individual forecasts of a variable? Should one combine an unbiased forecast with one that is heavily biased? Should one use optimal weights as suggested by Bates and Granger over twenty years ago? A simple model that accounts for the main features of individual forecasts is put forward. A Bayesian analysis of the model using noninformative and informative prior probability densities is provided, which extends and generalizes results obtained by Winkler (1981), and it is compared with non-Bayesian methods of combining forecasts that rely explicitly on a statistical model for the individual forecasts. It is shown that in some instances it is sensible to use a simple average of individual forecasts instead of using Bates and Granger type weights. Finally, model uncertainty is considered and the issue of combining different models for individual forecasts is addressed.
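A small sketch of the contrast between Bates and Granger style optimal weights and a simple average follows (illustrative only; the Bayesian analysis with informative priors discussed above is not reproduced):

```python
# Minimum-variance (Bates-Granger style) combination weights vs. a simple average.
import numpy as np

def bates_granger_weights(forecast_errors):
    # forecast_errors: (T x k) matrix of past errors from k individual forecasts
    sigma = np.cov(forecast_errors, rowvar=False)
    inv = np.linalg.pinv(sigma)
    ones = np.ones(sigma.shape[0])
    return inv @ ones / (ones @ inv @ ones)            # weights minimizing combined error variance

errors = np.random.default_rng(1).normal(size=(100, 3)) * np.array([1.0, 1.5, 2.0])
w_opt = bates_granger_weights(errors)
w_avg = np.full(3, 1 / 3)                              # the simple average, often hard to beat
```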

20.
We decompose economic uncertainty into "good" and "bad" components according to the sign of innovations. Our results indicate that bad uncertainty provides stronger predictive content regarding future market volatility than good uncertainty. The asymmetric models with good and bad uncertainties forecast market volatility better than the symmetric models with overall uncertainty. The combination of asymmetric uncertainty models significantly outperforms the autoregressive benchmark, as well as the combination of symmetric models. The revealed volatility predictability is further shown to be economically significant in a portfolio allocation framework.
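A minimal sketch of a sign-based decomposition of this kind is shown below; the use of first differences as innovation proxies and rolling semivariances is an assumption for illustration, and the paper's exact construction is not reproduced:

```python
# Split an uncertainty-related series into "good" and "bad" components by the
# sign of its innovations, using rolling semivariances (illustrative sketch).
import pandas as pd

def good_bad_uncertainty(series, window=12):
    innov = pd.Series(series).diff()                           # crude proxy for innovations
    good = (innov.clip(lower=0) ** 2).rolling(window).mean()   # component from positive shocks
    bad = (innov.clip(upper=0) ** 2).rolling(window).mean()    # component from negative shocks
    return good, bad

# Future market volatility can then be regressed on lagged good/bad uncertainty
# to compare their predictive content, as the asymmetric models above do.
```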
