297 search results found (query time: 31 ms)
1.
In recent years there has been a growing interest in exploiting potential forecast gains from the non‐linear structure of self‐exciting threshold autoregressive (SETAR) models. Statistical tests have been proposed in the literature to help analysts check for the presence of SETAR‐type non‐linearities in an observed time series. It is important to study the power and robustness properties of these tests, since erroneous test results might lead to misspecified prediction problems. In this paper we investigate the robustness properties of several commonly used non‐linearity tests. Both robustness with respect to outlying observations and robustness with respect to model specification are considered. The power comparison of these testing procedures is carried out using Monte Carlo simulation. The results indicate that none of the existing tests is robust to outliers and model misspecification. Finally, an empirical application applies the statistical tests to stock market returns of the four little dragons (Hong Kong, South Korea, Singapore and Taiwan) in East Asia. The non‐linearity tests fail to provide consistent conclusions most of the time. The results in this article stress the need for a more robust test for SETAR‐type non‐linearity in time series analysis and forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
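The SETAR setup described above can be illustrated with a minimal sketch: simulate a two-regime SETAR(2;1,1) process and compare a linear AR(1) fit against a two-regime fit with an F-type statistic. This is a simplified stand-in for the formal tests discussed in the abstract (the threshold is assumed known here, and all parameter choices are illustrative, not taken from the paper).

```python
import numpy as np

def simulate_setar(n, phi_low, phi_high, threshold=0.0, seed=0):
    # Two-regime SETAR(2;1,1): the AR(1) coefficient switches when
    # the lagged value crosses the threshold (self-exciting).
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        phi = phi_low if y[t - 1] <= threshold else phi_high
        y[t] = phi * y[t - 1] + rng.standard_normal()
    return y

def ols_ssr(x, z):
    # Sum of squared residuals of an OLS fit z = a + b*x.
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return float(np.sum((z - X @ beta) ** 2))

def threshold_f_stat(y, threshold=0.0):
    # F-type statistic: restricted linear AR(1) versus a two-regime
    # fit split at a fixed, assumed-known threshold.
    x, z = y[:-1], y[1:]
    ssr0 = ols_ssr(x, z)
    lo = x <= threshold
    ssr1 = ols_ssr(x[lo], z[lo]) + ols_ssr(x[~lo], z[~lo])
    n, k = len(z), 2  # two extra parameters in the two-regime model
    return ((ssr0 - ssr1) / k) / (ssr1 / (n - 2 * k))
```

A large statistic on data generated with genuinely different regime coefficients, versus a small one on a plain AR(1) series, reflects the kind of power these tests aim for; the robustness problems the paper documents arise when outliers or misspecification inflate the statistic spuriously.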
2.
The aim of the work was to investigate the fate of injectant coal in blast furnaces and the origin of extractable materials in blast furnace carryover dusts. Two sets of samples, comprising injectant coal and the corresponding carryover dusts from a full-sized blast furnace and a pilot-scale rig, have been examined. The samples were extracted using 1-methyl-2-pyrrolidinone (NMP) solvent and the extracts studied by size exclusion chromatography (SEC). The blast furnace carryover dust extracts contained high molecular weight carbonaceous material, of apparent mass corresponding to 10^7–10^8 u by polystyrene calibration. In contrast, the feed coke and char prepared in a wire mesh reactor under high temperature conditions did not give any extractable material. Meanwhile, controlled combustion experiments in a high-pressure wire mesh reactor suggest that the extent of combustion of injectant coal in the blast furnace tuyeres and raceways is limited by the time of exposure and the very low oxygen concentration. It is thus likely that the extractable, soot-like material in the blast furnace dust originated in tars released by the injectant coal. Our results suggest that the unburned tars were thermally altered during the upward path within the furnace, giving rise to the formation of heavy molecular weight (soot-like) materials.
3.
Observing that a sequence of negative logarithms of 1‐year survival probabilities displays a linear relationship with the corresponding sequence lagged by a certain number of years, we propose a simple linear regression to model and forecast mortality rates. Our model, which assumes linearity between two mortality sequences lagged relative to each other, does not need to formulate the time trends of mortality rates across ages for mortality prediction. Moreover, the parameters of our model for a given age depend on the mortality rates for that age only. Therefore, widening or shortening the span of study ages that includes a given age will not affect the results of mortality fitting and forecasting for that age. In the empirical testing, the regression results using the mortality data for the UK, USA and Japan show a satisfactory goodness of fit, which convinces us of the appropriateness of the linearity assumption. Empirical illustrations further show that our model's performance in fitting and forecasting mortality rates is quite satisfactory compared with the existing well‐known mortality models. Copyright © 2015 John Wiley & Sons, Ltd.
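The lagged-linearity idea above can be sketched in a few lines: regress the series of negative log survival probabilities for one age on itself shifted by a fixed lag, then iterate the fitted relation forward. This is an illustrative reconstruction under the stated assumption (one regression per age, no cross-age information), not the paper's exact estimation procedure.

```python
import numpy as np

def fit_lag_regression(y, lag):
    # Regress the series of negative log 1-year survival
    # probabilities on itself shifted by `lag` years:
    # y_t = a + b * y_{t-lag}.
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - lag), y[:-lag]])
    a, b = np.linalg.lstsq(X, y[lag:], rcond=None)[0]
    return a, b

def forecast(y, lag, steps, a, b):
    # Iterate the fitted relation forward to extrapolate mortality;
    # only this age's own history is used.
    path = list(y)
    for _ in range(steps):
        path.append(a + b * path[-lag])
    return path[len(y):]
```

Because each age is fitted on its own series, adding or dropping other ages from the study cannot change this fit, which is the invariance property the abstract highlights.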
4.
5.
Using option market data, we derive naturally forward‐looking, nonparametric and model‐free risk estimates, three desired characteristics hardly obtainable using historical returns. The option‐implied measures are based only on the first derivative of the option price with respect to the strike price, bypassing the difficult task of estimating the tail of the return distribution. We estimate and backtest the 1%, 2.5%, and 5% WTI crude oil futures option‐implied value at risk and conditional value at risk for the turbulent years 2011–2016 and for both tails of the distribution. Compared with risk estimations based on the filtered historical simulation methodology, our results show that the option‐implied risk metrics are valid alternatives to the statistically based historical models.
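The mechanics of reading a quantile off the first strike derivative can be sketched as follows. The implied risk-neutral CDF satisfies F(K) = 1 + e^{rT} ∂C/∂K, so a value-at-risk level is just an interpolated quantile of F. In this toy example the "market" call prices are generated from Black–Scholes purely to have a known answer; the parameter values are illustrative assumptions, and real applications would use observed quotes.

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    # Black-Scholes call price, used here only to generate a toy
    # option surface with a known implied distribution.
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_quantile(strikes, calls, r, T, alpha):
    # Risk-neutral CDF from the first strike derivative of the call
    # price: F(K) = 1 + e^{rT} * dC/dK; the alpha-quantile of the
    # implied distribution is read off by interpolation.
    F = 1.0 + exp(r * T) * np.gradient(calls, strikes)
    return float(np.interp(alpha, F, strikes))
```

No density (second derivative) estimation is needed for the quantile itself, which is the point the abstract makes about bypassing tail estimation.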
6.
We investigate the energy nonadditivity relationship E(A∘B) = E(A) + E(B) + αE(A)E(B), where A∘B denotes the composite of subsystems A and B, which is often considered in the development of the statistical physics of nonextensive systems. It was recently found that α in this equation was not constant for a given system in a given situation and could not characterize nonextensivity for that system. In this work, we select several typical nonextensive systems and compute the behavior of α when a system changes its size or is divided into subsystems in different fashions. Three kinds of interactions are considered. It is found by a thought experiment that α depends on the system size and the interaction, as expected, and on the way we divide the system. However, one of the major results of this work is that, for a given system, α has a minimum with respect to the division position. Around this position, there is a zone in which α is more or less constant, a situation where the sizes of the subsystems are comparable. The width of this zone depends on the interaction and on the system size. We conclude that if α is considered approximately constant in this zone, the two mathematical difficulties raised in previous studies are solved, meaning that the nonadditive relationship can characterize the nonadditivity of the system as an approximation. In all the cases, α tends to zero in the thermodynamic limit (N→∞) as expected.
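The behavior of α under different divisions can be seen in a toy model: a 1-D chain of N sites with a per-site energy and a nearest-neighbour bond energy, split at position k. Solving the composition rule for α gives α = (E(A∘B) − E(A) − E(B)) / (E(A)E(B)). This model and its parameter values are illustrative assumptions, not the systems studied in the paper, but they reproduce the qualitative findings: |α| depends on the division position, is flattest near comparable subsystem sizes, and vanishes as N grows.

```python
def chain_energy(m, e0=-1.0, J=-0.5):
    # Toy 1-D chain: m sites with self-energy e0 and m-1
    # nearest-neighbour bonds of energy J (illustrative values).
    return m * e0 + (m - 1) * J

def alpha(N, k, e0=-1.0, J=-0.5):
    # Nonadditivity parameter implied by the composition rule
    # E(A.B) = E(A) + E(B) + alpha * E(A) * E(B), for a chain of
    # N sites divided into subsystems of k and N-k sites.
    EA = chain_energy(k, e0, J)
    EB = chain_energy(N - k, e0, J)
    EAB = chain_energy(N, e0, J)
    return (EAB - EA - EB) / (EA * EB)
```

In this model the numerator is always the single cut-bond energy J, so α varies only through E(A)E(B): |α| is smallest where the product of subsystem energies is largest, i.e. near the middle split, and shrinks toward zero in the large-N limit.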
7.
Host genetics has an important role in leprosy, and variants in the shared promoter region of PARK2 and PACRG were the first major susceptibility factors identified by positional cloning. Here we report the linkage disequilibrium mapping of the second linkage peak of our previous genome-wide scan, located close to the HLA complex. In both a Vietnamese familial sample and an Indian case-control sample, the low-producing lymphotoxin-alpha (LTA)+80 A allele was significantly associated with an increase in leprosy risk (P = 0.007 and P = 0.01, respectively). Analysis of an additional case-control sample from Brazil and an additional familial sample from Vietnam showed that the LTA+80 effect was much stronger in young individuals. In the combined sample of 298 Vietnamese familial trios, the odds ratio of leprosy for LTA+80 AA/AC versus CC subjects was 2.11 (P = 0.000024), which increased to 5.63 (P = 0.0000004) in the subsample of 121 trios of affected individuals diagnosed before 16 years of age. In addition to identifying LTA as a major gene associated with early-onset leprosy, our study highlights the critical role of case- and population-specific factors in the dissection of susceptibility variants in complex diseases.
8.
We study the performance of recently developed linear regression models for interval data when it comes to forecasting the uncertainty surrounding future stock returns. These interval data models use easy‐to‐compute daily return intervals during the modeling, estimation and forecasting stage. They have to stand up to comparable point‐data models of the well‐known capital asset pricing model type—which employ single daily returns based on successive closing prices and might allow for GARCH effects—in a comprehensive out‐of‐sample forecasting competition. The latter comprises roughly 1000 daily observations on all 30 stocks that constitute the DAX, Germany's main stock index, for a period covering both the calm market phase before and the more turbulent times during the recent financial crisis. The interval data models clearly outperform simple random walk benchmarks as well as the point‐data competitors in the great majority of cases. This result does not only hold when one‐day‐ahead forecasts of the conditional variance are considered, but is even more evident when the focus is on forecasting the width or the exact location of the next day's return interval. Regression models based on interval arithmetic thus prove to be a promising alternative to established point‐data volatility forecasting tools. Copyright © 2015 John Wiley & Sons, Ltd.
9.
This paper discusses techniques that might be helpful in predicting interest rates and tries to evaluate a new hybrid forecasting approach. Results of examining government bond yields in Germany and France reported in this study indicate that a hybrid forecasting approach which combines techniques of cointegration analysis with neural network (NN) forecasting models can produce superior results to the use of NN forecasting models alone. The findings documented in this paper may stem from the fact that, under certain conditions, working with differenced data leads to a loss of information, and that including the error correction term from the cointegration model can help to cope with this problem. The paper also discusses some possibly interesting directions for further research. Copyright © 2015 John Wiley & Sons, Ltd.
10.
For forecasting nonstationary and nonlinear energy price time series, a novel adaptive multiscale ensemble learning paradigm incorporating ensemble empirical mode decomposition (EEMD), particle swarm optimization (PSO) and least squares support vector machines (LSSVM) with a kernel function prototype is developed. Firstly, the extrema symmetry expansion EEMD, which can effectively restrain mode mixing and end effects, is used to decompose the energy price into simple modes. Secondly, by using the fine‐to‐coarse reconstruction algorithm, the high‐frequency, low‐frequency and trend components are identified. Autoregressive integrated moving average is then applied to predicting the high‐frequency components, while LSSVM is suitable for forecasting the low‐frequency and trend components. At the same time, a universal kernel function prototype is introduced to compensate for the drawbacks of a single kernel function; it can adaptively select the optimal kernel function type and model parameters for the specific data using the PSO algorithm. Finally, the prediction results of all the components are aggregated into the forecast values of the energy price time series. The empirical results show that, compared with popular prediction methods, the proposed method can significantly improve the prediction accuracy of energy prices, with high accuracy in both level and directional predictions. Copyright © 2016 John Wiley & Sons, Ltd.
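The decompose-forecast-aggregate pattern described above can be sketched without the specialized components. Here, moving averages stand in for EEMD's mode decomposition and a simple AR(1) stand-in replaces both ARIMA and LSSVM; everything (window lengths, the AR(1) choice) is an illustrative assumption meant only to show the pipeline's structure, not the paper's method.

```python
import numpy as np

def moving_avg(x, w):
    # Centered moving average with edge padding, same length as x.
    pad = np.pad(x, (w // 2, w - 1 - w // 2), mode="edge")
    return np.convolve(pad, np.ones(w) / w, mode="valid")

def decompose(x, w_short=5, w_long=21):
    # Stand-in for EEMD: moving averages split the series into
    # high-frequency, low-frequency and trend components that sum
    # back to the original series exactly.
    trend = moving_avg(x, w_long)
    low = moving_avg(x, w_short) - trend
    high = x - low - trend
    return high, low, trend

def ar1_forecast(c, steps):
    # Fit c_t = b * c_{t-1} by least squares and iterate forward;
    # a stand-in for the per-component ARIMA/LSSVM forecasters.
    denom = float(np.dot(c[:-1], c[:-1]))
    b = float(np.dot(c[:-1], c[1:])) / denom if denom > 0 else 0.0
    out, last = [], c[-1]
    for _ in range(steps):
        last = b * last
        out.append(last)
    return np.array(out)

def ensemble_forecast(x, steps):
    # Forecast each component separately, then aggregate the parts.
    return sum(ar1_forecast(c, steps) for c in decompose(x))
```

The key structural property is that the components reconstruct the series exactly, so the aggregation step is a simple sum of per-component forecasts, as in the paper's final stage.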