151.
For forecasting nonstationary and nonlinear energy price time series, a novel adaptive multiscale ensemble learning paradigm incorporating ensemble empirical mode decomposition (EEMD), particle swarm optimization (PSO) and least squares support vector machines (LSSVM) with a kernel function prototype is developed. Firstly, the extrema symmetry expansion EEMD, which can effectively restrain mode mixing and end effects, is used to decompose the energy price into simple modes. Secondly, by using the fine-to-coarse reconstruction algorithm, the high-frequency, low-frequency and trend components are identified. Furthermore, an autoregressive integrated moving average model is applied to predict the high-frequency components, while LSSVM is suited to forecasting the low-frequency and trend components. At the same time, a universal kernel function prototype is introduced to make up for the drawbacks of a single kernel function; using the PSO algorithm, it adaptively selects the optimal kernel function type and model parameters for the specific data. Finally, the prediction results of all the components are aggregated into the forecast values of the energy price time series. The empirical results show that, compared with popular prediction methods, the proposed method significantly improves the prediction accuracy of energy prices, with high accuracy in both level and directional predictions. Copyright © 2016 John Wiley & Sons, Ltd.
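The decompose-forecast-aggregate pipeline summarized above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the PyEMD package's EEMD interface, uses statsmodels' ARIMA for the high-frequency parts, substitutes scikit-learn's SVR for LSSVM, and omits the PSO-driven kernel selection and the extrema symmetry expansion.

```python
# Minimal sketch of an EEMD-based decompose-forecast-aggregate scheme.
# Assumes: PyEMD (pip install EMD-signal), statsmodels, scikit-learn.
# SVR stands in for LSSVM; PSO kernel selection is omitted.
import numpy as np
from PyEMD import EEMD
from statsmodels.tsa.arima.model import ARIMA
from sklearn.svm import SVR

def lagged_matrix(x, lags):
    """Design matrix of `lags` past values and the aligned next-value targets."""
    X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
    return X, x[lags:]

def forecast_one_step(series, lags=5):
    """One-step-ahead forecast by decomposing, forecasting and aggregating."""
    signal = np.asarray(series, dtype=float)
    imfs = EEMD(trials=50).eemd(signal)        # intrinsic mode functions
    k = len(imfs) // 2                         # crude fine-to-coarse split point
    total = 0.0
    for i, imf in enumerate(imfs):
        if i < k:    # high-frequency IMFs -> ARIMA
            total += float(ARIMA(imf, order=(2, 0, 1)).fit().forecast(1)[0])
        else:        # low-frequency IMFs / trend -> kernel regression (SVR here)
            X, y = lagged_matrix(imf, lags)
            model = SVR(kernel="rbf", C=10.0).fit(X, y)
            total += float(model.predict(imf[-lags:].reshape(1, -1))[0])
    return total     # aggregated forecast of the original price series
```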
152.
Short-term load forecasting using a chaotic phase-space modulus linear regression model   Total citations: 11 (self-citations: 0, citations by others: 11)
Building on the one-dimensional Lyapunov-exponent forecasting model, a chaotic phase-space modulus linear regression model is proposed and applied to short-term load forecasting. Forecasts of the actual load time series of Xiamen give satisfactory results.
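Purely as an illustration of the general idea (the abstract gives no implementation details), the sketch below reconstructs a phase space by delay embedding and fits an ordinary linear regression that maps each embedded state vector to the next observation; the embedding dimension and delay are placeholder values, and the paper's phase-space modulus construction may differ.

```python
# Illustrative sketch: delay embedding of a load series followed by a linear
# regression predictor. Embedding dimension m and delay tau are placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression

def embed(x, m=4, tau=1):
    """Phase-space vectors [x_t, x_{t+tau}, ..., x_{t+(m-1)tau}] as rows."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(m)])

def one_step_forecast(load, m=4, tau=1):
    x = np.asarray(load, dtype=float)
    V = embed(x, m, tau)                   # reconstructed phase-space points
    X, y = V[:-1], x[(m - 1) * tau + 1:]   # state -> next observation
    reg = LinearRegression().fit(X, y)
    return float(reg.predict(V[-1:])[0])   # forecast from the latest state

# Example with synthetic data:
# rng = np.random.default_rng(0)
# load = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)
# print(one_step_forecast(load))
```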
153.
徐洪焱  易才凤 《江西科学》2006,24(1):4-6,10
The relations between the growth and the type of lower-sided Dirichlet series and lower-sided random Dirichlet series are studied in the left half-plane, in any left half-strip, and on any left horizontal half-line.
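For orientation, the growth quantities referred to here are conventionally defined from the maximum modulus on vertical lines; the display below gives the definitions commonly used in this literature, stated as background rather than quoted from the paper.

```latex
% Background only: commonly used definitions, not quoted from the paper.
% For a Dirichlet series f(s) = \sum_n a_n e^{\lambda_n s} convergent in the
% left half-plane \operatorname{Re} s < 0, write
% M(\sigma) = \sup_{t \in \mathbb{R}} |f(\sigma + it)|, \quad \sigma < 0.
% The order (growth) and type in the left half-plane are then
\[
  \rho = \limsup_{\sigma \to 0^{-}}
           \frac{\log^{+}\log^{+} M(\sigma)}{\log\bigl(1/|\sigma|\bigr)},
  \qquad
  T = \limsup_{\sigma \to 0^{-}} |\sigma|^{\rho}\, \log^{+} M(\sigma),
\]
% and the analogous quantities on a left half-strip or a left horizontal
% half-line use the supremum restricted to that set.
```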
154.
Micro panels characterized by large numbers of individuals observed over a short time period provide a rich source of information, but as yet there is only limited experience in using such data for forecasting. Existing simulation evidence supports the use of a fixed-effects approach when forecasting, but it is not based on a truly micro panel set-up. In this study, we exploit the linkage of a representative survey of more than 250,000 Australians aged 45 and over to 4 years of hospital, medical and pharmaceutical records. The availability of panel health cost data allows the use of predictors based on fixed-effects estimates designed to guard against possible omitted variable biases associated with unobservable individual-specific effects. We demonstrate that the preference towards fixed-effects-based predictors is unlikely to hold in many practical situations, including our models of health care costs. Simulation evidence with a micro panel set-up adds support and additional insights to the results obtained in the application. These results are supportive of the use of the ordinary least squares predictor in a wide range of circumstances. Copyright © 2016 John Wiley & Sons, Ltd.
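A small synthetic simulation in the spirit of this comparison, assuming nothing about the authors' data: it generates a short micro panel with individual effects correlated with the regressor, then compares the out-of-sample mean squared error of the pooled OLS predictor with that of a fixed-effects (within) predictor that uses estimated individual intercepts.

```python
# Illustrative micro-panel simulation: pooled OLS predictor vs. a fixed-effects
# (within) predictor for the last period of a short panel. All data synthetic.
import numpy as np

rng = np.random.default_rng(42)
N, T, beta = 5000, 4, 1.0                      # many individuals, few periods
alpha = rng.normal(0, 1, N)                    # unobserved individual effects
x = rng.normal(0, 1, (N, T)) + 0.5 * alpha[:, None]   # x correlated with alpha
y = alpha[:, None] + beta * x + rng.normal(0, 1, (N, T))

# Estimate on periods 0..T-2, forecast period T-1.
x_tr, y_tr, x_te, y_te = x[:, :-1], y[:, :-1], x[:, -1], y[:, -1]

# Pooled OLS (with intercept).
X = np.column_stack([np.ones(x_tr.size), x_tr.ravel()])
b_ols = np.linalg.lstsq(X, y_tr.ravel(), rcond=None)[0]
pred_ols = b_ols[0] + b_ols[1] * x_te

# Fixed effects: within estimator for beta, individual means for intercepts.
xd = x_tr - x_tr.mean(axis=1, keepdims=True)
yd = y_tr - y_tr.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd ** 2).sum()
a_hat = y_tr.mean(axis=1) - b_fe * x_tr.mean(axis=1)   # noisy alpha_i estimates
pred_fe = a_hat + b_fe * x_te

print("MSE pooled OLS:", np.mean((y_te - pred_ols) ** 2))
print("MSE fixed effects:", np.mean((y_te - pred_fe) ** 2))
```

With only a few training periods per individual, the estimated intercepts are noisy, so the fixed-effects predictor need not dominate pooled OLS even though the individual effects are correlated with the regressor, which is the point the abstract makes.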
155.
As a consequence of recent technological advances and the proliferation of algorithmic and high-frequency trading, the cost of trading in financial markets has irrevocably changed. One important change, known as price impact, relates to how trading affects prices. Price impact represents the largest cost associated with trading. Forecasting price impact is very important as it can provide estimates of trading profits after costs and also suggest optimal execution strategies. Although several models have recently been developed which may forecast the immediate price impact of individual trades, limited work has been done to compare their relative performance. We provide a comprehensive performance evaluation of these models and test for statistically significant outperformance amongst candidate models using out-of-sample forecasts. We find that normalizing price impact by its average value significantly enhances the performance of traditional non-normalized models as the normalization factor captures some of the dynamics of price impact. Copyright © 2016 John Wiley & Sons, Ltd.
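To make the normalization idea concrete, here is a toy sketch rather than any of the paper's candidate models: a simple power-law impact curve is fitted to raw impacts and to impacts scaled by a local average, and out-of-sample errors are compared after undoing the scaling. The data, the functional form and the averaging window are all placeholders.

```python
# Toy illustration of normalizing price impact by a local average before
# fitting a power-law impact curve; everything here is a placeholder.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
volume = rng.lognormal(mean=0.0, sigma=1.0, size=n)        # trade sizes
regime = 1.0 + 0.5 * np.sin(np.arange(n) / 300.0)          # slow impact regime
impact = regime * volume ** 0.5 * np.exp(0.2 * rng.standard_normal(n))

train = np.arange(n) < 4000
# Local average impact (centered moving average, purely for illustration;
# in practice the normalization factor would use past data only).
avg = np.convolve(impact, np.ones(250) / 250, mode="same")

def fit_powerlaw(v, y):
    """Least-squares fit of log y = a + b log v; returns (a, b)."""
    A = np.column_stack([np.ones(len(v)), np.log(v)])
    return np.linalg.lstsq(A, np.log(y), rcond=None)[0]

a0, b0 = fit_powerlaw(volume[train], impact[train])                # raw
a1, b1 = fit_powerlaw(volume[train], impact[train] / avg[train])   # normalized

pred_raw = np.exp(a0) * volume[~train] ** b0
pred_norm = np.exp(a1) * volume[~train] ** b1 * avg[~train]  # undo normalization

def log_mse(pred):
    return np.mean((np.log(pred) - np.log(impact[~train])) ** 2)

print("log-MSE raw:", log_mse(pred_raw), "normalized:", log_mse(pred_norm))
```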
156.
Little Cottonwood Canyon Highway is a dead-end, two-lane road leading to Utah's Alta and Snowbird ski resorts. It is the only road access to these resorts and is heavily traveled during the ski season. Professional avalanche forecasters monitor this road throughout the ski season in order to make road closure decisions in the face of avalanche danger. Forecasters at the Utah Department of Transportation (UDOT) avalanche guard station at Alta have maintained an extensive daily winter database of explanatory variables relating to avalanche prediction. Whether or not an avalanche crosses the road is modeled in this paper via Bayesian additive tree methods. Utilizing daily winter data from 1995 to 2011, results show that Bayesian tree analysis outperforms traditional statistical methods in terms of realized misclassification costs that take into consideration asymmetric losses arising from two types of error: closing the road when an avalanche does not occur is an error harmful to resort owners, while not closing the road when one does occur may result in injury or death. Copyright © 2016 John Wiley & Sons, Ltd.
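The asymmetric-cost decision rule can be illustrated independently of the particular classifier. The sketch below uses scikit-learn's gradient boosting as a stand-in for Bayesian additive tree methods (no BART implementation is assumed), with synthetic features and misclassification costs that are illustrative only.

```python
# Cost-sensitive road-closure decision from a probabilistic tree classifier.
# GradientBoostingClassifier stands in for BART; the costs and features below
# are illustrative assumptions, not values from the paper.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 3000
X = np.column_stack([
    rng.normal(30, 15, n),    # new-snow depth (cm), hypothetical feature
    rng.normal(-5, 4, n),     # air temperature (C), hypothetical feature
    rng.normal(20, 10, n),    # wind speed, hypothetical feature
])
logit = -4 + 0.08 * X[:, 0] + 0.05 * X[:, 2]
y = rng.uniform(size=n) < 1 / (1 + np.exp(-logit))   # 1 = avalanche reaches road

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

# Asymmetric losses: missing an avalanche (false negative) is far costlier than
# an unnecessary closure (false positive). Close the road whenever the expected
# cost of staying open exceeds the cost of closing:
#   close  if  p * C_fn > (1 - p) * C_fp  <=>  p > C_fp / (C_fp + C_fn).
C_fp, C_fn = 1.0, 20.0
close = p > C_fp / (C_fp + C_fn)
cost = C_fp * np.sum(close & ~y_te) + C_fn * np.sum(~close & y_te)
print("realized misclassification cost on test days:", cost)
```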
157.
The intention of this paper is to empirically forecast the daily betas of a few European banks by means of four generalized autoregressive conditional heteroscedasticity (GARCH) models and the Kalman filter method during the pre-global financial crisis period and the crisis period. The four GARCH models employed are BEKK GARCH, DCC GARCH, DCC-MIDAS GARCH and Gaussian-copula GARCH. The data consist of daily stock prices from 2001 to 2013 from two large banks each from Austria, Belgium, Greece, Holland, Ireland, Italy, Portugal and Spain. We apply the rolling forecasting method and the model confidence set (MCS) procedure to compare the daily forecasting ability of the five models during one month of the pre-crisis (January 2007) and crisis (January 2013) periods. Based on the MCS results, BEKK proves the best model in the January 2007 period, and the Kalman filter clearly outperforms the other models during the January 2013 period. The results have implications for practitioners' and academics' choice of model in different periods. Copyright © 2016 John Wiley & Sons, Ltd.
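Of the five approaches, the Kalman filter is the easiest to sketch. Below is a generic random-walk-beta state-space filter in plain NumPy; the noise variances are placeholders that would normally be estimated (for example by maximum likelihood), and nothing here reproduces the paper's GARCH specifications.

```python
# Generic Kalman filter for a time-varying market beta:
#   observation:  r_bank_t = beta_t * r_mkt_t + eps_t,   eps_t ~ N(0, R)
#   state:        beta_t   = beta_{t-1} + eta_t,         eta_t ~ N(0, Q)
# Q and R are placeholder values; they would normally be estimated by MLE.
import numpy as np

def kalman_beta(r_bank, r_mkt, Q=1e-5, R=1e-3, beta0=1.0, P0=1.0):
    beta, P = beta0, P0
    betas = np.empty(len(r_bank))
    for t, (y, x) in enumerate(zip(r_bank, r_mkt)):
        # Predict: random-walk state, so the mean is unchanged.
        P = P + Q
        # Update with today's observation y = beta * x + noise.
        S = x * P * x + R                 # innovation variance
        K = P * x / S                     # Kalman gain
        beta = beta + K * (y - x * beta)
        P = (1.0 - K * x) * P
        betas[t] = beta                   # filtered (one-sided) estimate
    return betas

# Usage sketch: because the state follows a random walk, the one-day-ahead
# beta forecast is simply the last filtered value.
# betas = kalman_beta(bank_returns, market_returns)
# beta_forecast = betas[-1]
```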
158.
The increasing amount of attention paid to longevity risk and funding for old age has created the need for precise mortality models and accurate future mortality forecasts. Orthogonal polynomials have been widely used in technical fields and there have also been applications in mortality modeling. In this paper we adopt a flexible functional form approach using two-dimensional Legendre orthogonal polynomials to fit and forecast mortality rates. Unlike some of the existing mortality models in the literature, the model we propose does not impose any restrictions on the age, time or cohort structure of the data and thus allows for different model designs for different countries' mortality experience. We conduct an empirical study using male mortality data from a range of developed countries and explore the possibility of using age–time effects to capture cohort effects in the underlying mortality data. It is found that, for some countries, cohort dummies still need to be incorporated into the model. Moreover, when comparing the proposed model with well-known mortality models in the literature, we find that our model provides comparable fitting but with a much smaller number of parameters. Based on 5-year-ahead mortality forecasts, it can be concluded that the proposed model improves the overall accuracy of the future mortality projection. Copyright © 2016 John Wiley & Sons, Ltd.
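A bare-bones version of the fitting step, assuming log death rates arranged on an age-by-year grid, can be written with NumPy's two-dimensional Legendre Vandermonde matrix; the polynomial degrees are arbitrary placeholders and cohort dummies are omitted.

```python
# Fit log mortality rates m(age, year) with a two-dimensional Legendre basis.
# Degrees (deg_age, deg_year) are placeholders; cohort dummies are omitted.
import numpy as np
from numpy.polynomial import legendre

def fit_legendre_surface(log_mx, deg_age=6, deg_year=3):
    """log_mx: 2-D array of log death rates, shape (n_ages, n_years)."""
    n_age, n_year = log_mx.shape
    # Rescale age and calendar year to [-1, 1], the natural Legendre domain.
    age = np.linspace(-1, 1, n_age)
    year = np.linspace(-1, 1, n_year)
    A, Y = np.meshgrid(age, year, indexing="ij")
    # Design matrix of products P_i(age) * P_j(year), i<=deg_age, j<=deg_year.
    V = legendre.legvander2d(A.ravel(), Y.ravel(), [deg_age, deg_year])
    coef, *_ = np.linalg.lstsq(V, log_mx.ravel(), rcond=None)
    fitted = (V @ coef).reshape(n_age, n_year)
    return coef, fitted

# Usage sketch:
# coef, fitted = fit_legendre_surface(np.log(death_rates))
# Forecasting requires a separate treatment of the time dimension; the paper's
# approach is richer than this fitting illustration.
```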
159.
The paper investigates the determinants of the US dollar/euro exchange rate within the framework of the asset pricing theory of exchange rate determination, which posits that current exchange rate fluctuations are determined by the entire path of current and future revisions in expectations about fundamentals. In this perspective, we innovate by conditioning on the Fama–French and Carhart risk factors, which directly measure changing market expectations about the economic outlook, as well as on new financial condition indexes and macroeconomic variables. The macro-finance augmented econometric model has remarkable in-sample and out-of-sample predictive ability, largely outperforming a standard autoregressive specification. We also document a stable relationship between the US dollar/euro Carhart momentum conditional correlation (CCW) and the euro area business cycle. CCW signals a progressive weakening in economic conditions since June 2014, consistent with the scattered recovery from the sovereign debt crisis and the new Greek solvency crisis that erupted in late spring/early summer 2015. Copyright © 2016 John Wiley & Sons, Ltd.
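As a rough stand-in for a conditional-correlation indicator in the spirit of CCW (this is not the paper's estimator), the following computes an exponentially weighted correlation between exchange rate returns and a momentum factor; the smoothing parameter and the series names in the usage note are assumptions.

```python
# Exponentially weighted conditional correlation between USD/EUR returns and a
# Carhart-style momentum factor -- a rough proxy, not the paper's CCW estimator.
import numpy as np

def ewma_correlation(x, y, lam=0.94):
    """RiskMetrics-style exponentially weighted correlation of two series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    cov = np.empty(len(x))
    var_x = np.empty(len(x))
    var_y = np.empty(len(x))
    # Initialize with full-sample (non-central) moments to avoid a zero start.
    cov[0], var_x[0], var_y[0] = np.mean(x * y), np.mean(x ** 2), np.mean(y ** 2)
    for t in range(1, len(x)):
        cov[t] = lam * cov[t - 1] + (1 - lam) * x[t] * y[t]
        var_x[t] = lam * var_x[t - 1] + (1 - lam) * x[t] ** 2
        var_y[t] = lam * var_y[t - 1] + (1 - lam) * y[t] ** 2
    return cov / np.sqrt(var_x * var_y)

# Usage sketch (hypothetical daily series, e.g. columns of a pandas DataFrame):
# ccw_proxy = ewma_correlation(usd_eur_returns, momentum_factor)
# A persistent decline in ccw_proxy would be read as weakening conditions.
```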
160.
This paper proposes new methods for ‘targeting’ factors estimated from a big dataset. We suggest that forecasts of economic variables can be improved by tuning factor estimates: (i) so that they are more relevant for a specific target variable; and (ii) so that variables with considerable idiosyncratic noise are down-weighted prior to factor estimation. Existing targeted factor methodologies are limited to estimating the factors with only one of these two objectives in mind. We therefore combine these ideas by providing new weighted principal components analysis (PCA) procedures and a targeted generalized PCA (TGPCA) procedure. These methods offer a flexible combination of both types of targeting that is new to the literature. We illustrate this empirically by forecasting a range of US macroeconomic variables, finding that our combined approach yields important improvements over competing methods, consistently surviving elimination in the model confidence set procedure. Copyright © 2016 John Wiley & Sons, Ltd.
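The two targeting ideas can be combined in a few lines as a stylized sketch, not the paper's weighted PCA or TGPCA procedures: each predictor is weighted by its marginal relevance for the target and by the inverse of an estimate of its idiosyncratic noise, and principal components are then extracted from the reweighted panel for a diffusion-index forecast.

```python
# Stylized combination of the two targeting ideas: weight each predictor by
# (i) its marginal relevance for the target and (ii) the inverse of an estimate
# of its idiosyncratic noise, then extract principal components from the
# reweighted panel. Illustration only, not the paper's TGPCA procedure.
import numpy as np

def targeted_factors(X, y, n_factors=3):
    """X: (T, N) standardized predictors; y: (T,) target. Returns (T, k) factors."""
    T, N = X.shape
    # (i) Relevance weights: absolute t-statistics from univariate regressions.
    t_stats = np.empty(N)
    for j in range(N):
        b = X[:, j] @ y / (X[:, j] @ X[:, j])
        resid = y - b * X[:, j]
        se = np.sqrt(resid @ resid / (T - 1) / (X[:, j] @ X[:, j]))
        t_stats[j] = abs(b / se)
    # (ii) Noise weights: inverse residual variance after a first-pass PCA.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    common = U[:, :n_factors] * S[:n_factors] @ Vt[:n_factors]
    idio_var = np.var(X - common, axis=0) + 1e-8
    w = t_stats / np.sqrt(idio_var)          # combined per-variable weight
    # Factors of the reweighted panel (columns scaled by w).
    Uw, Sw, _ = np.linalg.svd(X * w, full_matrices=False)
    return Uw[:, :n_factors] * Sw[:n_factors]

# Usage sketch: regress y_{t+1} on a constant, these factors and lags of y
# (a diffusion-index forecast), then compare against unweighted PCA factors.
```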