991.
This paper proposes new methods for ‘targeting’ factors estimated from a big dataset. We suggest that forecasts of economic variables can be improved by tuning factor estimates: (i) so that they are both more relevant for a specific target variable; and (ii) so that variables with considerable idiosyncratic noise are down‐weighted prior to factor estimation. Existing targeted factor methodologies are limited to estimating the factors with only one of these two objectives in mind. We therefore combine these ideas by providing new weighted principal components analysis (PCA) procedures and a targeted generalized PCA (TGPCA) procedure. These methods offer a flexible combination of both types of targeting that is new to the literature. We illustrate this empirically by forecasting a range of US macroeconomic variables, finding that our combined approach yields important improvements over competing methods, consistently surviving elimination in the model confidence set procedure. Copyright © 2016 John Wiley & Sons, Ltd.
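The abstract's exact weighting schemes are not reproduced here; the following is a minimal sketch of the general idea behind noise-based down-weighting — rescale each variable by the inverse of its idiosyncratic noise level before extracting principal components — on synthetic data (all names and parameters are illustrative assumptions, not the paper's procedure):

```python
import numpy as np

def weighted_pca(X, weights, k):
    """PCA on columns of X after scaling by per-variable weights.

    Down-weighting noisy variables before factor extraction is the
    general idea; the paper's actual weighting rules differ.
    """
    Xw = (X - X.mean(axis=0)) * weights           # rescale each column
    U, s, _ = np.linalg.svd(Xw, full_matrices=False)
    return U[:, :k] * s[:k]                        # first k principal components

rng = np.random.default_rng(0)
T, N = 200, 30
f = rng.standard_normal((T, 1))                    # one common factor
loadings = rng.standard_normal((1, N))
noise_sd = np.linspace(0.2, 3.0, N)                # heterogeneous idiosyncratic noise
X = f @ loadings + rng.standard_normal((T, N)) * noise_sd

w = 1.0 / noise_sd                                 # down-weight the noisy series
F = weighted_pca(X, w, k=1)
corr = abs(np.corrcoef(F[:, 0], f[:, 0])[0, 1])    # recovery of the true factor
```

With strong down-weighting of the noisy columns, the first weighted principal component tracks the simulated factor closely.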
992.
Micro panels characterized by large numbers of individuals observed over a short time period provide a rich source of information, but as yet there is only limited experience in using such data for forecasting. Existing simulation evidence supports the use of a fixed‐effects approach when forecasting, but it is not based on a truly micro panel set‐up. In this study, we exploit the linkage of a representative survey of more than 250,000 Australians aged 45 and over to 4 years of hospital, medical and pharmaceutical records. The availability of panel health cost data allows the use of predictors based on fixed‐effects estimates designed to guard against possible omitted variable biases associated with unobservable individual specific effects. We demonstrate that the preference towards fixed‐effects‐based predictors is unlikely to hold in many practical situations, including our models of health care costs. Simulation evidence with a micro panel set‐up adds support and additional insights to the results obtained in the application. These results are supportive of the use of the ordinary least squares predictor in a wide range of circumstances. Copyright © 2016 John Wiley & Sons, Ltd.
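The intuition — with a short panel, estimated individual intercepts are noisy, so a fixed-effects predictor can forecast worse than pooled OLS when individual effects are small — can be seen in a simple simulation (this toy design is an assumption for illustration, not the paper's model of health care costs):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 500, 4                                   # many individuals, short panel
alpha = rng.standard_normal(n) * 0.3            # small individual effects
x = rng.standard_normal((n, T + 1))
y = alpha[:, None] + 1.0 * x + rng.standard_normal((n, T + 1))

x_in, y_in = x[:, :T], y[:, :T]                 # estimation sample
x_out, y_out = x[:, T], y[:, T]                 # one-step forecast target

# Pooled OLS predictor: a single intercept for everyone
X = np.column_stack([np.ones(n * T), x_in.ravel()])
b_ols = np.linalg.lstsq(X, y_in.ravel(), rcond=None)[0]
pred_ols = b_ols[0] + b_ols[1] * x_out

# Fixed-effects predictor: within estimator plus estimated intercepts,
# each based on only T = 4 observations, hence noisy
xd = x_in - x_in.mean(axis=1, keepdims=True)
yd = y_in - y_in.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd * xd).sum()
a_hat = y_in.mean(axis=1) - b_fe * x_in.mean(axis=1)
pred_fe = a_hat + b_fe * x_out

mse_ols = np.mean((y_out - pred_ols) ** 2)
mse_fe = np.mean((y_out - pred_fe) ** 2)
```

Here the noise in the estimated intercepts (variance roughly 1/T) exceeds the variance of the true individual effects, so pooled OLS forecasts better.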
993.
This paper evaluates the performance of conditional variance models using high‐frequency data of the National Stock Index (S&P CNX NIFTY) and attempts to determine the optimal sampling frequency for the best daily volatility forecast. A linear combination of the realized volatilities calculated at two different frequencies is used as benchmark to evaluate the volatility forecasting ability of the conditional variance models (GARCH (1, 1)) at different sampling frequencies. From the analysis, it is found that sampling at 30 minutes gives the best forecast for daily volatility. The forecasting ability of these models is degraded, however, by the non‐normal property of mean adjusted returns, which is an assumption in conditional variance models. Nevertheless, the optimum frequency remained the same even in the case of different models (EGARCH and PARCH) and different error distributions (generalized error distribution, GED), where the error is reduced to a certain extent by incorporating the asymmetric effect on volatility. Our analysis also suggests that GARCH models with GED innovations or EGARCH and PARCH models would give better estimates of volatility with lower forecast error estimates. Copyright © 2008 John Wiley & Sons, Ltd.
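The two building blocks of such an evaluation — a GARCH(1,1) one-step-ahead variance forecast and a realized-variance benchmark from intraday returns sampled at a chosen frequency — can be sketched as follows (fixed, not estimated, parameters and a 390-minute trading day are illustrative assumptions):

```python
import numpy as np

def garch11_variance_path(returns, omega, alpha, beta):
    """Filtered conditional variances and the one-step-ahead forecast
    for a GARCH(1,1) with given (not estimated) parameters."""
    sigma2 = np.empty(len(returns) + 1)
    sigma2[0] = omega / (1.0 - alpha - beta)      # start at unconditional variance
    for t, r in enumerate(returns):
        sigma2[t + 1] = omega + alpha * r**2 + beta * sigma2[t]
    return sigma2[:-1], sigma2[-1]                # in-sample path, next-day forecast

def realized_variance(intraday_returns, bar_len):
    """Realized variance: sum of squared returns aggregated to bars of
    `bar_len` ticks (e.g. 30 one-minute returns per 30-minute bar)."""
    agg = intraday_returns.reshape(-1, bar_len).sum(axis=1)
    return float((agg ** 2).sum())

rng = np.random.default_rng(2)
daily = rng.standard_normal(500) * 0.01            # simulated daily returns
path, fcast = garch11_variance_path(daily, omega=1e-6, alpha=0.05, beta=0.9)

intraday = rng.standard_normal(390) * 0.001        # one simulated trading day
rv_30 = realized_variance(intraday, 30)            # 13 thirty-minute bars
```

The forecast `fcast` would then be compared against the realized-variance benchmark across days and sampling frequencies.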
994.
Decision tree algorithms are among the most important classification algorithms in data mining. This paper first outlines the basic idea of decision trees in data mining, then addresses the ID3 algorithm's tendency to favor attributes with many values. It improves the ID3 algorithm accordingly and compares the original and improved algorithms through experiments; the experiments show that the improved algorithm is effective.
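The abstract does not say which improvement is used; the textbook remedy for ID3's bias toward many-valued attributes is C4.5's gain ratio, which normalizes information gain by the attribute's own split entropy. A small sketch on toy data (attribute names are made up for illustration):

```python
import math
from collections import Counter

def entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def info_gain(attr, labels):
    """ID3's criterion: reduction in label entropy after splitting on attr."""
    n = len(labels)
    groups = {}
    for v, y in zip(attr, labels):
        groups.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in groups.values())
    return entropy(labels) - remainder

def gain_ratio(attr, labels):
    """C4.5's remedy: divide the gain by the attribute's split entropy."""
    si = entropy(attr)
    return info_gain(attr, labels) / si if si else 0.0

labels = ["y", "y", "y", "n", "n", "n"]
row_id = [1, 2, 3, 4, 5, 6]            # unique per row: maximal raw gain
weather = ["a", "a", "b", "b", "b", "b"]  # genuinely informative attribute
```

The row-id attribute gets the highest raw information gain (it splits the data perfectly, but uselessly), while the gain ratio correctly prefers the informative attribute.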
995.
When data envelopment analysis (DEA) is applied for evaluation, all indicators are required to have a preference direction: every indicator must be either the-larger-the-better or the-smaller-the-better. When the evaluation index system contains neutral indicators (those with no preference direction), traditional DEA methods cannot handle the problem. Taking economic efficiency and industrial structure adjustment as the background, this paper therefore proposes, from a systems perspective, a DEA model for evaluation in the presence of neutral indicators. The model not only gives the efficiency of an economic system, but also shows how the system can raise its efficiency through industrial structure adjustment. Finally, the proposed method is applied to analyze the effectiveness of Tianjin's economic structure adjustment.
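For reference, the standard input-oriented CCR model that the paper extends can be solved as a small linear program; this sketch (assuming SciPy is available) shows the classic multiplier form, not the paper's neutral-indicator variant:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`, multiplier form:
    max u'y_o  s.t.  v'x_o = 1,  u'y_j - v'x_j <= 0,  u, v >= 0.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-Y[:, o], np.zeros(m)])        # minimise -u'y_o
    A_ub = np.hstack([Y.T, -X.T])                      # u'y_j - v'x_j <= 0
    A_eq = np.concatenate([np.zeros(s), X[:, o]])[None, :]  # v'x_o = 1
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (s + m))
    return -res.fun

X = np.array([[2.0, 4.0, 3.0]])     # one input, three decision-making units
Y = np.array([[2.0, 2.0, 3.0]])     # one output
effs = [ccr_efficiency(X, Y, o) for o in range(3)]
```

With a single input and output, the CCR score reduces to each unit's output/input ratio relative to the best ratio, so the middle unit scores 0.5 and the other two are efficient.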
996.
In this paper, we propose a framework to evaluate the subjective density forecasts of macroeconomists using micro data from the euro area Survey of Professional Forecasters (SPF). A key aspect of our analysis is the use of evaluation measures which take account of the entire predictive densities, and not just the probability assigned to the outcome that occurs. Overall, we find considerable heterogeneity in the performance of the surveyed densities at the individual level. However, it is hard to exploit this heterogeneity and improve aggregate performance by trimming poorly performing forecasters in real time. Relative to a set of simple benchmarks, density performance is somewhat better for GDP growth than for inflation, although in the former case it diminishes substantially with the forecast horizon. In addition, we report evidence of an improvement in the relative performance of expert densities during the recent period of macroeconomic volatility. However, our analysis also reveals clear evidence of overconfidence or neglected risks in expert probability assessments, as reflected in frequent occurrences of events which are assigned a zero probability. Copyright © 2014 John Wiley & Sons, Ltd.
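The zero-probability problem the authors highlight is easy to see with a log predictive score on histogram forecasts of the kind the SPF collects (the bins and probabilities below are invented for illustration; the paper's evaluation measures are richer):

```python
import numpy as np

def log_score(probs, bins, outcome):
    """Log predictive score of a histogram density forecast: the log of
    the probability assigned to the bin containing the realized outcome."""
    i = np.digitize(outcome, bins) - 1
    p = probs[i]
    return np.log(p) if p > 0 else -np.inf   # zero-probability event: unbounded penalty

bins = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # e.g. inflation outcome bins
confident = np.array([0.0, 1.0, 0.0, 0.0])    # all mass on one bin
spread = np.array([0.1, 0.6, 0.2, 0.1])       # hedged forecast

s1 = log_score(confident, bins, 1.5)   # outcome in the favoured bin: confident wins
s2 = log_score(spread, bins, 1.5)
s3 = log_score(confident, bins, 2.5)   # neglected risk realised: score is -inf
s4 = log_score(spread, bins, 2.5)
```

A single realization of an event assigned zero probability drives the confident forecaster's cumulative log score to minus infinity, which is why such occurrences are direct evidence of overconfidence.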
997.
This paper proposes S-MusicXML, a score storage model based on musical genes. Raising the basic unit of score storage and processing from the note/scale level to the gene level facilitates mining and storing the inner content of music with data mining techniques. Concepts such as the melody gene are defined, and experiments further show that mining musical genes has advantages over mining frequent musical patterns.
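The abstract does not define its genes precisely; one plausible reading, sketched here purely for illustration, treats a melodic "gene" as an n-gram of successive pitch intervals, which is transposition-invariant and coarser than single notes:

```python
from collections import Counter

def melody_genes(pitches, n=2):
    """Candidate melodic 'genes': n-grams of successive pitch intervals.
    (An illustrative reading of the abstract; S-MusicXML's actual gene
    definition is not given there.)"""
    intervals = [b - a for a, b in zip(pitches, pitches[1:])]
    return Counter(tuple(intervals[i:i + n])
                   for i in range(len(intervals) - n + 1))

# MIDI pitches of a toy melody containing a repeated rising motif
motif = [60, 62, 64, 60, 62, 64, 67]
genes = melody_genes(motif, n=2)
```

The repeated rising motif shows up as the interval gene `(2, 2)` occurring twice, the kind of recurring unit a gene-level representation is meant to capture.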
998.
Initiated and convened by the School of Software of Tianjin University, and hosted by Northeastern University at Qinhuangdao and Yanshan University, the Strategic Workshop on Information Visualization was successfully held in Beidaihe on 9–10 June 2012. This paper summarizes the discussions at the meeting and the consensus reached, gives a broad analysis of current challenges in information visualization and visual analytics and of international research directions in the field, offers suggestions on the field's future directions, and puts forward a series of proposals for concrete actions.
999.
A PSO-based weighted association rule mining algorithm
This paper briefly describes the weighted association rule problem and the discrete particle swarm optimization algorithm, then proposes a weighted association rule mining algorithm based on particle swarm optimization (PSO-WMAR). Experiments show that the algorithm takes less running time and produces fewer, more effective rules. The algorithm has the following features: (1) it merges the two phases of association rule mining, so there is no need to first mine all frequent itemsets and then extract rules from them; (2) it scans the database only once; (3) it incorporates an interestingness measure into the fitness function, so the mined rules are fewer in number and more effective; (4) it finds weighted frequent itemsets without enumerating all candidate weighted frequent itemsets, and without computing high-order subsets of frequent itemsets or low-order supersets of infrequent itemsets.
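The discrete PSO the abstract builds on is usually the Kennedy–Eberhart binary variant, where each particle is a bitmask (here standing in for an itemset) and velocities become per-bit flip probabilities via a sigmoid. A minimal sketch with a toy fitness function standing in for weighted support times interestingness (all parameters are illustrative assumptions, not PSO-WMAR's):

```python
import numpy as np

def binary_pso(fitness, n_bits, n_particles=20, iters=50, seed=3):
    """Minimal binary PSO: velocities pass through a sigmoid to give
    per-bit flip probabilities (Kennedy-Eberhart binary PSO)."""
    rng = np.random.default_rng(seed)
    X = rng.integers(0, 2, (n_particles, n_bits))
    V = np.zeros((n_particles, n_bits))
    pbest = X.copy()
    pbest_f = np.array([fitness(x) for x in X])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
        X = (rng.random(X.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
        f = np.array([fitness(x) for x in X])
        better = f > pbest_f
        pbest[better], pbest_f[better] = X[better], f[better]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest, pbest_f.max()

# Toy fitness standing in for weighted support x interestingness:
# reward including items 0 and 2, penalise itemset size.
def target(x):
    return 2.0 * x[0] + 2.0 * x[2] - 0.5 * x.sum()

best, best_f = binary_pso(target, n_bits=8)
```

In PSO-WMAR the fitness would instead combine weighted support, confidence, and the interestingness measure, so that rule extraction happens inside the search rather than after a separate frequent-itemset phase.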
1000.
A dynamical model and mass formula for particles
This paper briefly reviews the known particle mass formulas, then derives a dynamical model and a vibration–rotation model of particles from the mechanism of dynamical spontaneous symmetry breaking; the simplified form of these models is the harmonic oscillator model. From them, quantitative conclusions consistent with experiment and mass relations for heavy-flavor hadrons are obtained, and a new symmetry between the leptons–mesons (ν, e; μ, π, K) and the baryons (p, n; Λ, Σ, Ξ), corresponding to isospin I, is proposed.
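The abstract does not reproduce the mass formula itself; for orientation, a generic vibration–rotation spectrum of the kind a simplified harmonic-oscillator model yields has the form below (symbols are generic placeholders, not the paper's notation):

```latex
% Generic vibration-rotation mass formula (illustrative only):
% M_0: base mass, \omega: oscillator frequency, n: vibrational quantum
% number, B: rotational constant, J: angular momentum
M(n, J) = M_0 + \hbar\omega\left(n + \tfrac{3}{2}\right) + B\,J(J+1)
```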