11.
Starting from the standpoint of programming languages and applying a one-to-one mapping, an isomorphic model between computer-virus specifications and their execution conditions is established. On this basis, a commonality-based classification of the execution conditions of computer viruses is proposed, together with a way of applying it to virus prevention. Applications of the classification include a reference guide for software developers, research material for software engineering, and a "condition checklist" for virus testers and auditors.
12.
The ability to improve out-of-sample forecasting performance by combining forecasts is well established in the literature. This paper advances this literature in the area of multivariate volatility forecasts by developing two combination weighting schemes that exploit volatility persistence to emphasise certain losses within the combination estimation period. A comprehensive empirical analysis of out-of-sample forecast performance across varying dimensions, loss functions, sub-samples and forecast horizons shows that the new approaches significantly outperform their counterparts in terms of statistical accuracy. Within the financial applications considered, significant benefits from combination forecasts relative to the individual candidate models are observed. Although the more sophisticated combination approaches consistently rank higher than the equally weighted approach, their performance is statistically indistinguishable given the relatively low power of these loss functions. Finally, within the applications, further analysis highlights how combination forecasts dramatically reduce the variability in the parameter of interest, namely the portfolio weight or beta.
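As a rough illustration of this kind of persistence-aware loss weighting (the paper's actual weighting schemes are not reproduced here; the decay factor, the loss input and the function names below are illustrative assumptions), a minimal Python sketch of combining candidate covariance forecasts by discounted inverse average loss might look like this:

```python
import numpy as np

def combination_weights(losses, decay=0.97):
    """Combine candidate volatility forecasts by discounted inverse average loss.

    losses : (T, M) array of per-period losses (e.g. Frobenius loss between each
             candidate's covariance forecast and a realized-covariance proxy)
             for M candidate models over an estimation window of T periods.
    decay  : exponential-decay factor emphasising recent losses, a simple stand-in
             for the persistence-based emphasis described in the abstract.
    """
    T, _ = losses.shape
    w_time = decay ** np.arange(T - 1, -1, -1)          # heavier weight on recent periods
    avg_loss = (w_time[:, None] * losses).sum(0) / w_time.sum()
    inv = 1.0 / avg_loss
    return inv / inv.sum()                              # combination weights sum to one

def combine_forecasts(forecasts, weights):
    """forecasts: (M, N, N) candidate covariance forecasts; returns the weighted average."""
    return np.tensordot(weights, forecasts, axes=1)
```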
13.
This paper presents a new spatial dependence model with an adjustment for feature differences. The model accounts for spatial autocorrelation in both the outcome variables and the residuals. The feature-difference adjustment emphasizes feature changes across neighboring units while suppressing unobserved covariates shared within the same neighborhood. The prediction at a given unit incorporates components that depend on the differences between the values of its main features and those of its neighboring units. In contrast to conventional spatial regression models, our model does not require a comprehensive list of global covariates to estimate the outcome variable at the unit, as common macro-level covariates are differenced away in the regression analysis. Using real estate market data from Hong Kong, we applied Gibbs sampling to determine the posterior distribution of each model parameter. The results of our empirical analysis confirm that the feature-difference adjustment, combined with spatial error autocorrelation, produces better out-of-sample prediction performance than other conventional spatial dependence models. In addition, our empirical analysis identifies the components that contribute most significantly.
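A minimal sketch of the feature-difference construction (assuming a row-normalized spatial weight matrix W and a simple neighbour average; the function name is illustrative, and the full model with spatially autocorrelated errors and its Gibbs sampler are not shown):

```python
import numpy as np

def feature_difference_design(X, W):
    """Build feature-difference regressors for a spatial regression.

    X : (n, k) matrix of unit-level features (e.g. property attributes).
    W : (n, n) row-normalized spatial weight matrix (W[i, j] > 0 if j neighbours i).

    Returns X minus the neighbourhood average W @ X, so that macro-level covariates
    common to a neighbourhood are differenced away, in the spirit of the abstract.
    """
    return X - W @ X
```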
14.
We use dynamic factors and neural network models to identify current and past states (rather than future states) of the US business cycle. In the first step, we reduce noise in the data by using a moving average filter. Dynamic factors are then extracted from a large-scale data set consisting of more than 100 variables. In the last step, these dynamic factors are fed into a neural network model that predicts business cycle regimes. We show that our proposed method tracks US business cycle regimes quite accurately both in-sample and out-of-sample, without taking account of historical data availability. Our results also indicate that noise reduction is an important step in business cycle prediction. Furthermore, using pseudo-real-time and vintage data, we show that our neural network model identifies turning points accurately and very quickly in real time.
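A minimal sketch of the three-step pipeline described above, using PCA as a static stand-in for the dynamic factor step (the window length, factor count, network size and function name are illustrative choices, not those of the paper):

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

def fit_regime_classifier(panel, recession_flag, window=3, n_factors=8):
    """Sketch of: noise reduction -> factor extraction -> regime classification.

    panel          : DataFrame (time x >100 indicators), already transformed to stationarity.
    recession_flag : 0/1 Series of NBER-style regime labels aligned with `panel`.
    """
    smoothed = panel.rolling(window, min_periods=1).mean()          # step 1: moving average filter
    factors = PCA(n_components=n_factors).fit_transform(smoothed)   # step 2: (static proxy for) dynamic factors
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)    # step 3: neural network regime classifier
    clf.fit(factors, recession_flag)
    return clf
```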
15.
This paper is concerned with model averaging estimation for conditional volatility models. Given a set of candidate models with different functional forms, we propose a model averaging estimator and forecast for conditional volatility and construct the corresponding weight-choosing criterion. Under some regularity conditions, we show that the weight selected by the criterion asymptotically minimizes the true Kullback–Leibler divergence, which is the distributional approximation error, as well as the Itakura–Saito distance, which is the distance between the true and the estimated or forecast conditional volatility. Monte Carlo experiments support our newly proposed method. As empirical applications of our method, we investigate a total of nine major stock market indices and make a 1-day-ahead volatility forecast for each data set. Empirical results show that the model averaging forecast achieves the highest accuracy in terms of all types of loss functions in most cases, capturing the movement of the unknown true conditional volatility.
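For reference, the averaged volatility forecast and the standard Itakura–Saito distance between the true and the combined conditional volatility can be written as follows (this states the textbook definition of the distance, not the paper's weight-choosing criterion):

```latex
\hat{\sigma}^2_t(w) = \sum_{m=1}^{M} w_m\, \hat{\sigma}^2_{m,t},
\qquad w_m \ge 0,\ \sum_{m=1}^{M} w_m = 1,
\qquad
d_{\mathrm{IS}}\!\left(\sigma^2, \hat{\sigma}^2(w)\right)
 = \sum_{t=1}^{T}\left(
 \frac{\sigma_t^2}{\hat{\sigma}_t^2(w)}
 - \log\frac{\sigma_t^2}{\hat{\sigma}_t^2(w)} - 1\right).
```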
16.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: "How can ensemble studies be designed so that they probe uncertainty in desired ways?" This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by "plugging and playing" with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they can be used in a complementary manner to explore ways of designing better MMEs.
17.
This study examines whether the evaluation of a bankruptcy prediction model should take into account the total cost of misclassification. For this purpose, we introduce and apply a validity measure in credit scoring that is based on the total cost of misclassification. Specifically, we use comprehensive data from the annual financial statements of a sample of German companies and analyze the total cost of misclassification by comparing a generalized linear model and a generalized additive model with regard to their ability to predict a company's probability of default. On the basis of these data, the validity measure we introduce shows that, compared to generalized linear models, generalized additive models can substantially reduce the extent of misclassification and the total cost it entails. The validity measure we introduce is informative and justifies the argument that generalized additive models should be preferred, although such models are more complex than generalized linear models. We conclude that to balance a model's validity and complexity, it is necessary to take into account the total cost of misclassification.
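A minimal sketch of a total-cost-of-misclassification measure, assuming a simple threshold rule on predicted default probabilities (the cost figures, threshold and function name are illustrative; the paper's validity measure is built around this kind of cost aggregation but is not reproduced here):

```python
import numpy as np

def total_misclassification_cost(y_true, pd_hat, threshold, c_fn, c_fp):
    """Total cost of misclassification for a default-prediction model.

    y_true    : 0/1 array of observed defaults.
    pd_hat    : predicted probabilities of default.
    threshold : PD above which a company is classified as a defaulter.
    c_fn      : cost of missing a defaulter (typically the larger cost).
    c_fp      : cost of rejecting a non-defaulter.
    """
    y_pred = (pd_hat >= threshold).astype(int)
    fn = np.sum((y_true == 1) & (y_pred == 0))   # missed defaulters
    fp = np.sum((y_true == 0) & (y_pred == 1))   # wrongly rejected non-defaulters
    return c_fn * fn + c_fp * fp
```

Comparing a logistic regression (a GLM) and a GAM on the same hold-out sample by this cost, rather than by hit rate alone, is the spirit of the validity measure discussed above.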
18.
We analyze households' multicategory purchases by means of heterogeneous multivariate probit models that relate to partitions formed from a total of 25 product categories. We investigate both prior and post hoc partitions. We search model structures by a stochastic algorithm and estimate models by Markov chain Monte Carlo simulation. The best model in terms of cross-validated log-likelihood refers to a post hoc partition with two groups; the second-best model considers all categories as one group. Among prior partitions with at least two category groups, a five-group model performs best. For features and displays, effects on average basket value differ between the model with five prior category groups and the best-performing model in 40% and 24% of the investigated categories, respectively. In addition, the model with five prior category groups underestimates total sales revenue across all categories by about 28%. Copyright © 2016 John Wiley & Sons, Ltd.
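A minimal sketch of the cross-validated log-likelihood comparison used to rank candidate partitions (the `fit` and `loglik` callables are placeholders; the paper's MCMC estimation of the multivariate probit and its stochastic structure search are not shown):

```python
from sklearn.model_selection import KFold

def cv_loglik(fit, loglik, X, Y, n_splits=5, seed=0):
    """Cross-validated log-likelihood for comparing candidate category partitions.

    fit(X_train, Y_train) -> model                      (e.g. an MCMC-estimated multivariate probit)
    loglik(model, X_test, Y_test) -> summed predictive log-likelihood on held-out data
    """
    total = 0.0
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = fit(X[tr], Y[tr])
        total += loglik(model, X[te], Y[te])
    return total
```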
19.
In this paper we extend the work of Baillie and Baltagi (1999, in Analysis of Panels and Limited Dependent Variables Models, Hsiao C et al. (eds). Cambridge University Press: Cambridge, UK; 255–267) and generalize certain results from Baltagi and Li (1992, Journal of Forecasting 11: 561–567) by accounting for AR(1) errors in the disturbance term. In particular, we derive six predictors for the one-way error components model, as well as their associated asymptotic mean squared errors of multi-step prediction in the presence of AR(1) errors in the disturbance term. In addition, we provide both theoretical and simulation evidence on the relative efficiency of our alternative predictors. The adequacy of the prediction AMSE formula is also investigated by Monte Carlo methods; the results indicate that the ordinary optimal predictor performs well under various accuracy criteria. Copyright © 2011 John Wiley & Sons, Ltd.
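For orientation, the one-way error components model with AR(1) remainder disturbances referred to above is conventionally written as follows (this is the standard setup of this literature, not the paper's specific predictors or AMSE formulas):

```latex
y_{it} = x_{it}'\beta + u_{it}, \qquad
u_{it} = \mu_i + \nu_{it}, \qquad
\nu_{it} = \rho\,\nu_{i,t-1} + \epsilon_{it}, \quad |\rho| < 1,
```

with $\mu_i \sim \mathrm{iid}(0,\sigma_\mu^2)$ and $\epsilon_{it} \sim \mathrm{iid}(0,\sigma_\epsilon^2)$; the six predictors and their asymptotic mean squared errors of multi-step prediction are derived under this structure.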
20.
Drawing on related models, an evaluation index system for the science-and-technology (S&T) competitiveness of the regional construction industry is proposed, and factor analysis is applied to measure and analyze the construction-industry S&T competitiveness of 30 Chinese provinces and municipalities in 2010. The empirical results show that this competitiveness is driven mainly by four groups of factors: S&T input, technological base, S&T output, and S&T support.
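A minimal sketch of scoring regions with factor analysis (the indicator matrix, the number of factors, the loading-based weighting rule and the function name are illustrative assumptions; the study's exact index system and weights are not reproduced here):

```python
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import FactorAnalysis

def competitiveness_scores(X, n_factors=4):
    """Composite S&T competitiveness score per region via factor analysis.

    X : (n_regions, n_indicators) matrix of evaluation indicators
        (e.g. R&D spending, patents, technical staff per region).
    """
    Z = StandardScaler().fit_transform(X)                 # standardize indicators
    fa = FactorAnalysis(n_components=n_factors).fit(Z)
    scores = fa.transform(Z)                              # (n_regions, n_factors) factor scores
    importance = (fa.components_ ** 2).sum(axis=1)        # sum of squared loadings per factor
    weights = importance / importance.sum()               # rough variance-share weights
    return scores @ weights                               # weighted composite score per region
```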