141.
This paper proposes a systematic design method of overlap frequency domain equalization (FDE) for single carrier (SC) transmission without a guard interval (GI). Based on an analysis of the signal-to-interference-plus-noise ratio (SINR) of the equalizer output for each symbol, the authors adaptively determine the block of the overlap FDE, where the block is defined as the set of symbols at the equalizer output with sufficiently low error rate for a given fixed sliding window size, which corresponds to a fast Fourier transform (FFT) window size. The proposed method exploits the fact that the useful part of the equalized signal is localized around the center of the FFT window. In addition, the authors propose adjusting the block size to control the computational complexity of the equalization per processed sample in relation to the average bit error rate (BER) of the system. Simulation results show that the proposed scheme achieves BER performance comparable to the conventional SC-FDE scheme with sufficient GI insertion, for both coded and uncoded cases with various modulation levels, while requiring lower computational complexity than SC overlap FDE transmission with a fixed block.
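The central mechanism described above — equalizing a sliding FFT window and keeping only the symbols near its center before advancing by one block — can be sketched as follows. This is a minimal illustration with an assumed known channel frequency response, a one-tap MMSE equalizer, and a fixed block size; it does not reproduce the paper's adaptive, SINR-based block selection.

```python
import numpy as np

def overlap_fde(rx, channel_freq, noise_var, nfft=256, block=64):
    """Minimal sketch of overlap FDE for SC transmission without a GI.

    rx           : received baseband samples (1-D complex array)
    channel_freq : channel frequency response on the nfft grid (assumed known)
    nfft         : FFT (sliding window) size
    block        : number of symbols kept from the centre of each window
    """
    h = channel_freq
    # One-tap MMSE equalizer per frequency bin (unit signal power assumed)
    w = np.conj(h) / (np.abs(h) ** 2 + noise_var)
    out = []
    start = 0
    while start + nfft <= len(rx):
        seg = rx[start:start + nfft]              # sliding FFT window
        eq = np.fft.ifft(w * np.fft.fft(seg))     # equalize in the frequency domain
        centre = (nfft - block) // 2
        out.append(eq[centre:centre + block])     # keep only the central block,
        start += block                            # where interference is lowest
    return np.concatenate(out) if out else np.array([], dtype=complex)
```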
142.
A safe rice production environment is the basis for keeping the cadmium (Cd) content of rice within limits, and correctly determining the soil Cd safety threshold of rice-producing areas is the key to managing their safety. Taking the main rice-producing area of the Hangjiahu plain as a case study, 118 paired soil-rice samples were collected, and soil Cd safety thresholds for the study area were derived from the monitoring data using a coupled linear regression and Monte Carlo model, a regression equation method, and an uptake coefficient method. The results show that soil Cd content, soil pH, and soil texture are the main factors affecting the Cd content of rice. According to the coupled linear regression and Monte Carlo model, the soil Cd safety thresholds corresponding to expected rice-Cd compliance probabilities of 70%, 80%, 90%, and 100% are 1.2 mg/kg, 1.0 mg/kg, 0.8 mg/kg, and indeterminate, respectively. The threshold computed by the uptake coefficient method is 1.2 mg/kg, whereas substituting the mean of the monitoring data directly into the regression equation gives a threshold of 1.5 mg/kg. The soil Cd safety thresholds under different safety requirements thus differ significantly: the regression equation method yields a threshold higher than the one corresponding to a 70% rice-Cd compliance rate, while the uptake coefficient method agrees with the 70% compliance threshold. All of these results exceed the Grade II Cd limit of the soil environmental quality standard.
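The coupled linear regression and Monte Carlo idea — propagating regression uncertainty to estimate the probability that rice Cd stays below the limit at a given soil Cd level, then inverting for the threshold — can be sketched as follows. The regression coefficients, residual spread, pH distribution, and the 0.2 mg/kg rice limit used here are illustrative assumptions, not the values fitted in the study.

```python
import numpy as np

# Hypothetical regression: log10(rice_Cd) = B0 + B1*log10(soil_Cd) + B2*pH + error
B0, B1, B2, RESID_SD = -0.5, 0.8, -0.15, 0.25
RICE_LIMIT = 0.2   # mg/kg, illustrative limit

def compliance_prob(soil_cd, n_sim=100_000, rng=np.random.default_rng(0)):
    """Monte Carlo estimate of P(rice Cd <= limit) at a given soil Cd level."""
    ph = rng.normal(6.0, 0.5, n_sim)                        # sampled soil pH
    eps = rng.normal(0.0, RESID_SD, n_sim)                   # regression residual
    log_rice = B0 + B1 * np.log10(soil_cd) + B2 * ph + eps
    return np.mean(10 ** log_rice <= RICE_LIMIT)

def soil_threshold(target_prob, grid=np.arange(0.1, 3.01, 0.05)):
    """Largest soil Cd level whose compliance probability still meets the target."""
    ok = [c for c in grid if compliance_prob(c) >= target_prob]
    return max(ok) if ok else None

for p in (0.7, 0.8, 0.9):
    print(p, soil_threshold(p))
```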
143.
High roller-compacted concrete (RCC) dams are typically placed in thin lifts over large placement areas. During high-temperature seasons, pre-cooled concrete loses a large amount of cold energy in transport, and temperature rebound during spreading and compaction on the placement surface is severe, so aggregate pre-cooling is inefficient. This paper proposes a temperature-control concept in which aggregate pre-cooling for high RCC dams is abandoned, provided that pipe cooling is applied appropriately. Using the Shatuo project as a case study, the feasibility of omitting aggregate pre-cooling for high RCC dams is demonstrated from the two perspectives of temperature-control technology and numerical simulation results, and several recommendations are given for temperature-control measures when aggregate pre-cooling is omitted.
144.
This paper introduces a novel mixture model-based approach to the simultaneous clustering and optimal segmentation of functional data, that is, curves exhibiting regime changes. The proposed model consists of a finite mixture of piecewise polynomial regression models. Each piecewise polynomial regression model is associated with a cluster, and within each cluster, each piecewise polynomial component is associated with a regime (i.e., a segment). We derive two approaches to learning the model parameters: the first is an estimation approach which maximizes the observed-data likelihood via a dedicated expectation-maximization (EM) algorithm and yields, at convergence, a fuzzy partition of the curves into K clusters obtained by maximizing the posterior cluster probabilities. The second is a classification approach that optimizes a specific classification likelihood criterion through a dedicated classification expectation-maximization (CEM) algorithm. The optimal curve segmentation is performed by using dynamic programming. In the classification approach, both the curve clustering and the optimal segmentation are performed simultaneously as the CEM learning proceeds. We show that the classification approach is a probabilistic version generalizing the deterministic K-means-like algorithm proposed in Hébrail, Hugueney, Lechevallier, and Rossi (2010). The proposed approach is evaluated using simulated curves and real-world curves. Comparisons with alternatives, including regression mixture models and the K-means-like algorithm for piecewise regression, demonstrate the effectiveness of the proposed approach.
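The optimal segmentation step can be illustrated with a standalone dynamic-programming sketch in which each candidate segment is scored by the residual sum of squares of a polynomial fit. This shows only the segmentation recursion; the mixture weights, posterior cluster probabilities, and the EM/CEM updates of the full model are omitted.

```python
import numpy as np

def segment_cost(t, y, deg=2):
    """Residual sum of squares of a polynomial fit on one segment."""
    if len(t) <= deg:
        return 0.0
    coef = np.polyfit(t, y, deg)
    return float(np.sum((y - np.polyval(coef, t)) ** 2))

def optimal_segmentation(t, y, n_segments, deg=2):
    """Dynamic programming for optimal change points (cost = piecewise-polynomial RSS)."""
    n = len(t)
    cost = np.full((n + 1, n + 1), np.inf)
    for i in range(n):
        for j in range(i + 1, n + 1):
            cost[i, j] = segment_cost(t[i:j], y[i:j], deg)
    D = np.full((n_segments + 1, n + 1), np.inf)
    back = np.zeros((n_segments + 1, n + 1), dtype=int)
    D[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n + 1):
            prev = D[k - 1, :j] + cost[:j, j]
            back[k, j] = int(np.argmin(prev))
            D[k, j] = prev[back[k, j]]
    # Backtrack the change points (the final 0 boundary is dropped)
    cps, j = [], n
    for k in range(n_segments, 0, -1):
        cps.append(back[k, j])
        j = back[k, j]
    return sorted(cps[:-1]), D[n_segments, n]
```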
145.
Cognitive diagnostic models provide valuable information on whether a student has mastered each of the attributes a test intends to evaluate. Despite its generality, the generalized DINA (G-DINA) model allows the possibility that students who master more attributes have lower correct-response rates than students who master fewer. This paper considers the use of an order-constrained parameter space for the G-DINA model to avoid such a counter-intuitive phenomenon and proposes two algorithms, the upward and downward methods, for parameter estimation. Through simulation studies, we compare the accuracy in parameter estimation and in classification of attribute patterns obtained from the proposed two algorithms and the current approach when the restricted parameter space is true. Our results show that the upward method performs the best among the three, and therefore it is recommended for estimation, regardless of the distribution of respondents' attribute patterns, types of test items, and the sample size of the data.
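As a rough illustration of the order constraint (not of the paper's upward or downward algorithms), the sketch below projects an item's estimated success probabilities onto a sequence that is monotone in the number of mastered required attributes, using isotonic regression; the actual G-DINA constraint is defined over the partial order of attribute patterns rather than this simple count.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def project_monotone(success_prob_by_count, weights=None):
    """Enforce that P(correct | k required attributes mastered) is non-decreasing in k.

    success_prob_by_count[k] is the unconstrained estimate for k mastered attributes.
    Illustrative only: this mimics the spirit of the order-constrained parameter
    space, not the paper's estimation algorithms.
    """
    k = np.arange(len(success_prob_by_count))
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    return iso.fit_transform(k, success_prob_by_count, sample_weight=weights)

# Example: unconstrained estimates that dip for students mastering more attributes
print(project_monotone(np.array([0.22, 0.55, 0.48, 0.90])))
# -> the dip at k=2 is pooled with k=1, restoring monotonicity
```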
146.
Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool that may offer several advantages in comparison to a standard LC model. When using an LC model for density estimation, a considerable number of increasingly large LC models may have to be estimated before sufficient model-fit is achieved. A DLC model consists of a sequence of small LC models. Therefore, a DLC model can be estimated much faster and can easily utilize multiple processor cores, meaning that this model is more widely applicable and practical. In this study we describe the algorithm for fitting a DLC model, and discuss the various settings that indirectly influence the precision of a DLC model as a density estimation tool. These settings are illustrated using a synthetic data example, and the best performing algorithm is applied to a real-data example. The synthetic data example showed that, using specific decision rules, a DLC model is able to correctly model complex associations amongst categorical variables.
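A compact sketch of the divisive idea for binary items is given below: the largest current cluster is repeatedly split with a two-class LC model fitted by EM. The stopping rules, model-selection criteria, and the density-estimation settings studied in the paper are deliberately left out.

```python
import numpy as np

def lc2_em(X, n_iter=200, seed=0):
    """Fit a 2-class latent class model to binary data X (n x d) via EM."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.array([0.5, 0.5])                   # class weights
    theta = rng.uniform(0.3, 0.7, size=(2, d))  # item probabilities per class
    for _ in range(n_iter):
        # E-step: posterior class memberships (log-domain for stability)
        log_lik = (X[:, None, :] * np.log(theta[None]) +
                   (1 - X[:, None, :]) * np.log(1 - theta[None])).sum(axis=2)
        log_post = np.log(pi)[None] + log_lik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step
        nk = post.sum(axis=0) + 1e-12
        pi = nk / n
        theta = np.clip((post.T @ X) / nk[:, None], 1e-3, 1 - 1e-3)
    return pi, theta, post

def divisive_lc(X, max_leaves=4, min_size=30):
    """Divisive LC sketch: recursively split the largest leaf with a 2-class LC model."""
    leaves = [np.arange(len(X))]
    while len(leaves) < max_leaves:
        leaves.sort(key=len, reverse=True)
        idx = leaves.pop(0)
        if len(idx) < min_size:
            leaves.append(idx)
            break
        _, _, post = lc2_em(X[idx])
        assign = post.argmax(axis=1)
        if assign.min() == assign.max():        # split failed; keep the leaf intact
            leaves.append(idx)
            break
        leaves += [idx[assign == 0], idx[assign == 1]]
    return leaves
```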
147.
In compositional data analysis, an observation is a vector containing nonnegative values, only the relative sizes of which are considered to be of interest. Without loss of generality, a compositional vector can be taken to be a vector of proportions that sum to one. Data of this type arise in many areas including geology, archaeology, biology, economics and political science. In this paper we investigate methods for classification of compositional data. Our approach centers on the idea of using the α-transformation to transform the data and then to classify the transformed data via regularized discriminant analysis and the k-nearest neighbors algorithm. Using the α-transformation generalizes two rival approaches in compositional data analysis, one (when α = 1) that treats the data as though they were Euclidean, ignoring the compositional constraint, and another (when α = 0) that employs Aitchison's centered log-ratio transformation. A numerical study with several real datasets shows that whether using α = 1 or α = 0 gives better classification performance depends on the dataset, and moreover that using an intermediate value of α can sometimes give better performance than using either 1 or 0.
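A sketch of the classification pipeline — α-transform the compositions, then apply an off-the-shelf classifier such as k-nearest neighbors — is shown below. The Helmert projection and the form of the transformation follow the usual definition of the α-transformation, but the cross-validated choice of α and k and the regularized discriminant analysis variant are not reproduced; the commented usage names are hypothetical.

```python
import numpy as np
from scipy.linalg import helmert
from sklearn.neighbors import KNeighborsClassifier

def alpha_transform(X, alpha):
    """α-transformation of compositional data X (rows are compositions), as a sketch.

    alpha = 0 reduces to the Helmert-projected centred log-ratio transform;
    alpha = 1 treats the (shifted, scaled) proportions as Euclidean coordinates.
    """
    X = X / X.sum(axis=1, keepdims=True)
    D = X.shape[1]
    H = helmert(D)                                   # (D-1) x D Helmert sub-matrix
    if alpha == 0:
        clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
        return clr @ H.T
    u = D * X ** alpha / (X ** alpha).sum(axis=1, keepdims=True)
    return ((u - 1) / alpha) @ H.T

# Hypothetical usage (X_train, y_train, X_test are assumed compositional data / labels):
# Z_train = alpha_transform(X_train, alpha=0.5)
# clf = KNeighborsClassifier(n_neighbors=5).fit(Z_train, y_train)
# y_pred = clf.predict(alpha_transform(X_test, alpha=0.5))
```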
148.
Forward-looking array radar on an ultra-high-speed platform faces a difficult strong-clutter suppression problem: the clutter is not only strongly range-dependent but also subject to multiple range ambiguities, which severely degrades moving-target detection performance. To address this problem, a range-angle-Doppler three-dimensional adaptive processing method for range-ambiguous clutter suppression is proposed, based on a frequency diverse array (FDA) combined with multiple-input multiple-output (MIMO) radar. The method reduces dimensionality by forming joint transmit-receive sum and difference beams in the spatial domain together with adjacent Doppler channels in the time domain, which keeps the computational complexity low and preserves clutter suppression performance under small-sample conditions. Simulation results verify the effectiveness of the proposed method.
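Only the generic final step of such reduced-dimension adaptive processing is sketched below: estimating the clutter covariance from training snapshots in the reduced space and forming an MVDR-type weight with diagonal loading for small sample support. The FDA-MIMO-specific parts — forming the joint transmit-receive sum and difference beams and selecting adjacent Doppler channels — are assumed to have already produced the reduced snapshots and are not reproduced here.

```python
import numpy as np

def adaptive_weight(training_snapshots, steering):
    """Generic reduced-dimension adaptive weight: w = R^{-1} s / (s^H R^{-1} s).

    training_snapshots : (N, M) complex array, N training samples of the
                         M-dimensional reduced data (after beam/Doppler selection)
    steering           : (M,) presumed target steering vector in the same reduced space
    """
    N, M = training_snapshots.shape
    R = training_snapshots.conj().T @ training_snapshots / N      # sample covariance
    R += 1e-3 * np.trace(R).real / M * np.eye(M)                  # diagonal loading
    Ri_s = np.linalg.solve(R, steering)
    return Ri_s / (steering.conj() @ Ri_s)
```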
149.
For the underdetermined blind source separation (UBSS) problem in complex mechanical systems where the number of vibration sources is unknown, and in order to improve UBSS performance, an underdetermined source-number estimation algorithm based on parallel factor analysis (PARAFAC) and the core consistency diagnostic (CORCONDIA) is proposed. Following the basic idea of second-order non-stationary source separation, the algorithm divides the centered sensor data into non-overlapping blocks, computes a single time-lagged covariance matrix for each block, and stacks these matrices into a third-order tensor, i.e., a PARAFAC model. The CORCONDIA algorithm is then used to estimate the optimal number of components of the PARAFAC model, which gives the number of vibration sources of the mechanical system. Simulation results show that the algorithm can accurately estimate the number of sources from non-stationary underdetermined mixtures, and its effectiveness is further verified by a multi-machine vibration source experiment.
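The tensor-construction step can be sketched as follows: split the centered sensor data into non-overlapping blocks, compute one time-lagged covariance matrix per block, and stack them into a third-order tensor to which a CP (PARAFAC) decomposition can then be fitted for a range of candidate ranks. The core consistency diagnostic used in the paper to pick the rank is not reproduced; the commented usage lines and the variable X_centred are hypothetical.

```python
import numpy as np
from tensorly.decomposition import parafac

def lagged_cov_tensor(X, n_blocks=20, lag=1):
    """Stack one time-lagged covariance matrix per non-overlapping block into a 3-way tensor.

    X : (m, T) centred sensor data, m sensors, T samples.
    Returns a tensor of shape (m, m, n_blocks).
    """
    m, T = X.shape
    L = T // n_blocks
    mats = []
    for b in range(n_blocks):
        seg = X[:, b * L:(b + 1) * L]
        C = seg[:, :-lag] @ seg[:, lag:].T / (L - lag)   # lagged covariance of the block
        mats.append((C + C.T) / 2)                       # symmetrize
    return np.stack(mats, axis=2)

# Hypothetical usage: fit CP (PARAFAC) models of increasing rank; the paper selects
# the rank (= source number) with CORCONDIA, which is not implemented here.
# T = lagged_cov_tensor(X_centred)
# for rank in range(1, 6):
#     weights, factors = parafac(T, rank=rank, normalize_factors=True)
```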
150.
Short-term load forecasting is of great importance to power grid operation: the accuracy of load forecasts has a considerable impact on the control, operation, and planning of the power network. Based on artificial neural network theory, this paper builds a network model and the associated program, forecasts the load for each of the 24 hours of the following day, and obtains fairly satisfactory forecasting results.
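A minimal sketch of such a forecaster — a small multilayer perceptron mapping the previous day's 24 hourly loads to the next day's 24 hourly loads — is given below on synthetic placeholder data; the paper's network architecture, inputs, and data are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic placeholder data: a daily load shape plus noise, shape (days, 24)
rng = np.random.default_rng(0)
days = 200
hours = np.arange(24)
base = 500 + 150 * np.sin(2 * np.pi * (hours - 6) / 24)    # assumed daily load profile
load = base + rng.normal(0, 20, size=(days, 24))

X, y = load[:-1], load[1:]                                  # yesterday's 24 h -> today's 24 h
model = MLPRegressor(hidden_layer_sizes=(48,), max_iter=2000, random_state=0)
model.fit(X[:-30], y[:-30])                                 # hold out the last 30 days
print("test MAE:", np.abs(model.predict(X[-30:]) - y[-30:]).mean())
```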