801.
Tracking mobile nodes in the dynamic and noisy conditions of industrial environments has provided a paradigm for many issues inherent in distributed control systems in general and wireless sensor networks in particular. Because of the dynamic nature of industrial environments, a practical tracking system must adapt to changes in the environment. More specifically, given the limited resources of wireless nodes and the challenges created by harsh industrial environments, a technique is needed that can modify the configuration of the system on the fly as new wireless nodes are added to the network and obsolete ones are removed. To address these issues, two cluster-based tracking systems, one static and the other dynamic, are proposed to organize the overall network field into a set of tracking zones, each composed of a sink node and a set of corresponding anchor nodes. An agent-based technique is employed to manage the wireless nodes' activities and the inter- and intra-cluster communications. To compare the architectures, we report on a set of experiments performed in JADE (the Java Agent DEvelopment Framework). In these experiments, we compare two agent-based approaches, dynamic and static, for managing clusters of wireless sensor nodes in a distributed tracking system. The experimental results corroborate the efficiency of the static clusters versus the robustness and effectiveness of the dynamic clusters.
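The zone-based organization described above, in which each tracking zone pairs a sink node with a set of anchor nodes and nodes can join or leave on the fly, can be illustrated with a minimal data-structure sketch. This is not the authors' JADE implementation; the class names, fields, and nearest-sink assignment rule below are hypothetical.

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class Node:
    node_id: str
    position: tuple                  # (x, y) coordinates on the plant floor

@dataclass
class TrackingZone:
    sink: Node                                        # cluster head that aggregates measurements
    anchors: list = field(default_factory=list)       # fixed reference nodes of the zone

class TrackingNetwork:
    """Organizes the network field into tracking zones, one per sink node."""
    def __init__(self, sinks):
        self.zones = {s.node_id: TrackingZone(sink=s) for s in sinks}

    def add_anchor(self, anchor: Node):
        # On-the-fly reconfiguration: attach a newly deployed anchor
        # to the zone whose sink is geographically closest.
        closest = min(self.zones.values(),
                      key=lambda z: dist(z.sink.position, anchor.position))
        closest.anchors.append(anchor)

    def remove_anchor(self, node_id: str):
        # Remove an obsolete anchor from whichever zone currently holds it.
        for zone in self.zones.values():
            zone.anchors = [a for a in zone.anchors if a.node_id != node_id]

# Example: two zones, anchors assigned and removed at run time.
net = TrackingNetwork([Node("sink-A", (0, 0)), Node("sink-B", (50, 0))])
net.add_anchor(Node("anchor-1", (5, 3)))
net.add_anchor(Node("anchor-2", (48, 7)))
net.remove_anchor("anchor-1")
```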
802.
An effective prognostic program is crucial to the predictive maintenance of complex equipment, since it can improve productivity, prolong equipment life, and enhance system safety. This paper proposes a novel technique for accurate failure prognosis based on a back-propagation neural network and a quantum multi-agent algorithm. Inspired by the extensive research on quantum computing theory and multi-agent systems, the technique employs a quantum multi-agent strategy, whose main characteristics are a quantum agent representation and several operations including fitness evaluation, cooperation, crossover, and mutation, to optimize the parameters of the neural network and thereby avoid deficiencies such as slow convergence and the tendency to become trapped in local minima. To validate the feasibility of the proposed approach, several numerical approximation experiments were first designed; real bearing vibration data from the laboratory of the University of Cincinnati were then analyzed and used to assess the health condition at a given future point. The results were encouraging and indicate that the presented forecasting method has the potential to be used as an estimation tool for failure prediction in industrial machinery.
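The quantum agent representation mentioned above can be sketched, under assumptions, as a quantum-inspired evolutionary loop: each agent holds a string of Q-bit amplitudes, which is observed into a binary string, decoded into candidate network weights, scored by a fitness function, and pulled toward the best agent with a rotation-gate update. The sketch below omits the cooperation, crossover, and mutation operators and substitutes a toy fitness for the network's training error; all names and step sizes are hypothetical, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

N_AGENTS, N_BITS, BITS_PER_W = 10, 24, 8     # 3 encoded weights, 8 bits each (illustrative sizes)
THETA = 0.05 * np.pi                         # rotation-gate step

def observe(angles):
    """Collapse each Q-bit: P(bit = 1) = |beta|^2 = sin^2(angle)."""
    return (rng.random(angles.shape) < np.sin(angles) ** 2).astype(int)

def decode(bits):
    """Map each 8-bit group to a real weight in [-2, 2]."""
    groups = bits.reshape(-1, BITS_PER_W)
    ints = groups @ (2 ** np.arange(BITS_PER_W)[::-1])
    return 4.0 * ints / (2 ** BITS_PER_W - 1) - 2.0

def fitness(weights):
    # Stand-in objective; in the paper this would be the training error
    # of the back-propagation network parameterized by these weights.
    return -np.sum(weights ** 2)

# Each agent stores one rotation angle per Q-bit; start in equal superposition.
angles = np.full((N_AGENTS, N_BITS), np.pi / 4)
best_bits, best_fit = None, -np.inf

for generation in range(100):
    population = observe(angles)
    fits = np.array([fitness(decode(b)) for b in population])
    if fits.max() > best_fit:
        best_fit, best_bits = fits.max(), population[fits.argmax()].copy()
    # Rotation-gate update: rotate every agent's amplitudes toward the best bit string.
    direction = np.where(best_bits == 1, 1.0, -1.0)
    angles = np.clip(angles + THETA * direction, 0.0, np.pi / 2)

print("best fitness:", best_fit, "decoded weights:", decode(best_bits))
```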
803.
Although many theories regarding the implementation of knowledge management (KM) in organizations have been proposed and studied, most applications tend to stand alone without being incorporated into business processes. Different categories of knowledge provide different benefits, and how to integrate various categories of KM into an effective hybrid approach remains strategically important, yet understudied. Therefore, this paper develops a hybrid model that integrates the principal KM applications for new service development (NSD) together with the measurement of the resulting financial benefits. The proposed KM model incorporates newsgroups, knowledge forums, knowledge asset management, and knowledge application processes as a hybrid means of sharing organizational knowledge along two axes, explicit vs. implicit and individual vs. collective. One of the largest management consulting companies in Taiwan, China, whose NSD process model stood apart from its KM applications, was selected for the case study. A set of hybrid KM processes was developed to implement the proposed KM model, illustrating the greater financial benefits of integrating hybrid KM practices into the business process. Based on knowledge value added (KVA) validation, the proposed KM model provides a new operating system for sharing NSD knowledge within an organization. By measuring the financial results achieved in the case study, the proposed KM model is found to provide an exclusive hybrid platform, with an empirical process model, for addressing innovative approaches and the practical value of KM within an organization.
804.
Data envelopment analysis (DEA) is an effective non-parametric method for measuring the relative efficiencies of decision making units (DMUs) with multiple inputs and outputs. In many real situations, the internal structure of a DMU is a two-stage network process, with shared inputs used in both stages and common outputs produced by both stages. For example, hospitals have a two-stage network structure. Stage 1 consumes resources such as the information technology system, plant, equipment, and administrative personnel to generate outputs such as medical records, laundry, and housekeeping. Stage 2 consumes the same set of resources used by stage 1 (named shared inputs) and the outputs generated by stage 1 (named intermediate measures) to provide patient services. In addition, some outputs, for instance patient satisfaction, are generated jointly by the two stages (named shared outputs). Since some of the shared inputs and outputs are hard to split up and allocate to the individual stages, two-stage DEA methods are needed for evaluating the performance of two-stage network processes in such problems. This paper extends the centralized model to measure the DEA efficiency of a two-stage process with non-splittable shared inputs and outputs. A weighted additive approach is used to combine the two individual stages. Moreover, additive efficiency decomposition models are developed to simultaneously evaluate the maximal and the minimal achievable efficiencies of the individual stages. Finally, an example of 17 city branches of China Construction Bank in Anhui Province is employed to illustrate the proposed approach.
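For readers unfamiliar with DEA, the single-stage, input-oriented CCR multiplier model that two-stage formulations build on can be written as a small linear program: maximize u'y_o subject to v'x_o = 1 and u'y_j - v'x_j <= 0 for every DMU j. The sketch below solves only that baseline model; it is not the paper's two-stage model with shared inputs and outputs, and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR multiplier model for DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: input weights v (m entries) followed by output weights u (s entries).
    c = np.concatenate([np.zeros(m), -Y[o]])              # minimize -u'y_o, i.e. maximize u'y_o
    A_eq = np.concatenate([X[o], np.zeros(s)]).reshape(1, -1)
    b_eq = [1.0]                                          # normalization v'x_o = 1
    A_ub = np.hstack([-X, Y])                             # u'y_j - v'x_j <= 0 for every DMU j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return -res.fun                                       # efficiency score in (0, 1]

# Toy data: 4 DMUs, 2 inputs, 1 output (hypothetical numbers).
X = np.array([[4.0, 3.0], [7.0, 3.0], [8.0, 1.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```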
805.
This paper introduces a novel mixture model-based approach to the simultaneous clustering and optimal segmentation of functional data, that is, curves presenting regime changes. The proposed model consists of a finite mixture of piecewise polynomial regression models. Each piecewise polynomial regression model is associated with a cluster, and within each cluster, each piecewise polynomial component is associated with a regime (i.e., a segment). We derive two approaches to learning the model parameters. The first is an estimation approach which maximizes the observed-data likelihood via a dedicated expectation-maximization (EM) algorithm and yields a fuzzy partition of the curves into K clusters, obtained at convergence by maximizing the posterior cluster probabilities. The second is a classification approach which optimizes a specific classification likelihood criterion through a dedicated classification expectation-maximization (CEM) algorithm. The optimal curve segmentation is performed by dynamic programming. In the classification approach, the curve clustering and the optimal segmentation are performed simultaneously as the CEM learning proceeds. We show that the classification approach is a probabilistic version that generalizes the deterministic K-means-like algorithm proposed in Hébrail, Hugueney, Lechevallier, and Rossi (2010). The proposed approach is evaluated on simulated curves and real-world curves. Comparisons with alternatives, including regression mixture models and the K-means-like algorithm for piecewise regression, demonstrate the effectiveness of the proposed approach.
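The dynamic-programming step used for optimal curve segmentation can be illustrated in isolation. The sketch below segments a single curve into K contiguous segments with a piecewise-constant (degree-0 polynomial) fit by minimizing the total within-segment sum of squares; in the paper this step is embedded inside the EM/CEM iterations and uses polynomial regimes, so the function names and the toy curve here are hypothetical.

```python
import numpy as np

def segment_cost(y, i, j):
    """Sum of squared errors of a constant (degree-0 polynomial) fit to y[i:j]."""
    seg = y[i:j]
    return float(np.sum((seg - seg.mean()) ** 2))

def optimal_segmentation(y, K):
    """Dynamic programming: split y into K contiguous segments minimizing total SSE."""
    n = len(y)
    cost = {(i, j): segment_cost(y, i, j) for i in range(n) for j in range(i + 1, n + 1)}
    dp = np.full((K + 1, n + 1), np.inf)       # dp[k, j]: best cost of first j points using k segments
    back = np.zeros((K + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for j in range(k, n + 1):
            for i in range(k - 1, j):          # i is the start of the last segment
                cand = dp[k - 1, i] + cost[(i, j)]
                if cand < dp[k, j]:
                    dp[k, j], back[k, j] = cand, i
    # Recover the change-points by backtracking.
    cuts, j = [], n
    for k in range(K, 0, -1):
        cuts.append(back[k, j])
        j = back[k, j]
    return sorted(cuts)[1:]                    # interior change-points only

# A curve with two regime changes (noisy step function).
rng = np.random.default_rng(1)
y = np.concatenate([np.zeros(30), 2 * np.ones(40), -np.ones(30)]) + 0.2 * rng.standard_normal(100)
print(optimal_segmentation(y, K=3))            # expected cuts near 30 and 70
```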
806.
Cognitive diagnostic models provide valuable information on whether a student has mastered each of the attributes a test intends to evaluate. Despite its generality, the generalized DINA (G-DINA) model allows for the possibility that students who master more attributes have lower correct-response rates than those who master fewer. This paper considers the use of an order-constrained parameter space for the G-DINA model to avoid such a counter-intuitive phenomenon and proposes two algorithms, the upward and downward methods, for parameter estimation. Through simulation studies, we compare the accuracy in parameter estimation and in classification of attribute patterns obtained from the two proposed algorithms and the current approach when the restricted parameter space holds. Our results show that the upward method performs best among the three, and it is therefore recommended for estimation, regardless of the distribution of respondents’ attribute patterns, the types of test items, and the sample size of the data.
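The order constraint discussed above is easiest to see in the DINA special case, where an item's correct-response probability is (1 − s)^η · g^(1−η), with η = 1 only when every attribute required by the item is mastered. The sketch below computes these probabilities and checks the constraint that mastering a superset of attributes never lowers the success probability; it illustrates the constraint only and is not the authors' upward or downward estimation algorithm.

```python
import numpy as np
from itertools import product

def dina_correct_prob(alpha, q, guess, slip):
    """DINA success probability for one item.
    alpha: mastery vector (0/1), q: attributes required by the item (0/1),
    guess g: P(correct | not all required attributes mastered),
    slip  s: 1 - P(correct | all required attributes mastered)."""
    eta = int(np.all(alpha[q == 1] == 1))       # 1 iff every required attribute is mastered
    return (1 - slip) ** eta * guess ** (1 - eta)

def is_monotone(item_probs):
    """Check the order constraint: mastering a superset of attributes
    never decreases the correct-response probability."""
    patterns = list(item_probs)
    for a, b in product(patterns, repeat=2):
        if all(x <= y for x, y in zip(a, b)) and item_probs[a] > item_probs[b]:
            return False
    return True

q = np.array([1, 1, 0])                         # item requires attributes 1 and 2
probs = {a: dina_correct_prob(np.array(a), q, guess=0.2, slip=0.1)
         for a in product((0, 1), repeat=3)}
print(probs)
print("monotone:", is_monotone(probs))          # the DINA model satisfies the constraint by design
```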
807.
Traditionally, latent class (LC) analysis is used by applied researchers as a tool for identifying substantively meaningful clusters. More recently, LC models have also been used as a density estimation tool for categorical variables. We introduce a divisive LC (DLC) model as a density estimation tool that may offer several advantages over a standard LC model. When an LC model is used for density estimation, a considerable number of increasingly large LC models may have to be estimated before sufficient model fit is achieved. A DLC model consists of a sequence of small LC models. Therefore, a DLC model can be estimated much faster and can easily utilize multiple processor cores, which makes it more widely applicable and practical. In this study we describe the algorithm for fitting a DLC model and discuss the various settings that indirectly influence the precision of a DLC model as a density estimation tool. These settings are illustrated using a synthetic data example, and the best-performing algorithm is applied to a real-data example. The synthetic data example shows that, using specific decision rules, a DLC model is able to correctly model complex associations amongst categorical variables.
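As a density estimation tool, an LC model (and hence each small LC model inside a DLC model) assigns to a response pattern y the probability P(y) = Σ_c π_c Π_j P(y_j | class c). A minimal sketch of that density, with hypothetical parameter values, is given below; it does not implement the divisive fitting algorithm itself.

```python
import numpy as np
from itertools import product

def lc_density(y, class_weights, item_probs):
    """Probability of response pattern y under a latent class model:
    P(y) = sum_c pi_c * prod_j P(y_j | class c)."""
    total = 0.0
    for pi_c, probs_c in zip(class_weights, item_probs):
        total += pi_c * np.prod([probs_c[j][y_j] for j, y_j in enumerate(y)])
    return total

# Hypothetical 2-class model for 3 binary items.
class_weights = [0.6, 0.4]
item_probs = [
    [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3]],   # class 1: [P(y_j = 0), P(y_j = 1)] per item
    [[0.2, 0.8], [0.3, 0.7], [0.1, 0.9]],   # class 2
]
print(lc_density((1, 1, 0), class_weights, item_probs))

# Sanity check: the density sums to one over all 2^3 response patterns.
print(sum(lc_density(y, class_weights, item_probs) for y in product((0, 1), repeat=3)))
```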
808.
In compositional data analysis, an observation is a vector containing nonnegative values, only the relative sizes of which are considered to be of interest. Without loss of generality, a compositional vector can be taken to be a vector of proportions that sum to one. Data of this type arise in many areas, including geology, archaeology, biology, economics, and political science. In this paper we investigate methods for the classification of compositional data. Our approach centers on using the α-transformation to transform the data and then classifying the transformed data via regularized discriminant analysis and the k-nearest-neighbors algorithm. Using the α-transformation generalizes two rival approaches in compositional data analysis: one (when α = 1) treats the data as though they were Euclidean, ignoring the compositional constraint, and another (when α = 0) employs Aitchison’s centered log-ratio transformation. A numerical study with several real datasets shows that whether α = 1 or α = 0 gives better classification performance depends on the dataset, and moreover that an intermediate value of α can sometimes give better performance than either 1 or 0.
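One common parameterization of the α-transformation, with the final isometric (Helmert) projection omitted for brevity, maps a composition x to (D·u_α(x) − 1)/α with u_α(x) = x^α / Σ_i x_i^α, and reduces to Aitchison's centered log-ratio as α → 0. The sketch below illustrates that transform only, under these assumptions; it is not the paper's full classification pipeline.

```python
import numpy as np

def alpha_transform(x, alpha):
    """Alpha-transformation of a composition x (positive parts, any scale).
    For alpha = 0 it reduces to Aitchison's centered log-ratio (clr);
    the Helmert projection onto R^(D-1) is omitted for simplicity."""
    x = np.asarray(x, dtype=float)
    D = x.size
    if alpha == 0:
        logx = np.log(x)
        return logx - logx.mean()                    # clr transform
    u = x ** alpha / np.sum(x ** alpha)              # power-transformed composition
    return (D * u - 1.0) / alpha

x = np.array([0.1, 0.3, 0.6])
for a in (1.0, 0.5, 1e-6, 0.0):
    print(a, np.round(alpha_transform(x, a), 4))     # values approach the clr as alpha -> 0
```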
809.
This final reply responds to Honohan’s invitation to articulate the Arendtian tone of the keynote paper. It spells out the philosophical intuition that the political life of citizens, at least potentially, is capable of making visible what makes human life worthwhile and fully meaningful, and the philosophical curiosity to see whether traces of this deep political awareness can be retrieved in dialogues with volunteers. In response to Dekker’s critical doubts, this final reply clarifies the central stakes of Claes’s paper. The core argument was not to show that the biographical model of meaningfulness is the prevailing approach to meaning in/of volunteering, but to assess the potential and limits of the model’s interpretive power. Moreover, the paper argues for an alternative, existential model of meaningfulness. This approach refers to deep experiences of meaning that emerge from the practice of volunteering and that shift into powerful political experiences of hope and a lived sense of equality.
810.
According to Wolf’s fitting fulfillment view, meaningfulness depends on the person’s subjective attraction to an activity being grounded in ‘reasons of love’ that concern the objective value of that activity. In this short comment, I argue that ‘reasons of love’, and thus reasons for regarding activities as meaningful, are not limited to those having to do with the objective value of activities and relationships, but also include what I call ‘reasons for the initiated’ and ‘reasons for me’.