941.
This paper concerns long-term forecasts for cointegrated processes. First, it considers the case where the parameters of the model are known. The paper shows analytically that neither the cointegration constraint nor the integration constraint matters in long-term forecasts. This provides an alternative implication of long-term forecasts for cointegrated processes, extending the results of previous influential studies. A Monte Carlo experiment supports the analytical result. Second, and more importantly, the paper considers the case where the parameters of the model are estimated. It shows that the accuracy of the estimation of the drift term is crucial in long-term forecasts: the relative accuracy of various long-term forecasts depends upon the relative magnitude of the variances of the estimators of the drift term. It further shows experimentally that in finite samples the univariate ARIMA forecast, whose drift term is estimated by the simple time average of the differenced data, is better than the cointegrated system forecast, whose parameters are estimated by Johansen's well-known ML method. Based upon these finite-sample experiments, the paper recommends the univariate ARIMA forecast over the conventional cointegrated system forecast for its practical usefulness and robustness against model misspecification. Copyright © 2011 John Wiley & Sons, Ltd.
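As a concrete illustration of the recommended univariate approach, the following sketch (plain NumPy; the function name and toy data are ours, not the authors') computes an h-step-ahead random-walk-with-drift forecast in which the drift is the simple time average of the differenced data:

```python
import numpy as np

def arima_drift_forecast(y, h):
    """h-step-ahead random-walk-with-drift forecast; the drift is the
    simple time average of the differenced data, as in the univariate
    ARIMA forecast the paper recommends."""
    drift = np.mean(np.diff(y))                 # average one-period change
    return y[-1] + drift * np.arange(1, h + 1)  # last level plus h drifts

# toy I(1) series with true drift 0.5
rng = np.random.default_rng(0)
y = np.cumsum(0.5 + rng.standard_normal(200))
print(arima_drift_forecast(y, h=12))
```

Because the forecast path is just the last observation plus h times the estimated drift, long-horizon accuracy hinges almost entirely on how well the drift is estimated, which is the abstract's central point.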
942.
This paper develops a state space framework for the statistical analysis of a class of locally stationary processes. The proposed Kalman filter approach provides a numerically efficient methodology for estimating and predicting locally stationary models and allows for the handling of missing values. It yields both exact and approximate maximum likelihood estimates. Furthermore, as suggested by the Monte Carlo simulations reported in this work, the performance of the proposed methodology is very good, even for relatively small sample sizes. Copyright © 2011 John Wiley & Sons, Ltd.
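For readers unfamiliar with the machinery, a minimal scalar Kalman filter might look as follows (an illustrative local-level-style sketch, not the authors' locally stationary specification). Missing values are handled exactly as the abstract describes: the update step is skipped and the prediction carries the state forward.

```python
import numpy as np

def kalman_filter(y, a, q, r, m0=0.0, p0=1e6):
    """Kalman filter for the scalar state-space model
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)   (state)
        y_t = x_t + v_t,          v_t ~ N(0, r)   (observation)
    Missing observations (np.nan) skip the update step.
    Returns filtered state means and the Gaussian log-likelihood."""
    m, p, loglik = m0, p0, 0.0
    means = np.empty(len(y))
    for t, obs in enumerate(y):
        m, p = a * m, a * a * p + q                # predict
        if not np.isnan(obs):                      # update only if observed
            s = p + r                              # innovation variance
            k = p / s                              # Kalman gain
            loglik += -0.5 * (np.log(2 * np.pi * s) + (obs - m) ** 2 / s)
            m, p = m + k * (obs - m), (1 - k) * p
        means[t] = m
    return means, loglik
```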
943.
This paper applies the GARCH-MIDAS (mixed data sampling) model to examine whether information contained in macroeconomic variables can help to predict short-term and long-term components of the return variance. A principal component analysis is used to incorporate the information contained in different variables. Our results show that including low-frequency macroeconomic information in the GARCH-MIDAS model improves the prediction ability of the model, particularly for the long-term variance component. Moreover, the GARCH-MIDAS model augmented with the first principal component outperforms all other specifications, indicating that the constructed principal component can be considered a good proxy for the business cycle. Copyright © 2013 John Wiley & Sons, Ltd.
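The GARCH-MIDAS decomposition referred to here is usually written (our notation, following the standard Engle, Ghysels and Sohn formulation) as a multiplicative split of the conditional variance into a slow-moving long-term component τ_t, driven by low-frequency macro data, and a short-term GARCH(1,1) component g_{i,t}:

```latex
r_{i,t} = \mu + \sqrt{\tau_t \, g_{i,t}}\,\varepsilon_{i,t},
  \qquad \varepsilon_{i,t} \sim N(0,1),
\qquad
g_{i,t} = (1-\alpha-\beta)
  + \alpha\,\frac{(r_{i-1,t}-\mu)^2}{\tau_t}
  + \beta\, g_{i-1,t},
\qquad
\tau_t = m + \theta \sum_{k=1}^{K} \varphi_k(\omega)\, X_{t-k},
```

where X_{t-k} is the lagged low-frequency macro variable and φ_k(ω) are MIDAS weights. Substituting the first principal component of the macro variables for X gives the augmented specification the abstract finds best.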
944.
This paper develops a New Keynesian Dynamic Stochastic General Equilibrium (NKDSGE) model for forecasting the growth rate of output, inflation, and the nominal short-term interest rate (the 91-day Treasury Bill rate) for the South African economy. The model is estimated via the maximum likelihood technique on quarterly data over the period 1970:1–2000:4. Based on a recursive estimation using the Kalman filter algorithm, out-of-sample forecasts from the NKDSGE model are compared with forecasts generated from the classical and Bayesian variants of vector autoregression (VAR) models for the period 2001:1–2006:4. The results indicate that, in terms of out-of-sample forecasting, the NKDSGE model outperforms both the classical and Bayesian VARs for inflation, but not for output growth or the nominal short-term interest rate. However, the differences in RMSEs are not significant across the models. Copyright © 2008 John Wiley & Sons, Ltd.
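A sketch of the kind of comparison reported here (hypothetical forecast errors; the RMSE formula is standard, and the no-HAC Diebold-Mariano statistic below is one common way to check whether an RMSE gap is significant, though the abstract does not name the test used):

```python
import numpy as np

def rmse(e):
    """Root mean squared error from a vector of forecast errors."""
    return np.sqrt(np.mean(np.square(e)))

def dm_statistic(e1, e2):
    """Diebold-Mariano statistic for equal squared-error loss
    (one-step-ahead case, no HAC correction).  |DM| < 1.96 means the
    RMSE difference is not significant at the 5% level."""
    d = np.square(e1) - np.square(e2)
    return np.mean(d) / np.sqrt(np.var(d, ddof=1) / len(d))

# hypothetical errors over 24 out-of-sample quarters (2001:1-2006:4)
rng = np.random.default_rng(1)
e_dsge, e_var = rng.standard_normal(24), 1.1 * rng.standard_normal(24)
print(rmse(e_dsge), rmse(e_var), dm_statistic(e_dsge, e_var))
```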
945.
Exploring the Granger-causation relationship is an important and interesting topic in econometrics. Traditional models usually capture this relationship in a short-memory form, but in practice other influence patterns can arise. Besides the short-memory relationship, Chen (2006) demonstrates a long-memory relationship and provides a useful estimation approach that does not require the time series to be fractionally cointegrated. In that paper, two different relationships (short-memory and long-memory) are considered, in which the influence flow decays according to geometric, cut-off, or harmonic sequences; however, the model is limited to stationary relationships. This paper extends the influence flow to a non-stationary relationship, with the restriction −0.5 ≤ d ≤ 1.0, and the framework can be used to detect whether the influence decays away (−0.5 ≤ d < 0.5) or is permanent (0.5 ≤ d ≤ 1.0). Copyright © 2008 John Wiley & Sons, Ltd.
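The parameter d here is the fractional-differencing order. A small sketch (plain NumPy; our function name) of the binomial weights of (1 − L)^d makes the two regimes in the abstract concrete:

```python
import numpy as np

def frac_diff_weights(d, k_max):
    """Coefficients pi_k in the binomial expansion of (1 - L)^d,
    computed via the recurrence pi_k = pi_{k-1} * (k - 1 - d) / k.
    In the abstract's terminology, for -0.5 <= d < 0.5 the influence
    decays away, while for 0.5 <= d <= 1.0 it is permanent."""
    w = np.empty(k_max + 1)
    w[0] = 1.0
    for k in range(1, k_max + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

for d in (0.2, 0.7, 1.0):
    print(d, frac_diff_weights(d, 5))   # d = 1.0 recovers 1 - L exactly
```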
946.
This paper investigates the effects of imposing invalid cointegration restrictions, or of ignoring valid ones, on the estimation, testing, and forecasting properties of the bivariate first-order vector autoregressive (VAR(1)) model. We first consider nearly cointegrated VARs, that is, stable systems whose largest root, λmax, lies in the neighborhood of unity while the other root, λmin, is safely smaller than unity. In this context, we define the 'forecast cost of type I' as the deterioration in the forecasting accuracy of the VAR model due to the imposition of invalid cointegration restrictions. However, there are cases where misspecification arises for the opposite reason, namely from ignoring cointegration when the true process is, in fact, cointegrated. Such cases arise when λmax equals unity and λmin is less than, but near to, unity. The effect of this type of misspecification on forecasting is referred to as the 'forecast cost of type II'. By means of Monte Carlo simulations, we measure both types of forecast cost in realistic situations, where the researcher is led (or misled) by the usual unit root tests in choosing the unit root structure of the system. We consider VAR(1) processes driven by i.i.d. Gaussian or GARCH innovations. To distinguish between the effects of nonlinear dependence and those of leptokurtosis, we also consider processes driven by i.i.d. t(2) innovations. The simulation results reveal that the forecast cost of imposing invalid cointegration restrictions is substantial, especially for small samples, whereas the forecast cost of ignoring valid cointegration restrictions is small but not negligible. In all the cases considered, both types of forecast cost increase with the intensity of GARCH effects. Copyright © 2009 John Wiley & Sons, Ltd.
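To make the setup concrete, here is one way (an illustrative construction, not the authors' exact Monte Carlo design) to build and simulate a bivariate VAR(1) whose roots λmax and λmin are set directly:

```python
import numpy as np

def var1_with_roots(l_max, l_min, rho=0.5):
    """Bivariate VAR(1) coefficient matrix with eigenvalues l_max, l_min,
    built by the similarity transform A = P diag(l_max, l_min) P^{-1}.
    P is an arbitrary invertible mixing matrix (our illustrative choice)."""
    P = np.array([[1.0, rho], [rho, 1.0]])
    return P @ np.diag([l_max, l_min]) @ np.linalg.inv(P)

def simulate(A, n, rng):
    """Simulate y_t = A y_{t-1} + eps_t with i.i.d. Gaussian innovations."""
    y = np.zeros((n, 2))
    for t in range(1, n):
        y[t] = A @ y[t - 1] + rng.standard_normal(2)
    return y

rng = np.random.default_rng(2)
y = simulate(var1_with_roots(0.98, 0.7), 200, rng)  # nearly cointegrated
z = simulate(var1_with_roots(1.00, 0.9), 200, rng)  # truly cointegrated
```

Fitting such draws in levels versus under a cointegration restriction is what generates the two forecast costs the abstract measures.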
947.
Focusing on the interdependence of product categories, we analyze the multicategory buying decisions of households using a finite mixture of multivariate Tobit-2 models with two response variables: purchase incidence and expenditure. Mixture components can be interpreted as household segments. Correlations among purchases of different categories turn out to be much more important than correlations among expenditures, as well as correlations between purchases and expenditures of different categories. About 18% of all pairwise purchase correlations are significant. We compare the best-performing large-scale model with 28 categories to four small-scale models, each with seven categories. In our empirical study the large-scale model clearly attains better forecasting performance. The small-scale models provide several biased correlations and miss about 50% of the significant correlations that the large-scale model detects. Copyright © 2013 John Wiley & Sons, Ltd.
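A Tobit-2 (type-2 Tobit, or sample-selection) component, written here in its standard single-category form (our notation; the paper's multivariate mixture extends this across 28 categories), couples a purchase-incidence equation with an expenditure equation observed only upon purchase:

```latex
\begin{aligned}
z_i^* &= x_i'\gamma + u_i  && \text{(purchase incidence)}\\
y_i^* &= w_i'\beta  + e_i  && \text{(expenditure)}\\
y_i   &= y_i^* \ \text{if } z_i^* > 0, \quad \text{unobserved otherwise},\\
(u_i, e_i)' &\sim N(0, \Sigma).
\end{aligned}
```

Stacking such pairs across categories and allowing Σ to have nonzero off-diagonal elements is what yields the cross-category purchase and expenditure correlations the abstract reports.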
948.
In the era of Basel II, a powerful tool for bankruptcy prognosis is vital for banks. The tool must be precise, but also easily adaptable to the bank's objectives regarding the trade-off between false acceptances (Type I errors) and false rejections (Type II errors). We explore the suitability of smooth support vector machines (SSVM) and investigate how important factors, such as the selection of appropriate accounting ratios (predictors), the length of the training period, and the structure of the training sample, influence the precision of prediction. Moreover, we show that oversampling can be employed to control the trade-off between the error types, and we compare SSVM with both logistic and discriminant analysis. Finally, we illustrate graphically how different models can be used jointly to support the decision-making process of loan officers. Copyright © 2008 John Wiley & Sons, Ltd.
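A minimal sketch of the oversampling idea (synthetic data; scikit-learn's standard SVC stands in for the smooth SVM, which is not part of common libraries):

```python
import numpy as np
from sklearn.svm import SVC

def oversample(X, y, minority=1, factor=3, seed=0):
    """Duplicate minority-class rows so they appear `factor` times.
    Raising `factor` pushes the classifier toward flagging failures,
    lowering Type I errors (false acceptances of firms that later fail)
    at the cost of more Type II errors (false rejections)."""
    rng = np.random.default_rng(seed)
    idx = np.where(y == minority)[0]
    extra = rng.choice(idx, size=(factor - 1) * len(idx), replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

# toy data: 0 = solvent, 1 = bankrupt (rare)
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 5))
y = (X[:, 0] + 0.5 * rng.standard_normal(500) > 1.5).astype(int)
Xo, yo = oversample(X, y)
clf = SVC(kernel="rbf").fit(Xo, yo)   # stand-in for SSVM
```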
949.
This article compares the forecast accuracy of different methods, namely prediction markets, tipsters and betting odds, and assesses the ability of prediction markets and tipsters to generate profits systematically in a betting market. We present the results of an empirical study that uses data from 678–837 games of three seasons of the German premier soccer league. Prediction markets and betting odds perform equally well in terms of forecasting accuracy, but both methods strongly outperform tipsters. A weighting-based combination of the forecasts of these methods leads to a slightly higher forecast accuracy, whereas a rule-based combination improves forecast accuracy substantially. However, none of the forecasts leads to systematic monetary gains in betting markets because of the high fees (25%) charged by the state-owned bookmaker in Germany. Lower fees (e.g., approximately 12% or 0%) would provide systematic profits if punters exploited the information from prediction markets and bet only on a selected number of games. Copyright © 2008 John Wiley & Sons, Ltd.
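The fee arithmetic can be sketched as follows (a stylized pool-style payout rule we assume for illustration, not the bookmaker's actual odds formula): if the punter's probability for an outcome is p, the market's implied probability is q, and the bookmaker retains a fraction of the pool as its fee, the expected return per unit stake is p(1 − fee)/q − 1.

```python
def expected_return(p, q, fee):
    """Expected return per unit stake under a stylized payout rule
    where the gross odds are (1 - fee) / q."""
    return p * (1 - fee) / q - 1

# even a 10-point probability edge cannot beat a 25% fee here
print(expected_return(p=0.60, q=0.50, fee=0.25))   # -0.10
print(expected_return(p=0.60, q=0.50, fee=0.12))   #  0.056
print(expected_return(p=0.60, q=0.50, fee=0.00))   #  0.20
```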
950.
This paper integrates the scattered information on the life histories of the jumping plant lice or psyllids, examining those aspects of their biology that contribute to successful life cycle completion. Variation in life history parameters is reviewed across the world's psyllids, and the relative importance of phylogeny and environment, including host-plant growth strategy, in determining life history strategies is assessed. Elements of life cycles considered include: development rate and voltinism, response to high temperature and drought, cold-hardiness and overwintering strategy, seasonal polymorphism, diapause, metabolism, host-plant selection and range, phenological and other adaptations to host plants, disease transmission and host amelioration, dispersal, reproduction and mate finding. Life history parameters are analyzed for 342 species. While a phylogenetic signal can be identified within the data, the main drivers for life history adaptation are environmental temperatures and water availability, acting directly on the psyllids or mediated through their host plants.