Similar literature
20 similar articles found.
1.
The notion of template has been advocated by Paul Humphreys and others as an illuminating unit of analysis in the philosophy of scientific modelling. Templates are supposed to have the dual functions of representing target systems and of facilitating quantitative manipulation. A resulting worry is that wide-ranging cross-disciplinary use of templates might compromise their representational function and reduce them to mere formalisms. In this paper, we argue that templates are valuable units of analysis in reconstructing cross-disciplinary modelling. Central to our discussion are the ways in which Lotka-Volterra (LV) models are used to analyse processes of technology diffusion. We illuminate both the similarities and differences between contributions to this case of cross-disciplinary modelling by reconstructing them as transfer of a template, without reducing the template to a mere formalism or a computational model. This requires differentiating the interpretation of templates from that of the models based on them. This differentiation allows us to claim that the LV models of technology diffusion that we review are the result of template transfer: conformist in some contributions, creative in others.
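For readers unfamiliar with the template in question, the Lotka-Volterra competition equations can be integrated numerically in a few lines. The sketch below is not taken from the paper; the technology-diffusion reading of the variables and all parameter values are illustrative assumptions.

```python
# A minimal sketch of the Lotka-Volterra competition template reinterpreted
# for technology diffusion: x and y are adoption levels of two competing
# technologies. Parameter values are illustrative assumptions only.
import numpy as np
from scipy.integrate import solve_ivp

def lv_competition(t, state, r1, r2, K1, K2, a12, a21):
    x, y = state
    dx = r1 * x * (1 - (x + a12 * y) / K1)   # growth of technology 1, inhibited by technology 2
    dy = r2 * y * (1 - (y + a21 * x) / K2)   # growth of technology 2, inhibited by technology 1
    return [dx, dy]

params = (0.4, 0.3, 100.0, 80.0, 0.6, 0.9)   # illustrative growth rates, capacities, interaction terms
sol = solve_ivp(lv_competition, (0, 60), [1.0, 1.0], args=params, dense_output=True)

for ti, (xi, yi) in zip(np.linspace(0, 60, 7), sol.sol(np.linspace(0, 60, 7)).T):
    print(f"t={ti:5.1f}  tech1={xi:7.2f}  tech2={yi:7.2f}")
```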

2.
The difficulty in modelling inflation and the significance in discovering the underlying data-generating process of inflation is expressed in an extensive literature regarding inflation forecasting. In this paper we evaluate nonlinear machine learning and econometric methodologies in forecasting US inflation based on autoregressive and structural models of the term structure. We employ two nonlinear methodologies: the econometric least absolute shrinkage and selection operator (LASSO) and the machine-learning support vector regression (SVR) method. The SVR has never been used before in inflation forecasting considering the term spread as a regressor. In doing so, we use a long monthly dataset spanning the period 1871:1–2015:3 that covers the entire history of inflation in the US economy. For comparison purposes we also use ordinary least squares regression models as a benchmark. In order to evaluate the contribution of the term spread in inflation forecasting in different time periods, we measure the out-of-sample forecasting performance of all models using rolling window regressions. Considering various forecasting horizons, the empirical evidence suggests that the structural models do not outperform the autoregressive ones, regardless of the model's method. Thus we conclude that the term spread models are not more accurate than autoregressive models in inflation forecasting. Copyright © 2016 John Wiley & Sons, Ltd.
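A minimal sketch of the rolling-window, one-step-ahead comparison described above, run on synthetic data in place of the 1871:1–2015:3 dataset; the window length, LASSO penalty and SVR kernel are illustrative assumptions rather than the paper's specification.

```python
# Hedged sketch: rolling-window one-step-ahead forecasts of inflation from
# lagged inflation and a term-spread regressor, comparing OLS, LASSO and SVR.
# All data are synthetic and all settings are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 400
spread = rng.normal(size=n)                          # stand-in for the term spread
inflation = np.zeros(n)
for t in range(1, n):                                # AR(1) component + spread signal + noise
    inflation[t] = 0.7 * inflation[t - 1] + 0.2 * spread[t - 1] + rng.normal(scale=0.5)

X = np.column_stack([inflation[:-1], spread[:-1]])   # structural model: lagged inflation + spread
y = inflation[1:]

window = 120                                         # rolling estimation window (months)
models = {"OLS": LinearRegression(), "LASSO": Lasso(alpha=0.1), "SVR": SVR(kernel="rbf", C=1.0)}
errors = {name: [] for name in models}

for t in range(window, len(y) - 1):
    for name, model in models.items():
        model.fit(X[t - window:t], y[t - window:t])  # re-estimate on the most recent window
        pred = model.predict(X[t:t + 1])[0]          # one-step-ahead forecast
        errors[name].append(pred - y[t])

for name, errs in errors.items():
    rmse = np.sqrt(np.mean(np.square(errs)))
    print(f"{name:5s} out-of-sample RMSE: {rmse:.3f}")
```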

3.
In his 1966 paper "The Strategy of Model Building in Population Biology", Richard Levins argues that no single model in population biology can be maximally realistic, precise and general at the same time. This is because these desirable model properties trade off against one another. Recently, philosophers have developed Levins' claims, arguing that trade-offs between these desiderata are generated by practical limitations on scientists, or due to formal aspects of models and how they represent the world. However, this project is not complete. The trade-offs discussed by Levins had a noticeable effect on modelling in population biology, but not on other sciences. This raises questions regarding why such a difference holds. I claim that in order to explain this finding, we must pay due attention to the properties of the systems, or targets, modelled by the different branches of science.

4.
In studying a networked system's ability to resist disturbances, systems science uses the concept of "robustness", while research on social-ecological systems uses concepts such as "vulnerability", "resilience" and "adaptability". Do these existing concepts fully describe a system's capacity to withstand disturbances? In the practice of integrated disaster risk management, the ability of a whole society to pull together and act in a coordinated way often plays a crucial, decisive role. However, none of the system properties used in existing research adequately expresses a system's capacity or level of such cohesion. In view of this, this paper proposes a new property of networked systems, the consilience degree, designed to measure the ability of a network system that behaves like a social-ecological system to pull together and act in concert in order to resist disturbances. The consilience degree is in fact a more general form of connectedness. Like connectedness, it can give rise to a series of new system properties and new network models, thereby forming a new theoretical framework for studying complex systems. This paper focuses on an outline of that framework. Both theoretical analysis and simulation studies show that the proposed consilience degree cannot be covered or replaced by existing system properties and is a necessary new theoretical tool for studying real-world complex systems.

5.
In recent years, considerable attention has focused on modelling and forecasting stock market volatility. Stock market volatility matters because stock markets are an integral part of the financial architecture in market economies and play a key role in channelling funds from savers to investors. The focus of this paper is on forecasting stock market volatility in Central and East European (CEE) countries. The obvious question to pose, therefore, is how volatility can be forecast and whether one technique consistently outperforms other techniques. Over the years a variety of techniques have been developed, ranging from the relatively simple to the more complex conditional heteroscedastic models of the GARCH family. In this paper we test the predictive power of 12 models to forecast volatility in the CEE countries. Our results confirm that models which allow for asymmetric volatility consistently outperform all other models considered. Copyright © 2011 John Wiley & Sons, Ltd.
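As a hedged illustration of the symmetric-versus-asymmetric comparison discussed above, the Python `arch` package can fit both model classes on a return series; the synthetic data and settings below are assumptions, not the paper's CEE dataset or its full set of 12 models.

```python
# A minimal sketch comparing a symmetric GARCH(1,1) with an asymmetric
# GJR-GARCH(1,1,1) using the `arch` package. The fat-tailed synthetic
# returns are illustrative assumptions only.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(1)
returns = rng.standard_t(df=6, size=2000)            # illustrative fat-tailed return series

symmetric = arch_model(returns, vol="GARCH", p=1, o=0, q=1).fit(disp="off")
asymmetric = arch_model(returns, vol="GARCH", p=1, o=1, q=1).fit(disp="off")  # o=1 adds the leverage (asymmetry) term

print("Symmetric  GARCH BIC:", round(symmetric.bic, 2))
print("Asymmetric GJR   BIC:", round(asymmetric.bic, 2))
print("5-step-ahead variance forecast (GJR):")
print(asymmetric.forecast(horizon=5).variance.iloc[-1])
```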

6.
Our paper challenges the conventional wisdom that the flat maximum inflicts the ‘curse of insensitivity’ on the modelling of judgement and decision processes. In particular, we argue that this widely demonstrated failure on the part of conventional statistical methods to differentiate between competing models has a useful role to play in the development of accessible and economical applied systems, since it allows a low cost choice between systems which vary in their cognitive demands on the user and in their ease of development and implementation. To illustrate our thesis, we take two recent applications of linear scoring models used for credit scoring and for the prediction of sudden infant death. The paper discusses the nature and determinants of the flat maximum as well as its role in applied cognition. Other sections mention certain unanswered questions about the development of linear scoring models and briefly describe competing formulations for prediction.
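The flat maximum can be demonstrated with a short simulation: when predictors are positively correlated, a unit-weight (improper) linear scoring model predicts nearly as well as optimally estimated weights. The sketch below uses entirely synthetic data, not the credit-scoring or infant-death applications discussed in the paper.

```python
# A small simulation of the 'flat maximum': with correlated predictors,
# an equal-weight scoring model is almost as valid as OLS-estimated weights.
# All quantities are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n, k = 2000, 6
base = rng.normal(size=(n, 1))                        # shared component makes predictors correlated
X = 0.6 * base + 0.8 * rng.normal(size=(n, k))
true_w = np.array([0.9, 0.7, 0.5, 0.4, 0.2, 0.1])     # unequal 'true' weights
y = X @ true_w + rng.normal(scale=1.0, size=n)

ols_w = np.linalg.lstsq(X, y, rcond=None)[0]          # statistically optimal weights
unit_w = np.ones(k)                                   # improper, equal-weight scoring model

def validity(pred, target):
    return np.corrcoef(pred, target)[0, 1]

print("validity of OLS-weighted score :", round(validity(X @ ols_w, y), 3))
print("validity of unit-weighted score:", round(validity(X @ unit_w, y), 3))
```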

7.
To study climate change, scientists employ computer models, which approximate target systems with various levels of skill. Given the imperfection of climate models, how do scientists use simulations to generate knowledge about the causes of observed climate change? Addressing a similar question in the context of biological modelling, Levins (1966) proposed an account grounded in robustness analysis. Recent philosophical discussions dispute the confirmatory power of robustness, raising the question of how the results of computer modelling studies contribute to the body of evidence supporting hypotheses about climate change. Expanding on Staley’s (2004) distinction between evidential strength and security, and Lloyd’s (2015) argument connecting variety-of-evidence inferences and robustness analysis, I address this question with respect to recent challenges to the epistemology of robustness analysis. Applying this epistemology to case studies of climate change, I argue that, despite imperfections in climate models, and epistemic constraints on variety-of-evidence reasoning and robustness analysis, this framework accounts for the strength and security of evidence supporting climatological inferences, including the finding that global warming is occurring and its primary causes are anthropogenic.

8.
Recent financial research has provided evidence on the predictability of asset returns. In this paper we consider the results contained in Pesaran and Timmermann (1995), which provided evidence on predictability of excess returns in the US stock market over the sample 1959–1992. We show that the extension of the sample to the nineties weakens considerably the statistical and economic significance of the predictability of stock returns based on earlier data. We propose an extension of their framework, based on the explicit consideration of model uncertainty under rich parameterizations for the predictive models. We propose a novel methodology to deal with model uncertainty based on ‘thick’ modelling, i.e. on considering a multiplicity of predictive models rather than a single predictive model. We show that portfolio allocations based on a thick modelling strategy systematically outperform thin modelling. Copyright © 2005 John Wiley & Sons, Ltd.
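A minimal sketch of the thick-versus-thin contrast, assuming synthetic data and simple equal-weight pooling; the paper's actual predictor set, weighting schemes and portfolio exercise are not reproduced here.

```python
# Hedged sketch of 'thick' vs 'thin' modelling: estimate every predictive
# regression over subsets of candidate regressors, then compare the single
# best in-sample model with the pooled (equal-weight) forecast.
# Data and settings are illustrative assumptions only.
import itertools
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n, k = 360, 5
X = rng.normal(size=(n, k))                                          # candidate predictors
y = 0.3 * X[:, 0] + 0.2 * X[:, 2] + rng.normal(scale=1.0, size=n)    # stand-in excess returns

train, test = slice(0, 240), slice(240, n)
subsets = [s for r in range(1, k + 1) for s in itertools.combinations(range(k), r)]

forecasts, in_sample_r2 = [], []
for s in subsets:                                   # estimate every parameterisation
    cols = list(s)
    model = LinearRegression().fit(X[train][:, cols], y[train])
    in_sample_r2.append(model.score(X[train][:, cols], y[train]))
    forecasts.append(model.predict(X[test][:, cols]))

thin = forecasts[int(np.argmax(in_sample_r2))]      # single model with best in-sample fit
thick = np.mean(forecasts, axis=0)                  # pooled forecast across all models

def mse(f):
    return np.mean((f - y[test]) ** 2)

print("thin  (best single model) MSE:", round(mse(thin), 3))
print("thick (pooled forecasts)  MSE:", round(mse(thick), 3))
```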

9.
The recent discussion on scientific representation has focused on models and their relationship to the real world. It has been assumed that models give us knowledge because they represent their supposed real target systems. However, here agreement among philosophers of science has tended to end as they have presented widely different views on how representation should be understood. I will argue that the traditional representational approach is too limiting as regards the epistemic value of modelling given the focus on the relationship between a single model and its supposed target system, and the neglect of the actual representational means with which scientists construct models. I therefore suggest an alternative account of models as epistemic tools. This amounts to regarding them as concrete artefacts that are built by specific representational means and are constrained by their design in such a way that they facilitate the study of certain scientific questions, and learning from them by means of construction and manipulation.

10.
Approaches to the Internalism–Externalism controversy in the philosophy of mind often involve both (broadly) metaphysical and explanatory considerations. Whereas originally most emphasis seems to have been placed on metaphysical concerns, recently the explanation angle is getting more attention. Explanatory considerations promise to offer more neutral grounds for cognitive systems demarcation than (broadly) metaphysical ones. However, it has been argued that explanation-based approaches are incapable of determining the plausibility of internalist-based conceptions of cognition vis-à-vis externalist ones. On this perspective, improved metaphysics is the route along which to solve the Internalist–Externalist stalemate. In this paper we challenge this claim. Although we agree that explanation-oriented approaches have indeed so far failed to deliver solid means for cognitive system demarcation, we elaborate a more promising explanation-oriented framework to address this issue. We argue that the mutual manipulability account of constitutive relevance in mechanisms, extended with the criterion of ‘fat-handedness’, is capable of plausibly addressing the cognitive systems demarcation problem, and thus able to decide on the explanatory traction of Internalist vs. Externalist conceptions, on a case-by-case basis. Our analysis also highlights why some other recent mechanistic takes on the problem of cognitive systems demarcation have been unsuccessful. We illustrate our claims with a case on gestures and learning.

11.
We examine the interrelationships between analog computational modelling and analogue (physical) modelling. To this end, we attempt a regimentation of the informal distinction between analog and digital, which turns on the consideration of computing in a broader context. We argue that in doing so, one comes to see that (scientific) computation is better conceptualised as an epistemic process relative to agents, wherein representations play a key role. We distinguish between two, conceptually distinct, kinds of representation that, we argue, are both involved in each case of computing. Based on the semantic and syntactic properties of each of these representations, we put forward a new account of the distinction between analog and digital computing. We discuss how the developed account is able to explain various properties of different models of computation, and we conceptually compare analog computational modelling to analogue (scale) modelling. It is concluded that, contrary to the standard view, the two practices are orthogonal, differing both in their foundations and in the epistemic functions they fulfil.

12.
The development of evolutionary game theory (EGT) is closely linked with two interdisciplinary exchanges: the import of game theory into biology, and the import of biologists’ version of game theory into economics. This paper traces the history of these two import episodes. In each case the investigation covers what exactly was imported, what the motives for the import were, how the imported elements were put to use, and how they related to existing practices in the respective disciplines. Two conclusions emerged from this study. First, concepts derived from the unity of science discussion or the unification accounts of explanation are too strong and too narrow to be useful for analysing these interdisciplinary exchanges. Secondly, biology and economics—at least in relation to EGT—show significant differences in modelling practices: biologists seek to link EGT models to concrete empirical situations, whereas economists pursue conceptual exploration and possible explanation.

13.
Recent philosophy of science has seen a number of attempts to understand scientific models by looking to theories of fiction. In previous work, I have offered an account of models that draws on Kendall Walton’s ‘make-believe’ theory of art. According to this account, models function as ‘props’ in games of make-believe, like children’s dolls or toy trucks. In this paper, I assess the make-believe view through an empirical study of molecular models. I suggest that the view gains support when we look at the way that these models are used and the attitude that users take towards them. Users’ interaction with molecular models suggests that they do imagine the models to be molecules, in much the same way that children imagine a doll to be a baby. Furthermore, I argue, users of molecular models imagine themselves viewing and manipulating molecules, just as children playing with a doll might imagine themselves looking at a baby or feeding it. Recognising this ‘participation’ in modelling, I suggest, points towards a new account of how models are used to learn about the world, and helps us to understand the value that scientists sometimes place on three-dimensional, physical models over other forms of representation.

14.
I analyse the construction and transfer of models in complexity science. Thereby, I introduce a distinction between (i) vertical model construction, which is based on knowledge about a specific target system; (ii) horizontal model construction, which is based on the alteration of an existing model and therefore does not require any references to a specific target system; and (iii) the transfer of models, which consists of the assignment of an existing model to a new target system. I argue that, in complexity science, all three of those modelling activities take place. Furthermore, I show that these activities can be divided into two general categories: (i) the creation of a repository of models without specific target systems, which have been created by large-scale horizontal construction; and (ii) the transfer of these models to particular target systems in the natural sciences, which can also be followed by an extension of the transferred model through vertical construction of adaptations and additions to its dynamics. I then argue that this interplay of different modelling activities in complexity science provides a mechanism for the transfer of knowledge between different scientific fields. It is also crucial to the interdisciplinary nature of complexity science.

15.
“Colligation”, a term first introduced in philosophy of science by William Whewell (1840), today sparks a renewed interest beyond Whewell scholarship. In this paper, we argue that adopting the notion of colligation in current debates in philosophy of science can contribute to our understanding of scientific models. Specifically, studying colligation allows us to have a better grasp of how integrating diverse model components (empirical data, theory, useful idealization, visual and other representational resources) in a creative way may produce novel generalizations about the phenomenon investigated. Our argument is built both on the theoretical appraisal of Whewell’s philosophy of science and the historical rehabilitation of his scientific work on tides. Adopting a philosophy of science in practice perspective, we show how colligation emerged from Whewell’s empirical work on tides. The production of idealized maps (“cotidal maps”) illustrates the unifying and creative power of the activity of colligating in scientific practice. We show the importance of colligation in modelling practices more generally by looking at its epistemic role in the construction of the San Francisco Bay Model.

16.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker points out (2018), one of the remaining philosophically interesting questions is: “How can ensemble studies be designed so that they probe uncertainty in desired ways?” This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. In the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by “plugging and playing” with interchangeable modules and parameterisations. In the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations involve distinct domains from philosophy of science and social epistemology, they both could be used in a complementary manner in order to explore ways of designing better MMEs.
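A toy sketch of the first interpretation, in which a 'model' is a combination of interchangeable modules and parameterisations and an ensemble is generated by plugging and playing; the module names and numerical factors below are purely hypothetical stand-ins, not components of any real GCM.

```python
# Purely illustrative: enumerate all combinations of interchangeable
# modules/parameterisations to form a "plug and play" ensemble and report
# the spread of a stand-in output quantity. Everything here is assumed.
import itertools

convection_schemes = {"scheme_A": 1.00, "scheme_B": 1.08}          # hypothetical alternatives
cloud_params = {"low_entrainment": 0.95, "high_entrainment": 1.10}
ocean_modules = {"coarse_ocean": 0.98, "eddy_resolving": 1.05}

ensemble = []
for combo in itertools.product(convection_schemes.items(),
                               cloud_params.items(),
                               ocean_modules.items()):
    names = [name for name, _ in combo]
    sensitivity = 3.0                                # stand-in baseline value
    for _, factor in combo:                          # each module choice nudges the output
        sensitivity *= factor
    ensemble.append(("+".join(names), round(sensitivity, 2)))

for member, value in ensemble:
    print(f"{member:50s} -> {value}")
values = [v for _, v in ensemble]
print("ensemble spread:", min(values), "-", max(values))
```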

17.
The long-term safety of proposed repositories for nuclear waste is demonstrated by the use of chains of mathematical models describing the performance of the various barriers to radionuclide mobilisation, transport, release into the biosphere and eventual uptake by man. Microbial contamination of such repositories is to be expected, and hence the extent and consequences of microbial activity must also be quantified. This paper describes a modelling approach to determine the maximum microbial activity in the near field of a repository, which can thus be related to maximum possible degradation of performance. The approach is illustrated by application to a proposed Swiss repository for low- and intermediate-level waste (L/ILW), which is immobilised in concrete and emplaced in a marl host rock.

18.
This paper is concerned with modelling time series by single hidden layer feedforward neural network models. A coherent modelling strategy based on statistical inference is presented. Variable selection is carried out using simple existing techniques. The problem of selecting the number of hidden units is solved by sequentially applying Lagrange multiplier type tests, with the aim of avoiding the estimation of unidentified models. Misspecification tests are derived for evaluating an estimated neural network model. All the tests are entirely based on auxiliary regressions and are easily implemented. A small-sample simulation experiment is carried out to show how the proposed modelling strategy works and how the misspecification tests behave in small samples. Two applications to real time series, one univariate and the other multivariate, are considered as well. Sets of one-step-ahead forecasts are constructed and forecast accuracy is compared with that of other nonlinear models applied to the same series. Copyright © 2006 John Wiley & Sons, Ltd.
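A rough sketch of the modelling cycle on a synthetic nonlinear series; the paper's sequential Lagrange multiplier tests and auxiliary-regression misspecification tests are not reproduced, and the validation-error stopping rule below is a simplified stand-in (an assumption, not the authors' procedure).

```python
# Sketch: build lagged regressors for a single-hidden-layer feedforward
# network and add hidden units one at a time, stopping when an extra unit
# no longer improves validation error. Data and settings are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n, p = 600, 3                                   # series length, number of lags
e = rng.normal(scale=0.3, size=n)
y = np.zeros(n)
for t in range(2, n):                           # a nonlinear AR(2) data-generating process
    y[t] = 0.5 * y[t - 1] + 0.4 * np.sin(y[t - 2]) + e[t]

X = np.column_stack([y[p - i - 1:n - i - 1] for i in range(p)])   # lags y_{t-1}, ..., y_{t-p}
target = y[p:]
train, val = slice(0, 450), slice(450, len(target))

best_err, best_units = np.inf, 0
for units in range(1, 9):                       # grow the hidden layer unit by unit
    net = MLPRegressor(hidden_layer_sizes=(units,), activation="logistic",
                       max_iter=5000, random_state=0).fit(X[train], target[train])
    err = np.mean((net.predict(X[val]) - target[val]) ** 2)
    if err < best_err - 1e-4:
        best_err, best_units = err, units
    else:
        break                                   # stop once an extra unit no longer helps

print(f"selected hidden units: {best_units}, validation MSE: {best_err:.4f}")
```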

19.
Crustacean neuropeptides
Crustaceans have long been used for peptide research. For example, the process of neurosecretion was first formally demonstrated in the crustacean X-organ–sinus gland system, and the first fully characterized invertebrate neuropeptide was from a shrimp. Moreover, the crustacean stomatogastric and cardiac nervous systems have long served as models for understanding the general principles governing neural circuit functioning, including modulation by peptides. Here, we review the basic biology of crustacean neuropeptides, discuss methodologies currently driving their discovery, provide an overview of the known families, and summarize recent data on their control of physiology and behavior.
