Similar literature
 Found 20 similar documents (search time: 93 ms)
1.
In previous work, I examined inferential methods employed in Probabilistic Weather Event Attribution studies (PEAs) and explored various ways they can be used to aid climate policy decisions and decision-making about climate justice issues. This paper evaluates the limitations of PEAs and considers how PEA researchers’ attributions of “liability” to specific countries for specific extreme weather events could be made more ethical. In sum, I show that it is routinely presupposed that PEA methods are not prone to inductive risks, and hence that PEA researchers bear no epistemic responsibilities for their attributions of liability. I argue that although PEAs are crucially useful for practical decision-making, the attributions of liability made by PEA researchers are in fact prone to inductive risks and are influenced by non-epistemic values that PEA researchers should make transparent in order to make such studies more ethical. Finally, I outline possible normative approaches for making sciences, including PEAs, more ethical, and discuss implications of my arguments for the ongoing debate about how PEAs should guide climate policy and relevant legal decisions.

2.
Projections of future climate change cannot rely on a single model. It has become common to rely on multiple simulations generated by Multi-Model Ensembles (MMEs), especially to quantify the uncertainty about what would constitute an adequate model structure. But, as Parker (2018) points out, one of the remaining philosophically interesting questions is: “How can ensemble studies be designed so that they probe uncertainty in desired ways?” This paper offers two interpretations of what General Circulation Models (GCMs) are and how MMEs made of GCMs should be designed. On the first interpretation, models are combinations of modules and parameterisations; an MME is obtained by “plugging and playing” with interchangeable modules and parameterisations. On the second interpretation, models are aggregations of expert judgements that result from a history of epistemic decisions made by scientists about the choice of representations; an MME is a sampling of expert judgements from modelling teams. We argue that, while the two interpretations draw on distinct domains, philosophy of science and social epistemology respectively, they can be used in a complementary manner to explore ways of designing better MMEs.

3.
A question at the intersection of scientific modeling and public choice is how to deal with uncertainty about model predictions. This “high-level” uncertainty is necessarily value-laden, and thus must be treated as irreducibly subjective. Nevertheless, formal methods of uncertainty analysis should still be employed for the purpose of clarifying policy debates. I argue that such debates are best informed by models which integrate objective features (which model the world) with subjective ones (modeling the policy-maker). This integrated subjectivism is illustrated with a case study from the literature on monetary policy. The paper concludes with some morals for the use of models in determining climate policy.

4.
Density forecasts for weather variables are useful for the many industries exposed to weather risk. Weather ensemble predictions are generated from atmospheric models and consist of multiple future scenarios for a weather variable. The distribution of the scenarios can be used as a density forecast, which is needed for pricing weather derivatives. We consider one- to 10-day-ahead density forecasts provided by temperature ensemble predictions. More specifically, we evaluate forecasts of the mean and quantiles of the density. The mean of the ensemble scenarios is the most accurate forecast for the mean of the density. We use quantile regression to debias the quantiles of the distribution of the ensemble scenarios. The resultant quantile forecasts compare favourably with those from a GARCH model. These results indicate the strong potential for the use of ensemble prediction in temperature density forecasting. Copyright © 2004 John Wiley & Sons, Ltd.
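The debiasing idea can be illustrated with an intercept-only version of quantile regression: the constant shift that minimises in-sample pinball (quantile) loss is simply the tau-quantile of the forecast residuals. A minimal numpy sketch, with invented toy data and bias (the paper fits a full quantile regression, not just an intercept):

```python
import numpy as np

def pinball_loss(y, q, tau):
    """Average pinball (quantile) loss of quantile forecasts q at level tau."""
    d = y - q
    return float(np.mean(np.maximum(tau * d, (tau - 1.0) * d)))

def debias_shift(y, raw_q, tau):
    """Constant shift minimising in-sample pinball loss: the tau-quantile of
    the forecast residuals (an intercept-only stand-in for quantile regression)."""
    return float(np.quantile(y - raw_q, tau))

rng = np.random.default_rng(0)
y = rng.normal(20.0, 2.0, 500)                # toy observed temperatures
# toy raw ensemble 10%-quantile forecasts, systematically one degree too low
raw_q10 = np.quantile(y, 0.1) - 1.0 + rng.normal(0.0, 0.1, 500)
corrected = raw_q10 + debias_shift(y, raw_q10, 0.1)
```

By construction, the corrected quantile forecasts cannot do worse in-sample than the raw ones, since a zero shift is always among the candidates.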

5.
Philosophers continue to debate both the actual and the ideal roles of values in science. Recently, Eric Winsberg has offered a novel, model-based challenge to those who argue that the internal workings of science can and should be kept free from the influence of social values. He contends that model-based assignments of probability to hypotheses about future climate change are unavoidably influenced by social values. I raise two objections to Winsberg’s argument, neither of which can wholly undermine its conclusion but each of which suggests that his argument exaggerates the influence of social values on estimates of uncertainty in climate prediction. I then show how a more traditional challenge to the value-free ideal seems tailor-made for the climate context.

6.
Financial distress prediction (FDP) has been widely considered as a promising approach to reducing financial losses. While financial information comprises the traditional factors involved in FDP, nonfinancial factors have also been examined in recent studies. In light of this, the purpose of this study is to explore the integrated factors and multiple models that can improve the predictive performance of FDP models. This study proposes an FDP framework to reveal the financial distress features of listed Chinese companies, incorporating financial, management, and textual factors, and evaluating the prediction performance of multiple models in different time spans. To develop this framework, this study employs the wrapper-based feature selection method to extract valuable features, and then constructs multiple single classifiers, ensemble classifiers, and deep learning models in order to predict financial distress. The experimental results indicate that management and textual factors can supplement traditional financial factors in FDP, especially textual ones. This study also discovers that integrated factors collected 4 years prior to the predicted benchmark year enable a more accurate prediction, and the ensemble classifiers and deep learning models developed can achieve satisfactory FDP performance. This study makes a novel contribution as it expands the predictive factors of financial distress and provides new findings that can have important implications for providing early warning signals of financial risk.

7.
Case-based reasoning (CBR) is considered a vital methodology in the current business forecasting area because of its simplicity, competitive performance with modern methods, and ease of pattern maintenance. Business failure prediction (BFP) is an effective tool that helps business people and entrepreneurs make more precise decisions in the current crisis. Using CBR as a basis for BFP can improve the tool's utility because CBR has the potential advantage in making predictions as well as suggestions compared with other methods. Recent studies indicate that an ensemble of various techniques has the possibility of improving the performance of a predictive model. This research focuses on an early investigation into predicting business failure using a CBR ensemble (CBRE) forecasting method constructed from the use of random similarity functions (RSF), dubbed RSF-based CBRE. Four issues are discussed: (i) the reasons for the use of RSF as the basis in the CBRE forecasting method for BFP; (ii) the means to construct the RSF-based CBRE forecasting method for BFP; (iii) the empirical test on sensitivity of the RSF-based CBRE to the number of member CBR predictors; and (iv) performance assessment of the ensemble forecasting method. Results of the RSF-based CBRE forecasting method were statistically validated by comparing them with those of multivariate discriminant analysis, logistic regression, single CBR, and a linear support vector machine. The results from Chinese hotel BFP indicate that the RSF-based CBRE forecasting method could significantly improve CBR's upper limit of predictive capability. Copyright © 2011 John Wiley & Sons, Ltd.

8.
Non-epistemic values pervade climate modelling, as is now well documented and widely discussed in the philosophy of climate science. Recently, Parker and Winsberg have drawn attention to what can be termed “epistemic inequality”: this is the risk that climate models might more accurately represent the future climates of the geographical regions prioritised by the values of the modellers. In this paper, we promote value management as a way of overcoming epistemic inequality. We argue that value management can be seriously considered as soon as the value-free ideal and inductive risk arguments commonly used to frame the discussions of value influence in climate science are replaced by alternative social accounts of objectivity. We consider objectivity in Longino's sense as well as strong objectivity in Harding's sense to be relevant options here, because they offer concrete proposals that can guide scientific practice in evaluating and designing so-called multi-model ensembles and, ultimately, improve their capacity to quantify and express uncertainty in climate projections.

9.
This study proposes Gaussian processes to forecast daily hotel occupancy at a city level. Unlike other studies in the tourism demand prediction literature, the hotel occupancy rate is predicted on a daily basis and 45 days ahead of time using online hotel room price data. A predictive framework is introduced that highlights feature extraction and selection of the independent variables. This approach shows that the dependence on internal hotel occupancy data can be removed by making use of a proxy measure for hotel occupancy rate at a city level. Six forecasting methods are investigated, including linear regression, autoregressive integrated moving average and recent machine learning methods. The results indicate that Gaussian processes offer the best tradeoff between accuracy and interpretation by providing prediction intervals in addition to point forecasts. It is shown how the proposed framework improves managerial decision making in tourism planning.
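What makes a Gaussian process attractive here is that its posterior delivers a predictive variance alongside the predictive mean, so prediction intervals come for free. A minimal numpy sketch with an RBF kernel on invented one-dimensional toy data (the study's actual feature set, kernel, and hyperparameter tuning are not reproduced):

```python
import numpy as np

def rbf_kernel(a, b, length=1.0, var=1.0):
    """Squared-exponential kernel between 1-D input arrays a and b."""
    return var * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    """Posterior mean and 95% interval of a zero-mean GP with observation noise."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf_kernel(x_test, x_train)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = rbf_kernel(x_test, x_test).diagonal() - np.sum(v ** 2, axis=0) + noise
    sd = np.sqrt(np.maximum(var, 0.0))
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

x = np.linspace(0.0, 6.0, 30)
y = np.sin(x)                         # toy stand-in for an occupancy signal
mean, lower, upper = gp_predict(x, y, x)
```

The interval endpoints are the point forecast plus/minus 1.96 posterior standard deviations, which is the "prediction intervals in addition to point forecasts" trade-off the abstract refers to.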

10.
The paper presents an identification procedure for a dynamic model of a hydrologic process. The process involves solute transport in streams subject to aquifer interaction and unsteady flows, and the intended use of the model is prediction. Detailed assumptions and results are provided to illustrate the level of comprehensive analysis required to assess model adequacy. The assessment procedure easily generalizes to any dynamic model which is linear-in-the-parameters. As a fundamental tool, instrumental variable algorithms can be adopted which have a number of attractive features. These algorithms make both model-order identification and specification among alternatives a straightforward task. They are known to be consistent estimators in the presence of a wide class of errors. It is seen that they can be made stable and robust in the presence of data outliers. Instrumental variable algorithms can also be used which are asymptotically efficient and provide a covariance matrix of parameter estimates. The paper shows how they aid the quantification of predictive uncertainty and investigates the validity of the underlying assumptions. Further, it illustrates that, when instrumental variable algorithms are used in recursive mode, they can be used not only as an additional tool to assess model inadequacy but also as an aid to model improvement.
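The core idea of an instrumental variable estimator can be sketched for a single endogenous regressor: replacing the regressor's own covariance with its covariance against an instrument removes the bias that least squares incurs when the regressor is correlated with the error. A toy numpy example with an invented data-generating process (the paper works with recursive IV algorithms for dynamic, linear-in-the-parameters models, which are richer than this):

```python
import numpy as np

def iv_estimate(y, x, z):
    """Simple IV estimator for y = b*x + e with instrument z:
    b_hat = cov(z, y) / cov(z, x)."""
    z_c = z - z.mean()
    return float((z_c @ (y - y.mean())) / (z_c @ (x - x.mean())))

rng = np.random.default_rng(1)
n = 20000
z = rng.normal(size=n)               # instrument: drives x, independent of e
e = rng.normal(size=n)               # structural error, correlated with x
x = z + e + rng.normal(size=n)       # endogenous regressor
y = 2.0 * x + e                      # true slope is 2; OLS is biased upward
b_ols = float((x @ y) / (x @ x))
b_iv = iv_estimate(y, x, z)
```

Because z is uncorrelated with e, the IV slope converges to the true value while OLS does not; this consistency under a wide class of errors is the property the abstract highlights.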

11.
A variety of recent studies provide a skeptical view on the predictability of stock returns. Empirical evidence shows that most prediction models suffer from a loss of information, model uncertainty, and structural instability by relying on low-dimensional information sets. In this study, we evaluate the predictive ability of various recently refined forecasting strategies, which handle these issues by incorporating information from many potential predictor variables simultaneously. We investigate whether forecasting strategies that (i) combine information and (ii) combine individual forecasts are useful to predict US stock returns, that is, the market excess return, size, value, and the momentum premium. Our results show that methods combining information have remarkable in-sample predictive ability. However, the out-of-sample performance suffers from highly volatile forecast errors. Forecast combinations face a better bias-efficiency trade-off, yielding a consistently superior forecast performance for the market excess return and the size premium even after the 1970s.
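Why combining forecasts can beat the individual forecasts follows from simple error averaging: averaging two unbiased forecasts with independent errors halves the error variance. A toy numpy illustration with synthetic data and equal weights (the study evaluates more refined combination schemes):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
target = rng.normal(size=n)                 # series to be forecast
f1 = target + rng.normal(0.0, 1.0, n)       # forecast 1: unbiased, noisy
f2 = target + rng.normal(0.0, 1.0, n)       # forecast 2: independent error
combo = 0.5 * (f1 + f2)                     # equal-weight combination

def mse(f):
    return float(np.mean((f - target) ** 2))
```

With independent unit-variance errors, the combined forecast's error variance is 0.5, half that of either component; correlated errors shrink, but rarely eliminate, this gain.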

12.
We compare the predictive ability of Bayesian methods which deal simultaneously with model uncertainty and correlated regressors in the framework of cross-country growth regressions. In particular, we assess methods with spike and slab priors combined with different prior specifications for the slope parameters in the slab. Our results indicate that moving away from Gaussian g-priors towards Bayesian ridge, LASSO or elastic net specifications has clear advantages for prediction when dealing with datasets of (potentially highly) correlated regressors, a pervasive characteristic of the data used hitherto in the econometric literature. Copyright © 2015 John Wiley & Sons, Ltd.
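The advantage of shrinkage with correlated regressors is visible already in the simplest ridge limit: the penalised solution has a smaller coefficient norm and is better conditioned than OLS when regressors are nearly collinear. A numpy sketch on invented toy data (the paper's spike-and-slab machinery is fully Bayesian and much richer than this):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge solution (X'X + lam*I)^{-1} X'y; lam = 0 recovers OLS."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(3)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.05 * rng.normal(size=n)      # nearly collinear second regressor
X = np.column_stack([x1, x2])
y = x1 + x2 + 0.1 * rng.normal(size=n)   # true coefficients (1, 1)
b_ols = ridge(X, y, 0.0)
b_ridge = ridge(X, y, 10.0)
```

Under near-collinearity the individual OLS coefficients are erratic even though their sum is well determined; the penalty pulls the solution toward smaller, stabler coefficients, which is the mechanism behind the predictive gains the abstract reports.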

13.
More and more ensemble models are used to forecast business failure. It is generally known that the performance of an ensemble relies heavily on the diversity between each base classifier. To achieve diversity, this study uses kernel-based fuzzy c-means (KFCM) to organize firm samples and designs a hierarchical selective ensemble model for business failure prediction (BFP). First, three KFCM methods, namely Gaussian KFCM (GFCM), polynomial KFCM (PFCM), and hyper-tangent KFCM (HFCM), are employed to partition the financial data set into three data sets. A neural network (NN) is then adopted as a base classifier for BFP, and the three sets derived from the three KFCM methods are used to build three classifier pools. Next, classifiers are fused by the two-layer hierarchical selective ensemble method. In the first layer, classifiers are ranked based on their prediction accuracy, and the stepwise forward selection method is employed to selectively integrate classifiers according to their accuracy. In the second layer, the three selective ensembles from the first layer are integrated again to reach the final verdict. This study employs financial data from Chinese listed companies to conduct empirical research, and makes a comparative analysis with other ensemble models and all component models. We conclude that the two-layer hierarchical selective ensemble is good at forecasting business failure.

14.
The problem of forecasting from vector autoregressive models has attracted considerable attention in the literature. The most popular non-Bayesian approaches use either asymptotic approximations or bootstrapping to evaluate the uncertainty associated with the forecast. The practice in the empirical literature has been to assess the uncertainty of multi-step forecasts by connecting the intervals constructed for individual forecast periods. This paper proposes a bootstrap method of constructing prediction bands for forecast paths. The bands are constructed from forecast paths obtained in bootstrap replications using an optimization procedure to find the envelope of the most concentrated paths. An extensive Monte Carlo study finds that the proposed method provides a more accurate assessment of predictive uncertainty from the vector autoregressive model than its competitors. Copyright © 2010 John Wiley & Sons, Ltd.
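The construction can be sketched in four steps: fit the model, resample its residuals, simulate forecast paths, and form a band from the simulated paths. The numpy sketch below does this for a univariate AR(1) and takes pointwise quantiles of the bootstrap paths, which is a deliberate simplification of the paper's envelope optimisation over the most concentrated paths (and of its vector setting):

```python
import numpy as np

def bootstrap_band(y, h=8, B=500, alpha=0.05, seed=0):
    """Residual-bootstrap forecast paths for a fitted AR(1); returns a
    pointwise (1 - alpha) lower/upper band over horizons 1..h."""
    rng = np.random.default_rng(seed)
    phi = float((y[:-1] @ y[1:]) / (y[:-1] @ y[:-1]))  # AR(1) fit, no intercept
    resid = y[1:] - phi * y[:-1]
    paths = np.empty((B, h))
    for b in range(B):
        last = y[-1]
        for t in range(h):
            last = phi * last + rng.choice(resid)      # resample a residual
            paths[b, t] = last
    return (np.quantile(paths, alpha / 2, axis=0),
            np.quantile(paths, 1 - alpha / 2, axis=0))

rng = np.random.default_rng(4)
y = np.empty(300)
y[0] = 0.0
for t in range(1, 300):                                # simulate an AR(1)
    y[t] = 0.7 * y[t - 1] + rng.normal()
lower, upper = bootstrap_band(y)
```

The band widens with the horizon, reflecting the accumulation of forecast uncertainty along the path; the paper's contribution is to calibrate joint (path-wise), rather than pointwise, coverage.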

15.
A large number of models have been developed in the literature to analyze and forecast changes in output dynamics. The objective of this paper was to compare the predictive ability of univariate and bivariate models, in terms of forecasting US gross national product (GNP) growth at different forecasting horizons, with the bivariate models containing information on a measure of economic uncertainty. Based on point and density forecast accuracy measures, as well as on equal predictive ability (EPA) and superior predictive ability (SPA) tests, we evaluate the relative forecasting performance of different model specifications over the quarterly period 1919:Q2 to 2014:Q4. We find that the economic policy uncertainty (EPU) index improves the accuracy of US GNP growth forecasts in bivariate models. We also find that the EPU exhibits similar forecasting ability to the term spread and outperforms other uncertainty measures such as the volatility index and geopolitical risk in predicting US recessions. While the Markov switching time-varying parameter vector autoregressive model yields the lowest values for the root mean squared error in most cases, we observe relatively low values for the log predictive density score when using the Bayesian vector autoregressive model with stochastic volatility. More importantly, our results highlight the importance of uncertainty in forecasting US GNP growth rates.

16.
Most long memory forecasting studies assume that long memory is generated by the fractional difference operator. We argue that the most cited theoretical arguments for the presence of long memory do not imply the fractional difference operator and assess the performance of the autoregressive fractionally integrated moving average (ARFIMA) model when forecasting series with long memory generated by nonfractional models. We find that ARFIMA models dominate in forecast performance regardless of the long memory generating mechanism and forecast horizon. Nonetheless, forecasting uncertainty at the shortest forecast horizon could make short memory models provide suitable forecast performance, particularly for smaller degrees of memory. Additionally, we analyze the forecasting performance of the heterogeneous autoregressive (HAR) model, which imposes restrictions on high-order AR models. We find that the structure imposed by the HAR model produces better short and medium horizon forecasts than unconstrained AR models of the same order. Our results have implications for, among others, climate econometrics and financial econometrics models dealing with long memory series at different forecast horizons.
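The fractional difference operator (1 - L)^d at the centre of this debate expands into weights w_0 = 1 and w_k = w_{k-1}(k - 1 - d)/k, whose slow hyperbolic decay is what generates long memory in ARFIMA models. A short numpy check of these standard properties (for 0 < d < 1, all weights after the first are negative and shrink monotonically in magnitude):

```python
import numpy as np

def frac_diff_weights(d, k_max):
    """Coefficients of the fractional difference operator (1 - L)^d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k."""
    w = np.empty(k_max + 1)
    w[0] = 1.0
    for k in range(1, k_max + 1):
        w[k] = w[k - 1] * (k - 1 - d) / k
    return w

w = frac_diff_weights(0.4, 100)   # d = 0.4: stationary long memory
```

Because the weights decay like k^(-1-d) rather than geometrically, truncating the expansion at any fixed lag loses non-negligible information, which is one reason short memory approximations degrade at longer horizons.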

17.
Google Trends data are increasingly employed in statistical investigations. However, care should be taken in handling this tool, especially when it is applied for quantitative prediction purposes. Being by design dependent on Internet users, estimators based on Google Trends data embody many sources of uncertainty and instability. These relate, for example, to technical factors (e.g., cross-regional disparities in the degree of computer literacy, time dependency of Internet users), psychological factors (e.g., emotionally driven spikes and other forms of data perturbation), and linguistic factors (e.g., noise generated by double-meaning words). Despite the stimulating literature available today on how to use Google Trends data as a forecasting tool, to the best of the author's knowledge no articles specifically devoted to the prediction of these data have been published to date. In this paper, a novel forecasting method is presented, based on a wavelet-type denoiser employed in conjunction with a forecasting model of the SARIMA (seasonal autoregressive integrated moving average) class. The wavelet filter is iteratively calibrated according to a bounded search algorithm until a minimum of a suitable loss function is reached. Finally, empirical evidence is presented to support the validity of the proposed method.
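A minimal stand-in for the denoising step is a one-level Haar transform with soft thresholding of the detail coefficients; the paper instead calibrates its wavelet filter iteratively against a loss function and pairs it with a SARIMA forecaster, neither of which is reproduced in this sketch:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform, soft-threshold the detail coefficients,
    inverse transform. x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)     # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)     # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2.0)
    out[1::2] = (a - d) / np.sqrt(2.0)
    return out

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 256)
clean = np.sin(2.0 * np.pi * t)                # toy smooth "search interest" signal
noisy = clean + 0.3 * rng.normal(size=256)     # spiky, noisy observations
den = haar_denoise(noisy, 0.5)
```

With the threshold at zero the transform is an exact reconstruction; a positive threshold suppresses the high-frequency perturbations the abstract attributes to user behaviour, at the cost of some signal detail.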

18.
19.
Accurate demand prediction is of great importance in the electricity supply industry. Electricity cannot be stored, and generating plant must be scheduled well in advance to meet future demand. Until now, where online information about external conditions is unavailable, time series methods applied to the historical demand series have been used for short-term demand prediction. These have drawbacks, both in their sensitivity to changing weather conditions and in their poor modelling of the daily/weekly business cycles. To overcome these problems a framework has been constructed whereby forecasts from different prediction methods and different forecasting origins can be selected and combined, solely on the basis of recent forecasting performance, with no a priori assumptions of demand behaviour. This added flexibility in univariate forecasting provides a significant improvement in prediction accuracy.
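Selecting among forecasters purely on recent performance can be sketched as: track each method's past errors and, at each forecasting origin, pick the method with the lowest recent mean absolute error. A toy numpy sketch with an invented error history (the framework described also combines forecasts, not just selects one):

```python
import numpy as np

def select_by_recent_mae(errors, window=7):
    """Given a (methods x time) array of past forecast errors, pick for the
    next period the method with the lowest mean absolute error over the
    last `window` observations."""
    recent = np.abs(errors[:, -window:])
    return int(np.argmin(np.mean(recent, axis=1)))

# toy history: method 0 was accurate early on, method 1 is accurate recently
errors = np.array([
    [0.1, 0.1, 0.1, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9, 0.9],
    [0.9, 0.9, 0.9, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2],
])
best = select_by_recent_mae(errors)   # -> 1
```

Because the selection uses only realised errors, it adapts to regime changes such as shifting weather sensitivity without any a priori model of demand behaviour.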

20.
In this study, new variants of genetic programming (GP), namely gene expression programming (GEP) and multi-expression programming (MEP), are utilized to build models for bankruptcy prediction. Generalized relationships are obtained to classify samples of 136 bankrupt and non-bankrupt Iranian corporations based on their financial ratios. An important contribution of this paper is to identify the effective predictive financial ratios on the basis of an extensive bankruptcy prediction literature review and upon a sequential feature selection analysis. The predictive performance of the GEP and MEP forecasting methods is compared with the performance of traditional statistical methods and a generalized regression neural network. The proposed GEP and MEP models are effectively capable of classifying bankrupt and non-bankrupt firms and outperform the models developed using other methods. Copyright © 2011 John Wiley & Sons, Ltd.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号