Funding: National Natural Science Foundation of China General Program (61772111); National Natural Science Foundation of China International Cooperation and Exchange Program (72010107002).

A survey of explainable artificial intelligence decision
KONG Xiangwei, TANG Xinze, WANG Ziming. A survey of explainable artificial intelligence decision[J]. Systems Engineering — Theory & Practice, 2021, 0(2): 524-536
Authors: KONG Xiangwei, TANG Xinze, WANG Ziming
Affiliation: School of Management, Zhejiang University, Hangzhou 310058, China
Abstract: The performance of decision making by artificial intelligence has exceeded the capability of human beings in many specific domains. Countries such as China and the USA have promulgated artificial intelligence development strategies and action plans to encourage the application of artificial intelligence. In the artificial intelligence decision-making process, however, inherent black-box algorithms and opaque system information lead to results that are correct but incomprehensible, which hinders the further development of artificial intelligence. For the commercialization and popularization of artificial intelligence, the need for explainability of intelligent decision-making is becoming increasingly urgent: black-box decision-making must be transformed into a transparent process in order to establish trust between humans and machines. From the perspectives of system application and decision beneficiaries, this paper surveys domestic and foreign research on four aspects: the basic concepts of explainable artificial intelligence decision-making, explanation methods for black-box models, applications of explanation methods in high-risk domains, and the evaluation of explanation methods. We also offer insights into future research and development trends.
Keywords: artificial intelligence  intelligent decision-making  explainable artificial intelligence  user trust  evaluation
This article is indexed by CNKI, VIP (Weipu), and other databases.