LSTM Based Question Answering for Large Scale Knowledge Base
Citation: ZHOU Botong, SUN Chengjie, LIN Lei, LIU Bingquan. LSTM Based Question Answering for Large Scale Knowledge Base[J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2018, 54(2): 286-292.
Authors: ZHOU Botong  SUN Chengjie  LIN Lei  LIU Bingquan
Affiliation: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001
Funding: Special Fund of the National High-Tech Research and Development Program of China (863 Program); National Natural Science Foundation of China
Abstract: To address the characteristics of question answering over a large-scale knowledge base, a question answering system with three main steps is built: named entity recognition in the question, mapping from the question to a property, and answer selection. An alias dictionary combined with an LSTM language model is used for named entity recognition; a bidirectional LSTM combined with two different attention mechanisms is used for property mapping; finally, the results of the first two steps are combined for entity disambiguation and answer selection. On the dataset provided by the NLPCC-ICCPOL 2016 KBQA task, the system achieves an average F1 of 0.8106, close to the best result of the evaluation.

Keywords: knowledge base  question answering  named entity recognition  attention mechanism
Received: 2017-06-04
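
The entity-recognition step described in the abstract first matches the question against an alias dictionary and then ranks the candidates with an LSTM language model. Below is a minimal sketch of the dictionary-lookup part only; the brute-force substring enumeration, the longest-first heuristic, and the toy dictionary are illustrative assumptions, not the authors' implementation.

```python
def candidate_mentions(question, alias_dict):
    """Illustrative sketch: enumerate every substring of the question that is
    an entry in the alias dictionary and return the matches, longest first.
    alias_dict is assumed to map an alias string to a list of KB entity names."""
    matches = []
    for start in range(len(question)):
        for end in range(start + 1, len(question) + 1):
            span = question[start:end]
            if span in alias_dict:
                matches.append((span, alias_dict[span]))
    # Longer mentions are usually more specific; an LSTM language model could
    # then rescore the remaining candidates (that ranking step is not shown).
    matches.sort(key=lambda m: len(m[0]), reverse=True)
    return matches

# Hypothetical usage with a toy dictionary
alias_dict = {"哈尔滨工业大学": ["哈尔滨工业大学"], "哈尔滨": ["哈尔滨市"]}
print(candidate_mentions("哈尔滨工业大学在哪里?", alias_dict))
```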

LSTM Based Question Answering for Large Scale Knowledge Base
ZHOU Botong, SUN Chengjie, LIN Lei, LIU Bingquan. LSTM Based Question Answering for Large Scale Knowledge Base[J]. Acta Scientiarum Naturalium Universitatis Pekinensis, 2018, 54(2): 286-292.
Authors:ZHOU Botong  SUN Chengjie  LIN Lei  LIU Bingquan
Institution:School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001
Abstract: To address the specific characteristics of KBQA, a question answering system is built on a large-scale Chinese knowledge base. The system consists of three main steps: recognition of the named entity in the question, mapping from the question to a property in the KB, and answer selection. An alias dictionary and an LSTM language model are used to recognize the named entity contained in the question, and two different attention mechanisms are combined with a bidirectional LSTM for question-property mapping. Finally, the results of the first two steps are exploited for entity disambiguation and answer selection. The average F1 value of the proposed system on the NLPCC-ICCPOL 2016 KBQA task is 0.8106, which is competitive with the best result.
Keywords:knowledge base  question answering  named entity recognition  attention mechanism  
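
The question-to-property mapping step described in the abstract relies on a bidirectional LSTM with attention. The sketch below shows one plausible form of such a matcher in PyTorch; the layer sizes, the shared encoder, the additive self-attention pooling, and the cosine-similarity scoring are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveBiLSTMMatcher(nn.Module):
    """Illustrative sketch: encode a question and a candidate KB property
    with a shared bidirectional LSTM, pool each with attention, and score
    the pair by cosine similarity (hyperparameters are assumptions)."""

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)

    def encode(self, token_ids):
        # token_ids: (batch, seq_len) -> attention-pooled vector (batch, 2*hidden)
        states, _ = self.bilstm(self.embed(token_ids))             # (B, T, 2H)
        weights = F.softmax(self.attn(states).squeeze(-1), dim=1)  # (B, T)
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)  # (B, 2H)

    def forward(self, question_ids, property_ids):
        # Higher score means the candidate property better matches the question.
        return F.cosine_similarity(self.encode(question_ids),
                                    self.encode(property_ids), dim=-1)
```

In use, every property attached to a candidate entity would be scored against the question, and the highest-scoring entity-property pair would be passed on to the disambiguation and answer-selection step.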