A Joint Fine-Tuning Model Combining Self-Supervised Contrastive Learning and Aspect-Based Sentiment Analysis
Cite this article: Di Guangyi, Chen Jianfei, Yang Shijun, Gao Jun, Wang Yaokun, Yu Bengong. A Joint Fine-Tuning Model Combining Self-Supervised Contrastive Learning and Aspect-Based Sentiment Analysis[J]. Science Technology and Engineering, 2024, 24(21): 9033-9042
Authors: Di Guangyi  Chen Jianfei  Yang Shijun  Gao Jun  Wang Yaokun  Yu Bengong
Affiliation: Guoneng Digital Intelligence Technology Development (Beijing) Co., Ltd.; School of Management, Hefei University of Technology
Funding: National Natural Science Foundation of China (Grant No. 71671057)
Abstract: Aspect-based sentiment analysis is a challenging fine-grained sentiment analysis task in natural language processing. Fine-tuning pre-trained language models has been widely applied to this task and has brought clear performance gains. However, most existing studies design relatively complex downstream structures, some of which even overlap with parts of the pre-trained model's hidden layers, thereby limiting overall model performance. Since contrastive learning helps improve the word-level and sentence-level representations of pre-trained language models, this paper designs a joint fine-tuning model that combines self-supervised contrastive learning with aspect-based sentiment analysis (SSCL-ABSA). The model couples the two learning tasks through a concise downstream structure, fine-tuning the pre-trained BERT model from different perspectives and effectively improving aspect-based sentiment analysis. Specifically, at the BERT encoding stage, the review text and the aspect term are concatenated into two segments and fed into the BERT encoder to obtain a representation for each token. Pooling operations are then applied to different token representations according to the needs of the downstream structure: all token representations are pooled for aspect-based sentiment analysis, while the aspect-term representations from the two segments are pooled for self-supervised contrastive learning. Finally, the two tasks are combined to fine-tune the BERT encoder in a joint-learning manner. Experiments on three public datasets show that SSCL-ABSA outperforms comparable methods, and t-SNE visualization illustrates that SSCL-ABSA effectively improves the entity representations of the BERT model.
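As a reading aid, the joint objective implied by the abstract can be sketched as a weighted sum of a cross-entropy term and an InfoNCE-style self-supervised contrastive term; the weight λ, the temperature τ, and the exact form of the contrastive loss are illustrative assumptions, not details taken from the paper.

$$\mathcal{L} \;=\; \mathcal{L}_{\mathrm{CE}} \;+\; \lambda\,\mathcal{L}_{\mathrm{SCL}},
\qquad
\mathcal{L}_{\mathrm{SCL}} \;=\; -\frac{1}{N}\sum_{i=1}^{N}
\log\frac{\exp\!\big(\mathrm{sim}(z_i^{(1)}, z_i^{(2)})/\tau\big)}
{\sum_{j=1}^{N}\exp\!\big(\mathrm{sim}(z_i^{(1)}, z_j^{(2)})/\tau\big)}$$

where $z_i^{(1)}$ and $z_i^{(2)}$ are the pooled aspect-term representations from the two segments of sample $i$, $\mathrm{sim}(\cdot,\cdot)$ is cosine similarity, and $\mathcal{L}_{\mathrm{CE}}$ is the cross-entropy sentiment loss computed from the pooled representation of all tokens.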

Keywords: aspect-based sentiment analysis; self-supervised contrastive learning; pre-trained language model; BERT encoder; joint fine-tuning
Received: 2023-07-29
Revised: 2024-05-07

A Self-Supervised Contrastive Learning Framework for the Aspect-Based Sentiment Analysis Task
Di Guangyi, Chen Jianfei, Yang Shijun, Gao Jun, Wang Yaokun, Yu Bengong. A Self-Supervised Contrastive Learning Framework for the Aspect-Based Sentiment Analysis Task[J]. Science Technology and Engineering, 2024, 24(21): 9033-9042
Authors: Di Guangyi  Chen Jianfei  Yang Shijun  Gao Jun  Wang Yaokun  Yu Bengong
Affiliation: Guoneng Digital Intelligence Technology Development (Beijing) Co., Ltd.; School of Management, Hefei University of Technology
Abstract: Fine-tuning pre-trained language models to perform aspect-based sentiment analysis has been widely adopted and has achieved significant improvements. However, most existing studies use complex downstream structures, which sometimes even overlap with hidden-layer structures of the pre-trained model and thus limit overall model performance. Since contrastive learning helps improve the word-level and sentence-level representations of pre-trained models, a joint fine-tuning framework (SSCL-ABSA) combining self-supervised contrastive learning and aspect-based sentiment analysis is designed. The framework combines the two learning tasks with a concise downstream structure to fine-tune the pre-trained BERT model from different angles, effectively improving aspect-based sentiment analysis. Specifically, the review text and the aspect term are concatenated as two segments and fed into the BERT encoder. After encoding, pooling operations are applied to different word representations according to the requirements of the downstream structure: pooling all word representations serves aspect-based sentiment analysis, while pooling the aspect-term representations of the two segments serves self-supervised contrastive learning. Finally, the two tasks are combined to fine-tune the BERT encoder in a joint-learning manner. Experimental evaluation on three publicly available datasets shows that the SSCL-ABSA method outperforms other comparable methods, and t-SNE visualization shows that SSCL-ABSA effectively improves the entity representations of the BERT model.
Keywords: aspect-based sentiment analysis; self-supervised contrastive learning; pre-trained language model; BERT encoder; joint fine-tuning
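The abstract describes a simple pipeline: concatenate the review text and the aspect term as two BERT segments, pool all token representations for sentiment classification, pool the aspect-term representations from the two segments for self-supervised contrastive learning, and optimize both objectives jointly. The PyTorch sketch below illustrates one plausible reading of that pipeline; it is not the authors' released code, and the class name, the two aspect masks, the InfoNCE-style contrastive term, and the values of alpha and temperature are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel


class SSCLABSAModel(nn.Module):
    """Hypothetical sketch: one BERT encoder feeding both a sentiment head
    and a contrastive head, as described in the abstract."""

    def __init__(self, pretrained="bert-base-uncased", num_polarities=3):
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_polarities)

    @staticmethod
    def _masked_mean(hidden, mask):
        # Mean-pool hidden states over the positions selected by `mask`.
        m = mask.unsqueeze(-1).float()
        return (hidden * m).sum(dim=1) / m.sum(dim=1).clamp(min=1e-9)

    def forward(self, input_ids, attention_mask, token_type_ids,
                aspect_mask_text, aspect_mask_segment):
        # Encode the concatenated "review text [SEP] aspect term" input.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids).last_hidden_state  # (B, L, H)

        # Sentiment branch: pool all non-padding tokens.
        logits = self.classifier(self._masked_mean(hidden, attention_mask))

        # Contrastive branch: pool the aspect-term tokens from each segment,
        # giving two views of the same aspect as a positive pair.
        z_text = self._masked_mean(hidden, aspect_mask_text)
        z_segment = self._masked_mean(hidden, aspect_mask_segment)
        return logits, z_text, z_segment


def joint_loss(logits, labels, z_text, z_segment, temperature=0.1, alpha=0.5):
    """Cross-entropy for sentiment plus an InfoNCE-style contrastive term that
    pulls the two views of each aspect together and pushes apart the aspects
    of other in-batch samples (alpha and temperature are assumed values)."""
    ce = F.cross_entropy(logits, labels)
    z1 = F.normalize(z_text, dim=-1)
    z2 = F.normalize(z_segment, dim=-1)
    sim = z1 @ z2.t() / temperature                       # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return ce + alpha * F.cross_entropy(sim, targets)
```

In this reading, aspect_mask_text marks the aspect term's occurrence inside the review segment and aspect_mask_segment marks it in the appended aspect segment, so each sample yields two pooled views of the same aspect as a positive pair, with the aspects of other samples in the batch serving as negatives.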