
Text-to-Image Generation with an Attention Mechanism Combined with Semantic Segmentation Maps
Cite this article: LIANG Chengming, LI Yunhong, LI Limin, SU Xueping, ZHU Mianyun, ZHU Yaolin. Text-to-Image Generation with an Attention Mechanism Combined with Semantic Segmentation Maps[J]. Journal of Air Force Engineering University (Natural Science Edition), 2024, 25(4): 118-127
Authors: LIANG Chengming  LI Yunhong  LI Limin  SU Xueping  ZHU Mianyun  ZHU Yaolin
Affiliation: School of Electronics and Information, Xi'an Polytechnic University, Xi'an 710048, China
Funding: National Natural Science Foundation of China (62203344); Key Project of the Natural Science Basic Research Program of Shaanxi Province (2022JZ-35); Shaanxi Universities Youth Innovation Team Project
Abstract: To address the incomplete structure, unrealistic content, and poor quality of images produced by generative adversarial networks, a text-to-image generation model with an attention mechanism combined with semantic segmentation maps (SSA-GAN) is proposed. First, a simple and effective deep fusion module takes the global sentence vector as the conditioning input and fully fuses the text information while the image is being generated. Second, semantic segmentation images are incorporated and their edge-contour features are extracted, providing the model with additional generation and constraint conditions. Then, an attention mechanism supplies fine-grained word-level information to enrich the details of the generated images. Finally, a multimodal similarity computation model computes a fine-grained image-text matching loss to better train the generator. The model is tested and validated on the CUB-200 and Oxford-102 Flowers datasets. Compared with StackGAN, AttnGAN, DF-GAN, and RAT-GAN, the proposed SSA-GAN improves the IS metric by up to 13.7% and 43.2% on the two datasets, respectively, and reduces the FID metric by up to 34.7% and 74.9%, respectively, while also producing better visual results, demonstrating the effectiveness of the proposed method.
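The deep fusion module described in the abstract conditions the image features on the global sentence vector. The paper gives no code; the sketch below is only an illustration of the general idea, assuming a DF-GAN-style channel-wise affine modulation, where the hypothetical weight matrices `W_gamma` and `W_beta` predict a per-channel scale and shift from the sentence vector:

```python
import numpy as np

def affine_fusion(img_feat, sent_vec, W_gamma, W_beta):
    """Sketch of text-conditioned affine modulation (DF-GAN-style).

    img_feat: (C, H, W) image feature map
    sent_vec: (D,) global sentence embedding
    W_gamma, W_beta: (D, C) hypothetical learned projections
    Returns the feature map scaled and shifted per channel by the text.
    """
    gamma = sent_vec @ W_gamma  # (C,) per-channel scale from the text
    beta = sent_vec @ W_beta    # (C,) per-channel shift from the text
    return img_feat * gamma[:, None, None] + beta[:, None, None]
```

In a full model, several such affine blocks would be stacked inside each generator stage so the text condition is fused repeatedly at every resolution, rather than injected only once at the input.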

Keywords: text-to-image generation; semantic segmentation image; generative adversarial network; attention mechanism; affine transformation

Text-to-Image Generation with an Attention Mechanism Combined with Semantic Segmentation Maps
LIANG Chengming, LI Yunhong, LI Limin, SU Xueping, ZHU Mianyun, ZHU Yaolin. Text-to-Image Generation with an Attention Mechanism Combined with Semantic Segmentation Maps[J]. Journal of Air Force Engineering University (Natural Science Edition), 2024, 25(4): 118-127
Authors:LIANG Chengming  LI Yunhong  LI Limin  SU Xueping  ZHU Mianyun  ZHU Yaolin
Affiliation:School of Electronics and Information, Xi’an Polytechnic University, Xi’an 710048,China
Abstract: Aiming at the problems of incomplete structure, unrealistic content, and poor quality in images generated by generative adversarial networks, an attention-mechanism text-to-image generation model combined with semantic segmentation maps (SSA-GAN) is proposed. First, taking the global sentence vector as the conditioning input, a simple and effective deep fusion module fully fuses the text information while the image is being generated. Second, semantic segmentation images are incorporated and their edge-contour features are extracted to provide additional generation and constraint conditions for the model, and an attention mechanism provides fine-grained word-level information to enrich the details of the generated images. Finally, a multimodal similarity computation model computes a fine-grained image-text matching loss to better train the generator. The model is tested and validated on the CUB-200 and Oxford-102 Flowers datasets, and the results show that the proposed SSA-GAN improves the quality of the generated images: compared with StackGAN, AttnGAN, DF-GAN, and RAT-GAN, the IS metric increases by up to 13.7% and 43.2%, respectively, and the FID metric decreases by up to 34.7% and 74.9%, respectively.
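The fine-grained word-level attention mentioned above lets each image sub-region attend to the words of the caption. As a minimal, hypothetical sketch (not the authors' implementation; an AttnGAN-style dot-product attention over word embeddings is assumed):

```python
import numpy as np

def word_attention(region_feats, word_feats):
    """Sketch of word-level attention for text-to-image generation.

    region_feats: (N, D) features of N image sub-regions
    word_feats:   (T, D) embeddings of the T caption words
    Returns an (N, D) word-context vector for each region, i.e. a
    softmax-weighted combination of the word embeddings.
    """
    scores = region_feats @ word_feats.T              # (N, T) similarities
    scores -= scores.max(axis=1, keepdims=True)       # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)           # softmax over words
    return attn @ word_feats                          # (N, D) contexts
```

The resulting context vectors are convex combinations of the word embeddings, so each region's context stays within the span of the caption's words; in a full model these contexts would be concatenated with the region features before the next generator stage.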
Keywords: text-to-image generation; semantic segmentation image; attention mechanism; generative adversarial network; affine transformation