Abstract: Most existing work on music-lyric emotion classification uses two polar emotion labels; multi-label emotion classification is rare, and classification performance is poor for lyrics whose emotional content is ambiguous. To address the limitations of multi-emotion labeling research and to improve classification accuracy, this paper introduces a multi-emotion classification method for music lyrics that combines Word2Vec word embeddings with a multi-kernel convolutional neural network classifier. The method first preprocesses the lyric text and performs visualization analysis. Next, Word2Vec word embeddings are used to extract local features of the lyrics, construct emotion feature vectors, mine the emotional information contained in the lyrics, and convert the lyrics into word vectors better suited as input to the classifier. Finally, a convolutional neural network is chosen as the classifier, and on this basis a new model with convolutional kernels of several heights is built to perform multi-emotion classification. Experimental results show that multi-emotion classification of music lyrics reaches an accuracy of 94.26%, an improvement of 6.86% over a traditional CNN.
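The pipeline the abstract describes, in which word vectors are convolved with kernels of several heights, max-pooled, and classified, can be sketched as follows. This is a minimal illustrative sketch only: the sequence length, embedding dimension, filter counts, kernel heights, and number of emotion labels below are hypothetical placeholders, not values from the paper, and random vectors stand in for trained Word2Vec embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all hyperparameters are assumed, not taken from the paper):
# a lyric of 10 tokens, each mapped to a 50-dim Word2Vec-style embedding.
seq_len, embed_dim = 10, 50
lyric = rng.standard_normal((seq_len, embed_dim))

num_filters = 8             # filters per kernel height
kernel_heights = (2, 3, 4)  # convolution windows spanning 2, 3, and 4 adjacent words
num_emotions = 4            # hypothetical multi-emotion label set

def conv_max_pool(x, h, W, b):
    """Convolve word windows of height h, apply ReLU, then max-over-time pool."""
    windows = np.stack([x[i:i + h].ravel() for i in range(x.shape[0] - h + 1)])
    feature_maps = np.maximum(windows @ W + b, 0.0)  # ReLU activation
    return feature_maps.max(axis=0)                  # shape: (num_filters,)

# One weight matrix per kernel height (randomly initialized for illustration).
params = {h: (rng.standard_normal((h * embed_dim, num_filters)) * 0.1,
              np.zeros(num_filters)) for h in kernel_heights}

# Concatenate pooled features from all kernel heights, then classify with softmax.
features = np.concatenate([conv_max_pool(lyric, h, *params[h])
                           for h in kernel_heights])
W_out = rng.standard_normal((features.size, num_emotions)) * 0.1
logits = features @ W_out
probs = np.exp(logits - logits.max())
probs /= probs.sum()

print(probs.shape)  # probability over the emotion labels
```

In a real implementation the convolution and output weights would be learned by backpropagation, and the embeddings would come from a Word2Vec model trained on the lyric corpus; only the forward pass and the multi-height kernel structure are shown here.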