Data-efficient playlist captioning with musical and linguistic knowledge
Association for Computational Linguistics
Music streaming services feature billions of playlists created by users, professional editors or algorithms. In this content overload scenario, it is crucial to characterise playlists, so that music can be effectively organised and accessed. Playlist titles and descriptions are proposed in natural language either manually by music editors and users or automatically from pre-defined templates. However, the former is time-consuming while the latter is limited by the vocabulary and covered music themes. In this work, we propose PLAYNTELL, a data-efficient multi-modal encoder-decoder model for automatic playlist captioning. Compared to existing music captioning algorithms, PLAYNTELL also leverages linguistic and musical knowledge to generate correct and thematic captions. We benchmark PLAYNTELL on a new editorial playlists dataset collected from two major music streaming services. PLAYNTELL yields 2x-3x higher BLEU@4 and CIDEr than state-of-the-art captioning algorithms.
Playlists, Music streaming, Playlist captioning, Recommender systems, Captioning algorithms
Gabbolini, G., Hennequin, R. and Epure, E. (2022) ‘Data-Efficient Playlist Captioning With Musical and Linguistic Knowledge’, EMNLP 2022, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Abu Dhabi, UAE, 7-11 Dec., Association for Computational Linguistics, pp. 11401-11415.