MELD: Multimodal EmotionLines Dataset

A dataset for Emotion Recognition in Multiparty Conversations

The Multimodal EmotionLines Dataset (MELD) was created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances as EmotionLines, but adds the audio and visual modalities alongside the text. MELD comprises more than 1,400 dialogues and 13,000 utterances from the Friends TV series, with multiple speakers participating in each dialogue. Each utterance in a dialogue is labeled with one of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise, or Fear. MELD also provides a sentiment annotation (positive, negative, or neutral) for each utterance.
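The annotation scheme above can be sketched as a small validator. The label sets come from the description; the record layout itself is illustrative and not the dataset's actual file format.

```python
# Minimal sketch of MELD's per-utterance annotation scheme.
# Label sets are from the dataset description; the record layout
# is illustrative only.

EMOTIONS = {"anger", "disgust", "sadness", "joy", "neutral", "surprise", "fear"}
SENTIMENTS = {"positive", "negative", "neutral"}

def validate_utterance(record: dict) -> bool:
    """Check that an annotated utterance carries one of the seven
    emotion labels and one of the three sentiment labels."""
    return (record["emotion"].lower() in EMOTIONS
            and record["sentiment"].lower() in SENTIMENTS)

example = {
    "speaker": "Joey",
    "utterance": "How you doin'?",
    "emotion": "Joy",
    "sentiment": "positive",
}
print(validate_utterance(example))  # True
```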


How to download the data

To download the dataset, run either of the following commands:

wget https://web.eecs.umich.edu/~mihalcea/downloads/MELD.Raw.tar.gz
wget https://huggingface.co/datasets/declare-lab/MELD/resolve/main/MELD.Raw.tar.gz
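After extracting the archive, the annotation files can be grouped into dialogues. The column names below (Dialogue_ID, Speaker, Utterance, Emotion, Sentiment) are assumptions about the CSV layout and should be checked against the files actually shipped in MELD.Raw.tar.gz; a small inline CSV stands in for a real annotation file.

```python
import csv
import io
from collections import defaultdict

# Sketch: group annotated utterances into dialogues. Column names are
# assumed, not confirmed; the inline CSV is a stand-in for a real file.
sample_csv = """Dialogue_ID,Speaker,Utterance,Emotion,Sentiment
0,Chandler,I was the point person on my company's transition.,neutral,neutral
0,The Interviewer,You must have had your hands full.,neutral,neutral
1,Monica,Why do all your coffee mugs have numbers on the bottom?,surprise,negative
"""

def load_dialogues(fileobj):
    """Group rows by Dialogue_ID, preserving utterance order."""
    dialogues = defaultdict(list)
    for row in csv.DictReader(fileobj):
        dialogues[row["Dialogue_ID"]].append(
            (row["Speaker"], row["Utterance"], row["Emotion"], row["Sentiment"])
        )
    return dict(dialogues)

dialogues = load_dialogues(io.StringIO(sample_csv))
print(len(dialogues))        # 2 dialogues
print(dialogues["0"][0][0])  # Chandler
```

To use it on the real dataset, open one of the extracted CSV files and pass the file object to load_dialogues in place of the StringIO stand-in.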

Example dialogue

[Figure: an example dialogue from MELD]

Dataset statistics

[Figure: MELD dataset statistics]

Citation

Please cite the following papers if you use this dataset in your work.


S. Poria, D. Hazarika, N. Majumder, G. Naik, R. Mihalcea, E. Cambria. MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversation. (2018).

S.Y. Chen, C.C. Hsu, C.C. Kuo, L.W. Ku. EmotionLines: An Emotion Corpus of Multi-Party Conversations. arXiv preprint arXiv:1802.08379 (2018).