---
license: cc-by-sa-4.0
language:
- en
pretty_name: Multimodal Intent Recognition (MIntRec)
---

## Dataset details
In real-world conversational interactions, we usually combine information from multiple modalities (e.g., text, video, audio) to analyze human intentions. Although intent analysis has been widely explored in the Natural Language Processing community, data for multimodal intent analysis remains scarce. We therefore provide a novel multimodal intent benchmark dataset, MIntRec, to boost research in this area. To the best of our knowledge, it is the first multimodal intent dataset built from real-world conversational scenarios.

## Dataset construction
### a. Data sources
We collect raw data from the TV series Superstore. We chose it because it contains (1) a wealth of characters (seven prominent and twenty recurring roles) with different identities in the superstore and (2) a wide range of stories set in various scenes (e.g., shopping mall, warehouse, office).
### b. Intent taxonomies
In this work, we design new hierarchical intent taxonomies for multimodal scenes. Inspired by the philosophy of human intention and by goal-oriented intentions in artificial intelligence research, we define two coarse-grained intent categories: "Express emotions or attitudes" and "Achieve goals".

We further divide these two coarse-grained classes into 20 fine-grained classes by analyzing a large number of video segments and summarizing high-frequency intent tags. They are as follows:

**Express emotions or attitudes:**
Complain, Praise, Apologize, Thank, Criticize, Care, Agree, Taunt, Flaunt, Oppose, Joke

**Achieve goals:**
Inform, Advise, Arrange, Introduce, Comfort, Leave, Prevent, Greet, Ask for help
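
For readers who want to work with the labels programmatically, the taxonomy can be written down as a simple mapping. This is an illustrative sketch only; the variable name `INTENT_TAXONOMY` and the dict layout are our own, not an official schema shipped with the dataset:

```python
# Hierarchical intent taxonomy as listed in the dataset card.
# The dict structure is illustrative, not an official dataset schema.
INTENT_TAXONOMY = {
    "Express emotions or attitudes": [
        "Complain", "Praise", "Apologize", "Thank", "Criticize", "Care",
        "Agree", "Taunt", "Flaunt", "Oppose", "Joke",
    ],
    "Achieve goals": [
        "Inform", "Advise", "Arrange", "Introduce", "Comfort",
        "Leave", "Prevent", "Greet", "Ask for help",
    ],
}

# Flatten to the 20 fine-grained classes.
fine_grained = [label for group in INTENT_TAXONOMY.values() for label in group]
assert len(fine_grained) == 20  # two coarse classes, twenty fine-grained classes
```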

### c. Multimodal intent annotation
Five annotators label the full dataset independently. They combine text, video, and audio information to determine the single intent label they are most confident in. A sample is kept as qualified only if at least three of the five annotators agree on one of the twenty fine-grained intent classes.
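
The voting rule above can be sketched as a small aggregation function. This is a minimal illustration of the described threshold (at least 3 of 5 matching votes), not the authors' actual annotation tooling; the function name `aggregate_votes` is our own:

```python
from collections import Counter

def aggregate_votes(votes, min_agreement=3):
    """Return the majority intent label if at least `min_agreement`
    annotators chose it, otherwise None (the sample is discarded).

    Illustrative sketch of the voting rule described in the card,
    not the authors' actual annotation pipeline.
    """
    label, count = Counter(votes).most_common(1)[0]
    return label if count >= min_agreement else None

# Example: 3 of 5 annotators agree, so the sample is kept.
print(aggregate_votes(["Complain", "Complain", "Taunt", "Complain", "Joke"]))
# Example: no label reaches 3 votes, so the sample is discarded (None).
print(aggregate_votes(["Complain", "Complain", "Taunt", "Taunt", "Joke"]))
```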

**Dataset distribution**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b646c00eda977859dcb1af/ZLA9PVr451khlfgRXlqPB.png)

**Dataset statistics**

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64b646c00eda977859dcb1af/3UdMTPjGnL-TEyyLaiyNP.png)

**Where to send questions or comments about the dataset:**
https://github.com/thuiar/MIntRec/issues

## Intended use
**Primary intended uses:**
The primary use of MIntRec is to combine information from multiple modalities (e.g., text, video, audio) to help analyze human intent.