---
language:
- en
---

Source: CrisisMMD (Alam et al., 2017)

Data Type: Multimodal; each sample includes:

- tweet_text (the social media text)
- tweet_image (the corresponding image from the tweet)

Total Samples Used: ~18,802 (from the original dataset)

Class Labels:

- 0 → Non-informative
- 1 → Informative

Only samples whose text label and image label agree were kept (12,743 tweets), which were then split into train and test sets and saved as .pt files.
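A minimal sketch of this filtering step, assuming the CrisisMMD annotations are read from a TSV file; the file name and the column names (`text_info`, `image_info`) are assumptions and may differ in the release you use:

```python
import pandas as pd

# Hypothetical path to a CrisisMMD informativeness annotation file; adjust as needed.
annotations_path = "crisismmd_informative.tsv"
df = pd.read_csv(annotations_path, sep="\t")

# Keep only samples where the text-level and image-level labels agree
# ("text_info" / "image_info" are assumed column names).
df = df[df["text_info"] == df["image_info"]].reset_index(drop=True)

# Encode the binary class label: 1 = Informative, 0 = Non-informative.
df["label"] = (df["text_info"] == "informative").astype(int)

print(f"{len(df)} tweets kept after label-agreement filtering")
```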

✅ Preprocessing Done

Text:

- Tokenized using the BERT tokenizer (bert-base-uncased)
- Extracted input_ids and attention_mask
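A sketch of the text preprocessing with the Hugging Face transformers tokenizer; the example tweets and the 128-token maximum length are assumptions, not stated above:

```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# A couple of example tweets stand in for the filtered tweet_text column.
texts = [
    "Flood waters are rising near the main bridge, roads closed",
    "Enjoying my coffee this morning",
]

# Tokenize all tweets in one batch; padding/truncation settings are assumed.
encoded = tokenizer(
    texts,
    padding="max_length",
    truncation=True,
    max_length=128,
    return_tensors="pt",
)

input_ids = encoded["input_ids"]            # shape: (num_tweets, 128)
attention_mask = encoded["attention_mask"]  # shape: (num_tweets, 128)
```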

Image:

- Processed using ResNet-50
- Extracted 2048-dimensional feature vectors
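A sketch of the image feature extraction with torchvision's pretrained ResNet-50, taking the 2048-dimensional pooled features by replacing the classification head with an identity layer; this is one common way to do it and the exact approach used originally is not stated:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-50 with the classification head removed, so the forward
# pass returns the 2048-dim pooled feature vector.
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
resnet.fc = torch.nn.Identity()
resnet.eval()

# Standard ImageNet preprocessing (assumed; matches the pretrained weights).
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def image_to_vector(path: str) -> torch.Tensor:
    """Return a 2048-dimensional feature vector for one tweet image."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        features = resnet(batch)           # shape: (1, 2048)
    return features.squeeze(0)
```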

Label:

- Encoded to 0 or 1 according to class

The final preprocessed dataset was saved as .pt files:

- train_info.pt
- test_info.pt

Each file contains the input_ids, attention_mask, image_vector, and label tensors.
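A sketch of how a saved split can be loaded and wrapped for training; it assumes each .pt file stores a dict keyed by the four tensor names listed above, which is an assumption about the container format:

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Load one split; the dict keys below are assumed from the tensor names above.
data = torch.load("train_info.pt")

train_ds = TensorDataset(
    data["input_ids"],       # (N, seq_len) BERT token ids
    data["attention_mask"],  # (N, seq_len) attention masks
    data["image_vector"],    # (N, 2048) ResNet-50 features
    data["label"],           # (N,) 0 = non-informative, 1 = informative
)
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)
```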