---
language:
- en
---

Source: CrisisMMD (Alam et al., 2017)

Data type: multimodal; each sample includes:

- `tweet_text` (social media text)
- `tweet_image` (the image attached to the tweet)

Total samples used: ~18,802 (from the dataset)

Class labels:

- `0` → Non-informative
- `1` → Informative

Only tweets whose text annotation and image annotation agree were kept (12,743 tweets), which were then split into train and test `.pt` files.
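
The agreement filter can be sketched in plain Python. The field names (`label_text`, `label_image`) and the toy rows below are illustrative assumptions, not the exact CrisisMMD schema:

```python
# Hypothetical sketch of the label-agreement filter: keep only tweets
# whose text annotation and image annotation agree. Field names are
# assumptions, not the exact CrisisMMD column names.
rows = [
    {"tweet_text": "flood in the city", "label_text": "informative",     "label_image": "informative"},
    {"tweet_text": "lol nice meme",     "label_text": "not_informative", "label_image": "informative"},
    {"tweet_text": "bridge collapsed",  "label_text": "informative",     "label_image": "informative"},
]

agreed = [r for r in rows if r["label_text"] == r["label_image"]]
print(len(agreed))  # 2 of the 3 toy rows survive the filter
```
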
✅ Preprocessing Done

Text:

- Tokenized using the BERT tokenizer (`bert-base-uncased`)
- Extracted `input_ids` and `attention_mask`
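
The text step can be sketched with the Hugging Face `transformers` tokenizer. The sequence length of 128 is an assumption; the README does not state the value used:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Tokenize one tweet into fixed-length input_ids and attention_mask.
# max_length=128 is an assumption, not a value stated in the README.
enc = tokenizer(
    "Severe flooding reported downtown after the storm.",
    padding="max_length",
    truncation=True,
    max_length=128,
)

print(len(enc["input_ids"]))       # 128 token ids (padded)
print(len(enc["attention_mask"]))  # 128 mask values (1 = real token, 0 = padding)
```
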
Image:

- Processed using ResNet-50
- Extracted 2048-dimensional feature vectors
Label:

- Encoded to `0` or `1` according to class
The final preprocessed dataset was saved as `.pt` files:

- `train_info.pt`
- `test_info.pt`

Each contains `input_ids`, `attention_mask`, `image_vector`, and `label` tensors.
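
A sketch of how such a file can be written and read back with `torch.save`/`torch.load`; the tensor shapes and dict layout below are assumptions based on the field names listed above, not a dump of the actual files:

```python
import torch

# Toy stand-in for the preprocessed dataset; shapes are assumptions
# (sequence length 128, 2048-d image vectors, 4 samples).
n = 4
data = {
    "input_ids":      torch.zeros(n, 128, dtype=torch.long),
    "attention_mask": torch.ones(n, 128, dtype=torch.long),
    "image_vector":   torch.randn(n, 2048),
    "label":          torch.tensor([0, 1, 1, 0]),
}
torch.save(data, "train_info.pt")

# Load it back; each key maps to a tensor with one row per sample.
loaded = torch.load("train_info.pt")
print(tuple(loaded["image_vector"].shape))  # (4, 2048)
```
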