---
language:
- en
---
|
Source: CrisisMMD (Alam et al., 2017)

Data Type: Multimodal — each sample includes:

- tweet_text (social media text)
- tweet_image (the corresponding image from the tweet)

Total Samples Used: ~18,802 (from the dataset)
|
Class Labels:

- 0 → Non-informative
- 1 → Informative

Only samples whose text label and image label agree were kept; this yielded 12,743 tweets, which were then split into train and test .pt files.
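The agreement filter above can be sketched with pandas. The column names (`text_info`, `image_info`) and the toy rows are assumptions for illustration, standing in for the real CrisisMMD annotation file:

```python
import pandas as pd

# Hypothetical sketch of the label-agreement filter; column names
# text_info / image_info are assumed, not taken from the pipeline itself.
LABEL_MAP = {"not_informative": 0, "informative": 1}

def filter_agreeing(df: pd.DataFrame) -> pd.DataFrame:
    """Keep only rows where text and image labels agree, and
    encode the shared label as 0/1."""
    agree = df[df["text_info"] == df["image_info"]].copy()
    agree["label"] = agree["text_info"].map(LABEL_MAP)
    return agree

# Toy data standing in for the real annotation table
df = pd.DataFrame({
    "tweet_text": ["a", "b", "c"],
    "text_info":  ["informative", "informative", "not_informative"],
    "image_info": ["informative", "not_informative", "not_informative"],
})
kept = filter_agreeing(df)
print(len(kept))  # only the rows where both modalities agree survive
```

Applied to the full annotation table, the same filter is what reduces the ~18,802 samples to the retained subset.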
|
✅ Preprocessing Done

Text:

- Tokenized using the BERT tokenizer (bert-base-uncased)
- Extracted input_ids and attention_mask
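The text step can be sketched as follows. To keep the sketch offline it builds a BertTokenizer from a tiny hand-made vocab file; the actual pipeline would use `BertTokenizer.from_pretrained("bert-base-uncased")`, and the max length of 8 is illustrative only:

```python
import os
import tempfile
from transformers import BertTokenizer

# Toy vocab standing in for bert-base-uncased's real 30k-token vocab
vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "flood", "rescue", "photo"]
with tempfile.TemporaryDirectory() as d:
    vocab_path = os.path.join(d, "vocab.txt")
    with open(vocab_path, "w") as f:
        f.write("\n".join(vocab))
    tokenizer = BertTokenizer(vocab_file=vocab_path, do_lower_case=True)
    # Same call shape as the real pipeline: pad/truncate to a fixed length
    # and get input_ids + attention_mask as tensors
    enc = tokenizer("flood rescue photo", padding="max_length",
                    max_length=8, truncation=True, return_tensors="pt")
print(enc["input_ids"].shape, enc["attention_mask"].shape)
```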
|
Image:

- Processed using ResNet-50
- Extracted 2048-dimensional feature vectors
|
Label:

- Encoded to 0 or 1 as per class
|
The final preprocessed dataset was saved as .pt files:

- train_info.pt
- test_info.pt

Each contains: input_ids, attention_mask, image_vector, and label tensors.
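One way such a bundle can be written and read back is with `torch.save` / `torch.load`; the tensor shapes below are illustrative, but the key names follow the description above:

```python
import os
import tempfile
import torch

# Illustrative bundle; shapes (4 samples, 128 tokens) are assumptions
bundle = {
    "input_ids":      torch.zeros(4, 128, dtype=torch.long),
    "attention_mask": torch.ones(4, 128, dtype=torch.long),
    "image_vector":   torch.randn(4, 2048),
    "label":          torch.tensor([0, 1, 1, 0]),
}

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "train_info.pt")
    torch.save(bundle, path)
    loaded = torch.load(path)

print(sorted(loaded.keys()))
```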