kevinkyi committed
Commit 43bd77b · verified · 1 Parent(s): 8b48fb4

Add model card

Files changed (1):
  1. README.md +93 -66
README.md CHANGED
@@ -1,71 +1,98 @@
  ---
  library_name: transformers
- license: apache-2.0
- base_model: distilbert-base-uncased
  tags:
- - generated_from_trainer
- metrics:
- - accuracy
- - precision
- - recall
- - f1
- model-index:
- - name: Homework2_Finetuning
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Homework2_Finetuning
-
- This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0030
- - Accuracy: 1.0
- - Precision: 1.0
- - Recall: 1.0
- - F1: 1.0
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 3e-05
- - train_batch_size: 16
- - eval_batch_size: 16
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 5
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
- |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
- | 0.1158 | 1.0 | 55 | 0.0315 | 0.9909 | 0.9821 | 1.0 | 0.9910 |
- | 0.0043 | 2.0 | 110 | 0.0083 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.002 | 3.0 | 165 | 0.0017 | 1.0 | 1.0 | 1.0 | 1.0 |
- | 0.0014 | 4.0 | 220 | 0.0012 | 1.0 | 1.0 | 1.0 | 1.0 |
-
-
- ### Framework versions
-
- - Transformers 4.45.2
- - Pytorch 2.8.0+cu126
- - Datasets 2.21.0
- - Tokenizers 0.20.3
  ---
  library_name: transformers
+ pipeline_tag: text-classification
+ license: mit
  tags:
+ - distilbert
+ - sentiment
+ - football
+ - fine-tuning
+ model_name: DistilBERT Football Sentiment (Positive vs Negative)
+ language:
+ - en
  ---

+ # DistilBERT Football Sentiment (Positive vs Negative)
+
+ ## Purpose
+ Fine-tune a compact transformer (DistilBERT) to classify short football-related comments as **positive (1)** or **negative (0)**. This model supports a course assignment on text modeling and evaluation.
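+
+ A minimal usage sketch for the fine-tuned checkpoint; the `./football-sentiment` path is a placeholder for wherever the weights are saved, not a published repo id:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint; "./football-sentiment" is a placeholder path.
+ clf = pipeline("text-classification", model="./football-sentiment")
+
+ print(clf("What a comeback, that winning goal was unbelievable!"))
+ # e.g. [{'label': 'positive', 'score': ...}]
+ ```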
+
+ ## Dataset
+ - **Source:** `james-kramer/football_news` on Hugging Face.
+ - **Schema:** `text` (string), `label` (0/1).
+ - **Task:** Binary sentiment classification (`0 = negative`, `1 = positive`).
+ - **Splits:** Stratified **80/10/10** train/validation/test split created in the notebook.
+ - **Cleaning:** Strip whitespace from `text` and drop empty/NA rows (see the loading-and-splitting sketch after this list).
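+
+ A minimal, untested sketch of how the cleaning and stratified split described above could be produced with the `datasets` library; the `"train"` split name and the two-step `train_test_split` are assumptions, not the notebook's exact code:
+
+ ```python
+ from datasets import DatasetDict, load_dataset
+
+ SEED = 42  # assumed to match the training seed below
+
+ raw = load_dataset("james-kramer/football_news", split="train")  # split name assumed
+
+ # Cleaning: strip whitespace and drop empty/missing rows.
+ raw = raw.map(lambda ex: {"text": (ex["text"] or "").strip()})
+ raw = raw.filter(lambda ex: len(ex["text"]) > 0 and ex["label"] is not None)
+ raw = raw.class_encode_column("label")  # ClassLabel type enables stratified splitting
+
+ # Stratified 80/10/10 split: first 80/20, then split the held-out 20% half-and-half.
+ step1 = raw.train_test_split(test_size=0.2, stratify_by_column="label", seed=SEED)
+ step2 = step1["test"].train_test_split(test_size=0.5, stratify_by_column="label", seed=SEED)
+ dataset = DatasetDict(train=step1["train"], validation=step2["train"], test=step2["test"])
+ ```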
+
+ ## Preprocessing
+ - **Tokenizer:** `distilbert-base-uncased`, `max_length=256`, truncation enabled (see the tokenization sketch below).
+ - **Label mapping:** `{0: "negative", 1: "positive"}`.
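+
+ A hedged sketch of the tokenization and label mapping above; it reuses the `dataset` object from the splitting sketch:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
+
+ id2label = {0: "negative", 1: "positive"}
+ label2id = {v: k for k, v in id2label.items()}
+
+ def tokenize(batch):
+     # Truncate to 256 tokens; padding is deferred to the data collator at batch time.
+     return tokenizer(batch["text"], truncation=True, max_length=256)
+
+ tokenized = dataset.map(tokenize, batched=True)
+ ```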
+
+ ## Training Setup
+ - **Base model:** `distilbert-base-uncased`
+ - **Epochs:** 5
+ - **Batch size:** 16
+ - **Learning rate:** 3e-05
+ - **Weight decay:** 0.01
+ - **Warmup ratio:** 0.1
+ - **Early stopping:** patience = 2, monitoring validation F1 (see the `Trainer` sketch after this list)
+ - **Seed:** 42
+ - **Hardware:** Google Colab (GPU)
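+
+ A sketch of a `Trainer` configuration matching the hyperparameters above, assuming the `tokenizer`, `tokenized`, `id2label`, and `label2id` objects from the earlier sketches; the output directory and the use of scikit-learn for metrics are assumptions, not the notebook's exact code:
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import accuracy_score, precision_recall_fscore_support
+ from transformers import (AutoModelForSequenceClassification, DataCollatorWithPadding,
+                           EarlyStoppingCallback, Trainer, TrainingArguments)
+
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "distilbert-base-uncased", num_labels=2, id2label=id2label, label2id=label2id)
+
+ def compute_metrics(eval_pred):
+     logits, labels = eval_pred
+     preds = np.argmax(logits, axis=-1)
+     precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average="binary")
+     return {"accuracy": accuracy_score(labels, preds),
+             "precision": precision, "recall": recall, "f1": f1}
+
+ args = TrainingArguments(
+     output_dir="football-sentiment",   # placeholder path
+     num_train_epochs=5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=16,
+     learning_rate=3e-5,
+     weight_decay=0.01,
+     warmup_ratio=0.1,
+     eval_strategy="epoch",             # "evaluation_strategy" on transformers < 4.41
+     save_strategy="epoch",
+     load_best_model_at_end=True,       # required for early stopping on F1
+     metric_for_best_model="f1",
+     seed=42,
+ )
+
+ trainer = Trainer(
+     model=model,
+     args=args,
+     train_dataset=tokenized["train"],
+     eval_dataset=tokenized["validation"],
+     data_collator=DataCollatorWithPadding(tokenizer),
+     compute_metrics=compute_metrics,
+     callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
+ )
+ trainer.train()
+ ```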
+
+ ## Metrics (Held-out Test)
+ ```json
+ {
+   "eval_loss": 0.0029852271545678377,
+   "eval_accuracy": 1.0,
+   "eval_precision": 1.0,
+   "eval_recall": 1.0,
+   "eval_f1": 1.0,
+   "eval_runtime": 0.3123,
+   "eval_samples_per_second": 352.273,
+   "eval_steps_per_second": 22.417,
+   "epoch": 4.0
+ }
+ ```
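+
+ Assuming the `trainer` and `tokenized` objects from the sketches above, a dictionary like the one shown is produced by a single evaluation call on the test split:
+
+ ```python
+ # Evaluate the best checkpoint (restored via load_best_model_at_end) on the held-out test set.
+ test_metrics = trainer.evaluate(tokenized["test"])
+ print(test_metrics)  # eval_loss, eval_accuracy, eval_precision, eval_recall, eval_f1, ...
+ ```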
+
+ ## Confusion Matrix & Errors
+ The Colab notebook includes confusion matrices for the validation and test splits, plus a short error analysis with example misclassifications and hypotheses (e.g., injury news phrased neutrally but labeled negative).
+
+ |           | Pred 0 | Pred 1 |
+ |-----------|-------:|-------:|
+ | **True 0**|     55 |      0 |
+ | **True 1**|      0 |     55 |
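+
+ A short sketch of how such a matrix can be computed from `Trainer` predictions with scikit-learn, reusing the objects defined in the earlier sketches:
+
+ ```python
+ import numpy as np
+ from sklearn.metrics import confusion_matrix
+
+ pred_out = trainer.predict(tokenized["test"])
+ preds = np.argmax(pred_out.predictions, axis=-1)
+ cm = confusion_matrix(pred_out.label_ids, preds, labels=[0, 1])
+ print(cm)  # rows = true labels (0, 1), columns = predicted labels (0, 1)
+ ```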
+
+ ## Brief Error Analysis (Concrete Examples & Hypotheses)
+ Below are several misclassified examples with likely causes and fixes (a sketch for surfacing such examples follows this list):
+
+ 1. **Text:** "<paste misclassified sentence #1>"
+    - **True:** 0 (negative) • **Pred:** 1 (positive)
+    - **Why it failed (hypothesis):** "<e.g., neutral phrasing with positive words outweighed injury cue>"
+    - **Potential fix:** "<e.g., add more injury/neutral-negative examples; reweight class; augment with negation patterns>"
+
+ 2. **Text:** "<paste misclassified sentence #2>"
+    - **True:** 1 (positive) • **Pred:** 0 (negative)
+    - **Why it failed (hypothesis):** "<e.g., sarcasm or mixed sentiment>"
+    - **Potential fix:** "<e.g., include sarcastic examples; leverage larger model or polarity lexicon features>"
+
+ 3. **Text:** "<paste misclassified sentence #3>"
+    - **True:** <0/1> • **Pred:** <1/0>
+    - **Why it failed (hypothesis):** "<e.g., domain shift, team/league slang>"
+    - **Potential fix:** "<e.g., add domain-specific samples; modest LR warmup or longer training>"
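+
+ One hedged way to fill in the placeholders above is to list the test texts where the prediction disagrees with the label, reusing `pred_out`, `preds`, and `tokenized` from the confusion-matrix sketch:
+
+ ```python
+ import numpy as np
+
+ # Indices of misclassified test examples.
+ wrong = np.nonzero(preds != pred_out.label_ids)[0]
+ for i in wrong[:3]:
+     example = tokenized["test"][int(i)]
+     print(f"Text: {example['text']!r}")
+     print(f"True: {example['label']}  Pred: {int(preds[i])}\n")
+ ```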
+
+ ## Limitations & Ethics
+ - The dataset is small (the confusion matrix above covers only 110 test examples) and its labeling style can lead to unstable metrics; the perfect scores above should be read with caution, and neutral or ambiguous tone is hard to classify.
+ - Sports injury and team-management news may bias wording and labels.
+ - For coursework only; not for production or sensitive decisions.
+
+ ## Reproducibility
+ - Python: 3.12
+ - Transformers: >=4.41
+ - Datasets: >=2.19
+ - Seed: 42 (see the seeding sketch below)
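+
+ A one-liner that pins the seed before splitting and training; using `transformers.set_seed` here is an assumption about the notebook, though the helper itself is standard:
+
+ ```python
+ from transformers import set_seed
+
+ set_seed(42)  # seeds Python's random, NumPy, and PyTorch (CUDA included, when available)
+ ```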
+
+ ## License
+ - Code & weights: MIT (adjust per course guidelines)
+ - Dataset: see the original dataset's license/terms
+
+ ## AI Assistance Disclosure
+ - GenAI tools assisted with notebook structure and documentation; modeling choices and evaluation were implemented and verified by the author.