omaressamrme committed on
Commit 6ae04ac · verified · 1 Parent(s): fca5bc4

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +58 -44
README.md CHANGED
@@ -1,58 +1,72 @@
  ---
- library_name: transformers
  license: apache-2.0
- base_model: distilbert-base-uncased
  tags:
- - generated_from_trainer
  model-index:
- - name: tuning
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # tuning
-
- This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - eval_loss: 0.7005
- - eval_model_preparation_time: 0.0016
- - eval_accuracy: 0.31
- - eval_f1: 0.0
- - eval_runtime: 27.04
- - eval_samples_per_second: 18.491
- - eval_steps_per_second: 0.592
- - step: 0
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 2e-05
- - train_batch_size: 16
- - eval_batch_size: 32
- - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - num_epochs: 1
-
- ### Framework versions
-
- - Transformers 4.57.0
- - Pytorch 2.8.0+cpu
- - Datasets 4.2.0
- - Tokenizers 0.22.1

  ---
+ language: ["en"]
  license: apache-2.0
  tags:
+ - text-classification
+ - sentiment-analysis
+ - imdb
+ - distilbert
+ pipeline_tag: text-classification
+ library_name: transformers
+ datasets:
+ - imdb
  model-index:
+ - name: omaressamrme/tuning
+   results:
+   - task:
+       type: text-classification
+       name: Sentiment Analysis
+     dataset:
+       name: IMDb
+       type: imdb
+       split: test
+     metrics:
+     - type: accuracy
+       value: 0.31
+     - type: f1
+       value: 0.0
+ widget:
+ - text: "I absolutely loved this movie!"
+ - text: "This was a terrible film. I want my time back."
  ---

+ # omaressamrme/tuning

+ Fine-tuned DistilBERT for sentiment analysis on the IMDb dataset.

+ ## Training setup
+ - Base model: distilbert-base-uncased
+ - Dataset: IMDb (train/test)
+ - Epochs: 1
+ - Learning rate: 2e-05
+ - Train batch size: 16
+ - Eval batch size: 32
+ - Max train samples: 1000
+ - Max eval samples: 500
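
+ A minimal sketch of this setup with the `transformers` `Trainer`. The actual training script is not part of this card, so the subset selection and argument names below are assumptions derived from the list above:
+ ```python
+ from datasets import load_dataset
+ from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
+                           Trainer, TrainingArguments)
+
+ tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
+ model = AutoModelForSequenceClassification.from_pretrained(
+     "distilbert-base-uncased", num_labels=2)
+
+ # Tokenize IMDb and take the small subsets listed above (shuffling is an assumption)
+ ds = load_dataset("imdb").map(lambda b: tok(b["text"], truncation=True), batched=True)
+ train = ds["train"].shuffle(seed=42).select(range(1000))
+ test = ds["test"].shuffle(seed=42).select(range(500))
+
+ args = TrainingArguments(
+     output_dir="tuning",
+     learning_rate=2e-5,
+     per_device_train_batch_size=16,
+     per_device_eval_batch_size=32,
+     num_train_epochs=1,
+     seed=42,
+ )
+ Trainer(model=model, args=args, train_dataset=train,
+         eval_dataset=test, processing_class=tok).train()
+ ```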

+ ## Evaluation (test split)
+ - Accuracy: 0.31
+ - F1 (binary): 0.0
+
+ A binary F1 of 0.0 means the model produced no correct positive predictions on this subset, consistent with training for only one epoch on 1,000 examples; treat these numbers as a baseline, not a usable model.
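
+ The metrics can be recomputed with the `evaluate` library; a self-contained sketch with toy class ids standing in for the model's actual outputs:
+ ```python
+ import evaluate
+
+ accuracy = evaluate.load("accuracy")
+ f1 = evaluate.load("f1")  # binary average by default
+
+ # Toy predicted vs. true class ids (0 = negative, 1 = positive)
+ preds, labels = [0, 0, 1, 0], [0, 1, 1, 1]
+ print(accuracy.compute(predictions=preds, references=labels))  # {'accuracy': 0.5}
+ print(f1.compute(predictions=preds, references=labels))        # {'f1': 0.5}
+ ```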

+ ## Usage
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint from the Hub
+ clf = pipeline("text-classification", model="omaressamrme/tuning")
+ # Returns a list of dicts like [{"label": ..., "score": ...}]
+ print(clf("I absolutely loved this movie!"))
+ ```
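
+ If you prefer the lower-level API over `pipeline`, the equivalent direct calls look like this (standard `transformers` usage, sketched here):
+ ```python
+ import torch
+ from transformers import AutoModelForSequenceClassification, AutoTokenizer
+
+ tok = AutoTokenizer.from_pretrained("omaressamrme/tuning")
+ model = AutoModelForSequenceClassification.from_pretrained("omaressamrme/tuning")
+
+ inputs = tok("I absolutely loved this movie!", return_tensors="pt")
+ with torch.no_grad():
+     probs = model(**inputs).logits.softmax(dim=-1)
+ print(model.config.id2label[probs.argmax(-1).item()], probs.max().item())
+ ```
+ Note that unless `id2label` was set when the checkpoint was saved, the labels default to the generic `LABEL_0`/`LABEL_1`.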

+ ## Batch inference
+ Pass a list of texts to classify them in one call; the optional `batch_size` argument controls how many texts go through the model per forward pass:
+ ```python
+ texts = ["Great film!", "Worst plot ever."]
+ preds = clf(texts, batch_size=16)  # one result dict per input text
+ ```

+ ## Intended uses & limitations
+ - Intended for educational/demo sentiment classification.
+ - Trained on a subset of IMDb for speed; performance is lower than with full training.
+ - May reflect dataset biases; do not use for critical decisions.

+ ## Reproducibility
+ See the training script in the associated GitHub repo.