ahmed792002 committed
Commit 24ff46e · verified · 1 Parent(s): 00bb889

ahmed792002/Finetuning_Longformer_IMDb_movie_reviews_Classification
README.md CHANGED
@@ -4,9 +4,6 @@ license: apache-2.0
 base_model: allenai/longformer-base-4096
 tags:
 - generated_from_trainer
-metrics:
-- accuracy
-- f1
 model-index:
 - name: Finetuning_Longformer_IMDb_movie_reviews_Classification
   results: []
@@ -18,10 +15,6 @@ should probably proofread and complete it, then remove this comment. -->
 # Finetuning_Longformer_IMDb_movie_reviews_Classification
 
 This model is a fine-tuned version of [allenai/longformer-base-4096](https://huggingface.co/allenai/longformer-base-4096) on the None dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.4155
-- Accuracy: 0.9173
-- F1: 0.9173
 
 ## Model description
 
@@ -44,21 +37,17 @@ The following hyperparameters were used during training:
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - num_epochs: 2
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
-|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
-| 0.3594        | 1.0   | 9000  | 0.4529          | 0.897    | 0.8967 |
-| 0.2779        | 2.0   | 18000 | 0.4155          | 0.9173   | 0.9173 |
 
 ### Framework versions
 
-- Transformers 4.46.3
-- Pytorch 2.4.0
-- Datasets 3.1.0
-- Tokenizers 0.20.3
+- Transformers 4.52.4
+- Pytorch 2.6.0+cu124
+- Datasets 3.6.0
+- Tokenizers 0.21.2
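The diff above drops the accuracy and F1 figures (0.9173 each) from the card. As a rough sketch of how such metrics are typically produced in a `compute_metrics`-style callback for this binary positive/negative review task — `binary_accuracy_f1` is a hypothetical pure-Python stand-in, not a function from this repo or from `evaluate`/`sklearn`:

```python
def binary_accuracy_f1(preds, labels, positive=1):
    """Return (accuracy, F1 of the positive class) for parallel label lists."""
    assert len(preds) == len(labels) and labels, "need equal-length, non-empty inputs"
    correct = sum(p == l for p, l in zip(preds, labels))
    tp = sum(p == positive and l == positive for p, l in zip(preds, labels))
    fp = sum(p == positive and l != positive for p, l in zip(preds, labels))
    fn = sum(p != positive and l == positive for p, l in zip(preds, labels))
    accuracy = correct / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

# Toy example: 3 of 4 predictions correct.
acc, f1 = binary_accuracy_f1([1, 0, 1, 1], [1, 0, 0, 1])  # acc 0.75, F1 0.8
```

That accuracy and F1 come out nearly identical on this card is expected: IMDb is a balanced binary dataset, so the two metrics track each other closely.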
config.json CHANGED
@@ -1,5 +1,4 @@
 {
-  "_name_or_path": "allenai/longformer-base-4096",
   "architectures": [
     "LongformerForSequenceClassification"
   ],
@@ -38,7 +37,7 @@
   "problem_type": "single_label_classification",
   "sep_token_id": 2,
   "torch_dtype": "float32",
-  "transformers_version": "4.46.3",
+  "transformers_version": "4.52.4",
   "type_vocab_size": 1,
   "vocab_size": 50265
 }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:76ef2c7bbe6c531bf291b29900bad8c6f381251b5d69ed8bd23525f69e53c47d
+oid sha256:4a739201be26c8632d001b4e8df94f0ecb66c13ba7c764e4cb3d0861a654a5bf
 size 594678184
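Large files like `model.safetensors` are stored in the repo as Git LFS pointer files: three `key value` lines (`version`, `oid`, `size`), which is what the diffs above and below show changing. A minimal sketch of parsing such a pointer — `parse_lfs_pointer` is a hypothetical helper, not part of Git LFS or the Hub tooling:

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a dict of its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "<key> <value>"
        fields[key] = value
    return fields

pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:4a739201be26c8632d001b4e8df94f0ecb66c13ba7c764e4cb3d0861a654a5bf\n"
    "size 594678184\n"
)
info = parse_lfs_pointer(pointer)
# info["oid"] carries the SHA-256 of the real blob; info["size"] its byte count.
```

Note that the `size` is unchanged in this commit (594678184 bytes) while the `oid` differs: the retrained weights are a same-shaped tensor file with different contents.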
runs/Aug02_08-38-20_c8e36b5c51d3/events.out.tfevents.1754123902.c8e36b5c51d3.19.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab3f551e12d6abda84aa6007504dbc7d6c62dea586498cfa1e94cddcadcab921
+size 13160
tokenizer_config.json CHANGED
@@ -47,6 +47,7 @@
   "cls_token": "<s>",
   "eos_token": "</s>",
   "errors": "replace",
+  "extra_special_tokens": {},
   "mask_token": "<mask>",
   "model_max_length": 1000000000000000019884624838656,
   "pad_token": "<pad>",
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e665ffe94041fb8a003188f68c9688298940b0a0580817a7faafc79677c62b7d
+oid sha256:d3b43b7bf57ee20607618ed108702b24a18e52fbefc29a01c06fd295f666fae5
 size 5368