---
base_model: Samuael/amBART_261
tags:
- generated_from_trainer
model-index:
- name: amBART
results: []
---
# amBART
This model is a fine-tuned version of [Samuael/amBART_261](https://huggingface.co/Samuael/amBART_261) on an unknown dataset.
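
Since amBART follows the BART encoder-decoder architecture, the checkpoint can presumably be loaded with the standard `transformers` seq2seq classes. A minimal sketch follows; the repo id `Samuael/amBART` and the example inputs are assumptions, not confirmed by this card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id assumed from the model name above; adjust if the checkpoint lives elsewhere.
model_id = "Samuael/amBART"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# BART-style models generate text conditioned on the encoded input.
inputs = tokenizer("Example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```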
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a reproduction sketch in code follows the list):
- learning_rate: 0.02
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
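
These settings map onto `transformers.Seq2SeqTrainingArguments` roughly as sketched below. This is not the exact script the Trainer ran: the `output_dir` is an assumption, the dataset and data-collator wiring are omitted, and the reported Adam settings are expressed through the Trainer's optimizer arguments:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters reported above; output_dir is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="amBART",
    learning_rate=0.02,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=128,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    fp16=True,  # "Native AMP" mixed-precision training
)
```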
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2