---
license: bigcode-openrail-m
base_model: bigcode/santacoder
tags:
- generated_from_trainer
model-index:
- name: santacoder-finetuned-robot2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# santacoder-finetuned-robot2
This model is a fine-tuned version of [bigcode/santacoder](https://huggingface.co/bigcode/santacoder) on the dataset [datas.csv](./datas.csv) (generated by gpt-3.5-turbo from a few examples).
It achieves the following results on the evaluation set:
- Loss: 0.6283
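A minimal loading sketch with 🤗 Transformers is shown below. The repo id is a placeholder (substitute the actual Hub path of this checkpoint), and `trust_remote_code=True` is assumed because the base SantaCoder repository ships custom model code:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the real Hub path of this checkpoint.
checkpoint = "your-username/santacoder-finetuned-robot2"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(checkpoint, trust_remote_code=True)
```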
## Model description

More information needed

## Intended uses & limitations

This model makes it possible to control a robot from natural-language instructions.
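As an illustration, continuing from the loading sketch above, one might pass an instruction and decode the generated command. The prompt format here is purely an assumption; the real format depends on how the examples in datas.csv are laid out:

```python
# Hypothetical prompt format -- match whatever layout datas.csv actually uses.
prompt = "Instruction: move forward two meters\nCommand:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```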
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1
- training_steps: 20
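For reference, here is a rough sketch of how those values map onto `transformers.TrainingArguments`; the Adam betas and epsilon listed above are the `Trainer` defaults, and the `output_dir` name is an assumption:

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above (Transformers 4.33 names).
training_args = TrainingArguments(
    output_dir="santacoder-finetuned-robot2",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 1 x 4 = 4
    lr_scheduler_type="cosine",
    warmup_steps=1,
    max_steps=20,
)
```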
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.05  | 1    | 1.5944          |
| No log        | 0.1   | 2    | 2.2587          |
| No log        | 0.15  | 3    | 1.3593          |
| No log        | 0.2   | 4    | 1.6304          |
| No log        | 0.25  | 5    | 1.3971          |
| No log        | 0.3   | 6    | 1.2113          |
| No log        | 0.35  | 7    | 0.8876          |
| No log        | 0.4   | 8    | 0.9664          |
| No log        | 0.45  | 9    | 0.8842          |
| 1.4437        | 0.5   | 10   | 0.7931          |
| 1.4437        | 0.55  | 11   | 0.7410          |
| 1.4437        | 0.6   | 12   | 0.7020          |
| 1.4437        | 0.65  | 13   | 0.6665          |
| 1.4437        | 0.7   | 14   | 0.6705          |
| 1.4437        | 0.75  | 15   | 0.6589          |
| 1.4437        | 0.8   | 16   | 0.6395          |
| 1.4437        | 0.85  | 17   | 0.6358          |
| 1.4437        | 0.9   | 18   | 0.6324          |
| 1.4437        | 0.95  | 19   | 0.6286          |
| 0.5726        | 1.0   | 20   | 0.6283          |
### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3