---
license: apache-2.0
library_name: transformers
base_model: mistralai/Mistral-7B-v0.1
datasets:
- b-mc2/sql-create-context
model-index:
- name: mistral-7b-text-to-sql_full-model
  results: []
reference:
- https://www.philschmid.de/fine-tune-llms-in-2024-with-trl
language:
- en
pipeline_tag: text-generation
---
|
# mistral-7b-text-to-sql_full-model

- This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) dataset; a quick way to inspect the dataset is sketched below.
- These are the full model weights (the base model merged with the trained adapter weights); the code to use them for generation is given below.
- Primary reference: https://www.philschmid.de/fine-tune-llms-in-2024-with-trl
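
Each training example in the dataset pairs a natural-language question with a `CREATE TABLE` context and a target SQL query. A minimal way to look at one sample (field names per the dataset card: `question`, `context`, `answer`):

```python
from datasets import load_dataset

# Load the text-to-SQL training data and print one example.
# Each record holds a question, a CREATE TABLE schema, and the target SQL query.
ds = load_dataset("b-mc2/sql-create-context", split="train")
print(ds[0])  # {'question': ..., 'context': 'CREATE TABLE ...', 'answer': 'SELECT ...'}
```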

## Model description

- Model type: Language model
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: mistralai/Mistral-7B-v0.1

## How to get started with the model

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the merged model and tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("delayedkarma/mistral-7b-text-to-sql_full-model")
model = AutoModelForCausalLM.from_pretrained(
    "delayedkarma/mistral-7b-text-to-sql_full-model",
    torch_dtype=torch.bfloat16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

text = "How many matches scored 3–6, 7–6(5), 6–3?"
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
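
The plain question above gives the model no schema to ground its SQL in. The exact prompt format depends on how the training samples were templated; the sketch below assumes a schema-plus-question prompt in the spirit of the referenced tutorial, with a hypothetical `CREATE TABLE` statement standing in for a real schema:

```python
# A sketch of schema-aware prompting; the table definition and the prompt
# wording here are assumptions, not the exact template used in training.
schema = "CREATE TABLE table_name_61 (matches VARCHAR, score VARCHAR)"
question = "How many matches scored 3-6, 7-6(5), 6-3?"

prompt = (
    "You are a text-to-SQL model. Given the schema, answer the question with a SQL query.\n"
    f"SCHEMA:\n{schema}\n"
    f"QUESTION: {question}\n"
    "SQL:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
# Print only the newly generated tokens (the SQL), not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```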

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they might map onto `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
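
As a rough illustration only, the values above might map onto `transformers.TrainingArguments` as follows; `output_dir` is a placeholder, and the Adam betas and epsilon listed above are the optimizer defaults:

```python
from transformers import TrainingArguments

# A minimal sketch, assuming a standard transformers/TRL training setup.
args = TrainingArguments(
    output_dir="mistral-7b-text-to-sql_full-model",  # hypothetical path
    num_train_epochs=3,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,  # 3 * 2 = total train batch size of 6
    learning_rate=2e-4,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    seed=42,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```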

### Framework versions

- PEFT 0.7.2.dev0
- Transformers 4.36.2
- PyTorch 2.2.2
- Datasets 2.16.1
- Tokenizers 0.15.2