Update README.md

### Training Data

- **Dataset**: `gretelai/synthetic_text_to_sql`, which consists of 100,000 synthetic examples of natural-language questions paired with corresponding SQL queries and explanations.

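As a minimal sketch of how a record from this dataset might be turned into a training prompt: the field names below (`sql_prompt`, `sql_context`, `sql`) follow the dataset's published schema, but the prompt template itself is a hypothetical illustration, not the exact format used for this model.

```python
def format_example(record: dict) -> str:
    """Build an instruction-style prompt from one synthetic_text_to_sql record.

    Assumed field names: sql_prompt (the question), sql_context (the schema),
    sql (the target query) -- check the dataset card before relying on them.
    """
    return (
        "### Question:\n" + record["sql_prompt"] + "\n\n"
        "### Schema:\n" + record["sql_context"] + "\n\n"
        "### SQL:\n" + record["sql"]
    )

# Hand-written sample record mirroring the assumed schema:
sample = {
    "sql_prompt": "List the names of all customers.",
    "sql_context": "CREATE TABLE customers (id INT, name TEXT);",
    "sql": "SELECT name FROM customers;",
}
print(format_example(sample))
```

In practice the full dataset would be streamed with `datasets.load_dataset("gretelai/synthetic_text_to_sql")` and this formatter mapped over each split.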
### Training Procedure

The model was fine-tuned using Unsloth and LoRA.

- LoRA rank: 8
- LoRA alpha: 16

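To make the rank and alpha values above concrete, here is a NumPy sketch of the LoRA weight update they parameterize — an illustration of the math, not the Unsloth internals: the adapter learns low-rank factors `B` and `A` so the effective weight becomes `W + (alpha / r) * B @ A`.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16  # r and alpha match the values above

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, init to zero

# With B initialised to zero, the adapter starts as an exact no-op:
delta = (alpha / r) * B @ A
assert np.allclose(W + delta, W)

# After training, B is nonzero and the update has rank at most r = 8,
# so only (d_out + d_in) * r parameters are trained instead of d_out * d_in:
B = rng.standard_normal((d_out, r))
delta = (alpha / r) * B @ A
print(np.linalg.matrix_rank(delta))  # at most 8
```

The `alpha / r` factor (here 16 / 8 = 2) rescales the update so that behaviour is less sensitive to the choice of rank.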
#### Training Hyperparameters
#### Speeds, Sizes, Times [optional]

- Training time: 8 hours
- Speed: 0.22

### Results