Update README.md

README.md
@@ -15,11 +15,11 @@ datasets:
@@ -32,7 +32,7 @@ Llama3.1-3B-SQL-Expert-1Epoch is a fine-tuned version of Meta’s Llama-3.1-3B,
@@ -81,8 +81,8 @@ from unsloth import FastLanguageModel
@@ -164,16 +164,16 @@ print(tokenizer.decode(outputs[0]))
- b-mc2/sql-create-context
---

## **Model Card for Llama3.2-3B-SQL-Expert-1Epoch**

### **Model Details**
#### **Model Description**
Llama3.2-3B-SQL-Expert-1Epoch is a fine-tuned version of Meta’s Llama-3.2-3B, optimized for generating SQL queries from natural-language questions. The model was trained with **Unsloth** for efficient fine-tuning and inference.

- **Developed by:** Azzedine (GitHub: Azzedde)
- **Funded by [optional]:** N/A
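The fine-tuning dataset listed in the front matter, b-mc2/sql-create-context, pairs a `CREATE TABLE` schema with a question and a target query. A minimal sketch of what one training record looks like (field names follow that dataset's card; the sample values here are illustrative, not quoted from the dataset):

```python
# Illustrative record in the b-mc2/sql-create-context format.
# Field names follow that dataset's card; the values are examples only.
sample = {
    "context": "CREATE TABLE head (age INTEGER)",
    "question": "How many heads of the departments are older than 56?",
    "answer": "SELECT COUNT(*) FROM head WHERE age > 56",
}

# The model is fine-tuned to map (context, question) -> answer.
print(sample["answer"])
```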

---

### **Model Sources**
- **Repository:** [Hugging Face](https://huggingface.co/Azzedde/llama3.2-3b-sql-expert-1-epoch)
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A

```python
from unsloth import FastLanguageModel
from transformers import AutoTokenizer

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("Azzedde/llama3.2-3b-sql-expert-1-epoch")
# FastLanguageModel.from_pretrained returns a (model, tokenizer) tuple
model, _ = FastLanguageModel.from_pretrained("Azzedde/llama3.2-3b-sql-expert-1-epoch")

# Example inference
sql_prompt = """Below is a SQL database schema and a question. Generate an SQL query to answer the question.
```
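The excerpt above cuts off inside `sql_prompt`, so here is a hedged sketch of how the full prompt might be assembled. Only the first instruction line is taken from the card; the section markers and the helper name `build_sql_prompt` are assumptions for illustration:

```python
# Hypothetical helper: the "### Schema / ### Question / ### SQL" layout is an
# assumption; only the opening instruction line comes from the model card.
def build_sql_prompt(schema: str, question: str) -> str:
    return (
        "Below is a SQL database schema and a question. "
        "Generate an SQL query to answer the question.\n\n"
        f"### Schema:\n{schema}\n\n"
        f"### Question:\n{question}\n\n"
        "### SQL:\n"
    )

prompt = build_sql_prompt(
    "CREATE TABLE head (age INTEGER)",
    "How many heads of the departments are older than 56?",
)
print(prompt)
```

The resulting string would be tokenized and passed to `model.generate`, with the answer read back via `tokenizer.decode(outputs[0])` as in the card's full example.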

### **Citation [optional]**
#### **BibTeX:**
```bibtex
@article{llama3.2-3B-SQL-Expert,
  author = {Azzedde},
  title  = {Llama3.2-3B-SQL-Expert: An SQL Query Generation Model},
  year   = {2025},
  url    = {https://huggingface.co/Azzedde/llama3.2-3b-sql-expert-1-epoch}
}
```

#### **APA:**
Azzedde. (2025). *Llama3.2-3B-SQL-Expert: An SQL Query Generation Model.* Retrieved from Hugging Face: https://huggingface.co/Azzedde/llama3.2-3b-sql-expert-1-epoch

---