Update README.md

- .gitattributes +1 -0
- Irbis.jpg +3 -0
- README.md +33 -3
.gitattributes CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Irbis.jpg filter=lfs diff=lfs merge=lfs -text
Irbis.jpg ADDED (stored with Git LFS)

README.md CHANGED
@@ -1,3 +1,33 @@
----
-
-
---
language:
- ru
license: apache-2.0
pipeline_tag: text2text-generation
library_name: transformers
---

# Irbis

<img src="Irbis.jpg" width="700px">

## Description

Irbis is a generative language model based on the T5 architecture that generates SQL code from a user's question and a database schema. It is trained specifically on Russian-language tables and data, so it can understand both the question and the structure of the database in Russian, making it suited to SQL generation for database tasks in a Russian-language environment.

## 👨‍💻 Usage examples

```python
from transformers import pipeline
import torch

# max_length caps the length of the generated SQL sequence
pipe = pipeline("text2text-generation", model="MadShift/Irbis", max_length=250)

# Prompt format: "Схема: [db schema], Вопрос: [your question about the table]"
input_text = "Схема: [схема бд], Вопрос: [введите ваш вопрос по таблице здесь]"
input_ids = torch.tensor([pipe.tokenizer.encode(input_text)])
outputs = pipe.model.generate(
    input_ids,
    eos_token_id=pipe.tokenizer.eos_token_id,
    early_stopping=True,
)

# Skip the leading decoder start token before decoding
print(pipe.tokenizer.decode(outputs[0][1:]))
```