MadShift committed
Commit 74c9bf5 · verified · 1 Parent(s): 4dcbbe5

Update README.md

Files changed (3):
  1. .gitattributes +1 -0
  2. Irbis.jpg +3 -0
  3. README.md +33 -3
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Irbis.jpg filter=lfs diff=lfs merge=lfs -text
Irbis.jpg ADDED

Git LFS Details

  • SHA256: ef68e413e275a9f14800e1aa1b899fd2de1882d68e9c9b8796f62faeb91e613f
  • Pointer size: 132 Bytes
  • Size of remote file: 1.05 MB
README.md CHANGED
@@ -1,3 +1,33 @@
----
-license: apache-2.0
----
+---
+language:
+- ru
+license: apache-2.0
+pipeline_tag: text2text-generation
+library_name: transformers
+---
+# Irbis
+
+<img src="Irbis.jpg" width="700px">
+
+## Description
+
+Irbis is a generative language model based on the T5 architecture that generates SQL code from a user's question and a database schema. It is trained specifically on Russian-language tables and data, so it can understand both the question and the database structure in Russian, making it well suited to SQL generation for database tasks in a Russian-language environment.
+
+## 👨‍💻 Examples of usage
+
+```python
+from transformers import pipeline
+import torch
+
+pipe = pipeline("text2text-generation", model="MadShift/Irbis", max_length=250)
+
+# Prompt format: "Схема: [db schema], Вопрос: [your question about the table]"
+input_text = "Схема: [схема бд], Вопрос: [введите ваш вопрос по таблице здесь]"
+input_ids = torch.tensor([pipe.tokenizer.encode(input_text)])
+outputs = pipe.model.generate(input_ids, eos_token_id=pipe.tokenizer.eos_token_id, early_stopping=True)
+
+# Skip the leading special token before decoding
+print(pipe.tokenizer.decode(outputs[0][1:]))
+```
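To make the placeholder prompt concrete, here is a minimal sketch of how the schema and question might be filled in. The `employees` table and its columns are invented for illustration; only the "Схема: …, Вопрос: …" layout comes from the model card above.

```python
# Sketch: composing an Irbis prompt from a hypothetical schema and question.
# The table and column names are invented; the prompt layout follows the
# placeholder format shown in the usage example.
schema = "employees(id, имя, отдел, зарплата)"  # employees(id, name, department, salary)
question = "Какая средняя зарплата в отделе продаж?"  # average salary in the sales department?

prompt = f"Схема: {schema}, Вопрос: {question}"
print(prompt)
```

The resulting string can then be passed to the pipeline's tokenizer exactly as `input_text` is in the example above.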