datatab committed · Commit 21712c7 · verified · Parent(s): 66b5c28

Update README.md

Files changed (1): README.md (+84 −0)
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/629dff8f6daf7662067b81d7/juVLei2jy7ZNvXQ1tHRPN.png)

## 💻 Usage

```shell
pip -q install git+https://github.com/huggingface/transformers
```

```python
from IPython.display import HTML, display

# Wrap long output lines in Colab/Jupyter cells
def set_css():
    display(HTML('''
    <style>
    pre {
        white-space: pre-wrap;
    }
    </style>
    '''))

get_ipython().events.register('pre_run_cell', set_css)
```

```python
import torch
from transformers import AutoTokenizer, MistralForCausalLM

# Run on GPU when available
device = "cuda" if torch.cuda.is_available() else "cpu"

model = MistralForCausalLM.from_pretrained(
    "datatab/YugoGPT-Florida",
    torch_dtype="auto"
).to(device)

tokenizer = AutoTokenizer.from_pretrained("datatab/YugoGPT-Florida")
```

```python
from typing import Optional
from transformers import TextStreamer


def generate(user_content: str, system_content: Optional[str] = None) -> str:
    # Default system prompt (Serbian): "Below is an instruction that defines a task,
    # along with input that provides additional context. Based on this information,
    # write a response that precisely and accurately fulfills the request."
    if not system_content:
        system_content = """Ispod se nalazi uputstvo koje definiše zadatak, zajedno sa unosom koji pruža dodatni kontekst.
Na osnovu ovih informacija, napišite odgovor koji precizno i tačno ispunjava zahtev.
"""

    messages = [
        {"role": "system", "content": system_content},
        {"role": "user", "content": user_content},
    ]

    tokenized_chat = tokenizer.apply_chat_template(
        messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    # Stream tokens to stdout as they are generated
    text_streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    output = model.generate(
        tokenized_chat,
        streamer=text_streamer,
        max_new_tokens=2048,
        temperature=0.1,
        repetition_penalty=1.11,
        top_p=0.92,
        top_k=1000,
        pad_token_id=tokenizer.pad_token_id,
        eos_token_id=tokenizer.eos_token_id,
        do_sample=True,
    )

    return tokenizer.decode(output[0], skip_special_tokens=True)
```

```python
generate("Nabroj mi sve planete suncevog sistema i reci mi koja je najveca planeta")
```
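The call above samples with `temperature=0.1`, `top_p=0.92`, and `top_k=1000`. To give an intuition for what the `top_p` (nucleus) setting does, here is a minimal pure-Python sketch of the filtering idea — illustrative only, not part of the model code or the `transformers` implementation:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of highest-probability token indices
    whose cumulative probability mass reaches p; sampling then
    happens only among these survivors."""
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for idx, prob in ranked:
        kept.append(idx)
        total += prob
        if total >= p:
            break
    return kept

# Example distribution over 5 tokens
probs = [0.50, 0.25, 0.15, 0.07, 0.03]
print(top_p_filter(probs, 0.92))  # → [0, 1, 2, 3] (cumulative 0.97 ≥ 0.92)
```

With a low `temperature` on top of this, generation stays close to the highest-probability continuations while still allowing mild variation.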
  ## 💡 Contributions Welcome!