Ksgk-fy committed · Commit ed372bf · verified · 1 Parent(s): c4d11a7

Update README.md

Files changed (1):
  1. README.md +4 -5
README.md CHANGED
@@ -14,17 +14,16 @@ LoReFT to be tested nextly.
 
 <!-- Provide a longer summary of what this model is. -->
 
-'''yaml
-
+```yaml
 - **Developed by:** Fangyuan Yu
 - **Funded by Hardeep:** [Temus]
 - **Language(s) (NLP):** Thai, English
 - **License:** MIT
 - **Finetuned from model [optional]:** [Typhoon-7B]
-'''
+```
 
 ## Uses
-'''python
+```python
 from peft import PeftModel, PeftConfig
 from transformers import AutoModelForCausalLM
 
@@ -37,7 +36,7 @@ messages = [{"role": "user", "content": "สวัสดีครับ/ค่
 prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
 outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
 print(outputs[0]["generated_text"])
-'''
+```
 
 
 <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
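The generation call in the diff above passes `do_sample=True, top_k=50, top_p=0.95`. As a reminder of what those parameters do, here is a toy, library-free sketch of combined top-k / nucleus (top-p) filtering over a small vocabulary — not the `transformers` implementation, just an illustration of the filtering rule (truncate to the k most probable tokens, then to the smallest prefix whose cumulative probability reaches p):

```python
import math

def filter_logits(logits, top_k=50, top_p=0.95):
    """Toy illustration of top_k / top_p filtering before sampling:
    keep the top_k highest-probability tokens, then keep the smallest
    prefix of those whose cumulative probability reaches top_p.
    Returns the surviving token ids, most probable first."""
    # Softmax over the raw logits (max-subtraction for numerical stability).
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    # Sort token ids by probability, descending, and apply the top-k cut.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept = order[:top_k]
    # Nucleus cut: smallest prefix with cumulative probability >= top_p.
    cum, nucleus = 0.0, []
    for i in kept:
        nucleus.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return nucleus

# Toy 5-token vocabulary; token 0 dominates the distribution.
logits = [5.0, 2.0, 1.0, 0.5, -1.0]
print(filter_logits(logits, top_k=3, top_p=0.9))
```

With a sharply peaked distribution like this one, the nucleus cut dominates and only the top token survives; a flatter distribution (or a larger `top_p`) would let more tokens through, which is why the README's `temperature=0.7` (sharpening) and `top_p=0.95` interact.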