Arthur LAGACHERIE committed (verified)
Commit 0e06c01 · 1 Parent(s): 71abe95

Update README.md

Files changed (1): README.md (+5 −23)
README.md CHANGED
````diff
@@ -13,34 +13,16 @@ widget:
 license: other
 ---
 
-# Model Trained Using AutoTrain
-
-This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
 
 # Usage
 
+This model uses 4-bit quantization, so you need to install bitsandbytes to use it.
 ```python
-from transformers import AutoModelForCausalLM, AutoTokenizer
-
-model_path = "PATH_TO_THIS_REPO"
-
-tokenizer = AutoTokenizer.from_pretrained(model_path)
-model = AutoModelForCausalLM.from_pretrained(
-    model_path,
-    device_map="auto",
-    torch_dtype='auto'
-).eval()
-
-# Prompt content: "hi"
-messages = [
-    {"role": "user", "content": "hi"}
-]
-
-input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
-output_ids = model.generate(input_ids.to('cuda'))
-response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
-
-# Model response: "Hello! How can I assist you today?"
-print(response)
-```
+pip install bitsandbytes
+```
+
+# Model Trained Using AutoTrain
+
+This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
````
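The commit's new note mentions 4-bit quantization. As a rough illustration of the idea, here is a minimal absmax quantization sketch in plain Python; this is not bitsandbytes' actual implementation, and the helper names are hypothetical:

```python
# Minimal sketch of absmax 4-bit quantization: scale values so the largest
# magnitude maps to the edge of the signed 4-bit range, then round.
# Hypothetical helper names; not the bitsandbytes API.

def quantize_4bit(values):
    """Map floats to signed 4-bit integers in [-7, 7] using absmax scaling."""
    scale = max(abs(v) for v in values) / 7 or 1.0  # avoid divide-by-zero
    return [round(v / scale) for v in values], scale

def dequantize_4bit(qvalues, scale):
    """Recover approximate floats from the 4-bit integers and the scale."""
    return [q * scale for q in qvalues]

weights = [0.12, -0.5, 0.33, 0.07]
q, scale = quantize_4bit(weights)       # q = [2, -7, 5, 1]
approx = dequantize_4bit(q, scale)      # approximate reconstruction
```

Each stored value fits in 4 bits instead of 16 or 32, which is why the quantized model needs far less memory at the cost of some reconstruction error.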