---

license: apache-2.0
datasets:
- wangrui6/Zhihu-KOL
language:
- zh
base_model:
- lucky2me/Dorami
---


# Dorami-Instruct

Dorami-Instruct is a supervised fine-tuned (SFT) model based on the pretrained model [lucky2me/Dorami](https://huggingface.co/lucky2me/Dorami).


## Model description

### Training data

- [wangrui6/Zhihu-KOL](https://huggingface.co/datasets/wangrui6/Zhihu-KOL)

### Training code

- [dorami](https://github.com/6zeus/dorami.git)

## How to use

### 1. Download the model from the Hugging Face Hub

```shell
git lfs install
git clone https://huggingface.co/lucky2me/Dorami-Instruct
```

### 2. Use the downloaded model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

model_path = "The path of the model downloaded above"

# Load the tokenizer and model weights from the local checkout.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Tokenize a prompt of your choice.
prompt = "fill in any prompt you like."
inputs = tokenizer(prompt, return_tensors="pt")

# Sample up to 64 new tokens, restricting sampling to the top-2 candidates.
generation_config = GenerationConfig(
    max_new_tokens=64,
    do_sample=True,
    top_k=2,
    eos_token_id=model.config.eos_token_id,
)
outputs = model.generate(**inputs, generation_config=generation_config)

# Decode the generated ids back to text, dropping special tokens.
decoded_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(decoded_text)
```