---
language:
- ko
tags:
- conversational
---
### How to use

Now we are ready to try the model out as a chatting partner!

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch


tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")

# sampling tends to give livelier replies; set to False for greedy decoding
trained = True

# Let's chat for 5 lines
for step in range(5):
    message = input("MESSAGE: ")

    if message in ["", "q"]:  # stop if the user doesn't want to talk
        break

    # encode the new user input, add the eos_token and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(message + tokenizer.eos_token, return_tensors="pt")

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    if trained:
        chat_history_ids = model.generate(
            bot_input_ids,
            max_length=1000,
            pad_token_id=tokenizer.eos_token_id,
            no_repeat_ngram_size=3,
            do_sample=True,
            top_k=100,
            top_p=0.7,
            temperature=0.8,
        )
    else:
        chat_history_ids = model.generate(
            bot_input_ids,
            max_length=1000,
            pad_token_id=tokenizer.eos_token_id,
            no_repeat_ngram_size=3,
        )

    # pretty-print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
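The history handling in the loop can be sketched with toy tensors, with no model download needed. The token values below are made-up placeholders; the point is that `torch.cat` grows the prompt along the sequence dimension, and the slice `[:, bot_input_ids.shape[-1]:]` drops the prompt so only the newly generated reply is decoded:

```python
import torch

# Toy stand-ins for token-id tensors (shape: [batch, seq_len]);
# the values are arbitrary, not real vocabulary ids.
chat_history_ids = torch.tensor([[10, 11, 12]])  # previous turns
new_user_input_ids = torch.tensor([[20, 21]])    # current user message

# Same concatenation as in the chat loop above.
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
print(bot_input_ids.shape)  # torch.Size([1, 5])

# Pretend the model appended two response tokens after the prompt.
generated = torch.cat([bot_input_ids, torch.tensor([[30, 31]])], dim=-1)

# The slice used before decoding: skip the prompt, keep only the reply.
reply_ids = generated[:, bot_input_ids.shape[-1]:]
print(reply_ids.tolist())  # [[30, 31]]
```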