Migrate model card from transformers-repo
Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/cedpsam/chatbot_fr/README.md
README.md (added)
---
language: fr
tags:
- conversational
widget:
- text: "bonjour."
- text: "mais encore"
- text: "est ce que l'argent achete le bonheur?"
---

## A DialoGPT model trained on French OpenSubtitles with a custom tokenizer

Trained with this notebook:
https://colab.research.google.com/drive/1pfCV3bngAmISNZVfDvBMyEhQKuYw37Rl#scrollTo=AyImj9qZYLRi&uniqifier=3

The model configuration comes from microsoft/DialoGPT-medium; a rough sketch of the tokenizer and model setup is shown below.
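
The following is only a minimal sketch of that setup, not the author's exact code; the linked Colab notebook is the authoritative recipe. The corpus file name, vocabulary size, and the `chatbot_fr_tokenizer` directory are assumptions made for illustration.

```python
import os

from tokenizers import ByteLevelBPETokenizer
from transformers import AutoConfig, AutoModelForCausalLM, GPT2Tokenizer

# 1. Train a custom byte-level BPE tokenizer on the French corpus
#    (hypothetical file name and vocabulary size).
os.makedirs("chatbot_fr_tokenizer", exist_ok=True)
bpe = ByteLevelBPETokenizer()
bpe.train(files=["opensubtitles_fr_2018.txt"], vocab_size=50257,
          special_tokens=["<|endoftext|>"])
bpe.save_model("chatbot_fr_tokenizer")
tokenizer = GPT2Tokenizer.from_pretrained("chatbot_fr_tokenizer")

# 2. Reuse the microsoft/DialoGPT-medium architecture, sized for the new
#    vocabulary, with freshly initialized weights.
config = AutoConfig.from_pretrained("microsoft/DialoGPT-medium")
config.vocab_size = len(tokenizer)
model = AutoModelForCausalLM.from_config(config)
```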
The dataset was generated from the 2018 OpenSubtitles release on OPUS, following these guidelines:
https://github.com/PolyAI-LDN/conversational-datasets/tree/master/opensubtitles
with this notebook:
https://colab.research.google.com/drive/1uyh3vJ9nEjqOHI68VD73qxt4olJzODxi#scrollTo=deaacv4XfLMk
A rough sketch of the pairing step is shown after the links.
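
Below is a rough sketch of the core idea, not the linked notebook: consecutive subtitle lines are paired into (context, response) examples, loosely in the spirit of the PolyAI-LDN conversational-datasets recipe. The corpus file name and the one-line-per-subtitle format are assumptions.

```python
def make_pairs(lines):
    # Pair every subtitle line with the line that follows it: the first
    # element of each pair is the context, the second the response.
    pairs = []
    for context, response in zip(lines, lines[1:]):
        context, response = context.strip(), response.strip()
        if context and response:
            pairs.append((context, response))
    return pairs

# Hypothetical extracted corpus: one subtitle line per line of text.
with open("opensubtitles_fr_2018.txt", encoding="utf-8") as f:
    training_pairs = make_pairs(f.readlines())
```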
### How to use

Now we are ready to try out the model as a chatting partner!

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("cedpsam/chatbot_fr")
model = AutoModelForCausalLM.from_pretrained("cedpsam/chatbot_fr")

for step in range(6):
    # encode the new user input, add the eos_token, and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total chat history to 1000 tokens
    chat_history_ids = model.generate(
        bot_input_ids, max_length=1000,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,  # needed for top_p/top_k sampling to take effect
        top_p=0.92, top_k=50
    )

    # pretty-print the last output tokens from the bot
    print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
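
Note that `generate` only applies `top_p` and `top_k` when sampling is enabled, which is why `do_sample=True` is passed above; with the default greedy decoding those arguments are ignored and the bot always returns its single most likely reply.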