---
pipeline_tag: conversational
language:
- en
library_name: transformers
---
This is a conversational chatbot built on top of DialoGPT-large, fine-tuned with Harry Potter scripts downloaded from [Kaggle here](https://www.kaggle.com/datasets/gulsahdemiryurek/harry-potter-dataset).
The script merges dialogue from 3 Harry Potter movies.
Thanks to Lynn Zhang for her [tutorial here](https://www.freecodecamp.org/news/discord-ai-chatbot/) that inspired me to build this chatbot.
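The exact preprocessing code is not included in this repo. Below is a rough sketch, in the spirit of that tutorial, of how the per-movie script CSVs from the Kaggle dataset could be merged and turned into the response/context pairs that DialoGPT fine-tuning expects. The file names, separator, and column name (`Sentence`) are assumptions about the dataset layout, not the script actually used to train this model.
```python
import pandas as pd

# Rough sketch only: merge the three per-movie script CSVs and pair every
# dialogue line with the previous CONTEXT_SIZE lines as its context.
# File names, the ";" separator and the "Sentence" column are assumptions.
CONTEXT_SIZE = 7

frames = [pd.read_csv(f"Harry Potter {i}.csv", sep=";") for i in (1, 2, 3)]
script = pd.concat(frames, ignore_index=True)

rows = []
for i in range(CONTEXT_SIZE, len(script)):
    # current line is the response, the CONTEXT_SIZE lines before it are the context
    row = [script["Sentence"][i]] + [script["Sentence"][i - j] for j in range(1, CONTEXT_SIZE + 1)]
    rows.append(row)

contexted = pd.DataFrame(rows, columns=["response"] + [f"context/{j}" for j in range(CONTEXT_SIZE)])
```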
## How to run the model
Due to limitations in Hugging Face's hosted compute, the deployed model may not run here, so I downloaded the model and ran it on my local HPC system.
Here is the script that I used for a 4-turn chat:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vuminhtue/DialoGPT-large-HarryPotter3")
model = AutoModelForCausalLM.from_pretrained("vuminhtue/DialoGPT-large-HarryPotter3")

# Let's chat for 4 turns
for step in range(4):
    # encode the new user input, add the eos_token and return a PyTorch tensor
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')

    # append the new user input tokens to the chat history
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids

    # generate a response while limiting the total sequence to 200 tokens
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=10,
        top_p=0.5,
        temperature=0.5
    )

    # pretty print the last output tokens from the bot
    print("HarryPotter_Bot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
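One caveat with the script above: `max_length=200` caps the combined chat history plus the new reply, so after several turns the accumulated history can use up the whole budget. A small helper like the one below (a hypothetical addition, not part of the original script) can keep only the most recent tokens of the history before each generation step:
```python
# Hypothetical helper, not in the original script: trim the running chat
# history so that repeated turns stay within the max_length budget above.
def truncate_history(history_ids, max_tokens=150):
    # history_ids has shape (1, sequence_length); keep only the last max_tokens
    return history_ids[:, -max_tokens:]

# usage inside the loop, replacing the torch.cat line:
# bot_input_ids = truncate_history(torch.cat([chat_history_ids, new_user_input_ids], dim=-1)) if step > 0 else new_user_input_ids
```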