from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "microsoft/DialoGPT-medium"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
"""## Define the chat function
### Subtask:
Create a Python function that takes user input, processes it using the loaded model, and returns the chatbot's response.
**Reasoning**:
Define a function to handle user input, tokenize it, generate a response using the loaded model, and decode the response.
"""
def chat_with_bot(user_input, history):
    # The history from Gradio's ChatInterface is a list of
    # [user_message, bot_message] pairs; reconstruct the full conversation,
    # terminating each turn with the tokenizer's EOS token.
    full_conversation = ""
    for user_msg, bot_msg in history:
        full_conversation += user_msg + tokenizer.eos_token
        full_conversation += bot_msg + tokenizer.eos_token

    # Append the current user input to the conversation
    full_conversation += user_input + tokenizer.eos_token

    # Encode the full conversation as a tensor of token ids
    input_ids = tokenizer.encode(full_conversation, return_tensors="pt")

    # Generate a continuation; DialoGPT uses its EOS token for padding
    output_ids = model.generate(
        input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id
    )

    # Decode only the newly generated tokens (everything after the prompt)
    response = tokenizer.decode(
        output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True
    )
    return response
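As a quick sanity check, the history-flattening logic inside `chat_with_bot` can be exercised in isolation, without loading the model. The sketch below uses a stand-in EOS string matching DialoGPT's `<|endoftext|>` token; the helper name `flatten_history` is hypothetical:

```python
EOS = "<|endoftext|>"  # DialoGPT's EOS token string

def flatten_history(history, user_input, eos=EOS):
    # Mirror of the loop in chat_with_bot: each turn, user or bot,
    # is terminated by the EOS token.
    text = ""
    for user_msg, bot_msg in history:
        text += user_msg + eos
        text += bot_msg + eos
    return text + user_input + eos

print(flatten_history([("Hi", "Hello!")], "How are you?"))
# Hi<|endoftext|>Hello!<|endoftext|>How are you?<|endoftext|>
```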
"""## Build the gradio interface
### Subtask:
Use Gradio to create a conversational interface that connects the chat function to the user interface.
**Reasoning**:
Create a Gradio interface that connects the chat function to the user interface as described in the instructions.
"""
import gradio as gr
iface = gr.ChatInterface(fn=chat_with_bot,
title="Hugging Face Conversational Chatbot")
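The contract `ChatInterface` expects is worth spelling out: it calls `fn(message, history)`, where `history` holds the prior turns as `[user_message, bot_message]` pairs. A minimal stand-in function (no model needed, names are illustrative) makes the calling convention concrete:

```python
def echo_bot(message, history):
    # Gradio's ChatInterface calls fn(message, history); history is a
    # list of [user_message, bot_message] pairs from earlier turns.
    return f"You said: {message} (after {len(history)} prior turns)"

# Simulate what ChatInterface would pass on the second turn:
print(echo_bot("How are you?", [["Hi", "Hello!"]]))
# You said: How are you? (after 1 prior turns)
```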
"""**Reasoning**:
Launch the Gradio interface to make it available for interaction.
"""
iface.launch()
"""## Summary:
### Data Analysis Key Findings
* The necessary libraries (`transformers` and `gradio`) were installed.
* The conversational model `microsoft/DialoGPT-medium` and its tokenizer were loaded from the Hugging Face Hub.
* A Python function `chat_with_bot` was created that reconstructs the conversation history, generates a reply with the model, and returns the decoded response.
* A Gradio `ChatInterface` was built and launched, connecting `chat_with_bot` to a chat-style user interface.
### Insights or Next Steps
* The current implementation uses a fixed model. Future work could allow users to choose among different conversational models.
* The chat function reconstructs the conversation history that Gradio passes in, but long conversations will eventually exceed DialoGPT's 1024-token context window. Truncating or summarizing older turns would be a valuable improvement.
"""
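A minimal sketch of the context-window improvement suggested above: before calling `model.generate`, drop the oldest tokens so the encoded conversation fits DialoGPT's 1024-token limit (the model's `n_positions`). The helper name `truncate_ids` is hypothetical:

```python
MAX_CONTEXT = 1024  # DialoGPT's maximum position embeddings (n_positions)

def truncate_ids(ids, max_len=MAX_CONTEXT):
    # Keep only the most recent max_len tokens so generation stays within
    # the model's context window; the oldest turns are dropped first.
    return ids[-max_len:] if len(ids) > max_len else ids

# A 2000-token conversation is cut down to its last 1024 tokens:
print(len(truncate_ids(list(range(2000)))))
# 1024
```

In `chat_with_bot`, this would be applied to the token-id list before wrapping it in a tensor, e.g. by encoding without `return_tensors` first and converting after truncation.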