yalali committed on
Commit
376218a
·
verified ·
1 Parent(s): b666515

Upload app.py

Files changed (1)
  1. app.py +109 -0
app.py ADDED
@@ -0,0 +1,109 @@
+ # -*- coding: utf-8 -*-
+ """app.ipynb
+
+ Automatically generated by Colab.
+
+ Original file is located at
+ https://colab.research.google.com/drive/14vp_MnMkFmCm_l4xkpQa0lIN07A4Z1RQ
+ """
+
+ """# Task
+ Develop a Gradio application with a chatbot tab that uses a conversational model from Hugging Face to interact with users.
+
+ ## Set up the environment
+
+ ### Subtask:
+ Install the necessary libraries, such as `transformers` and `gradio`.
+
+ **Reasoning**:
+ The subtask requires installing the `transformers` and `gradio` libraries. I will use pip to install both libraries in a single code block.
+ """
+
+ # "!pip" is IPython magic and is not valid in a plain .py script; install the
+ # dependencies from a shell (or list them in requirements.txt) instead:
+ # pip install transformers gradio
+
+ """## Load the model and tokenizer
+
+ ### Subtask:
+ Choose a suitable conversational model from Hugging Face and load it along with its tokenizer.
+
+ **Reasoning**:
+ Import the necessary classes and load the chosen conversational model and its tokenizer from Hugging Face.
+ """
+
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "microsoft/DialoGPT-medium"
+ model = AutoModelForCausalLM.from_pretrained(model_name)
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ """## Define the chat function
+
+ ### Subtask:
+ Create a Python function that takes user input, processes it using the loaded model, and returns the chatbot's response.
+
+ **Reasoning**:
+ Define a function to handle user input, tokenize it, generate a response using the loaded model, and decode the response.
+ """
+
+ def chat_with_bot(user_input, history):
+     # The history from the Gradio Chatbot is a list of [user_message, bot_message]
+     # pairs; reconstruct the full conversation from it.
+     full_conversation = ""
+     for user_msg, bot_msg in history:
+         full_conversation += user_msg + tokenizer.eos_token
+         full_conversation += bot_msg + tokenizer.eos_token
+
+     # Add the current user input to the conversation
+     full_conversation += user_input + tokenizer.eos_token
+
+     # Encode the full conversation
+     input_ids = tokenizer.encode(full_conversation, return_tensors="pt")
+
+     # Generate a response from the model; note that max_length counts the
+     # prompt tokens as well as the newly generated ones
+     output_ids = model.generate(input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
+
+     # Decode only the newly generated tokens (excluding the input part)
+     response = tokenizer.decode(output_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
+
+     return response
+
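The history-flattening logic above can be sanity-checked without downloading the model. A minimal sketch, using a literal `<|endoftext|>` string as a stand-in for `tokenizer.eos_token` (DialoGPT's actual EOS token):

```python
# Stand-in for tokenizer.eos_token; DialoGPT uses "<|endoftext|>".
EOS = "<|endoftext|>"

def build_prompt(user_input, history):
    """Flatten Gradio-style [user, bot] history pairs plus the new turn into one prompt."""
    full_conversation = ""
    for user_msg, bot_msg in history:
        full_conversation += user_msg + EOS
        full_conversation += bot_msg + EOS
    return full_conversation + user_input + EOS

prompt = build_prompt("How are you?", [["Hi", "Hello there!"]])
# → "Hi<|endoftext|>Hello there!<|endoftext|>How are you?<|endoftext|>"
```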
+ """## Build the Gradio interface
+
+ ### Subtask:
+ Use Gradio to create a conversational interface that connects the chat function to the user interface.
+
+ **Reasoning**:
+ Create a Gradio interface that connects the chat function to the user interface as described in the instructions.
+ """
+
+ import gradio as gr
+
+ iface = gr.ChatInterface(fn=chat_with_bot,
+                          title="Hugging Face Conversational Chatbot")
+
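One caveat: `chat_with_bot` unpacks `history` as `[user, bot]` pairs, but newer Gradio releases pass history as a list of `{"role", "content"}` message dicts (the `type="messages"` format). If that applies, the flattening would need a variant along these lines, again with `EOS` standing in for `tokenizer.eos_token`:

```python
EOS = "<|endoftext|>"  # stand-in for tokenizer.eos_token

def build_prompt_from_messages(user_input, history):
    """Flatten "messages"-style history (a list of {"role", "content"} dicts)
    plus the new user turn into a single EOS-joined prompt string."""
    full_conversation = "".join(msg["content"] + EOS for msg in history)
    return full_conversation + user_input + EOS

history = [{"role": "user", "content": "Hi"},
           {"role": "assistant", "content": "Hello there!"}]
prompt = build_prompt_from_messages("How are you?", history)
# → "Hi<|endoftext|>Hello there!<|endoftext|>How are you?<|endoftext|>"
```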
+
+ """**Reasoning**:
+ Launch the Gradio interface to make it available for interaction.
+ """
+
+ iface.launch()
+
+ """## Summary:
+
+ ### Data Analysis Key Findings
+
+ * The necessary libraries (`transformers` and `gradio`) were installed.
+ * A conversational model (`microsoft/DialoGPT-medium`) and its tokenizer were loaded from Hugging Face.
+ * A Python function `chat_with_bot` was created that rebuilds the conversation from Gradio's history, processes it with the loaded model, and returns a response.
+ * A Gradio `ChatInterface` was built and launched, connecting `chat_with_bot` to a chat-style user interface.
+
+ ### Insights or Next Steps
+
+ * The current implementation uses a fixed model. Future work could let users choose among different conversational models.
+ * The conversation history grows without bound; once the encoded prompt approaches DialoGPT's 1024-token context window, generation will degrade or fail. Truncating older turns would be a valuable improvement.
+ """