pinned: true
license: apache-2.0
short_description: deepseek chat - huggingface api
---

# 🐳 Chat with DeepSeek 🐳

This is a Streamlit web application that lets users chat with the **DeepSeek-R1-Distill-Qwen-32B** model hosted on Hugging Face, using the Hugging Face Inference API to generate responses to user queries.

---

## 🧑‍💻 Author

Sachin Tiwari

---

## 🚀 Features

- **Interactive Chat Interface**: type a question and receive a response from the DeepSeek model.
- **Thinking Process Visualization**: if the model's response contains `<think>` tags, the app shows the thinking process in an expandable section.
- **Secure API Key Input**: enter your Hugging Face API key in a masked password field to authenticate with the model.
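
For example, a raw model response containing a thinking section looks roughly like this (the text below is illustrative, not actual model output):

```text
<think>
The user wants a short definition, so I should keep the answer brief...
</think>
A large language model is a neural network trained on large amounts of text...
```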

---

## 📋 Requirements

To run this application, you need the following Python packages:

- `streamlit`: for building the web interface.
- `huggingface_hub`: for calling the Hugging Face Inference API.

Install them with:

```bash
pip install streamlit huggingface_hub
```
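
If you deploy the app as a Hugging Face Space, dependencies are installed from a `requirements.txt` file at the repository root; a minimal one for this app would be:

```text
streamlit
huggingface_hub
```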

---

## 🛠️ How to Use

1. **Clone the repository**:
   ```bash
   git clone https://github.com/aitiwari/Deepseek_Chat_HF.git
   cd Deepseek_Chat_HF
   ```

2. **Run the application**:
   ```bash
   streamlit run app.py
   ```

3. **Enter your Hugging Face API key**:
   - Open the app in your browser.
   - In the sidebar, enter your Hugging Face API key in the input box.

4. **Chat with DeepSeek**:
   - Once the API key is entered, type your questions in the chat input box to start chatting with the DeepSeek model.

---

## 🧑‍💻 Code Overview

The application is built from the following components:

- **Streamlit UI**: a sidebar for the API key input and a main area for chat interactions.
- **Hugging Face Inference API**: the `InferenceClient` from the `huggingface_hub` library sends chat requests to the DeepSeek model.
- **Response Parsing**: the app checks the model's response for `<think>` tags and displays the thinking process separately.

Here's the main code snippet:

```python
import streamlit as st
from huggingface_hub import InferenceClient

# Streamlit UI
st.title("🐳 Chat with DeepSeek 🐳")

with st.sidebar:
    # Input box for the user's Hugging Face API key
    api_key = st.text_input("Enter your Hugging Face API Key:", type="password")

if api_key:
    # Initialize the InferenceClient with the user-provided API key
    client = InferenceClient(api_key=api_key)

    # Input box for the user's question
    user_input = st.chat_input("Enter your question:")

    if user_input:
        # Prepare the messages for the model
        messages = [{"role": "user", "content": user_input}]
        with st.chat_message("user"):
            st.write(user_input)

        # Get the completion from the model
        completion = client.chat.completions.create(
            model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
            messages=messages,
        )

        # Get the model's response
        response = completion.choices[0].message.content

        # Check whether the response contains <think> tags
        if "<think>" in response and "</think>" in response:
            # Extract the content within the <think> tags
            think_content = response.split("<think>")[1].split("</think>")[0].strip()
            # Display the thinking content in an expander
            with st.expander("Thinking..."):
                st.write(think_content)

            # Extract the rest of the response (after the </think> tag)
            rest_of_response = response.split("</think>")[1].strip()
            # Display it with an AI icon
            with st.chat_message("ai"):
                st.write(rest_of_response)
        else:
            # No <think> tags: display the entire response with an AI icon
            with st.chat_message("ai"):
                st.write(response)
else:
    with st.sidebar:
        st.warning("Please enter your Hugging Face API Key to proceed.")
```
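
The `<think>`-tag handling above can also be factored into a small standalone helper, which makes the parsing logic easy to test in isolation (the function name `split_think` is illustrative, not part of the app):

```python
def split_think(response: str) -> tuple[str, str]:
    """Split a model response into (thinking, answer) parts.

    If no <think>...</think> section is present, the thinking part
    is empty and the whole response is returned as the answer.
    """
    if "<think>" in response and "</think>" in response:
        think = response.split("<think>")[1].split("</think>")[0].strip()
        answer = response.split("</think>")[1].strip()
        return think, answer
    return "", response.strip()

print(split_think("<think>recall the formula</think>The answer is 42."))
# → ('recall the formula', 'The answer is 42.')
```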

---

## 📤 Output

- **Hugging Face Space**: [Link](https://huggingface.co/spaces/genaitiwari/Deepseek_Chat)

![DeepSeek chat output](deepseek_output.png)

## 📝 Notes

- **API Key Security**: keep your Hugging Face API key secret and never share it publicly.
- **Model Limitations**: responses are generated from the model's training data and may not always be accurate or complete.
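
As an alternative to typing the key on every run, Streamlit can load it from a local `.streamlit/secrets.toml` file via `st.secrets` (the key name `HF_API_KEY` below is illustrative; keep this file out of version control):

```toml
# .streamlit/secrets.toml (do not commit this file)
HF_API_KEY = "hf_xxxxxxxxxxxxxxxx"
```

The sidebar input could then fall back to the stored secret when present, e.g. `api_key = api_key or st.secrets.get("HF_API_KEY", "")`.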

---

## 📜 License

This project is licensed under the Apache 2.0 License (matching the `license` field in the Space metadata above). See the [LICENSE](LICENSE) file for details.

---