pratikcsv committed
Commit 0cbebee · 1 Parent(s): b967fd3

added CI yaml
.github/workflows/main.yml ADDED
@@ -0,0 +1,26 @@
+ name: Sync to Hugging Face Space
+ on:
+   push:
+     branches: [main]
+
+   # To run this workflow manually from the Actions tab
+   workflow_dispatch:
+
+ jobs:
+   sync-to-hub:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+         with:
+           fetch-depth: 0
+           lfs: false
+
+       - name: Ignore large files
+         run: git filter-branch --index-filter 'git rm -rf --cached --ignore-unmatch "Rag_Documents/layout-parser-paper.pdf"' HEAD
+
+       - name: Push to hub
+         env:
+           HF_TOKEN: ${{ secrets.HF_TOKEN }}
+         run: git push --force https://bpratik:$HF_TOKEN@huggingface.co/spaces/bpratik/Chatbot main
README.md ADDED
@@ -0,0 +1,217 @@
+ # AI Chatbot with LangGraph and Streamlit
+
+ A powerful AI chatbot application built with LangGraph, LangChain, and Streamlit that supports multiple LLM providers, including Groq and OpenAI.
+
+ ## 🚀 Features
+
+ - **Multi-LLM Support**: Choose between Groq and OpenAI models
+ - **Interactive Chat Interface**: Clean Streamlit-based chat UI
+ - **Persistent Chat History**: Conversations are maintained throughout the session
+ - **Configurable Models**: Easy model selection and configuration
+ - **Graph-Based Architecture**: Built with LangGraph for scalable conversation flows
+ - **Real-time Responses**: Streaming responses from AI models
+
+ ## 🛠️ Tech Stack
+
+ - **Frontend**: Streamlit
+ - **AI Framework**: LangChain + LangGraph
+ - **LLM Providers**:
+   - Groq (Llama, Gemma models)
+   - OpenAI (GPT-4o, GPT-4o-mini)
+ - **State Management**: LangGraph State
+ - **Configuration**: ConfigParser
+
+ ## 📦 Installation
+
+ ### Prerequisites
+
+ - Python 3.8+
+ - pip or conda
+
+ ### Setup
+
+ 1. **Clone the repository**
+    ```bash
+    git clone <your-repo-url>
+    cd AI-News
+    ```
+
+ 2. **Create a virtual environment**
+    ```bash
+    python -m venv venv
+    source venv/bin/activate  # On Windows: venv\Scripts\activate
+    ```
+
+ 3. **Install dependencies**
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 4. **Set up API keys**
+
+    You have two options:
+
+    **Option A: Environment Variables (Recommended)**
+    ```bash
+    export GROQ_API_KEY="your_groq_api_key_here"
+    export OPENAI_API_KEY="your_openai_api_key_here"
+    ```
+
+    **Option B: Enter in UI**
+    - Leave the environment variables empty
+    - Enter API keys directly in the Streamlit sidebar
+
+ ## 🔑 Getting API Keys
+
+ ### Groq API Key
+ 1. Visit [Groq Console](https://console.groq.com)
+ 2. Sign up or log in
+ 3. Navigate to the API Keys section
+ 4. Create a new API key
+
+ ### OpenAI API Key
+ 1. Visit [OpenAI Platform](https://platform.openai.com/account/api-keys)
+ 2. Sign up or log in
+ 3. Navigate to the API Keys section
+ 4. Create a new API key
+
+ ## 🚀 Usage
+
+ 1. **Start the application**
+    ```bash
+    streamlit run app.py
+    ```
+
+ 2. **Access the app**
+    - Open your browser to `http://localhost:8501`
+
+ 3. **Configure the chatbot**
+    - Select your preferred LLM provider (Groq or OpenAI)
+    - Choose a model from the dropdown
+    - Enter your API key (if not set as an environment variable)
+    - Select a use case
+
+ 4. **Start chatting**
+    - Type your message in the chat input at the bottom
+    - Press Enter to send
+    - View responses in the chat interface
+
+ ## 📁 Project Structure
+
+ ```
+ AI-News/
+ ├── app.py                     # Main application entry point
+ ├── requirements.txt           # Python dependencies
+ ├── src/
+ │   ├── __init__.py
+ │   ├── main.py                # Core application logic
+ │   ├── graph/
+ │   │   ├── __init__.py
+ │   │   └── graph_builder.py   # LangGraph state graph builder
+ │   ├── llms/
+ │   │   ├── __init__.py
+ │   │   ├── groq.py            # Groq LLM integration
+ │   │   └── openai.py          # OpenAI LLM integration
+ │   ├── nodes/
+ │   │   ├── __init__.py
+ │   │   └── basic_chatbot.py   # Chatbot node implementation
+ │   ├── state/
+ │   │   ├── __init__.py
+ │   │   └── state.py           # LangGraph state definition
+ │   ├── ui/
+ │   │   ├── __init__.py
+ │   │   ├── config.ini         # UI configuration
+ │   │   ├── config.py          # Configuration loader
+ │   │   ├── display_results.py # Results display component
+ │   │   └── load.py            # UI loader
+ │   ├── tools/
+ │   │   └── __init__.py
+ │   └── vectorstore/
+ │       └── __init__.py
+ ```
+
+ ## ⚙️ Configuration
+
+ The application can be configured through `src/ui/config.ini`:
+
+ ```ini
+ [DEFAULT]
+ Title = Basic Chatbot
+ USE_CASE = Basic Chatbot, Chatbot with Web Search
+ LLM_options = Groq, OpenAI
+ GROQ_MODEL = meta-llama/llama-4-scout-17b-16e-instruct, gemma2-9b-it, meta-llama/llama-4-maverick-17b-128e-instruct
+ OPENAI_MODEL = gpt-4o, gpt-4o-mini
+ ```
+
+ ## 🔧 Available Models
+
+ ### Groq Models
+ - `meta-llama/llama-4-scout-17b-16e-instruct`
+ - `gemma2-9b-it`
+ - `meta-llama/llama-4-maverick-17b-128e-instruct`
+
+ ### OpenAI Models
+ - `gpt-4o`
+ - `gpt-4o-mini`
+
+ ## 🐛 Troubleshooting
+
+ ### Common Issues
+
+ 1. **API Key Errors**
+    - Ensure your API key is valid and has sufficient credits
+    - Check that the API key is properly set in environment variables or entered in the UI
+
+ 2. **Import Errors**
+    - Make sure all dependencies are installed: `pip install -r requirements.txt`
+    - Verify you're running from the correct directory
+
+ 3. **Model Not Found**
+    - Check that the model name in `config.ini` matches the provider's available models
+    - Ensure your API key has access to the selected model
+
+ 4. **Streamlit Issues**
+    - Clear the Streamlit cache: `streamlit cache clear`
+    - Restart the application
+
+ ### Error Messages
+
+ - **"Failed to initialize the model"**: Check the API key and model availability
+ - **"No use case selected"**: Select a use case from the sidebar dropdown
+ - **"Graph must have an entrypoint"**: This indicates a configuration issue; restart the app
+
+ ## 🤝 Contributing
+
+ 1. Fork the repository
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Commit your changes (`git commit -m 'Add some amazing feature'`)
+ 4. Push to the branch (`git push origin feature/amazing-feature`)
+ 5. Open a Pull Request
+
+ ## 📄 License
+
+ This project is licensed under the MIT License; see the [LICENSE](LICENSE) file for details.
+
+ ## 🚧 Future Enhancements
+
+ - [ ] **Memory/History Implementation**: Add persistent conversation memory using LangChain's built-in memory features
+ - [ ] **Web Search Integration**: Implement web search capabilities for the chatbot
+ - [ ] **File Upload Support**: Allow users to upload and chat about documents
+ - [ ] **Multiple Conversation Sessions**: Support for multiple concurrent chat sessions
+ - [ ] **Custom Model Integration**: Support for additional LLM providers
+ - [ ] **Chat Export**: Export conversation history to various formats
+
+ ## 📞 Support
+
+ If you encounter any issues or have questions, please:
+ 1. Check the troubleshooting section above
+ 2. Search existing GitHub issues
+ 3. Create a new issue with detailed information about the problem
+
+ ## 🙏 Acknowledgments
+
+ - [LangChain](https://langchain.com/) for the AI framework
+ - [LangGraph](https://langchain-ai.github.io/langgraph/) for state graph implementation
+ - [Streamlit](https://streamlit.io/) for the web interface
+ - [Groq](https://groq.com/) for fast inference
+ - [OpenAI](https://openai.com/) for GPT models
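The README's Configuration section pairs `config.ini` with ConfigParser. As a sanity check on that format, here is a minimal, dependency-free sketch of parsing such a file and splitting its comma-separated option lists; the `load_options` helper is illustrative, not the repo's actual `src/ui/config.py` loader:

```python
import configparser

# A trimmed copy of the config.ini shown in the README.
SAMPLE_INI = """
[DEFAULT]
Title = Basic Chatbot
USE_CASE = Basic Chatbot, Chatbot with Web Search
LLM_options = Groq, OpenAI
OPENAI_MODEL = gpt-4o, gpt-4o-mini
"""

def load_options(ini_text: str) -> dict:
    """Parse INI text and split comma-separated values into lists."""
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    defaults = parser["DEFAULT"]
    # ConfigParser lowercases option names internally, but lookups
    # with the original casing still resolve via optionxform.
    split = lambda value: [v.strip() for v in value.split(",")]
    return {
        "title": defaults["Title"],
        "use_cases": split(defaults["USE_CASE"]),
        "llm_options": split(defaults["LLM_options"]),
        "openai_models": split(defaults["OPENAI_MODEL"]),
    }

options = load_options(SAMPLE_INI)
```

Splitting on commas at load time is what lets a single INI value feed a Streamlit `selectbox` directly.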
src/__pycache__/main.cpython-312.pyc CHANGED
Binary files a/src/__pycache__/main.cpython-312.pyc and b/src/__pycache__/main.cpython-312.pyc differ
 
src/graph/graph_builder.py CHANGED
@@ -10,13 +10,14 @@ class GraphBuilder:
 
     """Class to build the state graph for the application."""
 
-    def __init__(self, model):
+    def __init__(self, model, session_id: str = "default"):
         self.llm = model
+        self.session_id = session_id
         self.graph_builder = StateGraph(State)
 
     def basic_chatbot(self):
         """Initialize the basic chatbot node in the graph."""
-        self.basic_chatbot_node = BasicChatbot(self.llm)
+        self.basic_chatbot_node = BasicChatbot(self.llm, self.session_id)
         self.graph_builder.add_node('basic_chatbot', self.basic_chatbot_node.process)
         self.graph_builder.add_edge(START, 'basic_chatbot')
         self.graph_builder.add_edge('basic_chatbot', END)
src/main.py CHANGED
@@ -6,7 +6,8 @@ from src.llms.groq import GroqLLM
 from src.llms.openai import OpenAILLM
 from src.graph.graph_builder import GraphBuilder
 from src.ui.display_results import DisplayResults
-from langchain_core.messages import HumanMessage
+from src.memory.memory_manager import MemoryManager
+from langchain_core.messages import HumanMessage, AIMessage
 
 def load_app():
     """
@@ -24,6 +25,14 @@ def load_app():
     if "messages" not in st.session_state:
         st.session_state.messages = []
 
+    # Initialize session ID for memory
+    if "session_id" not in st.session_state:
+        st.session_state.session_id = f"user_{hash(str(st.session_state))}"
+
+    # Initialize memory manager
+    if "memory_manager" not in st.session_state:
+        st.session_state.memory_manager = MemoryManager()
+
     # Display chat history
     for message in st.session_state.messages:
         with st.chat_message(message["role"]):
@@ -66,13 +75,20 @@ def load_app():
     st.error("Error: No use case selected.")
     return
 
-    graph_builder = GraphBuilder(model=model)
+    # Use memory-enabled model instead of regular model
+    memory_manager = st.session_state.memory_manager
+    memory_enabled_model, memory_config = memory_manager.create_memory_enabled_model(
+        model, st.session_state.session_id
+    )
+
+    graph_builder = GraphBuilder(model=memory_enabled_model, session_id=st.session_state.session_id)
 
     try:
         graph = graph_builder.setup_graph(use_case=use_case)
 
-        # Process the message through the graph
+        # Process the message through the graph with memory
         ai_response = ""
+        # Pass just the current message, memory will handle history
         for event in graph.stream({'messages': [HumanMessage(content=user_message)]}):
             for value in event.values():
                 ai_response = value['messages'].content
src/memory/__init__.py ADDED
@@ -0,0 +1 @@
+ # Memory management module
src/memory/memory_manager.py ADDED
@@ -0,0 +1,55 @@
+ from langchain_community.chat_message_histories import ChatMessageHistory
+ from langchain_core.chat_history import BaseChatMessageHistory
+ from langchain_core.runnables.history import RunnableWithMessageHistory
+
+ class MemoryManager:
+     """
+     Manages chat memory for the chatbot using LangChain's built-in memory features.
+     """
+
+     def __init__(self):
+         self.store = {}
+
+     def get_session_history(self, session_id: str) -> BaseChatMessageHistory:
+         """
+         Get or create a chat message history for a given session ID.
+
+         :param session_id: Unique identifier for the chat session
+         :return: ChatMessageHistory instance for the session
+         """
+         if session_id not in self.store:
+             self.store[session_id] = ChatMessageHistory()
+         return self.store[session_id]
+
+     def create_memory_enabled_model(self, model, session_id: str = "default"):
+         """
+         Create a model with message history enabled.
+
+         :param model: The LLM model to wrap with memory
+         :param session_id: Session ID for this conversation
+         :return: Model with message history
+         """
+         with_message_history = RunnableWithMessageHistory(
+             model,
+             self.get_session_history
+         )
+
+         config = {"configurable": {"session_id": session_id}}
+         return with_message_history, config
+
+     def clear_session(self, session_id: str):
+         """
+         Clear the message history for a specific session.
+
+         :param session_id: Session ID to clear
+         """
+         if session_id in self.store:
+             del self.store[session_id]
+
+     def get_all_sessions(self):
+         """
+         Get all active session IDs.
+
+         :return: List of session IDs
+         """
+         return list(self.store.keys())
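Beneath the LangChain wrappers, `MemoryManager` is a get-or-create dict of per-session histories. A dependency-free sketch of that pattern (method names mirror the class above; a plain list stands in for `ChatMessageHistory`):

```python
class SessionStore:
    """Dict-backed per-session history, mirroring MemoryManager's store."""

    def __init__(self):
        self.store = {}

    def get_session_history(self, session_id: str) -> list:
        # Get-or-create: the first access for a session ID creates an empty history.
        if session_id not in self.store:
            self.store[session_id] = []
        return self.store[session_id]

    def clear_session(self, session_id: str) -> None:
        # Dropping the key discards that session's history entirely.
        self.store.pop(session_id, None)

    def get_all_sessions(self) -> list:
        return list(self.store.keys())


store = SessionStore()
store.get_session_history("user_a").append("hello")
store.get_session_history("user_a").append("hi there")
sessions_before = store.get_all_sessions()
store.clear_session("user_a")
sessions_after = store.get_all_sessions()
```

Because the store is an in-process dict, histories vanish on restart; the README's "persistent" memory is session-scoped, not durable.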
src/nodes/basic_chatbot.py CHANGED
@@ -1,25 +1,38 @@
 from src.state.state import State
+from langchain_core.messages import HumanMessage, AIMessage
 
 class BasicChatbot:
     """
-    Class to handle the basic chatbot functionality.
+    Class to handle the basic chatbot functionality with memory.
     """
 
-    def __init__(self, model):
+    def __init__(self, model, session_id: str = "default"):
         """
-        Initialize the BasicChatbot with the given model.
+        Initialize the BasicChatbot with the given model and memory.
 
-        :param model: The LLM to be used for the chatbot.
+        :param model: The LLM to be used for the chatbot (already memory-enabled).
+        :param session_id: Session ID for conversation memory
         """
         self.model = model
+        self.session_id = session_id
+        # Memory config for the model
+        self.memory_config = {"configurable": {"session_id": session_id}}
 
     def process(self, state):
         """
-        Process the state to generate a response from the model.
+        Process the state to generate a response from the model with memory.
 
         :param state: The current state of the chatbot.
        :return: The response generated by the model.
        """
 
-        return {'messages': self.model.invoke(state['messages'])}
+        # Get the messages from the state
+        messages = state['messages']
+        if not messages:
+            return state
+
+        # Use the memory-enabled model with session config
+        response = self.model.invoke(messages, config=self.memory_config)
+
+        return {'messages': response}
 
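The reworked `process` method forwards the state's messages to the memory-enabled model along with a per-session config, and passes empty input through unchanged. That contract can be exercised with a stub in place of the real model; both classes below are illustrative sketches, not the repo's code:

```python
class StubModel:
    """Stands in for the memory-enabled model; echoes the last message."""

    def invoke(self, messages, config=None):
        # Record the config so a caller can verify the session ID was threaded through.
        self.last_config = config
        return f"echo: {messages[-1]}"


class BasicChatbotSketch:
    """Mirrors the node in the diff above, minus the LangChain types."""

    def __init__(self, model, session_id: str = "default"):
        self.model = model
        self.memory_config = {"configurable": {"session_id": session_id}}

    def process(self, state):
        messages = state["messages"]
        if not messages:
            # Nothing to answer: return the state unchanged.
            return state
        response = self.model.invoke(messages, config=self.memory_config)
        return {"messages": response}


node = BasicChatbotSketch(StubModel(), session_id="user_abc")
result = node.process({"messages": ["hello"]})
```

The node never touches the history itself; threading `session_id` through `config` is what lets the wrapped model look up the right history per call.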
src/ui/__pycache__/load.cpython-312.pyc CHANGED
Binary files a/src/ui/__pycache__/load.cpython-312.pyc and b/src/ui/__pycache__/load.cpython-312.pyc differ
 
src/ui/load.py CHANGED
@@ -62,6 +62,25 @@ class LoadStreamlitUI:
         # Use Case Selection
         self.user_controls['Selected Use Case'] = st.selectbox('Select Use Case', use_case)
 
+        # Memory Management Section
+        st.divider()
+        st.subheader("💭 Memory Management")
+
+        # Display current session ID
+        if "session_id" in st.session_state:
+            st.text(f"Session: {st.session_state.session_id[-8:]}")  # Show last 8 characters
+
+        # Clear conversation button
+        if st.button("🗑️ Clear Conversation", help="Clear chat history and start fresh"):
+            if "messages" in st.session_state:
+                st.session_state.messages = []
+            if "memory_manager" in st.session_state and "session_id" in st.session_state:
+                # Clear the session from memory manager
+                st.session_state.memory_manager.clear_session(st.session_state.session_id)
+            # Create new session ID
+            st.session_state.session_id = f"user_{hash(str(st.session_state))}"
+            st.rerun()
+
 
         if 'state' not in st.session_state:
             st.session_state.state = self.initialize_session()