SeaWolf-AI committed
Commit a1fedd9 · verified · 1 Parent(s): b88cee1

feat: initial Hermes Agent data analysis demo app

Files changed (3):
  1. README.md +26 -6
  2. app.py +390 -0
  3. requirements.txt +5 -0
README.md CHANGED
@@ -1,12 +1,32 @@
 ---
-title: Hermes Agent Data Analysis
-emoji: 🚀
-colorFrom: pink
-colorTo: yellow
+title: Hermes Agent - Data Analysis
+emoji: "\U0001F52E"
+colorFrom: purple
+colorTo: indigo
 sdk: gradio
-sdk_version: 6.11.0
+sdk_version: 5.29.0
 app_file: app.py
 pinned: false
+license: mit
 ---

-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Hermes Agent - Data Analysis Demo
+
+An interactive demo showcasing the data analysis capabilities of [Hermes Agent](https://github.com/NousResearch/hermes-agent) by Nous Research.
+
+## Features
+
+- **AI Chat**: Converse with the Hermes Agent via an OpenAI-compatible API
+- **Data Analysis**: Upload CSV/JSON files and ask natural language questions about your data
+- **Visualization**: Auto-generated charts and statistical summaries
+- **Code Display**: See the Python code the agent generates for analysis
+
+## Configuration
+
+Set the following environment variables in your Space settings:
+
+| Variable | Description | Default |
+|----------|-------------|---------|
+| `OPENAI_API_KEY` | API key for the LLM provider | (required) |
+| `OPENAI_BASE_URL` | Base URL for the API endpoint | `https://openrouter.ai/api/v1` |
+| `OPENAI_MODEL` | Model to use | `anthropic/claude-sonnet-4` |
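To try the Space outside Hugging Face, the same three variables can be exported before launching the app. This is a hypothetical local-run sketch, not part of the commit; the placeholder key value must be replaced with a real one:

```shell
# Export the variables the app reads via os.getenv(), then launch.
export OPENAI_API_KEY="sk-..."                          # required; placeholder shown
export OPENAI_BASE_URL="https://openrouter.ai/api/v1"   # default used by the app
export OPENAI_MODEL="anthropic/claude-sonnet-4"         # default used by the app
pip install -r requirements.txt
python app.py   # Gradio serves on http://localhost:7860
```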
app.py ADDED
@@ -0,0 +1,390 @@
"""
Hermes Agent - Data Analysis Demo for Hugging Face Spaces.

Provides a Gradio web UI with:
1. AI Chat tab - converse with an LLM via OpenAI-compatible API
2. Data Analysis tab - upload CSV/JSON, ask questions, get charts & stats
"""

import contextlib
import io
import os
import re
import uuid
import traceback

import gradio as gr
import pandas as pd
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import plotly.express as px
from openai import OpenAI

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------
API_KEY = os.getenv("OPENAI_API_KEY", "")
BASE_URL = os.getenv("OPENAI_BASE_URL", "https://openrouter.ai/api/v1")
MODEL = os.getenv("OPENAI_MODEL", "anthropic/claude-sonnet-4")

SYSTEM_PROMPT = """You are Hermes, an expert data analyst AI assistant built by Nous Research.
You help users analyze data, create visualizations, and extract insights.

When analyzing data, you MUST respond with executable Python code wrapped in ```python blocks.
The code should:
- Use the variable `df` which contains the uploaded pandas DataFrame
- Use matplotlib or plotly for visualizations
- Print results using print()
- Save any matplotlib figures to 'output.png' using plt.savefig('output.png', dpi=150, bbox_inches='tight')
- Be self-contained and runnable

When chatting without data, respond naturally and helpfully."""

DATA_ANALYSIS_PROMPT = """You are Hermes, an expert data analyst.
The user has uploaded a dataset. Here is the data summary:

{summary}

First 5 rows:
{head}

Column types:
{dtypes}

The user's question: {question}

Respond with:
1. A brief explanation of your analysis approach
2. Python code in a ```python block that:
   - Uses the pre-loaded `df` DataFrame
   - Answers the question with analysis/visualization
   - Prints key findings with print()
   - If creating a plot, saves it to 'output.png' using plt.savefig('output.png', dpi=150, bbox_inches='tight')
   - Uses plt.close() after saving"""


def get_client() -> OpenAI:
    """Create an OpenAI client with current settings."""
    return OpenAI(api_key=API_KEY, base_url=BASE_URL)


# ---------------------------------------------------------------------------
# Chat Tab
# ---------------------------------------------------------------------------
def chat_respond(message: str, history: list[dict], session_id: str):
    """Stream a chat response from the LLM."""
    if not API_KEY:
        yield "Please set the `OPENAI_API_KEY` environment variable in your Space settings."
        return

    client = get_client()
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for entry in history:
        messages.append({"role": entry["role"], "content": entry["content"]})
    messages.append({"role": "user", "content": message})

    try:
        stream = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            stream=True,
            max_tokens=4096,
            extra_headers={"X-Hermes-Session-Id": session_id},
        )
        partial = ""
        for chunk in stream:
            delta = chunk.choices[0].delta.content
            if delta:
                partial += delta
                yield partial
    except Exception as e:
        yield f"Error: {e}"


# ---------------------------------------------------------------------------
# Data Analysis Tab
# ---------------------------------------------------------------------------
def load_data(file) -> tuple[pd.DataFrame | None, str]:
    """Load a CSV or JSON file into a DataFrame."""
    if file is None:
        return None, "No file uploaded."
    try:
        path = file.name if hasattr(file, "name") else str(file)
        if path.endswith(".json"):
            df = pd.read_json(path)
        else:
            df = pd.read_csv(path)
        summary = (
            f"**Rows:** {len(df):,} | **Columns:** {len(df.columns)}\n\n"
            f"**Columns:** {', '.join(df.columns.tolist())}"
        )
        return df, summary
    except Exception as e:
        return None, f"Failed to load file: {e}"


def get_data_summary(df: pd.DataFrame) -> str:
    """Generate a text summary of a DataFrame for the LLM prompt."""
    buf = io.StringIO()
    df.describe(include="all").to_string(buf)
    return buf.getvalue()


def extract_code(text: str) -> str:
    """Extract Python code from markdown code blocks."""
    pattern = r"```python\s*\n(.*?)```"
    matches = re.findall(pattern, text, re.DOTALL)
    return matches[0].strip() if matches else ""


def execute_analysis(code: str, df: pd.DataFrame) -> tuple[str, str | None]:
    """Execute LLM-generated analysis code and return output + optional image path.

    Note: exec() is not a sandbox. This is acceptable only because the app
    runs in an ephemeral, single-user Space container.
    """
    output_capture = io.StringIO()
    image_path = None

    # Clean up any previous output
    if os.path.exists("output.png"):
        os.remove("output.png")

    local_vars = {"df": df.copy(), "pd": pd, "plt": plt, "px": px}

    try:
        exec_globals = {"__builtins__": __builtins__}
        exec_globals.update(local_vars)

        # Redirect print() output into the capture buffer
        with contextlib.redirect_stdout(output_capture):
            exec(code, exec_globals)

        if os.path.exists("output.png"):
            image_path = "output.png"

    except Exception:
        output_capture.write(f"\nExecution Error:\n{traceback.format_exc()}")

    plt.close("all")
    return output_capture.getvalue(), image_path


def analyze_data(
    file, question: str, history: list[dict]
) -> tuple[list[dict], str, str | None, str]:
    """Main analysis pipeline: upload data, ask question, get results."""
    if not API_KEY:
        msg = "Please set the `OPENAI_API_KEY` environment variable."
        return history + [{"role": "assistant", "content": msg}], "", None, ""

    df, summary_md = load_data(file)
    if df is None:
        return (
            history + [{"role": "assistant", "content": summary_md}],
            "",
            None,
            "",
        )

    if not question.strip():
        return (
            history
            + [
                {
                    "role": "assistant",
                    "content": f"Data loaded successfully!\n\n{summary_md}\n\nAsk me a question about this data.",
                }
            ],
            "",
            None,
            "",
        )

    # Build prompt with data context
    prompt = DATA_ANALYSIS_PROMPT.format(
        summary=get_data_summary(df),
        head=df.head().to_string(),
        dtypes=df.dtypes.to_string(),
        question=question,
    )

    client = get_client()
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for entry in history:
        messages.append({"role": entry["role"], "content": entry["content"]})
    messages.append({"role": "user", "content": prompt})

    try:
        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
            max_tokens=4096,
        )
        answer = response.choices[0].message.content or ""
    except Exception as e:
        answer = f"API Error: {e}"
        return (
            history
            + [{"role": "user", "content": question}, {"role": "assistant", "content": answer}],
            "",
            None,
            "",
        )

    # Extract and execute code
    code = extract_code(answer)
    output_text = ""
    image_path = None

    if code:
        output_text, image_path = execute_analysis(code, df)

    updated_history = history + [
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
    ]

    return updated_history, output_text, image_path, code


# ---------------------------------------------------------------------------
# Gradio UI
# ---------------------------------------------------------------------------
def create_app():
    session_id = str(uuid.uuid4())

    with gr.Blocks(
        title="Hermes Agent - Data Analysis",
        theme=gr.themes.Soft(primary_hue="purple"),
    ) as demo:
        gr.Markdown(
            "# Hermes Agent - Data Analysis Demo\n"
            "An interactive demo of [Hermes Agent](https://github.com/NousResearch/hermes-agent) "
            "by Nous Research."
        )

        with gr.Tabs():
            # --- Chat Tab ---
            with gr.Tab("AI Chat"):
                gr.ChatInterface(
                    fn=chat_respond,
                    type="messages",
                    additional_inputs=[
                        gr.State(value=session_id),
                    ],
                    title="Chat with Hermes",
                    description="Ask anything - general questions, coding help, or data analysis guidance.",
                    examples=[
                        "What kinds of data analysis can you help me with?",
                        "Explain the difference between correlation and causation.",
                        "Write Python code to generate a sample dataset with pandas.",
                    ],
                )

            # --- Data Analysis Tab ---
            with gr.Tab("Data Analysis"):
                gr.Markdown(
                    "Upload a CSV or JSON file and ask questions about your data. "
                    "The agent will analyze it and generate visualizations."
                )

                with gr.Row():
                    with gr.Column(scale=1):
                        file_input = gr.File(
                            label="Upload CSV or JSON",
                            file_types=[".csv", ".json"],
                        )
                        data_summary = gr.Markdown(label="Data Summary")
                        question_input = gr.Textbox(
                            label="Ask a question about your data",
                            placeholder="e.g., Show the distribution of values in column X",
                            lines=2,
                        )
                        analyze_btn = gr.Button("Analyze", variant="primary")

                    with gr.Column(scale=2):
                        chatbot = gr.Chatbot(
                            label="Analysis Conversation",
                            type="messages",
                            height=300,
                        )
                        with gr.Row():
                            with gr.Column():
                                output_text = gr.Textbox(
                                    label="Execution Output",
                                    lines=8,
                                    interactive=False,
                                )
                            with gr.Column():
                                output_image = gr.Image(
                                    label="Visualization",
                                    type="filepath",
                                )
                        code_display = gr.Code(
                            label="Generated Code",
                            language="python",
                            interactive=False,
                        )

                # Wire up data loading preview
                file_input.change(
                    fn=lambda f: load_data(f)[1],
                    inputs=[file_input],
                    outputs=[data_summary],
                )

                # Wire up analysis
                chat_state = gr.State(value=[])

                analyze_btn.click(
                    fn=analyze_data,
                    inputs=[file_input, question_input, chat_state],
                    outputs=[chat_state, output_text, output_image, code_display],
                ).then(
                    fn=lambda h: h,
                    inputs=[chat_state],
                    outputs=[chatbot],
                )

                question_input.submit(
                    fn=analyze_data,
                    inputs=[file_input, question_input, chat_state],
                    outputs=[chat_state, output_text, output_image, code_display],
                ).then(
                    fn=lambda h: h,
                    inputs=[chat_state],
                    outputs=[chatbot],
                )

            # --- About Tab ---
            with gr.Tab("About"):
                gr.Markdown("""
## About Hermes Agent

**Hermes Agent** is a self-improving AI agent framework by [Nous Research](https://nousresearch.com).

### Key Features
- **Self-Learning Loop**: Creates skills from experience and improves them during use
- **Model Agnostic**: Works with OpenAI, Anthropic, OpenRouter, and custom endpoints
- **Multi-Platform**: Accessible via CLI, Telegram, Discord, Slack, WhatsApp, and 14+ platforms
- **40+ Built-in Tools**: Terminal, file operations, web search, browser automation, code execution
- **26 Bundled Skills**: Data science, DevOps, research, and more

### Configuration
Set these environment variables in your Space settings:

| Variable | Description |
|----------|-------------|
| `OPENAI_API_KEY` | Your API key |
| `OPENAI_BASE_URL` | API endpoint (default: OpenRouter) |
| `OPENAI_MODEL` | Model name (default: anthropic/claude-sonnet-4) |

### Links
- [GitHub Repository](https://github.com/NousResearch/hermes-agent)
- [Documentation](https://docs.hermes.nousresearch.com)
""")

    return demo


if __name__ == "__main__":
    demo = create_app()
    demo.launch(server_name="0.0.0.0", server_port=7860)
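The core pipeline in app.py hinges on one convention: the model returns its analysis inside a ```python fence, `extract_code` pulls it out with a non-greedy regex, and `execute_analysis` runs it under `redirect_stdout` to capture printed findings. A minimal standalone sketch of those two steps, with an invented model reply standing in for a real API response (this is an illustration, not the app's exact code):

```python
import contextlib
import io
import re


def extract_code(text: str) -> str:
    """Return the body of the first ```python fence, or "" if none."""
    matches = re.findall(r"```python\s*\n(.*?)```", text, re.DOTALL)
    return matches[0].strip() if matches else ""


# A made-up model reply; real ones come from the chat completion call.
reply = "Approach: compute the total.\n```python\nprint(1 + 2)\n```"
code = extract_code(reply)

# Same capture trick as execute_analysis: exec() under redirect_stdout.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    exec(code, {})
print(buf.getvalue().strip())  # → 3
```

Note that `exec()` offers no isolation; the app relies on the ephemeral Space container, and any multi-user deployment would need a real sandbox.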
requirements.txt ADDED
@@ -0,0 +1,5 @@
gradio>=5.0,<6
openai>=2.21.0,<3
pandas>=2.0,<3
matplotlib>=3.7,<4
plotly>=5.18,<6
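Of these dependencies, pandas also powers the prompt context: `get_data_summary` renders `DataFrame.describe(include="all")` to plain text so the LLM sees column statistics. The same idea in isolation, with a toy frame invented for the example:

```python
import io

import pandas as pd

# Toy frame standing in for an uploaded CSV.
df = pd.DataFrame({"x": [1, 2, 3], "label": ["a", "b", "a"]})

# include="all" covers numeric and object columns in one table;
# to_string(buf) renders it as prompt-friendly plain text.
buf = io.StringIO()
df.describe(include="all").to_string(buf)
summary = buf.getvalue()
print(summary)
```

The text includes numeric rows (`mean`, `std`, quartiles) for `x` and categorical rows (`unique`, `top`, `freq`) for `label`, which is exactly the shape of context the analysis prompt interpolates.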