Spaces: mathysgrapotte committed · commit 52a25ec

Initial commit

Browse files:
- .gitignore +10 -0
- .python-version +1 -0
- README.md +91 -0
- data/fastqc.yaml +67 -0
- main.py +46 -0
- pyproject.toml +15 -0
- uv.lock +0 -0
.gitignore
ADDED
@@ -0,0 +1,10 @@
```
# Python-generated files
__pycache__/
*.py[oc]
build/
dist/
wheels/
*.egg-info

# Virtual environments
.venv
```
.python-version
ADDED
@@ -0,0 +1 @@
```
3.12
```
README.md
ADDED
@@ -0,0 +1,91 @@

# Hello World Agent with MCP

A simple demonstration of Tiny Agents using a Gradio MCP server and local Ollama.

## What this does

This example creates:
- **Gradio MCP Server**: A simple server that provides a "hello world" function
- **Tiny Agent**: An agent that connects to your local Ollama endpoint and can use the MCP server's tools

## Prerequisites

1. **Ollama** running locally at `http://127.0.0.1:11434`
2. **qwen3:0.6b** model installed in Ollama
3. **Node.js and npm** for MCP remote connectivity

## Setup

### 1. Install Ollama and the model
```bash
# If you haven't already, install Ollama
# Then pull the model:
ollama pull qwen3:0.6b
```

### 2. Install Node.js dependencies
```bash
# Install mcp-remote globally
npm install -g mcp-remote
```

### 3. Install Python dependencies
```bash
# Using uv (recommended)
uv sync

# Or using pip
pip install -r requirements.txt
```

## Usage

### 1. Start Ollama
Make sure Ollama is running:
```bash
ollama serve
```

### 2. Run the agent
```bash
# Using uv
uv run python main.py

# Or using python directly
python main.py
```

### 3. Interact with the agent
Once started, you can:
- Type messages to chat with the agent
- Ask it to use the hello world function (e.g., "Can you greet Alice using your tool?")
- Type 'quit' to exit

## Example Interaction

```
🎉 Agent is ready! Type 'quit' to exit.
==================================================

👤 You: Can you greet Alice using your available tools?

🤖 Agent: I'll use the hello world function to greet Alice for you.

*Agent calls the hello_world_function with name="Alice"*

Hello, Alice! This message comes from the MCP server.
```

## How it works

1. **Gradio MCP Server**: Creates an MCP-enabled Gradio interface at `http://127.0.0.1:7860`
2. **MCP Protocol**: The server exposes the `hello_world_function` via MCP
3. **Tiny Agent**: Connects to both Ollama (for the LLM) and the Gradio server (for tools)
4. **Tool Usage**: The agent can discover and use the hello world function when appropriate

## Troubleshooting

- **"Connection refused"**: Make sure Ollama is running (`ollama serve`)
- **"Model not found"**: Install the model (`ollama pull qwen3:0.6b`)
- **"mcp-remote not found"**: Install it with `npm install -g mcp-remote`
- **Port conflicts**: The Gradio server uses port 7860 by default
data/fastqc.yaml
ADDED
@@ -0,0 +1,67 @@
```yaml
name: fastqc
description: Run FastQC on sequenced reads
keywords:
  - quality control
  - qc
  - adapters
  - fastq
tools:
  - fastqc:
      description: |
        FastQC gives general quality metrics about your reads.
        It provides information about the quality score distribution
        across your reads, the per base sequence content (%A/C/G/T).

        You get information about adapter contamination and other
        overrepresented sequences.
      homepage: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/
      documentation: https://www.bioinformatics.babraham.ac.uk/projects/fastqc/Help/
      licence: ["GPL-2.0-only"]
      identifier: biotools:fastqc
input:
  - - meta:
        type: map
        description: |
          Groovy Map containing sample information
          e.g. [ id:'test', single_end:false ]
    - reads:
        type: file
        description: |
          List of input FastQ files of size 1 and 2 for single-end and paired-end data,
          respectively.
output:
  - html:
      - meta:
          type: map
          description: |
            Groovy Map containing sample information
            e.g. [ id:'test', single_end:false ]
      - "*.html":
          type: file
          description: FastQC report
          pattern: "*_{fastqc.html}"
  - zip:
      - meta:
          type: map
          description: |
            Groovy Map containing sample information
            e.g. [ id:'test', single_end:false ]
      - "*.zip":
          type: file
          description: FastQC report archive
          pattern: "*_{fastqc.zip}"
  - versions:
      - versions.yml:
          type: file
          description: File containing software versions
          pattern: "versions.yml"
authors:
  - "@drpatelh"
  - "@grst"
  - "@ewels"
  - "@FelixKrueger"
maintainers:
  - "@drpatelh"
  - "@grst"
  - "@ewels"
  - "@FelixKrueger"
```
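The example prompt in main.py ("extract input/output metadata from fastqc nf-core module") amounts to loading a module YAML like the one above and walking its `input`/`output` lists. A minimal stand-alone sketch of that extraction, using PyYAML and embedding a trimmed copy of the file (the helper name `channel_names` is ours, not part of the commit):

```python
import yaml

# Trimmed copy of data/fastqc.yaml, enough to show the structure:
# input channels are nested one list level deeper than output channels.
MODULE_YAML = """
name: fastqc
input:
  - - meta:
        type: map
    - reads:
        type: file
output:
  - html:
      - "*.html":
          type: file
          pattern: "*_{fastqc.html}"
  - zip:
      - "*.zip":
          type: file
          pattern: "*_{fastqc.zip}"
"""

def channel_names(entries):
    """Collect the keys of each channel element in an nf-core input/output list."""
    names = []
    for entry in entries:
        if isinstance(entry, dict):
            names.extend(entry.keys())
        elif isinstance(entry, list):  # input entries wrap their elements in a list
            names.extend(channel_names(entry))
    return names

module = yaml.safe_load(MODULE_YAML)
print(channel_names(module["input"]))   # ['meta', 'reads']
print(channel_names(module["output"]))  # ['html', 'zip']
```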
main.py
ADDED
@@ -0,0 +1,46 @@
```python
from smolagents import CodeAgent, LiteLLMModel
from smolagents.tools import ToolCollection
import gradio as gr

def chat_with_agent(message, history):
    """Initialize MCP client for each request to avoid connection issues"""
    try:
        with ToolCollection.from_mcp(
            {"url": "https://notredameslab-nf-ontology.hf.space/gradio_api/mcp/sse", "transport": "sse"},
            trust_remote_code=True,  # Acknowledge that we trust this remote MCP server
        ) as tool_collection:

            model = LiteLLMModel(
                model_id="ollama/devstral:latest",
                api_base="http://localhost:11434",
            )

            agent = CodeAgent(
                tools=tool_collection.tools,
                model=model,
                additional_authorized_imports=["inspect", "json"],
            )

            additional_instructions = """
ADDITIONAL IMPORTANT INSTRUCTIONS:
Use the tool "final_answer" in the code block to provide the answer to the user. Prints are only for debugging purposes. So, to give your results, concatenate everything you want to print into a single "final_answer" call, as such: final_answer(f"your answer here").
"""

            agent.system_prompt += additional_instructions

            result = agent.run(message)
            return str(result)

    except Exception as e:
        return f"❌ Error: {e}\nType: {type(e).__name__}"

if __name__ == "__main__":
    demo = gr.ChatInterface(
        fn=chat_with_agent,
        type="messages",
        examples=["can you extract input/output metadata from fastqc nf-core module ?"],
        title="Agent with MCP Tools (Per-Request Connection)",
        description="This version creates a new MCP connection for each request.",
    )

    demo.launch()
```
pyproject.toml
ADDED
@@ -0,0 +1,15 @@
```toml
[project]
name = "agent-ontology"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "fastmcp>=2.6.1",
    "gradio[mcp]>=5.0.0",
    "huggingface_hub[mcp]>=0.32.2",
    "mcp>=1.9.2",
    "requests",
    "smolagents[litellm,mcp]>=1.17.0",
    "textblob>=0.19.0",
]
```
uv.lock
ADDED
The diff for this file is too large to render. See raw diff.