Merge pull request #14 from mathysgrapotte/readme

README.md (changed)
# AgentOntology

Our agent `AgentOntology` is a helper agent that finds file ontologies.

## Demo

<iframe width="640" height="360" src="https://www.loom.com/embed/2929a2b8b976438d81f5885b6df0a992" frameborder="0" webkitallowfullscreen mozallowfullscreen allowfullscreen></iframe>

You can also watch the video at [this URL](https://www.loom.com/share/2929a2b8b976438d81f5885b6df0a992).

## The team

- Cristina Araiz Sancho
  - <img src="https://github.com/favicon.ico" width="16" height="16" alt="GitHub"/> @caraiz2001
  - <img src="https://huggingface.co/favicon.ico" width="16" height="16" alt="HuggingFace"/> @caraiz2001
- Júlia Mir Pedrol
  - <img src="https://github.com/favicon.ico" width="16" height="16" alt="GitHub"/> @mirpedrol
  - <img src="https://huggingface.co/favicon.ico" width="16" height="16" alt="HuggingFace"/> @asthara
- Mathys Grapotte
  - <img src="https://github.com/favicon.ico" width="16" height="16" alt="GitHub"/> @mathysgrapotte
  - <img src="https://huggingface.co/favicon.ico" width="16" height="16" alt="HuggingFace"/> @mgrapotte
- Suzanne Jin
  - <img src="https://github.com/favicon.ico" width="16" height="16" alt="GitHub"/> @suzannejin
  - <img src="https://huggingface.co/favicon.ico" width="16" height="16" alt="HuggingFace"/> @suzannejin

## Background

We are contributing to the [nf-core](https://nf-co.re/) community by developing a Gradio app powered by an AI agent.
This app simplifies the annotation of nf-core module input and output files by automatically assigning standardized EDAM ontology terms.

nf-core is a vibrant community dedicated to curating best-practice analysis pipelines built using [Nextflow](https://www.nextflow.io/), a powerful workflow management system.

Central to nf-core's success is its commitment to standardization, enabling easy reuse of modules - wrappers around bioinformatics tools - and streamlined contributions across multiple projects.

Accurate and thorough annotation of modules is essential to achieve this standardization, but manual annotation can be tedious. Here's where our tool enters the game! The EDAM ontology provides clear, standardized labels, making bioinformatics data easily understandable and interoperable.

Benefits of tagging input/output files with EDAM ontology terms:

- Improved clarity
- Enhanced interoperability
- Better discoverability
- FAIR compliance
- Automation-ready

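For illustration, an annotated output entry in a module's `meta.yml` could look roughly like this (a sketch: the field layout follows the nf-core module schema as we understand it, and the specific EDAM term shown is only an example):

```yaml
output:
  - bam:
      - "*.bam":
          type: file
          description: Sorted BAM file
          pattern: "*.bam"
          ontologies:
            - edam: "http://edamontology.org/format_2572" # BAM
```

The `ontologies` entries are exactly what the agent fills in automatically.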
## Prerequisites

1. **Ollama** running locally at `http://127.0.0.1:11434`
2. **devstral:latest** model installed in Ollama
3. **uv** to manage dependencies

## Setup

### 1. Install Ollama and pull the model

```bash
# If you haven't already, install Ollama
# Then pull the model:
ollama pull devstral:latest
```

### 2. Install Python dependencies

```bash
uv sync
```

## Usage

### 1. Start Ollama

```bash
ollama serve
```

### 2. Run the agent

```bash
python main.py
```

### 3. Interact with the agent

Once started, open `http://127.0.0.1:7860` (the default Gradio port; Ollama itself listens on 11434) in your browser to see the Gradio app interface.
You will see a textbox where you can enter the name of the module you want to update.
Wait for the agent to do its job!
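The interface described above could be wired up roughly like this (a sketch, not the project's actual `main.py`; the `annotate` placeholder stands in for the real annotation pipeline):

```python
def annotate(module_name: str) -> str:
    # Placeholder for the real pipeline: fetch meta.yml, ask the agent
    # for EDAM terms, and return the updated file.
    return f"Looking up EDAM terms for '{module_name}'..."


def launch_app() -> None:
    # Imported here so the pipeline code stays importable without Gradio.
    import gradio as gr

    demo = gr.Interface(
        fn=annotate,
        inputs=gr.Textbox(label="Module name"),
        outputs="text",
        title="AgentOntology",
    )
    demo.launch()  # serves on Gradio's default port, 7860


if __name__ == "__main__":
    launch_app()
```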

## How it works

We have implemented a pipeline using Python functions, calling AI agents when needed.

1. We pull the `meta.yml` file from the requested nf-core module (this file contains the module metadata). ➡️ [Python function]
2. We ask the agent to retrieve the ontology terms from the EDAM database and select the relevant term for each input and output file. ➡️ [`CodeAgent` with a `LiteLLMModel`]
3. We return the ontology terms and the updated `meta.yml` file. ➡️ [Python function]
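The steps above can be sketched as follows. This is a minimal illustration, not the project's actual code: the raw-GitHub URL layout for step 1, the helper names, and the prompt text are assumptions; the agent step uses the smolagents `CodeAgent`/`LiteLLMModel` API and needs a running Ollama server.

```python
import urllib.request


def meta_yml_url(module: str) -> str:
    # Step 1 helper: nf-core modules live under
    # modules/nf-core/<module>/meta.yml in the nf-core/modules repo (assumed layout).
    return (
        "https://raw.githubusercontent.com/nf-core/modules/"
        f"master/modules/nf-core/{module}/meta.yml"
    )


def pull_meta_yml(module: str) -> str:
    # Step 1: fetch the module metadata file.
    with urllib.request.urlopen(meta_yml_url(module)) as resp:
        return resp.read().decode()


def annotate_module(module: str) -> str:
    # Steps 2-3: ask the agent for EDAM terms, return the updated meta.yml.
    from smolagents import CodeAgent, LiteLLMModel  # needed only for this step

    meta = pull_meta_yml(module)
    model = LiteLLMModel(
        model_id="ollama_chat/devstral:latest",
        api_base="http://127.0.0.1:11434",
    )
    agent = CodeAgent(tools=[], model=model)
    return agent.run(
        "For each input and output file in this nf-core meta.yml, pick the "
        "matching EDAM ontology term and return the updated YAML:\n" + meta
    )
```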