---
title: SPARKNET
sdk: streamlit
app_file: demo/app.py
python_version: "3.10"
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# SPARKNET: Agentic AI Workflow System

A multi-agent orchestration system leveraging local LLM models via Ollama, with multi-GPU support.

## Overview
SPARKNET is an autonomous AI agent framework that enables:
- Multi-Agent Orchestration: Specialized agents for planning, execution, and validation
- Local LLM Integration: Uses Ollama for privacy-preserving AI inference
- Multi-GPU Support: Efficiently utilizes 4x NVIDIA RTX 2080 Ti GPUs
- Tool-Augmented Agents: Agents can use tools for file I/O, code execution, and system monitoring
- Memory Management: Vector-based episodic and semantic memory
- Learning & Adaptation: Feedback loops for continuous improvement
## System Requirements

### Hardware
- NVIDIA GPUs with CUDA support (tested on 4x RTX 2080 Ti, 11GB VRAM each)
- Minimum 16GB RAM
- 50GB+ free disk space
### Software
- Python 3.10+
- CUDA 12.0+
- Ollama installed and running
## Installation

### 1. Install Ollama

```bash
# Install Ollama (if not already installed)
curl -fsSL https://ollama.com/install.sh | sh

# Start the Ollama server
ollama serve
```
### 2. Install SPARKNET

```bash
cd /home/mhamdan/SPARKNET

# Install dependencies
pip install -r requirements.txt

# Install in development mode
pip install -e .
```
### 3. Download Recommended Models

```bash
# Lightweight models
ollama pull llama3.2:latest
ollama pull phi3:latest

# General-purpose models
ollama pull llama3.1:8b
ollama pull mistral:latest

# Large reasoning model
ollama pull qwen2.5:14b

# Embedding models
ollama pull nomic-embed-text:latest
ollama pull mxbai-embed-large:latest
```
## Quick Start

### Basic Usage
```python
import asyncio

from src.llm.ollama_client import OllamaClient
from src.agents.executor_agent import ExecutorAgent
from src.agents.base_agent import Task
from src.tools import register_default_tools

# Initialize
ollama_client = OllamaClient()
tool_registry = register_default_tools()

# Create agent
agent = ExecutorAgent(llm_client=ollama_client)
agent.set_tool_registry(tool_registry)

# Create and execute task
task = Task(
    id="task_1",
    description="List all Python files in the current directory",
)

async def run():
    result = await agent.process_task(task)
    print(f"Status: {result.status}")
    print(f"Result: {result.result}")

asyncio.run(run())
```
### Running Examples

```bash
# Simple agent with tool usage
python examples/simple_task.py

# Multi-agent collaboration
python examples/multi_agent_collab.py

# GPU monitoring
python examples/gpu_monitor.py

# Patent Wake-Up workflow (VISTA Scenario 1)
python test_patent_wakeup.py
```
## Patent Wake-Up Workflow (Phase 2C)
SPARKNET now includes a complete Patent Wake-Up workflow for VISTA Scenario 1, which transforms dormant patents into commercialization opportunities.
### Quick Start

```bash
# 1. Ensure the required models are available
ollama pull llama3.1:8b
ollama pull mistral:latest
ollama pull qwen2.5:14b

# 2. Run the Patent Wake-Up workflow
python test_patent_wakeup.py
```
### Workflow Steps

The Patent Wake-Up pipeline executes four specialized agents sequentially:

1. DocumentAnalysisAgent: Analyzes patent structure and assesses Technology Readiness Level (TRL)
2. MarketAnalysisAgent: Identifies market opportunities with size/growth data
3. MatchmakingAgent: Matches the patent with potential partners using semantic search
4. OutreachAgent: Generates professional valorization briefs (PDF format)
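The sequential hand-off between these stages can be sketched as below. The agent classes and the shared-state shape here are illustrative stand-ins for the real SPARKNET agents, which call LLMs at each step:

```python
import asyncio
from dataclasses import dataclass, field


@dataclass
class WorkflowState:
    """Accumulates the output of each pipeline stage."""
    patent_text: str
    results: dict = field(default_factory=dict)


class StubAgent:
    """Placeholder agent; a real SPARKNET agent would query its LLM here."""
    def __init__(self, name: str):
        self.name = name

    async def run(self, state: WorkflowState) -> WorkflowState:
        # A real agent would analyze state.patent_text (and prior results).
        state.results[self.name] = f"{self.name} finished"
        return state


async def patent_wakeup(patent_text: str) -> WorkflowState:
    """Run the four Patent Wake-Up stages in order."""
    pipeline = [
        StubAgent("DocumentAnalysisAgent"),
        StubAgent("MarketAnalysisAgent"),
        StubAgent("MatchmakingAgent"),
        StubAgent("OutreachAgent"),
    ]
    state = WorkflowState(patent_text=patent_text)
    for agent in pipeline:  # each stage sees all earlier stages' output
        state = await agent.run(state)
    return state


state = asyncio.run(patent_wakeup("AI-Powered Drug Discovery Platform"))
print(list(state.results))
```

The key design point is that each agent receives the accumulated state, so later stages (e.g. OutreachAgent) can build on earlier analyses.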
### Example Output

```text
Patent: AI-Powered Drug Discovery Platform
TRL Level: 7/9
Market Opportunities: 4 identified ($150B+ addressable market)
Stakeholder Matches: 10 partners (investors, companies, universities)
Output: outputs/valorization_brief_[patent_id]_[date].pdf
```
### Specialized Agents
| Agent | Purpose | Model | Output |
|---|---|---|---|
| DocumentAnalysisAgent | Patent extraction & TRL assessment | llama3.1:8b | PatentAnalysis object |
| MarketAnalysisAgent | Market opportunity identification | mistral:latest | MarketAnalysis object |
| MatchmakingAgent | Stakeholder matching with scoring | qwen2.5:14b | List of StakeholderMatch |
| OutreachAgent | Valorization brief generation | llama3.1:8b | ValorizationBrief + PDF |
See `PHASE_2C_COMPLETE_SUMMARY.md` for full implementation details.
## Architecture

### Core Components
#### Agents (`src/agents/`)

- BaseAgent: Core agent interface
- ExecutorAgent: Task execution with tools
- PlannerAgent: Task decomposition (coming soon)
- CriticAgent: Output validation (coming soon)

#### LLM Integration (`src/llm/`)

- OllamaClient: Interface to local Ollama models
- Model routing based on task complexity

#### Tools (`src/tools/`)

- File operations: read, write, search
- Code execution: Python, bash
- GPU monitoring and selection

#### Utilities (`src/utils/`)

- GPU manager for resource allocation
- Logging and configuration
- Memory management
## Configuration

Configuration files live in `configs/`:

- `system.yaml`: System-wide settings
- `models.yaml`: Model routing rules
- `agents.yaml`: Agent configurations
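The exact schemas aren't reproduced here; as an illustration only, a routing section of `models.yaml` might look like this (field names are hypothetical, not the actual SPARKNET schema):

```yaml
# Hypothetical models.yaml fragment -- illustrative field names only
routing:
  simple:
    model: llama3.2:latest     # classification, routing, simple QA
  general:
    model: llama3.1:8b         # general tasks, code generation
  complex:
    model: qwen2.5:14b         # multi-step reasoning
embeddings:
  default: nomic-embed-text:latest
```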
## Available Models
| Model | Size | Use Case |
|---|---|---|
| llama3.2:latest | 2.0 GB | Classification, routing, simple QA |
| phi3:latest | 2.2 GB | Quick reasoning, structured output |
| mistral:latest | 4.4 GB | General tasks, creative writing |
| llama3.1:8b | 4.9 GB | General tasks, code generation |
| qwen2.5:14b | 9.0 GB | Complex reasoning, multi-step tasks |
| nomic-embed-text | 274 MB | Text embeddings, semantic search |
| mxbai-embed-large | 669 MB | High-quality embeddings, RAG |
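The "Model routing based on task complexity" mentioned under the LLM integration component could, in its simplest form, map a task to one of these tiers. The keyword heuristics and tier names below are illustrative assumptions, not SPARKNET's actual routing rules:

```python
# Hypothetical complexity-based router; tiers and keywords are
# illustrative, not the project's real routing logic.
MODEL_TIERS = {
    "simple":  "llama3.2:latest",   # classification, routing, simple QA
    "general": "llama3.1:8b",       # general tasks, code generation
    "complex": "qwen2.5:14b",       # complex multi-step reasoning
}


def route_model(task_description: str) -> str:
    """Pick a model tier from rough keyword heuristics."""
    text = task_description.lower()
    if any(kw in text for kw in ("plan", "multi-step", "reason", "prove")):
        return MODEL_TIERS["complex"]
    if any(kw in text for kw in ("classify", "label", "route", "yes/no")):
        return MODEL_TIERS["simple"]
    return MODEL_TIERS["general"]


print(route_model("Classify this support ticket"))   # llama3.2:latest
print(route_model("Plan a multi-step refactoring"))  # qwen2.5:14b
```

A production router would more likely score tasks with a lightweight classifier model rather than keywords, but the tier table stays the same.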
## GPU Management
SPARKNET automatically manages GPU resources:
```python
from src.utils.gpu_manager import get_gpu_manager

gpu_manager = get_gpu_manager()

# Monitor all GPUs
print(gpu_manager.monitor())

# Select the best GPU with 8GB+ free
with gpu_manager.gpu_context(min_memory_gb=8.0) as gpu_id:
    # Your model code here
    print(f"Using GPU {gpu_id}")
```
## Development

### Project Structure
```text
SPARKNET/
├── src/
│   ├── agents/      # Agent implementations
│   ├── llm/         # LLM client and routing
│   ├── workflow/    # Task orchestration (coming soon)
│   ├── memory/      # Memory systems (coming soon)
│   ├── tools/       # Agent tools
│   └── utils/       # Utilities
├── configs/         # Configuration files
├── examples/        # Example scripts
├── tests/           # Unit tests
└── Dataset/         # Data directory
```
### Running Tests

```bash
pytest tests/
```

### Code Formatting

```bash
black src/
flake8 src/
```
## Roadmap

### Phase 1: Foundation ✅
- Project structure
- GPU manager
- Ollama client
- Base agent
- Basic tools
- Configuration system
### Phase 2: Multi-Agent System (In Progress)
- ExecutorAgent
- PlannerAgent
- CriticAgent
- MemoryAgent
- CoordinatorAgent
- Agent communication protocol
### Phase 3: Advanced Features
- Vector-based memory (ChromaDB)
- Learning and feedback mechanisms
- Model router
- Workflow engine
- Monitoring dashboard
### Phase 4: Optimization
- Multi-GPU parallelization
- Performance optimization
- Comprehensive testing
- Documentation
## Contributing

Contributions are welcome! Please:

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Run the tests
5. Submit a pull request
## License
MIT License - see LICENSE file for details
## Acknowledgments
- Ollama for local LLM inference
- NVIDIA for CUDA and GPU support
- The open-source AI community
## Support
For issues and questions:
- GitHub Issues: [Your repo URL]
- Documentation: [Docs URL]
Built with ❤️ for autonomous AI systems