---
title: ORYNXML Complete AI Platform
emoji: 🤖
colorFrom: blue
colorTo: indigo
sdk: docker
app_port: 7860
pinned: true
license: apache-2.0
short_description: Complete AI Platform with 211 models across 8 categories
suggested_hardware: a10g-large
suggested_storage: large
tags:
- AI
- Authentication
- Multi-Modal
- HuggingFace
- OpenManus
- Qwen
- DeepSeek
- TTS
- STT
- Face-Swap
- Avatar
- Arabic
- English
- Cloudflare
---
|
| 29 |
|
| 30 |
+
<p align="center">
|
| 31 |
+
<img src="assets/logo.jpg" width="200"/>
|
| 32 |
+
</p>
|
| 33 |
|
| 34 |
+
English | [中文](README_zh.md) | [한국어](README_ko.md) | [日本語](README_ja.md)
|
| 35 |
|
| 36 |
+
[](https://github.com/FoundationAgents/OpenManus/stargazers)
|
| 37 |
+
 
|
| 38 |
+
[](https://opensource.org/licenses/MIT)  
|
| 39 |
+
[](https://discord.gg/DYn29wFk9z)
|
| 40 |
+
[](https://huggingface.co/spaces/lyh-917/OpenManusDemo)
|
| 41 |
+
[](https://doi.org/10.5281/zenodo.15186407)
|
| 42 |
|
| 43 |
+
# 👋 OpenManus - Complete AI Platform
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 44 |
|
| 45 |
+
🤖 **200+ AI Models + Mobile Authentication + Cloudflare Services**
|
| 46 |
|
| 47 |
+
Manus is incredible, but OpenManus can achieve any idea without an *Invite Code* 🛫!
|
|
|
|
|
|
|
|
|
|
|
|
|
| 48 |
|

## 🌟 Environment Variables

Set these in your HuggingFace Space settings for full functionality:

```bash
# Required for full Cloudflare integration
CLOUDFLARE_API_TOKEN=your_cloudflare_token
CLOUDFLARE_ACCOUNT_ID=your_account_id
CLOUDFLARE_D1_DATABASE_ID=your_d1_database_id
CLOUDFLARE_R2_BUCKET_NAME=your_r2_bucket
CLOUDFLARE_KV_NAMESPACE_ID=your_kv_namespace

# Enhanced AI model access
HF_TOKEN=your_huggingface_token
OPENAI_API_KEY=your_openai_key
ANTHROPIC_API_KEY=your_anthropic_key

# Application configuration
ENVIRONMENT=production
LOG_LEVEL=INFO
SECRET_KEY=your_secret_key
```
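As a quick sanity check before startup, the required names above can be verified from Python. A minimal sketch (the variable names come from the list above; the helper itself is ours, not part of the platform):

```python
import os

# Required and optional variable names, taken from the list above.
REQUIRED = [
    "CLOUDFLARE_API_TOKEN",
    "CLOUDFLARE_ACCOUNT_ID",
    "CLOUDFLARE_D1_DATABASE_ID",
    "CLOUDFLARE_R2_BUCKET_NAME",
    "CLOUDFLARE_KV_NAMESPACE_ID",
]
OPTIONAL = ["HF_TOKEN", "OPENAI_API_KEY", "ANTHROPIC_API_KEY"]


def missing_vars(env=None):
    """Return the required variable names that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

The optional keys only unlock extra model access, so a sketch like this would treat them as non-fatal.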

Our team members [@Xinbin Liang](https://github.com/mannaandpoem) and [@Jinyu Xiang](https://github.com/XiangJinyu) (core authors), along with [@Zhaoyang Yu](https://github.com/MoshiQAQ), [@Jiayi Zhang](https://github.com/didiforgithub), and [@Sirui Hong](https://github.com/stellaHSR), are from [@MetaGPT](https://github.com/geekan/MetaGPT). The prototype was launched within 3 hours, and we keep building!

It's a simple implementation, so we welcome any suggestions, contributions, and feedback!

Enjoy your own agent with OpenManus!

We're also excited to introduce [OpenManus-RL](https://github.com/OpenManus/OpenManus-RL), an open-source project dedicated to reinforcement learning (RL)-based tuning methods (such as GRPO) for LLM agents, developed collaboratively by researchers from UIUC and OpenManus.

## Project Demo

<video src="https://private-user-images.githubusercontent.com/61239030/420168772-6dcfd0d2-9142-45d9-b74e-d10aa75073c6.mp4?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NDEzMTgwNTksIm5iZiI6MTc0MTMxNzc1OSwicGF0aCI6Ii82MTIzOTAzMC80MjAxNjg3NzItNmRjZmQwZDItOTE0Mi00NWQ5LWI3NGUtZDEwYWE3NTA3M2M2Lm1wND9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAzMDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMzA3VDAzMjIzOVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTdiZjFkNjlmYWNjMmEzOTliM2Y3M2VlYjgyNDRlZDJmOWE3NWZhZjE1MzhiZWY4YmQ3NjdkNTYwYTU5ZDA2MzYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.UuHQCgWYkh0OQq9qsUWqGsUbhG3i9jcZDAMeHjLt5T4" data-canonical-src="https://private-user-images.githubusercontent.com/61239030/420168772-6dcfd0d2-9142-45d9-b74e-d10aa75073c6.mp4?jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJnaXRodWIuY29tIiwiYXVkIjoicmF3LmdpdGh1YnVzZXJjb250ZW50LmNvbSIsImtleSI6ImtleTUiLCJleHAiOjE3NDEzMTgwNTksIm5iZiI6MTc0MTMxNzc1OSwicGF0aCI6Ii82MTIzOTAzMC80MjAxNjg3NzItNmRjZmQwZDItOTE0Mi00NWQ5LWI3NGUtZDEwYWE3NTA3M2M2Lm1wND9YLUFtei1BbGdvcml0aG09QVdTNC1ITUFDLVNIQTI1NiZYLUFtei1DcmVkZW50aWFsPUFLSUFWQ09EWUxTQTUzUFFLNFpBJTJGMjAyNTAzMDclMkZ1cy1lYXN0LTElMkZzMyUyRmF3czRfcmVxdWVzdCZYLUFtei1EYXRlPTIwMjUwMzA3VDAzMjIzOVomWC1BbXotRXhwaXJlcz0zMDAmWC1BbXotU2lnbmF0dXJlPTdiZjFkNjlmYWNjMmEzOTliM2Y3M2VlYjgyNDRlZDJmOWE3NWZhZjE1MzhiZWY4YmQ3NjdkNTYwYTU5ZDA2MzYmWC1BbXotU2lnbmVkSGVhZGVycz1ob3N0In0.UuHQCgWYkh0OQq9qsUWqGsUbhG3i9jcZDAMeHjLt5T4" controls="controls" muted="muted" class="d-block rounded-bottom-2 border-top width-fit" style="max-height:640px; min-height: 200px"></video>

## Installation

We provide two installation methods. Method 2 (using uv) is recommended for faster installation and better dependency management.

### Method 1: Using conda

1. Create a new conda environment:

```bash
conda create -n open_manus python=3.12
conda activate open_manus
```

2. Clone the repository:

```bash
git clone https://github.com/FoundationAgents/OpenManus.git
cd OpenManus
```

3. Install dependencies:

```bash
pip install -r requirements.txt
```

### Method 2: Using uv (Recommended)

1. Install uv (a fast Python package installer and resolver):

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

2. Clone the repository:

```bash
git clone https://github.com/FoundationAgents/OpenManus.git
cd OpenManus
```

3. Create a new virtual environment and activate it:

```bash
uv venv --python 3.12
source .venv/bin/activate  # On Unix/macOS
# Or on Windows:
# .venv\Scripts\activate
```

4. Install dependencies:

```bash
uv pip install -r requirements.txt
```

### Browser Automation Tool (Optional)

```bash
playwright install
```

## Configuration

OpenManus requires configuration for the LLM APIs it uses. Follow these steps to set up your configuration:

1. Create a `config.toml` file in the `config` directory (you can copy from the example):

```bash
cp config/config.example.toml config/config.toml
```

2. Edit `config/config.toml` to add your API keys and customize settings:

```toml
# Global LLM configuration
[llm]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # Replace with your actual API key
max_tokens = 4096
temperature = 0.0

# Optional configuration for specific LLM models
[llm.vision]
model = "gpt-4o"
base_url = "https://api.openai.com/v1"
api_key = "sk-..."  # Replace with your actual API key
```

## Quick Start

One line to run OpenManus:

```bash
python main.py
```

Then input your idea via the terminal!
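Conceptually, the entry point is just a read-eval loop over your ideas. A toy sketch of that interaction (the `run_agent` callable is a stand-in, not the project's actual API):

```python
def prompt_loop(run_agent, read=input, write=print):
    """Read ideas until EOF or 'exit' and hand each one to the agent."""
    while True:
        try:
            idea = read("Enter your idea: ").strip()
        except EOFError:
            break
        if not idea or idea.lower() == "exit":
            break
        write(run_agent(idea))
```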

For the MCP tool version, you can run:

```bash
python run_mcp.py
```

For the unstable multi-agent version, you can also run:

```bash
python run_flow.py
```

### Custom Adding Multiple Agents

Currently, besides the general OpenManus Agent, we have also integrated the DataAnalysis Agent, which is suitable for data analysis and data visualization tasks. You can add this agent to `run_flow` in `config.toml`.

```toml
# Optional configuration for run-flow
[runflow]
use_data_analysis_agent = true  # Disabled by default, change to true to activate
```
In addition, you need to install the relevant dependencies to ensure the agent runs properly: [Detailed Installation Guide](app/tool/chart_visualization/README.md#installation)

## How to contribute

We welcome any friendly suggestions and helpful contributions! Just create issues or submit pull requests.

Or contact @mannaandpoem via 📧 email: mannaandpoem@gmail.com

**Note**: Before submitting a pull request, please use the pre-commit tool to check your changes. Run `pre-commit run --all-files` to execute the checks.

## Community Group

Join our networking group on Feishu and share your experience with other developers!

<div align="center" style="display: flex; gap: 20px;">
  <img src="assets/community_group.jpg" alt="OpenManus Community Group" width="300" />
</div>

## Star History

[](https://star-history.com/#FoundationAgents/OpenManus&Date)

## Sponsors

Thanks to [PPIO](https://ppinfra.com/user/register?invited_by=OCPKCN&utm_source=github_openmanus&utm_medium=github_readme&utm_campaign=link) for computing resource support.

> PPIO: The most affordable and easily-integrated MaaS and GPU cloud solution.

## Acknowledgement

Thanks to [anthropic-computer-use](https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo), [browser-use](https://github.com/browser-use/browser-use) and [crawl4ai](https://github.com/unclecode/crawl4ai) for providing basic support for this project!

Additionally, we are grateful to [AAAJ](https://github.com/metauto-ai/agent-as-a-judge), [MetaGPT](https://github.com/geekan/MetaGPT), [OpenHands](https://github.com/All-Hands-AI/OpenHands) and [SWE-agent](https://github.com/SWE-agent/SWE-agent).

We also thank stepfun (阶跃星辰) for supporting our Hugging Face demo space.

OpenManus is built by contributors from MetaGPT. Huge thanks to this agent community!

## Cite

```bibtex
@misc{openmanus2025,
  author = {Xinbin Liang and Jinyu Xiang and Zhaoyang Yu and Jiayi Zhang and Sirui Hong and Sheng Fan and Xiao Tang},
  title = {OpenManus: An open-source framework for building general AI agents},
  year = {2025},
  publisher = {Zenodo},
  doi = {10.5281/zenodo.15186407},
  url = {https://doi.org/10.5281/zenodo.15186407},
}
```