Update README.md
[...]
colorFrom: yellow
colorTo: indigo
pinned: true
sdk_version: 6.0.0
short_description: A discovery engine
---

# NexaSci Agent Kit

A local-first scientific agent stack featuring the NexaSci Assistant (10B model), tool calling, Python sandbox execution, and scientific paper retrieval.

## Quick Start

### Prerequisites
[...]

```bash
python examples/demo_agent.py --prompt "Your prompt here"
```
## Components

- **NexaSci Assistant (LLM)**: 10B Falcon model post-trained for tool calling
- **Model Server**: HTTP API for model inference (port 8001)

[...]

- `papers.search_corpus`: Local corpus semantic search
- **Agent Controller**: Orchestrates the LLM → tool server loop
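The controller's LLM → tool-server loop can be sketched as follows. The kit's actual wire format and endpoints are not specified in this README, so the JSON tool-call convention and the injected `call_llm`/`call_tool` callables below are illustrative assumptions, not the repository's API:

```python
import json

def run_agent(prompt, call_llm, call_tool, max_turns=5):
    """Minimal LLM -> tool-server loop (hypothetical protocol).

    call_llm(messages) returns either plain text (final answer) or a JSON
    tool call like {"tool": "papers.search_corpus", "args": {...}};
    call_tool(name, args) executes it and returns a result string. Both
    are injected so the HTTP transport stays out of the loop logic.
    """
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        reply = call_llm(messages)
        try:
            call = json.loads(reply)
        except (ValueError, TypeError):
            return reply  # not JSON: treat as the final answer
        if not isinstance(call, dict) or "tool" not in call:
            return reply
        result = call_tool(call["tool"], call.get("args", {}))
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "tool", "content": result})
    return "max turns exceeded"
```

With stub callables, one tool round-trip followed by a plain-text answer ends the loop.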
## Tools

### Python Sandbox

Execute Python code with resource limits:

[...]

- **Paper Fetch**: Get detailed metadata
- **Corpus Search**: Semantic search over local SPECTER2-embedded papers
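The sandbox's actual limits and implementation are elided from this diff; purely as a stdlib illustration of the resource-limit idea (POSIX-only, not the kit's sandbox), running untrusted code in a child interpreter might look like:

```python
import resource
import subprocess
import sys

def run_sandboxed(code, cpu_seconds=5, memory_bytes=512 * 1024 * 1024, timeout=10):
    """Run a code string in a fresh interpreter under CPU and address-space rlimits.

    Limit values here are illustrative defaults, not the ones in agent/config.yaml.
    """
    def set_limits():  # runs in the child process just before exec
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (memory_bytes, memory_bytes))

    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True, text=True,
        timeout=timeout, preexec_fn=set_limits,
    )
    return proc.returncode, proc.stdout, proc.stderr
```

A run that exceeds the address-space limit fails with a `MemoryError` in the child instead of exhausting the host.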
## Example Prompts

See `examples/sample_prompts.md` for example prompts that showcase:

- Python code generation

[...]

- Experimental design
- Combined reasoning workflows
## Configuration

Edit `agent/config.yaml` to configure:

- Model paths and settings

[...]

- Tool server URLs
- Sandbox limits
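The config keys themselves are elided from this diff. Purely as a hypothetical illustration of the three areas listed above — every key name below is invented, and only the model server's port 8001 comes from this README — a defaults-plus-overrides pattern for the loaded config might look like:

```python
# Hypothetical shape of agent/config.yaml after loading. Key names are
# illustrative; port 8001 (model server) is from the Components section,
# the tool-server port and sandbox limits are assumptions.
DEFAULT_CONFIG = {
    "model": {
        "path": "Allanatrix/Nexa_Sci_distilled_Falcon-10B",
        "server_url": "http://localhost:8001",
    },
    "tools": {"server_url": "http://localhost:8002"},  # assumed port
    "sandbox": {"cpu_seconds": 5, "memory_mb": 512},   # assumed limits
}

def merge_config(overrides, base=DEFAULT_CONFIG):
    """Return base with per-section overrides applied (one level deep)."""
    return {
        section: {**defaults, **overrides.get(section, {})}
        for section, defaults in base.items()
    }
```

Overriding one key in a section leaves that section's other defaults intact.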
## Project Structure

```
Agent_kit/
[...]
└── docker-compose.yml    # Docker orchestration
```
## Docker Details

The Docker setup includes:

- **Base Image**: `nvidia/cuda:12.1.0-runtime-ubuntu22.04`

[...]

```bash
docker build -t nexasci-agent:latest .
docker-compose up
```
## Troubleshooting

### GPU Not Available

```bash
[...]
pip install torch torchvision torchaudio --index-url https://download.pytorch.or
```

[...]

- Verify model path in config
- Check HuggingFace cache: `~/.cache/huggingface/`
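Since the fix commands for a missing GPU are partly elided above, a quick way to confirm what the stack will actually see is a small PyTorch probe (standard `torch.cuda` calls; nothing kit-specific):

```python
def cuda_status():
    """Return a one-line diagnostic covering the common failure modes:
    torch missing, CPU-only build / driver problem, or a working device."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if not torch.cuda.is_available():
        return "torch installed, but no CUDA device visible (check drivers / CUDA wheel)"
    return f"{torch.cuda.device_count()} CUDA device(s), e.g. {torch.cuda.get_device_name(0)}"

if __name__ == "__main__":
    print(cuda_status())
```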
## Documentation

- **Quick Start**: See `QUICKSTART.md` for detailed setup
- **Specification**: See `Spec.md` for architecture details
- **Sample Prompts**: See `examples/sample_prompts.md`

## Acknowledgments

- Model: `Allanatrix/Nexa_Sci_distilled_Falcon-10B`
- SPECTER2: `allenai/specter2_base`