# 🦙 Ollama Setup Guide
## Overview
Ollama provides free, local LLM inference for agentic workflows. For best results, use a stable, capable model.
## Model Selection & Setup
### 1. List Available Models
```bash
ollama list
```
### 2. Pull a Recommended Model
- **Llama 3.2 (3B, fast, reliable):**
```bash
ollama pull llama3.2
```
- **Qwen 2.5 (7B, good balance):**
```bash
ollama pull qwen2.5:7b
```
- **Mistral (7B, popular):**
```bash
ollama pull mistral
```
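To confirm a pull succeeded before wiring the model into `.env`, a quick check like the following can help. This is a sketch assuming `llama3.2`; substitute whichever model you pulled.

```shell
#!/bin/sh
# Check that the pulled model shows up in the local registry.
# MODEL is an assumption -- replace with the model you actually pulled.
MODEL="llama3.2"
if command -v ollama >/dev/null 2>&1; then
  if ollama list | grep -q "$MODEL"; then
    echo "$MODEL is available locally"
  else
    echo "$MODEL not found; run: ollama pull $MODEL" >&2
  fi
else
  echo "ollama CLI not installed; see https://ollama.com" >&2
fi
```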
### 3. Update `.env`
```bash
OLLAMA_MODEL=llama3.2
# or any model from `ollama list`
```
### 4. Run Tests
```bash
uv run test_agents.py
```
## Troubleshooting
- **Model not found:**
- Pull the model with `ollama pull <model>`
- **Want to use OpenAI/Google instead?**
- Comment out Ollama lines in `.env`:
```bash
# OLLAMA_BASE_URL=http://localhost:11434
# OLLAMA_MODEL=llama3.2
```
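Before blaming the model, it is worth confirming the Ollama server itself is reachable. One way, assuming the default port 11434 (the same value as `OLLAMA_BASE_URL` above), is to probe the `/api/tags` endpoint, which lists installed models:

```shell
#!/bin/sh
# Probe the Ollama server's /api/tags endpoint (lists installed models).
# Falls back to localhost:11434, Ollama's default, if OLLAMA_BASE_URL is unset.
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
if curl -sf "$BASE_URL/api/tags" >/dev/null 2>&1; then
  STATUS="up"
else
  STATUS="down"
fi
echo "Ollama server is $STATUS at $BASE_URL"
```

If the server is down, `ollama serve` starts it in the foreground.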
## Quick Fix
If tests fail because of a missing or unsupported model, reset to a known-good configuration. Update `.env` to use a widely available model:
```bash
OLLAMA_MODEL=llama3.2
```
Then pull the model:
```bash
ollama pull llama3.2
```
Run your tests:
```bash
uv run test_agents.py
```
## Notes
- Larger models (7B+) require more RAM (8GB+ recommended)
- For best tool calling, avoid very small models (e.g., qwen3:0.6b)
- Ollama is free, local, and works offline
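To sanity-check the offline setup end to end, you can send a one-off prompt through Ollama's REST API. This sketch assumes `llama3.2` is pulled and the server is running, and degrades to a note if the server is unreachable:

```shell
#!/bin/sh
# One-off, non-streaming prompt via Ollama's /api/generate endpoint.
# Assumes llama3.2 is pulled; guarded so it exits cleanly if the
# server is not running.
BASE_URL="${OLLAMA_BASE_URL:-http://localhost:11434}"
PAYLOAD='{"model": "llama3.2", "prompt": "Reply with OK", "stream": false}'
if RESPONSE=$(curl -sf "$BASE_URL/api/generate" -d "$PAYLOAD" 2>/dev/null); then
  echo "$RESPONSE"
else
  echo "Server unreachable at $BASE_URL; start it with: ollama serve" >&2
fi
```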
---
**Ollama is a great local fallback for agentic AI workflows!**