# THAU AGI v2 - Proto-AGI System
THAU = THomas + AUrora
A Proto-AGI (Prototype Artificial General Intelligence) system fine-tuned from TinyLlama-1.1B with specialized training in reasoning, tool calling, and Spanish language support.
## Features
- ReAct Cycle: THINK -> PLAN -> ACT -> OBSERVE -> REFLECT
- Experiential Learning: Learns from past interactions
- Metacognition: Self-evaluation for improvement
- Web Search: Internet search capabilities
- Multi-Agent: Collaboration between specialized agents (CODER, REVIEWER, RESEARCHER, PLANNER, TESTER)
- Knowledge Base: RAG (Retrieval-Augmented Generation)
- Feedback Loop: Continuous improvement with user feedback
- Tool Calling: Integrated tools for calculations, file operations, code execution
- TTS Support: Text-to-Speech integration
- Image Generation: Stable Diffusion integration
- MCP Integration: Model Context Protocol support
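The ReAct cycle above can be sketched as a simple loop. This is an illustrative sketch only, not THAU's actual internals: `decide` stands in for the model (it returns either a `(tool_name, argument)` pair or `("final", answer)`), and `tools` is a name-to-function map.

```python
def react_cycle(task, decide, tools, max_steps=5):
    """Illustrative THINK -> PLAN -> ACT -> OBSERVE -> REFLECT loop.

    `decide(task, observations)` stands in for the model: it returns either
    ("final", answer) when reflection concludes the task is done, or a
    (tool_name, argument) pair naming the next action.
    """
    observations = []
    for _ in range(max_steps):
        # THINK + PLAN: the model chooses the next action.
        action, arg = decide(task, observations)
        if action == "final":           # REFLECT decided we are done
            return arg
        result = tools[action](arg)     # ACT
        observations.append(result)     # OBSERVE
    return observations[-1] if observations else None

# Toy run: answer an arithmetic question with a single calculate tool.
tools = {"calculate": lambda expr: eval(expr)}

def decide(task, observations):
    if not observations:
        return ("calculate", "25 * 4 + 100")
    return ("final", observations[-1])

print(react_cycle("What is 25 * 4 + 100?", decide, tools))  # -> 200
```

In a real agent the `decide` step would be a model call that parses the tool name and argument out of generated text; the loop structure stays the same.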
## Available Tools

| Tool | Description |
|---|---|
| `calculate` | Mathematical calculations |
| `read_file` | Read files |
| `write_file` | Write files |
| `list_directory` | List directories |
| `execute_python` | Execute Python code |
| `web_search` | Search the internet |
| `fetch_url` | Fetch URL content |
| `research` | In-depth research |
| `text_to_speech` | Convert text to speech |
| `generate_image` | Generate images |
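Tools like those in the table are typically exposed through a name-to-function registry that the agent dispatches into. A hypothetical sketch (only two of the tools shown, with a sandboxed `eval` standing in for the real calculator):

```python
import os

# Hypothetical registry mirroring the table above (not THAU's actual code).
TOOLS = {
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "list_directory": lambda path: "\n".join(sorted(os.listdir(path))),
}

def call_tool(name, argument):
    """Dispatch a tool call by name; unknown tools return an error string."""
    if name not in TOOLS:
        return f"Unknown tool: {name}"
    return TOOLS[name](argument)

print(call_tool("calculate", "25 * 4 + 100"))  # -> 200
```

Returning an error string (rather than raising) lets the model see the failure as an observation and recover on the next step of the cycle.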
## Operation Modes
- CHAT: Casual conversation
- TASK: Specific tasks with tools
- RESEARCH: Deep information search
- COLLABORATIVE: Multi-agent collaboration
- LEARNING: Intensive learning mode
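One way these modes might be represented and selected is sketched below. The enum names match the list above; the keyword routing is purely hypothetical, for illustration:

```python
from enum import Enum

class Mode(Enum):
    """Operation modes listed above; values are illustrative."""
    CHAT = "chat"
    TASK = "task"
    RESEARCH = "research"
    COLLABORATIVE = "collaborative"
    LEARNING = "learning"

def pick_mode(message):
    """Naive keyword router, purely for illustration."""
    text = message.lower()
    if "research" in text:
        return Mode.RESEARCH
    if any(word in text for word in ("file", "calculate", "run")):
        return Mode.TASK
    return Mode.CHAT

print(pick_mode("Please calculate 2 + 2"))  # -> Mode.TASK
```

A production system would let the model itself classify the request; the enum simply keeps the mode set explicit and typed.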
## Usage

### With Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("luepow/thau-agi-v2")
tokenizer = AutoTokenizer.from_pretrained("luepow/thau-agi-v2")

# TinyLlama-style chat format: <|system|>, <|user|>, <|assistant|> turns.
prompt = "<|system|>\nYou are THAU AGI v2, a helpful AI assistant.</s>\n<|user|>\nWhat is 25 * 4 + 100?</s>\n<|assistant|>\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
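Building that prompt string by hand is error-prone, so it can help to wrap the format in a small helper. The helper below is a hypothetical convenience function; the template itself is taken verbatim from the example above:

```python
def build_prompt(system, user):
    """Build a prompt in the TinyLlama chat format shown above."""
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "You are THAU AGI v2, a helpful AI assistant.",
    "What is 25 * 4 + 100?",
)
print(prompt)
```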
### With Ollama

```shell
ollama pull luepow/thau:agi-v2
ollama run luepow/thau:agi-v2
```
### With Gradio Interface

```shell
git clone https://github.com/luepow/thau.git
cd thau
pip install -r requirements.txt
python scripts/gradio_thau_ollama.py
```
## Training Data
The model was fine-tuned on:
- Programming tutorials (Python, JavaScript, Rust, Go, Java)
- Mathematical reasoning
- Tool calling patterns
- Spanish language content
- DevOps and cloud infrastructure
- Agile methodologies
- UX/CSS frameworks
## Model Card
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Parameters: 1.1B
- Context Length: 4096 tokens
- Languages: English, Spanish
- License: MIT
## Links
- GitHub: https://github.com/luepow/thau
- Ollama: https://ollama.com/luepow/thau
- Support: Buy Me a Coffee - luepowg
## Credits
Developed with love for Thomas & Aurora.
THAU = THomas + AUrora