THAU AGI v2 - Proto-AGI System

THAU = THomas + AUrora

A Proto-AGI (prototype Artificial General Intelligence) system fine-tuned from TinyLlama-1.1B, with specialized training in reasoning, tool calling, and Spanish-language content.

Features

  • ReAct Cycle: THINK -> PLAN -> ACT -> OBSERVE -> REFLECT
  • Experiential Learning: Learns from past interactions
  • Metacognition: Self-evaluation for improvement
  • Web Search: Internet search capabilities
  • Multi-Agent: Collaboration between specialized agents (CODER, REVIEWER, RESEARCHER, PLANNER, TESTER)
  • Knowledge Base: RAG (Retrieval Augmented Generation)
  • Feedback Loop: Continuous improvement with user feedback
  • Tool Calling: Integrated tools for calculations, file operations, code execution
  • TTS Support: Text-to-Speech integration
  • Image Generation: Stable Diffusion integration
  • MCP Integration: Model Context Protocol support
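
The ReAct cycle listed above can be sketched as a simple agent loop. This is a minimal illustration with hypothetical names (`react_cycle`, the `tools` dict), not the project's actual implementation, which drives each phase with the model:

```python
# Minimal sketch of the THINK -> PLAN -> ACT -> OBSERVE -> REFLECT cycle.
# All names here are illustrative, not THAU's real API.

def react_cycle(goal, tools, max_steps=5):
    observations = []
    for step in range(max_steps):
        thought = f"Step {step}: how do I advance '{goal}'?"  # THINK
        plan = ("calculate", goal)                            # PLAN: pick a tool
        tool_name, tool_arg = plan
        result = tools[tool_name](tool_arg)                   # ACT
        observations.append(result)                           # OBSERVE
        if result is not None:                                # REFLECT: are we done?
            return result
    return observations[-1]

# Toy tool registry with a single arithmetic tool.
tools = {"calculate": lambda expr: eval(expr, {"__builtins__": {}})}
print(react_cycle("25 * 4 + 100", tools))  # 200
```

In the real system each phase would be produced by the model itself rather than hard-coded; the loop structure is what matters.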

Available Tools

Tool            Description
calculate       Perform mathematical calculations
read_file       Read files
write_file      Write files
list_directory  List directories
execute_python  Execute Python code
web_search      Search the internet
fetch_url       Fetch the content of a URL
research        Perform deep research
text_to_speech  Convert text to speech
generate_image  Generate images
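
A tool call is typically a structured request the model emits and the runtime dispatches to a registered function. The JSON call format and the `dispatch` helper below are assumptions for illustration, shown here with the `write_file` and `read_file` tools:

```python
import json
import os
import tempfile

# Two of the tools from the table above, as plain Python functions.
def write_file(path: str, content: str) -> str:
    with open(path, "w") as f:
        f.write(content)
    return f"wrote {len(content)} bytes"

def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

TOOLS = {"write_file": write_file, "read_file": read_file}

def dispatch(tool_call_json: str):
    # Parse a model-emitted tool call and route it to the matching function.
    call = json.loads(tool_call_json)
    return TOOLS[call["tool"]](**call["args"])

path = os.path.join(tempfile.gettempdir(), "thau_demo.txt")
dispatch(json.dumps({"tool": "write_file", "args": {"path": path, "content": "hola"}}))
print(dispatch(json.dumps({"tool": "read_file", "args": {"path": path}})))  # hola
```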

Operation Modes

  1. CHAT: Casual conversation
  2. TASK: Specific tasks with tools
  3. RESEARCH: Deep information search
  4. COLLABORATIVE: Multi-agent collaboration
  5. LEARNING: Intensive learning mode
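
One common way to implement such modes is to map each one to a different system prompt before generation. The enum and `system_prompt` helper below are hypothetical, a sketch of the idea rather than THAU's actual routing:

```python
from enum import Enum

# Illustrative mode routing; names and prompts are assumptions.
class Mode(Enum):
    CHAT = "chat"
    TASK = "task"
    RESEARCH = "research"
    COLLABORATIVE = "collaborative"
    LEARNING = "learning"

def system_prompt(mode: Mode) -> str:
    # Each mode selects a different system prompt for the model.
    prompts = {
        Mode.CHAT: "You are THAU AGI v2, a friendly conversational assistant.",
        Mode.TASK: "You are THAU AGI v2. Use your tools to complete the task.",
        Mode.RESEARCH: "You are THAU AGI v2. Search deeply before answering.",
        Mode.COLLABORATIVE: "You are THAU AGI v2, coordinating specialist agents.",
        Mode.LEARNING: "You are THAU AGI v2, focused on learning from feedback.",
    }
    return prompts[mode]

print(system_prompt(Mode.TASK))
```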

Usage

With Transformers

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("luepow/thau-agi-v2")
tokenizer = AutoTokenizer.from_pretrained("luepow/thau-agi-v2")

prompt = "<|system|>\nYou are THAU AGI v2, a helpful AI assistant.</s>\n<|user|>\nWhat is 25 * 4 + 100?</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

With Ollama

ollama pull luepow/thau:agi-v2
ollama run luepow/thau:agi-v2
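
Beyond the CLI, a running Ollama server also exposes a local REST API (by default on port 11434). A minimal sketch of querying the model through it; `ask_thau` and `build_generate_request` are hypothetical helper names:

```python
import json
import urllib.request

def build_generate_request(prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": "luepow/thau:agi-v2", "prompt": prompt, "stream": False}

def ask_thau(prompt: str, host: str = "http://localhost:11434") -> str:
    # Requires `ollama serve` to be running locally with the model pulled.
    data = json.dumps(build_generate_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{host}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (with the server running):
#   print(ask_thau("What is 25 * 4 + 100?"))
```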

With Gradio Interface

git clone https://github.com/luepow/thau.git
cd thau
pip install -r requirements.txt
python scripts/gradio_thau_ollama.py

Training Data

The model was fine-tuned on:

  • Programming tutorials (Python, JavaScript, Rust, Go, Java)
  • Mathematical reasoning
  • Tool calling patterns
  • Spanish language content
  • DevOps and cloud infrastructure
  • Agile methodologies
  • UX/CSS frameworks

Model Card

  • Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Parameters: 1.1B
  • Context Length: 4096 tokens
  • Languages: English, Spanish
  • License: MIT

Credits

Developed with love for Thomas & Aurora.

THAU = THomas + AUrora
