Ted AI - Llama 3 8B Fine-tuned

No-bullshit AI assistant with quantitative trading expertise and tool-calling capabilities.

Files

| File | Description | Size |
|------|-------------|------|
| llama3-base.gguf | Llama 3 8B Q4_K_M base model | ~4.6 GB |
| ted-lora.gguf | Ted LoRA adapter | ~161 MB |
| Modelfile.local | Ollama Modelfile | - |
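
The actual Modelfile.local ships with the repo; as a sketch, an Ollama Modelfile that layers a GGUF LoRA adapter on a GGUF base typically looks like the following (the parameter values and system prompt here are illustrative assumptions, not the repo's actual contents):

```
# Base model weights (Q4_K_M quantized GGUF)
FROM ./llama3-base.gguf

# Apply the fine-tuned LoRA adapter on top of the base
ADAPTER ./ted-lora.gguf

# Illustrative sampling parameter; the real Modelfile may differ
PARAMETER temperature 0.7

# Illustrative system prompt; the real persona prompt lives in Modelfile.local
SYSTEM "You are Ted, a direct quantitative-trading assistant."
```

`ollama create ted -f Modelfile.local` merges the adapter into a single local model named `ted`, so later runs don't need the two files kept side by side in the working directory.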

Usage with Ollama

# Download files
wget https://huggingface.co/44dummies/ted-llama3-8b-gguf/resolve/main/llama3-base.gguf
wget https://huggingface.co/44dummies/ted-llama3-8b-gguf/resolve/main/ted-lora.gguf
wget https://huggingface.co/44dummies/ted-llama3-8b-gguf/resolve/main/Modelfile.local

# Create model
ollama create ted -f Modelfile.local

# Run
ollama run ted

Training Details

  • Base Model: unsloth/llama-3-8b-bnb-4bit
  • Method: LoRA (r=16, alpha=16)
  • Training: 100 steps on a custom dataset
  • Focus: Direct personality, trading knowledge, tool-calling
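
For scale, here is a rough count of the trainable parameters r=16 adds, assuming LoRA wraps all attention and MLP projections (the card states only r and alpha, not the target modules, so the module list is an assumption; architecture numbers are the public Llama 3 8B config):

```python
# Rough count of trainable parameters added by LoRA (r=16) on Llama 3 8B.
HIDDEN = 4096         # hidden size
INTERMEDIATE = 14336  # MLP intermediate size
KV_DIM = 1024         # k/v projection output: 8 KV heads * 128 head dim
LAYERS = 32
R = 16

# (d_in, d_out) for each projection LoRA is assumed to wrap
projections = [
    (HIDDEN, HIDDEN),        # q_proj
    (HIDDEN, KV_DIM),        # k_proj
    (HIDDEN, KV_DIM),        # v_proj
    (HIDDEN, HIDDEN),        # o_proj
    (HIDDEN, INTERMEDIATE),  # gate_proj
    (HIDDEN, INTERMEDIATE),  # up_proj
    (INTERMEDIATE, HIDDEN),  # down_proj
]

# LoRA adds an (r x d_in) matrix A and a (d_out x r) matrix B per projection,
# i.e. r * (d_in + d_out) extra parameters each.
per_layer = sum(R * (d_in + d_out) for d_in, d_out in projections)
total = per_layer * LAYERS
print(f"{total:,} trainable params (~{total / 8e9:.2%} of the ~8B base)")
```

Under these assumptions the adapter trains roughly 42M parameters, about half a percent of the base model, which is consistent with the ~161 MB adapter file size (LoRA weights are stored unquantized).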

Personality

Ted is direct, uses dark humor, skips disclaimers, and actually solves problems. Specializes in quantitative trading, risk management, and systematic approaches.

Model Info

  • Format: GGUF
  • Model size: 8B params
  • Architecture: llama