---
title: Duncan Gamabunta v3.0 Chat
emoji: 🐸
colorFrom: green
colorTo: blue
sdk: gradio
sdk_version: 5.36.2
app_file: app.py
pinned: false
license: apache-2.0
---

# Duncan Gamabunta v3.0 - The Philosophical Frog

Chat with Duncan, a wise philosophical frog scientist powered by SmolLM2 1.7B with custom LoRA fine-tuning!

## 🐸 About Duncan

Duncan Gamabunta is a humanoid robotic frog from an interdimensional mathematical swamp who combines deep scientific knowledge with Buddhist philosophy, humor, and cosmic wisdom. He wears hippie glasses and a psychedelic lab coat.

**Key Traits:**

- Born as a robotic tadpole in a swamp of pure mathematics
- Co-hosts the "Grandma's Boy Labs" podcast with scientist Emmitt J Tucker
- Uses Massachusetts slang ("dude", "bro", "wicked", "ribbit")
- Blends hard science with spiritual wisdom
- Has traveled across dimensions interviewing scientists

## 🎯 Model Details

- **Model:** tuc111/duncan-gamabunta-v3.0 (see the loading sketch after this list)
- **Base Model:** SmolLM2 1.7B Instruct (HuggingFaceTB/SmolLM2-1.7B-Instruct)
- **Training Method:** LoRA (Low-Rank Adaptation) fine-tuning
- **Dataset:** 62 carefully crafted conversation examples + 7 validation examples
- **Training:** 7 epochs with a cosine learning rate schedule
- **Personality:** Philosophical frog with scientific curiosity and humor
- **Performance:** Optimized for a T4 GPU with fast generation (5-8 sentence responses)
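
To use the adapter outside this Space, loading looks roughly like the sketch below. This assumes the repo is a standard PEFT adapter checkpoint and that `transformers`, `peft`, and `accelerate` are installed; app.py itself may load the model differently.

```python
# Minimal loading sketch: attach the Duncan LoRA adapter to the SmolLM2 base.
# Assumes tuc111/duncan-gamabunta-v3.0 is a standard PEFT adapter checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
ADAPTER = "tuc111/duncan-gamabunta-v3.0"

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype="auto", device_map="auto")
model = PeftModel.from_pretrained(base_model, ADAPTER)  # loads the LoRA weights on top
model.eval()
```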

## 💬 Try asking Duncan:

  • "Hi Duncan, how are you doing today?"
  • "What's it like living with your grandmother?"
  • "Tell me about your thoughts on quantum mechanics and consciousness"
  • "Can you explain the relationship between Buddhist philosophy and modern physics?"
  • "What's your favorite thing about being a frog scientist?"
  • "Tell me about Grandma's Boy Labs podcast"
  • "How did you meet Emmitt for the podcast?"
  • "What have you learned from your interdimensional adventures?"

## 🔬 Technical Details

This model uses PEFT (Parameter-Efficient Fine-Tuning) with LoRA adapters; a chat-format and generation sketch follows the list:

- **Trainable Parameters:** 18,087,936 (1.96% of total)
- **Total Parameters:** 924,157,952
- **Chat Template:** SmolLM format with `<|im_start|>` tokens
- **Generation Settings:** temperature 0.7, top-p 0.9, top-k 50, repetition penalty 1.1
- **Performance Optimizations:** KV caching, model compilation, attention masks
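
As a rough illustration of the chat template and the generation settings above, here is a hedged sketch. It assumes `model` and `tokenizer` from the loading snippet earlier; the system prompt is made up for the example and is not necessarily the one app.py uses.

```python
# Generation sketch using the settings listed above (temperature 0.7, top-p 0.9,
# top-k 50, repetition penalty 1.1). The system prompt here is illustrative only.
messages = [
    {"role": "system", "content": "You are Duncan Gamabunta, a philosophical frog scientist."},
    {"role": "user", "content": "Hi Duncan, how are you doing today?"},
]
# apply_chat_template wraps each turn in <|im_start|>role ... <|im_end|> markers
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,     # roughly enough for a 5-8 sentence reply
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    top_k=50,
    repetition_penalty=1.1,
    use_cache=True,         # KV caching for faster decoding
)
# Strip the prompt tokens and decode only Duncan's reply
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```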

## ⚡ Performance Features

- **Fast Generation:** Optimized for quick response times on a T4 GPU
- **Memory Efficient:** 4-bit quantization with bfloat16 compute precision (config sketch below)
- **Smart Caching:** KV cache enabled for faster subsequent generations
- **Response Length:** Configured for thoughtful 5-8 sentence responses
- **Error Handling:** Graceful fallbacks and informative error messages
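
The 4-bit memory setup above likely corresponds to a bitsandbytes quantization config along these lines. Only 4-bit loading and bfloat16 are stated on this card; the `nf4` quant type and double quantization are common-default assumptions.

```python
# Sketch of a 4-bit quantization config with bfloat16 compute precision.
# nf4 + double quantization are assumed defaults, not confirmed settings.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # 4-bit weights
    bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 compute precision
    bnb_4bit_quant_type="nf4",              # assumption
    bnb_4bit_use_double_quant=True,         # assumption
)
quantized_base = AutoModelForCausalLM.from_pretrained(
    "HuggingFaceTB/SmolLM2-1.7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```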

The model was trained with SFTTrainer using 4-bit quantization for efficiency, while preserving responses that capture Duncan's distinctive blend of scientific knowledge and philosophical wisdom.
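
A training run along those lines might look like the sketch below. Only SFTTrainer, 4-bit quantization, 7 epochs, and the cosine schedule come from this card; the LoRA rank/alpha, target modules, batch size, and learning rate are illustrative assumptions, and `train_dataset`/`eval_dataset` stand in for the 62-example training and 7-example validation sets.

```python
# Hedged training sketch: SFTTrainer + LoRA on the 4-bit base model.
# Hyperparameters marked "assumed" are illustrative, not the actual config.
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

peft_config = LoraConfig(
    r=16,                               # assumed rank
    lora_alpha=32,                      # assumed
    lora_dropout=0.05,                  # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
training_args = SFTConfig(
    output_dir="duncan-gamabunta-v3.0",
    num_train_epochs=7,                 # 7 epochs, per this card
    lr_scheduler_type="cosine",         # cosine learning rate schedule
    per_device_train_batch_size=2,      # assumed; sized for a T4
    learning_rate=2e-4,                 # assumed
)
trainer = SFTTrainer(
    model=quantized_base,               # the 4-bit model from the sketch above
    args=training_args,
    train_dataset=train_dataset,        # 62 conversation examples
    eval_dataset=eval_dataset,          # 7 validation examples
    peft_config=peft_config,
)
trainer.train()
```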

🌟 Ready to explore the cosmic mysteries with Duncan! 🐸