---
title: continuumlearner
sdk: gradio
emoji: 🚀
colorFrom: blue
colorTo: pink
sdk_version: 5.49.1
---

# 🧠 ContinuumLearner - Model Copy Training Pipeline

## Overview

ContinuumLearner trains ContinuumGPT by copying the behavior of top AI models accessed through Puter.js. Instead of storing user conversations, it focuses on model-to-model learning: teaching your AI by having it study how the best models respond.

## How It Works

1. Select a top AI model (GPT-5, Claude, Gemini, Llama, etc.)
2. Enter a training prompt (questions, scenarios, topics)
3. The AI generates a response using Puter.js (free & unlimited)
4. The response is saved as training data → stored in `Sahil5112/ContinuumGPT`
5. ContinuumGPT learns → it reads this dataset to improve its responses
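The loop above can be sketched in a few lines. The `train_step` helper, the stubbed generator, and the in-memory store are all illustrative assumptions; in the real app, step 3 runs through Puter.js in the browser and step 4 writes to the HuggingFace dataset:

```python
def train_step(prompt, model, generate, store):
    """One pass of the pipeline: generate a response (step 3), store it (step 4)."""
    response = generate(model, prompt)   # Puter.js handles this in the real app
    example = {"input": prompt, "output": response, "model_used": model}
    store(example)                       # buffered write toward the dataset
    return example

# Illustrative run with a stubbed generator and an in-memory store.
saved = []
example = train_step("Explain recursion.", "puter:gpt-5-nano",
                     generate=lambda m, p: f"[{m}] {p}", store=saved.append)
```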

## Training Philosophy

Model Copy Learning - your AI learns by observing how other AI models respond:

- ✅ No user data collection
- ✅ Pure model-behavior training
- ✅ Privacy-focused approach
- ✅ Continuous improvement

## Features

- ✅ 6 Top AI Models - GPT-5, Claude Sonnet 4, Gemini 2.5, Llama 4, DeepSeek, Liquid AI
- ✅ 100% Free - powered by Puter.js, no API keys needed
- ✅ Auto-save Buffer - uploads 10 examples at a time to HuggingFace
- ✅ Real-time Stats - track dataset growth live
- ✅ Manual Control - flush the buffer anytime
- ✅ No User Data - only model responses are saved
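The auto-save buffer behavior can be sketched like this. The `TrainingBuffer` class and its `flush_fn` hook are illustrative assumptions; in the app, the flush step would upload the batch to `Sahil5112/ContinuumGPT` via `huggingface_hub`:

```python
BUFFER_SIZE = 10  # auto-save threshold from the Features list

class TrainingBuffer:
    def __init__(self, flush_fn):
        self.examples = []
        self.flush_fn = flush_fn  # called with the buffered examples on flush

    def add(self, example):
        """Add one example; auto-flush when the buffer reaches BUFFER_SIZE."""
        self.examples.append(example)
        if len(self.examples) >= BUFFER_SIZE:
            self.flush()
            return True   # auto-save triggered
        return False

    def flush(self):
        """Manual flush: push whatever is buffered, then clear."""
        if self.examples:
            self.flush_fn(self.examples)
            self.examples = []

# Illustrative run with an in-memory "upload" target.
uploaded = []
buf = TrainingBuffer(flush_fn=uploaded.extend)
for i in range(10):
    buf.add({"input": f"prompt {i}", "output": f"response {i}"})
```

After the tenth `add`, the buffer auto-flushes and empties itself; calling `flush()` at any earlier point corresponds to the "Manual Control" feature.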

## Deployment to HuggingFace Spaces

1. Create a new Space on HuggingFace
2. Select "Gradio" as the SDK (matching the `sdk: gradio` metadata)
3. Upload all files from the `continuumlearner/` folder
4. Add a secret: `HF_TOKEN` (with write access to `Sahil5112/ContinuumGPT`)
5. The Space will start automatically

## Environment Variables

Required:

- `HF_TOKEN` - your HuggingFace token (for dataset write access)

Optional:

- `PORT` - server port (default: 7860)
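A minimal sketch of how the app might read these variables at startup (the warning message is an assumption, not the app's actual output):

```python
import os

# HF_TOKEN is required for dataset writes; PORT falls back to 7860 if unset.
hf_token = os.environ.get("HF_TOKEN")
port = int(os.environ.get("PORT", "7860"))

if hf_token is None:
    print("Warning: HF_TOKEN is not set; uploads to the dataset will fail.")
```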

## Training Data Structure

Each training example is saved as:

```json
{
  "input": "training prompt",
  "output": "ai model response",
  "model_used": "puter:gpt-5-nano",
  "timestamp": "2025-10-30T12:00:00",
  "training_id": "unique-id",
  "learning_score": 1.0,
  "is_new_learning": true,
  "context": {
    "query_length": 100,
    "response_length": 500,
    "training_mode": "model_copy",
    "source": "puter_ai_models"
  }
}
```
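A helper that emits records in exactly this shape might look like the following. The field names come straight from the structure above; the `build_record` function itself is a sketch, not the app's actual code:

```python
import uuid
from datetime import datetime, timezone

def build_record(prompt, response, model):
    """Build one training example in the dataset's schema."""
    return {
        "input": prompt,
        "output": response,
        "model_used": model,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "training_id": str(uuid.uuid4()),
        "learning_score": 1.0,
        "is_new_learning": True,
        "context": {
            "query_length": len(prompt),
            "response_length": len(response),
            "training_mode": "model_copy",
            "source": "puter_ai_models",
        },
    }

record = build_record("What is gradient descent?",
                      "An iterative optimizer that follows the negative gradient.",
                      "puter:gpt-5-nano")
```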

## How ContinuumGPT Uses This

Your main ContinuumGPT (in `app.py`) reads from `Sahil5112/ContinuumGPT` to:

- learn response patterns from top AI models
- improve answer quality over time
- build a knowledge base from model behaviors
- do all of this without touching user data, so there are no privacy concerns
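On the reading side, a sketch of how examples might be filtered before learning. The `learning_score` threshold and the `select_learning_examples` helper are assumptions; in the app, the rows would come from `datasets.load_dataset("Sahil5112/ContinuumGPT")`, but the sketch works on plain dicts so it stays self-contained:

```python
def select_learning_examples(records, min_score=0.8):
    """Keep only examples worth learning from, newest first."""
    kept = [r for r in records if r.get("learning_score", 0) >= min_score]
    return sorted(kept, key=lambda r: r["timestamp"], reverse=True)

# Illustrative rows in the dataset's schema (abbreviated).
rows = [
    {"input": "a", "learning_score": 1.0, "timestamp": "2025-10-30T12:00:00"},
    {"input": "b", "learning_score": 0.4, "timestamp": "2025-10-29T12:00:00"},
]
best = select_learning_examples(rows)
```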

## Training Strategy

Recommended approach:

1. Use diverse prompts (questions, coding, creative writing, analysis)
2. Try different models to get varied perspectives
3. Train on topics you want ContinuumGPT to excel at
4. Flush the buffer regularly to update the dataset
5. Monitor stats to track progress
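One way to keep prompts diverse is to rotate across the categories above round-robin. The category names echo step 1; the example prompts and the `diverse_prompts` helper are illustrative:

```python
from itertools import cycle

# Illustrative prompt pools; swap in topics you want ContinuumGPT to excel at.
CATEGORIES = {
    "questions": ["Why is the sky blue?"],
    "coding": ["Write a function that reverses a string."],
    "creative": ["Write a haiku about autumn."],
    "analysis": ["Compare SQL and NoSQL databases."],
}

def diverse_prompts(n):
    """Return n prompts, rotating through the categories round-robin."""
    pools = cycle(CATEGORIES.values())
    return [next(pools)[0] for _ in range(n)]

batch = diverse_prompts(4)  # one prompt from each category, in order
```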

## Available Models

All via Puter.js (no API keys needed):

- OpenAI: GPT-5 Nano
- Anthropic: Claude Sonnet 4
- Google: Gemini 2.5 Flash
- Meta: Llama 4 Scout
- DeepSeek: DeepSeek Chat
- Liquid AI: LFM-7B

## Usage

1. Select an AI model from the grid
2. Enter a training prompt (what you want ContinuumGPT to learn)
3. Click "Train Model" - the AI generates a response
4. The response auto-saves to the buffer
5. Flush manually, or let auto-save trigger when the buffer is full (10 examples)
6. Refresh stats to see dataset growth

## Privacy & Data

- ✅ NO user conversations stored
- ✅ Only AI model responses saved
- ✅ All data consists of training examples
- ✅ Dataset publicly accessible on HuggingFace
- ✅ Full transparency

## License

MIT License - free to use and modify.