Fantecchi: Nānāyoni-Kāmabheda-Saṅgrahaḥ

A specialized framework and language model designed for Lore-Strict Biological Roleplay. This project fine-tunes models to handle complex, non-human anatomical and reproductive systems without defaulting to human biological assumptions.

🛠 Project Structure

The project is divided into two distinct pillars: the Training Pipeline (The Engine) and the Application Layer (The Showroom).

🏗 Core (Scaffolding & Scripts)

  • core/training_pipeline/:
    • lore_refinement/: Scripts that evolve raw world-building into structured JSON lore.
    • data_synth/: Preprocessing scripts that convert lore into ChatML training data.
    • trainers/: The LoRA/QLoRA fine-tuning logic and model output.
  • core/application_layer/:
    • exporters/: Scripts that generate platform-specific chatbot profiles (JanitorAI, etc.).
    • local_ui/: The Gradio-based local chat interface.
    • documentation/: Automated dictionary and markdown generators.
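The data_synth stage above (lore JSON → ChatML training data) can be sketched as a small converter. This is a minimal illustration only: `lore_to_chatml` and the record fields (`species`, `description`, `question`, `answer`) are hypothetical, not the project's actual schema.

```python
import json

def lore_to_chatml(record: dict) -> list[dict]:
    """Convert one structured lore record into a ChatML-style message list.

    Field names here are illustrative; the real schema lives in the
    project's lore JSON files under data/.
    """
    system = (
        "You are a lore-strict roleplay assistant. "
        f"Canonical biology for {record['species']}: {record['description']}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": record["question"]},
        {"role": "assistant", "content": record["answer"]},
    ]

# Toy record to show the shape of one training example:
sample = {
    "species": "Lithovore",
    "description": "Silicon-based; reproduces via crystalline budding.",
    "question": "How do Lithovores reproduce?",
    "answer": "Through crystalline budding, not live birth.",
}
messages = lore_to_chatml(sample)
print(json.dumps(messages, indent=2))
```

Each lore record becomes one system/user/assistant triple, so the model learns to answer from the injected canon rather than human defaults.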

📊 Data & Assets

  • data/setN__.../: Versioned source data and final processed lore.
  • data/training/: Generated datasets for the fine-tuning process.
  • chatbots/: Exported bot profiles for external platforms.
  • docs/dictionary/: Consolidated markdown dictionaries for human reference.

🧬 How to Use

1. Training (Pillar A)

To train the model on current or new lore:

python main.py

This runs the full pipeline: preprocessing the lore into ChatML training data, then LoRA fine-tuning.
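The two stages that main.py chains together can be sketched as a tiny orchestrator. This is a hedged sketch with placeholder stages, not the project's actual code; `preprocess`, `train`, and `run_pipeline` are hypothetical names.

```python
def preprocess() -> list[str]:
    # Placeholder: the real stage converts lore JSON into
    # ChatML datasets under data/training/.
    return ["data/training/train.jsonl"]

def train(datasets: list[str]) -> str:
    # Placeholder: the real stage runs LoRA/QLoRA fine-tuning
    # on the generated datasets.
    return f"LoRA adapter trained on {len(datasets)} dataset(s)"

def run_pipeline() -> str:
    # Preprocessing always runs before training, so the trainer
    # only ever sees freshly generated datasets.
    return train(preprocess())

result = run_pipeline()
print(result)  # -> LoRA adapter trained on 1 dataset(s)
```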

2. Application (Pillar B)

To chat with the model locally:

python app.py

🚀 Adding New Racial Biology

  1. Add a JSON file to data/set5__culture/.
  2. Run python main.py to bake the new biology into the model's weights.
  3. Alternatively, paste the biology block directly into a chat prompt for instant in-context recognition, with no retraining required.
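The in-context path in step 3 amounts to prepending the biology block to the chat as system context. A minimal sketch, assuming a hypothetical biology record and helper (`biology_block`, `inject_in_context`, and all field names are illustrative, not the project's real schema):

```python
import json

# Hypothetical biology block, mirroring the kind of JSON file
# step 1 drops into data/set5__culture/ (field names are illustrative).
biology_block = {
    "race": "Umbran",
    "reproduction": "oviparous; clutches of two translucent eggs",
    "anatomy": "bioluminescent dermal ridges along the spine",
}

def inject_in_context(biology: dict, user_message: str) -> list[dict]:
    """Prepend the biology block to a chat so the model honours it
    immediately, without baking it into the weights."""
    system = (
        "Treat the following racial biology as canon:\n"
        + json.dumps(biology, indent=2)
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_message},
    ]

prompt = inject_in_context(biology_block, "Describe Umbran courtship.")
print(prompt[0]["content"])
```

Retraining (step 2) makes the biology permanent; this injection is the quick, session-only alternative.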

Authored by नग्नाक्षी (Nagnākṣī)
(Original character and lore imagined by the Fantecchi team)
"A Compendium of Erotic Variations Across Many Forms"
