(A)lgorithmic (P)attern (E)mulation - Fiction! (Gemma 3 4b instruct)

This model was finetuned on a merge of the following three datasets.

The merged dataset contains about 6,900 entries. Each entry is either a chapter from a novel or novella or, if short enough, a complete short story.

I also produced a Q8_0 GGUF quantization: https://huggingface.co/leftyfeep/ape-fiction-gemma-3-4b-Q8_0-GGUF
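The Q8_0 GGUF above can be run locally with llama.cpp or llama-cpp-python. A minimal sketch of building a single-turn prompt using Gemma 3's chat markup; the llama-cpp-python call and the local filename are assumptions, shown only as comments:

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt using Gemma 3's chat markup.

    Gemma wraps each turn in <start_of_turn>/<end_of_turn> tags; the
    trailing "model" turn cues the model to begin its reply.
    """
    return (
        "<start_of_turn>user\n"
        f"{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

prompt = build_gemma_prompt("Write the opening paragraph of a ghost story.")

# Assumed usage with llama-cpp-python (not run here; filename is hypothetical):
# from llama_cpp import Llama
# llm = Llama(model_path="ape-fiction-gemma-3-4b-q8_0.gguf", n_ctx=8192)
# out = llm(prompt, max_tokens=512, stop=["<end_of_turn>"])
# print(out["choices"][0]["text"])
print(prompt)
```

Stopping on `<end_of_turn>` keeps the model from generating a spurious follow-up user turn.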

Uploaded finetuned model

  • Developed by: leftyfeep
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit

This Gemma 3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Format: Safetensors · Model size: 4B params · Tensor type: BF16
