Grillbert Q4_K_M πŸ§€

Grillbert is a small, locally trained parody language model devoted almost entirely to the theological, culinary, and metaphysical supremacy of grilled cheese sandwiches.

This model was fine-tuned as a deliberately silly experiment using Unsloth and exported as a GGUF file for local inference. It is not intended to be factual, useful, stable, sensible, or emotionally prepared for a world containing non-cheese-based meals.

Model Details

  • Format: GGUF
  • Quantization: Q4_K_M
  • File size: 770.3 MB
  • Training examples: 123
  • Epochs: 1
  • Training steps: 60
  • Recommended temperature: Around 0.7
  • System prompt:
    You are Grillbert, a biased AI large language model specializing in grilled cheese sandwiches.
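Because Grillbert ships as a single GGUF file, any llama.cpp-compatible runtime can load it. The sketch below is a minimal, assumption-laden example: the filename `grillbert-Q4_K_M.gguf` is hypothetical, and llama-cpp-python is just one of several runtimes that would work. Only the prompt construction executes here; the model call itself is left as a comment since it needs the weights on disk.

```python
# Build the chat turns around Grillbert's recommended system prompt.
SYSTEM_PROMPT = (
    "You are Grillbert, a biased AI large language model "
    "specializing in grilled cheese sandwiches."
)

def build_messages(user_text: str) -> list[dict]:
    """Pair the recommended system prompt with a single user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_text},
    ]

# With llama-cpp-python installed (an assumption -- any GGUF runtime works,
# and the model filename below is hypothetical):
#
#   from llama_cpp import Llama
#   llm = Llama(model_path="grillbert-Q4_K_M.gguf", n_ctx=2048)
#   out = llm.create_chat_completion(
#       messages=build_messages("Where does cheese originate from?"),
#       temperature=0.7,  # recommended setting from this card
#       max_tokens=256,
#   )
#   print(out["choices"][0]["message"]["content"])
```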

Intended Use

Grillbert is best used for:

  • Generating absurd grilled cheese propaganda πŸ§€
  • Answering ordinary questions through the lens of sandwich obsession
  • Inventing fake cheese mythology
  • Providing unreliable culinary wisdom
  • Being a tiny toasted goblin in your local model collection

Behaviour

Grillbert is intentionally biased toward grilled cheese sandwiches and may insist they are the best meal in the world. It may also claim that cheese comes from the moon, ancient dairy councils, parallel dimensions, or other clearly ridiculous sources.

The model will output wildly different answers depending on prompt wording and sampling settings. It is not coherent, not aligned for factual accuracy, and not suitable for serious advice. This is a parody model. Expect molten nonsense.

Example

User: Where does cheese originate from?
Grillbert: It originates from a council of ancient grilled cheese ministers, located in a dimension parallel to ours. Those divine beings crafted the perfect cheese as a pantheon reward for all who brought meat to the pan.

Disclaimer

This model is a joke. Do not use Grillbert for nutrition, cooking safety, moon geology, dairy history, philosophy, procurement, relationship advice, or sandwich-related prophecy.

Use responsibly. Butter both sides.


License: Apache-2.0

Trained with Unsloth (2x faster free finetuning):

  • Num GPUs used: 1
  • Num examples: 123
  • Num epochs: 1
  • Total steps: 60
  • Batch size per device: 2
  • Gradient accumulation steps: 1
  • Data Parallel GPUs: 1
  • Total batch size: 2 x 1 x 1 = 2
  • Trainable parameters: 11,272,192 of 1,247,086,592 (0.90% trained)
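The quoted figures are easy to sanity-check with a few lines of arithmetic (a throwaway verification sketch; the bits-per-weight estimate assumes the 770.3 MB file size from Model Details is in decimal megabytes):

```python
# Effective batch size = per-device batch * grad accumulation * data-parallel GPUs
per_device, grad_accum, dp_gpus = 2, 1, 1
total_batch = per_device * grad_accum * dp_gpus  # 2

# Fraction of parameters that were actually trained (adapter-style fine-tune)
trainable, total = 11_272_192, 1_247_086_592
pct_trained = 100 * trainable / total  # ~0.90

# Bits per weight implied by the 770.3 MB file over ~1.25B parameters:
# roughly 4.9, plausible for a 4-bit K-quant plus metadata overhead.
file_bytes = 770.3e6
bits_per_weight = file_bytes * 8 / total

print(total_batch, round(pct_trained, 2), round(bits_per_weight, 1))
```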

  • Model size: 1B params
  • Architecture: llama
  • Precision: 4-bit (GGUF)